Abstract / Description of output
This paper introduces AdaSDCA: an adaptive variant of stochastic dual coordinate ascent (SDCA) for solving regularized empirical risk minimization problems. Our modification consists in allowing the method to adaptively change the probability distribution over the dual variables throughout the iterative process. AdaSDCA achieves a provably better complexity bound than SDCA with the best fixed probability distribution, known as importance sampling. However, AdaSDCA is of a theoretical character, as it is expensive to implement. We also propose AdaSDCA+: a practical variant which, in our experiments, outperforms existing non-adaptive methods.
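The core idea, resampling dual coordinates with probabilities that track how far each coordinate currently is from optimality instead of using a fixed importance-sampling distribution, can be illustrated with a short sketch. The snippet below is not the paper's AdaSDCA (whose adaptive probabilities also involve per-coordinate smoothness constants and carry the complexity guarantee); it is a minimal, hypothetical SDCA loop for ridge regression in which sampling probabilities are simply taken proportional to the absolute dual residuals, recomputed at every iteration. Recomputing all residuals each step also hints at why the theoretical variant is expensive to implement.

```python
import numpy as np


def adaptive_sdca_ridge(A, y, lam, n_iters=5000, seed=None):
    """SDCA for ridge regression with adaptive coordinate sampling.

    Hypothetical sketch: sampling probabilities are proportional to the
    absolute dual residual of each coordinate and are recomputed every
    iteration, which is the costly step a practical variant would avoid.
    """
    rng = np.random.default_rng(seed)
    n, d = A.shape
    alpha = np.zeros(n)                 # dual variable per training example
    w = np.zeros(d)                     # primal iterate, w = A.T @ alpha / (lam * n)
    sq_norms = (A ** 2).sum(axis=1)     # per-row squared norms ||a_i||^2

    for _ in range(n_iters):
        # Dual residuals: distance of each coordinate from its optimality condition.
        residuals = y - A @ w - alpha
        total = np.abs(residuals).sum()
        if total == 0:
            break                       # every coordinate is already optimal
        probs = np.abs(residuals) / total

        i = rng.choice(n, p=probs)
        # Closed-form coordinate maximization for the squared loss.
        delta = residuals[i] / (1.0 + sq_norms[i] / (lam * n))
        alpha[i] += delta
        w += delta * A[i] / (lam * n)

    return w
```

For example, `w = adaptive_sdca_ridge(A, y, lam=1.0 / A.shape[0])` returns an approximate ridge solution; replacing `probs` with the uniform distribution recovers plain non-adaptive SDCA, the kind of baseline the adaptive scheme is compared against.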
Original language | English |
---|---|
Publication status | Published - 27 Feb 2015 |
Event | 32nd International Conference on Machine Learning, Lille, France; Duration: 6 Jul 2015 → 11 Jul 2015; https://icml.cc/2015/ |
Conference
Conference | 32nd International Conference on Machine Learning |
---|---|
Abbreviated title | ICML 2015 |
Country/Territory | France |
City | Lille |
Period | 6/07/15 → 11/07/15 |
Internet address | https://icml.cc/2015/ |
Keywords
- math.OC
- cs.LG
- stat.ML