Abstract / Description of output
We propose a new randomized coordinate descent method for minimizing the sum of convex functions, each of which depends on a small number of coordinates only. Our method (APPROX) is simultaneously Accelerated, Parallel, and PROXimal; this is the first time such a method has been proposed. In the special case when the number of processors is equal to the number of coordinates, the method converges at the rate $2\bar{\omega}\bar{L}R^2/(k+1)^2$, where $k$ is the iteration counter, $\bar{\omega}$ is a data-weighted average degree of separability of
the loss function, $\bar{L}$ is the average of the Lipschitz constants associated with the coordinates and individual functions in the sum, and $R$ is the distance of the initial point from the minimizer. We show that the method can be implemented without the need to perform full-dimensional vector operations, which are the major bottleneck of accelerated coordinate descent and render it impractical. The fact that the method depends on the average degree of separability, and not on the maximum degree, can be attributed to the use of new safe
large stepsizes, leading to improved expected separable overapproximation (ESO). These are of independent interest and can be utilized in all existing parallel randomized coordinate descent algorithms based on the concept of ESO. In special cases, our method recovers several classical and recent algorithms such as simple and accelerated proximal gradient descent, as well as serial, parallel, and distributed versions of randomized block coordinate
descent. Due to this flexibility, APPROX has been used successfully by the authors in a graduate class setting as a modern introduction to deterministic and randomized proximal gradient methods. Our bounds match or improve upon the best known bounds for each of the methods APPROX specializes to. Our method has applications in a number of areas, including machine learning, submodular optimization, and linear and semidefinite programming.
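For concreteness, the quoted rate corresponds to a guarantee of the shape below, where $F$ is the composite objective, $F^*$ its minimum, and $x_k$ the $k$-th iterate; the precise statement (including the expectation over the random coordinate sampling and the exact constants) is given in the paper and is not reproduced here, so the left-hand side should be read as an assumed standard form rather than a quotation:

$$
\mathbf{E}\big[F(x_k)\big] - F^* \;\le\; \frac{2\,\bar{\omega}\,\bar{L}\,R^2}{(k+1)^2}.
$$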
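To illustrate the special cases mentioned above, here is a minimal Python sketch of a serial ($\tau = 1$, one coordinate updated per iteration) accelerated proximal coordinate descent of the APPROX type, applied to an $\ell_1$-regularized least-squares problem. The problem data, function names, and the choice of loss are illustrative assumptions rather than anything taken from the paper, and this naive version recomputes a full matrix-vector product each iteration, i.e. exactly the full-dimensional work the paper explains how to avoid.

```python
# Illustrative sketch (not the authors' code): a serial APPROX-style accelerated
# proximal coordinate descent step for
#   min_x  f(x) + psi(x),   f(x) = 0.5*||Ax - b||^2,   psi(x) = lam*||x||_1.
# For clarity this version recomputes A @ y each iteration (a full-dimensional
# operation); an efficient implementation would avoid this.
import numpy as np

def soft_threshold(u, t):
    """Proximal operator of t*|.| (soft-thresholding)."""
    return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

def approx_serial_lasso(A, b, lam, iters=5000, seed=0):
    rng = np.random.default_rng(seed)
    m, n = A.shape
    L = np.sum(A * A, axis=0)          # coordinate-wise Lipschitz constants L_i = ||A_i||^2
    x = np.zeros(n)                    # main iterate
    z = x.copy()                       # auxiliary iterate
    theta = 1.0 / n                    # theta_0 = tau/n with tau = 1 (serial sampling)
    for _ in range(iters):
        y = (1.0 - theta) * x + theta * z
        i = rng.integers(n)            # sample one coordinate uniformly at random
        g_i = A[:, i] @ (A @ y - b)    # i-th partial derivative of f at y
        step = 1.0 / (n * theta * L[i])
        z_new_i = soft_threshold(z[i] - step * g_i, lam * step)
        x = y.copy()
        x[i] += n * theta * (z_new_i - z[i])   # x_{k+1} = y_k + n*theta_k*(z_{k+1} - z_k)
        z[i] = z_new_i
        theta = 0.5 * (np.sqrt(theta**4 + 4 * theta**2) - theta**2)
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    A = rng.standard_normal((40, 100))
    x_true = np.zeros(100); x_true[:5] = 1.0
    b = A @ x_true + 0.01 * rng.standard_normal(40)
    x_hat = approx_serial_lasso(A, b, lam=0.1)
    print("objective:", 0.5 * np.linalg.norm(A @ x_hat - b)**2 + 0.1 * np.abs(x_hat).sum())
```

Note how the stepsize $1/(n\theta_k L_i)$ grows as $\theta_k$ shrinks, which is the mechanism behind the acceleration; a practical implementation would also maintain residuals incrementally rather than recomputing $Ay$ at every iteration.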
Original language | English |
---|---|
Pages (from-to) | 739-771 |
Number of pages | 33 |
Journal | SIAM Review |
Volume | 58 |
Issue number | 4 |
DOIs | |
Publication status | Published - 3 Nov 2016 |
Projects (2 Finished)

- Science and Innovation: Numerical Algorithms and Intelligent Software for the Evolving HPC Platform
  1/08/09 → 31/07/14
  Project: Research