Adding vs. Averaging in Distributed Primal-Dual Optimization

Chenxin Ma, Virginia Smith, Martin Jaggi, Michael I. Jordan, Peter Richtárik, Martin Takáč

Research output: Contribution to conference › Paper › peer-review

Abstract / Description of output

Distributed optimization methods for large-scale machine learning suffer from a communication bottleneck. It is difficult to reduce this bottleneck while still efficiently and accurately aggregating partial work from different machines. In this paper, we present a novel generalization of the recent communication-efficient primal-dual framework (CoCoA) for distributed optimization. Our framework, CoCoA+, allows for additive combination of local updates to the global parameters at each iteration, whereas previous schemes with convergence guarantees only allow conservative averaging. We give stronger (primal-dual) convergence rate guarantees for both CoCoA and our new variants, and generalize the theory for both methods to cover non-smooth convex loss functions. We provide an extensive experimental comparison that shows the markedly improved performance of CoCoA+ on several real-world distributed datasets, especially when scaling up the number of machines.
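
The core distinction described in the abstract, adding local updates versus conservatively averaging them, can be illustrated with a minimal sketch. The snippet below is not the paper's implementation; it only assumes K machines that each return an update restricted to their own coordinate block of the dual vector, and shows how the two aggregation rules differ by a single scaling factor (names such as `aggregate` and `gamma` are illustrative).

```python
# Minimal sketch (not the authors' code) of the two aggregation rules:
# conservative averaging (CoCoA-style) vs. additive combination (CoCoA+-style).
import numpy as np

def aggregate(alpha, local_updates, mode="add", gamma=None):
    """Combine per-machine updates into the global parameter vector.

    mode="average": scale the summed update by 1/K (conservative averaging).
    mode="add":     apply the full summed update (additive combination).
    gamma optionally overrides the aggregation weight.
    """
    K = len(local_updates)
    if gamma is None:
        gamma = 1.0 / K if mode == "average" else 1.0
    # Each local update touches only that machine's coordinate block,
    # so summing the zero-padded updates does not double-count coordinates.
    return alpha + gamma * sum(local_updates)

# Toy usage: 3 machines, 9 dual coordinates split into blocks of 3.
rng = np.random.default_rng(0)
alpha = np.zeros(9)
updates = []
for k in range(3):
    delta = np.zeros(9)
    delta[3 * k:3 * (k + 1)] = rng.normal(size=3)  # hypothetical local solver output
    updates.append(delta)

alpha_avg = aggregate(alpha, updates, mode="average")  # damped combination
alpha_add = aggregate(alpha, updates, mode="add")      # full additive combination
```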
Original language: English
Publication status: Published - 6 Jul 2015
Event: 32nd International Conference on Machine Learning - Lille, France
Duration: 6 Jul 2015 - 11 Jul 2015
https://icml.cc/2015/

Conference

Conference: 32nd International Conference on Machine Learning
Abbreviated title: ICML 2015
Country/Territory: France
City: Lille
Period: 6/07/15 - 11/07/15
Internet address: https://icml.cc/2015/

Keywords

  • cs.LG
  • 90C25, 68W15
  • G.1.6; C.1.4
