SGD and Hogwild! Convergence Without the Bounded Gradients Assumption

Lam M. Nguyen, Phuong Ha Nguyen, Marten van Dijk, Peter Richtárik, Katya Scheinberg, Martin Takáč

Research output: Contribution to conference › Paper › peer-review

Abstract

Stochastic gradient descent (SGD) is the optimization algorithm of choice in many machine learning applications such as regularized empirical risk minimization and training deep neural networks. The classical analysis of convergence of SGD is carried out under the assumption that the norm of the stochastic gradient is uniformly bounded. While this might hold for some loss functions, it is always violated for cases where the objective function is strongly convex. In (Bottou et al., 2016) a new analysis of convergence of SGD is performed under the assumption that stochastic gradients are bounded with respect to the true gradient norm. Here we show that for stochastic problems arising in machine learning such a bound always holds. Moreover, we propose an alternative convergence analysis of SGD with a diminishing learning rate regime, which results in more relaxed conditions than those in (Bottou et al., 2016). We then move on to the asynchronous parallel setting, and prove convergence of the Hogwild! algorithm in the same regime, obtaining the first convergence results for this method in the case of diminished learning rate.
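
The diminishing learning rate regime discussed in the abstract can be illustrated with a minimal SGD loop. The sketch below is not the authors' code: the synthetic least-squares objective, the step-size schedule eta_t = eta_0 / (1 + eta_0 * t), and all parameter values are illustrative assumptions; it only shows the kind of single-sample updates with decaying step sizes that such an analysis covers.

```python
import numpy as np

# Minimal sketch (illustrative, not the paper's implementation):
# SGD with a diminishing learning rate on a strongly convex
# least-squares problem built from synthetic data.

rng = np.random.default_rng(0)
n, d = 1000, 10
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.1 * rng.normal(size=n)

def stochastic_grad(w, i):
    # Gradient of the i-th squared-loss term: (x_i^T w - y_i) x_i
    return (X[i] @ w - y[i]) * X[i]

w = np.zeros(d)
eta0 = 0.1          # assumed initial step size
n_iters = 20000     # assumed iteration budget
for t in range(n_iters):
    i = rng.integers(n)              # sample one component uniformly
    eta_t = eta0 / (1 + eta0 * t)    # diminishing step size
    w -= eta_t * stochastic_grad(w, i)

print("distance to w_true:", np.linalg.norm(w - w_true))
```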
Original language: English
Publication status: Accepted/In press - 11 May 2018
Event: Thirty-fifth International Conference on Machine Learning - Stockholmsmässan, Stockholm, Sweden
Duration: 10 Jul 2018 - 15 Jul 2018
https://icml.cc/

Conference

Conference: Thirty-fifth International Conference on Machine Learning
Abbreviated title: ICML 2018
Country: Sweden
City: Stockholm
Period: 10/07/18 - 15/07/18
Internet address: https://icml.cc/

Keywords

  • math.OC
  • cs.LG
  • stat.ML

