Nonasymptotic estimates for Stochastic Gradient Langevin Dynamics under local conditions in nonconvex optimization

Ying Zhang, Ömer Deniz Akyildiz, Theo Damoulas, Sotirios Sabanis

Research output: Contribution to journal › Article › peer-review

Abstract

Within the context of empirical risk minimization (see Raginsky, Rakhlin, and Telgarsky, 2017), we are concerned with a non-asymptotic analysis of sampling algorithms used in optimization. In particular, we obtain non-asymptotic error bounds for a popular class of algorithms called Stochastic Gradient Langevin Dynamics (SGLD). These results are derived in appropriate Wasserstein distances in the absence of log-concavity of the target distribution. More precisely, only local Lipschitzness of the stochastic gradient $H(\theta, x)$ is assumed, and, furthermore, the dissipativity and convexity-at-infinity conditions are relaxed by removing the uniform dependence on $x$.
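
For context, the SGLD iterates in question typically take the following standard form; the notation here is a sketch and may differ from the paper's exact statement. With step size $\lambda > 0$, inverse temperature $\beta > 0$, data samples $X_{n+1}$, and i.i.d. standard Gaussian noise $\xi_{n+1}$,

$$\theta_{n+1} = \theta_n - \lambda\, H(\theta_n, X_{n+1}) + \sqrt{2\lambda/\beta}\,\xi_{n+1}, \qquad n \ge 0,$$

where $H(\theta, x)$ is an unbiased estimator of the gradient of the (regularized) empirical risk evaluated at the sample $x$.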
Original language: English
Article number: 25
Journal: Applied Mathematics and Optimization
Volume: 87
Publication status: Published - 13 Jan 2023

Keywords

  • math.ST
  • cs.LG
  • math.PR
  • stat.ML
  • stat.TH
  • 65C40, 62L10
