Abstract
Within the context of empirical risk minimization (see Raginsky, Rakhlin, and Telgarsky (2017)), we are concerned with a non-asymptotic analysis of sampling algorithms used in optimization. In particular, we obtain non-asymptotic error bounds for a popular class of algorithms called Stochastic Gradient Langevin Dynamics (SGLD). These results are derived in appropriate Wasserstein distances in the absence of log-concavity of the target distribution. More precisely, only local Lipschitzness of the stochastic gradient $H(\theta, x)$ is assumed, and the dissipativity and convexity-at-infinity conditions are relaxed by removing the uniform dependence on $x$.
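For orientation, the SGLD recursion referred to above is, in its generic form, a stochastic-gradient step perturbed by Gaussian noise. The sketch below is a minimal illustration under standard assumptions only: `H(theta, x)` is an unbiased stochastic gradient evaluated at a freshly sampled data point, and the names `step` (step size $\lambda$) and `beta` (inverse temperature) are illustrative; it is not the specific algorithm or the relaxed conditions analysed in the paper.

```python
# Generic SGLD sketch:
#   theta_{k+1} = theta_k - lambda * H(theta_k, X_{k+1}) + sqrt(2*lambda/beta) * xi_{k+1},
# with xi_{k+1} ~ N(0, I) and X_{k+1} a fresh data sample (illustrative only).
import numpy as np

def sgld(H, data_sampler, theta0, step=1e-3, beta=1.0, n_iters=10_000, rng=None):
    """Run a generic SGLD recursion and return all iterates.

    H            : callable (theta, x) -> stochastic gradient estimate
    data_sampler : callable (rng) -> a data point or mini-batch x
    theta0       : initial parameter vector
    step, beta   : step size (lambda) and inverse temperature
    """
    rng = np.random.default_rng() if rng is None else rng
    theta = np.asarray(theta0, dtype=float)
    iterates = [theta.copy()]
    noise_scale = np.sqrt(2.0 * step / beta)
    for _ in range(n_iters):
        x = data_sampler(rng)  # fresh data sample at each iteration
        theta = theta - step * H(theta, x) + noise_scale * rng.standard_normal(theta.shape)
        iterates.append(theta.copy())
    return np.array(iterates)
```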
Original language | English |
---|---|
Article number | 25 |
Journal | Applied Mathematics and Optimization |
Volume | 87 |
Publication status | Published - 13 Jan 2023 |
Keywords
- math.ST
- cs.LG
- math.PR
- stat.ML
- stat.TH
- 65C40, 62L10