The Tamed Unadjusted Langevin Algorithm

Nicolas Brosse, Alain Durmus, Éric Moulines, Sotirios Sabanis

Research output: Contribution to journal › Article › peer-review


In this article, we consider the problem of sampling from a probability measure $\pi$ having a density on $\mathbb{R}^d$ known up to a normalizing constant, $x\mapsto \mathrm{e}^{-U(x)} / \int_{\mathbb{R}^d} \mathrm{e}^{-U(y)} \mathrm{d} y$. The Euler discretization of the Langevin stochastic differential equation (SDE) is known to be unstable in a precise sense, when the potential $U$ is superlinear, i.e. $\liminf_{\Vert x \Vert\to+\infty} \Vert \nabla U(x) \Vert / \Vert x \Vert = +\infty$. Based on previous works on the taming of superlinear drift coefficients for SDEs, we introduce the Tamed Unadjusted Langevin Algorithm (TULA) and obtain non-asymptotic bounds in $V$-total variation norm and Wasserstein distance of order $2$ between the iterates of TULA and $\pi$, as well as weak error bounds. Numerical experiments are presented which support our findings.
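The scheme can be illustrated with a minimal sketch. The iteration below follows the drift-taming idea described in the abstract: the gradient step is normalized so the effective drift stays bounded even when $\nabla U$ grows superlinearly. The specific taming factor $\nabla U(x)/(1 + \gamma\Vert \nabla U(x)\Vert)$, the helper name `tula`, and the example potential are illustrative assumptions, not a verbatim transcription of the paper.

```python
import numpy as np

def tula(grad_U, x0, step, n_iter, rng=None):
    """Sketch of a tamed unadjusted Langevin iteration.

    x_{k+1} = x_k - step * G(x_k) + sqrt(2 * step) * Z_k,
    where G(x) = grad_U(x) / (1 + step * ||grad_U(x)||) is a tamed
    drift (bounded by 1/step) and Z_k ~ N(0, I_d).
    """
    rng = np.random.default_rng(0) if rng is None else rng
    x = np.asarray(x0, dtype=float)
    samples = np.empty((n_iter, x.size))
    for k in range(n_iter):
        g = grad_U(x)
        # Taming: the division keeps the drift bounded, avoiding the
        # instability of the plain Euler discretization for superlinear U.
        tamed = g / (1.0 + step * np.linalg.norm(g))
        x = x - step * tamed + np.sqrt(2.0 * step) * rng.standard_normal(x.size)
        samples[k] = x
    return samples

# Illustrative superlinear potential U(x) = ||x||^4 / 4, so grad U(x) = ||x||^2 x;
# the plain Euler scheme can diverge here, while the tamed iterates stay finite.
samples = tula(lambda x: np.dot(x, x) * x, x0=np.array([2.0]), step=0.01, n_iter=5000)
```

With the same step size and starting point, the untamed update `x - step * grad_U(x) + noise` can blow up for such quartic potentials, which is the instability the taming is designed to prevent.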
Original language: English
Pages (from-to): 3638-3663
Number of pages: 26
Journal: Stochastic Processes and their Applications
Issue number: 10
Early online date: 16 Oct 2018
Publication status: Published - 31 Oct 2019


  • stat.ME

