Efficient MCMC Sampling with Dimension-Free Convergence Rate using ADMM-type Splitting

Maxime Vono*, Daniel Paulin, Arnaud Doucet

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Performing exact Bayesian inference for complex models is computationally intractable. Markov chain Monte Carlo (MCMC) algorithms can provide reliable approximations of the posterior distribution but are expensive for large datasets and high-dimensional models. A standard approach to mitigating this cost is to use subsampling techniques or to distribute the data across a cluster. However, these approaches are typically unreliable in high-dimensional scenarios. We focus here on a recent alternative class of MCMC schemes exploiting a splitting strategy akin to the one used by the celebrated ADMM optimization algorithm. These methods appear to provide empirically state-of-the-art performance, but their theoretical behavior in high dimensions is currently unknown. In this paper, we propose a detailed theoretical study of one of these algorithms, known as the split Gibbs sampler. Under regularity conditions, we establish explicit convergence rates for this scheme using Ricci curvature and coupling ideas. We support our theory with numerical illustrations.
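
To make the splitting idea concrete, below is a minimal Python sketch of a split Gibbs sampler on a toy one-dimensional Gaussian target; the potentials f and g, the coupling parameter rho, and all numerical values are illustrative assumptions, not details taken from the paper. The sampler targets the extended density pi_rho(x, z) proportional to exp(-f(x) - g(z) - (x - z)^2 / (2 rho^2)) and alternates exact draws from the two Gaussian conditionals; the x-marginal approaches pi(x) proportional to exp(-f(x) - g(x)) as rho shrinks to 0.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy target: pi(x) prop. to exp(-f(x) - g(x)) with Gaussian potentials
    # f(x) = (x - m1)^2 / (2 s1^2),  g(x) = (x - m2)^2 / (2 s2^2).
    m1, s1 = 0.0, 1.0
    m2, s2 = 2.0, 0.5
    rho = 0.1          # coupling parameter; pi_rho -> pi as rho -> 0
    n_iter = 50_000

    def sample_conditional(mean_pot, var_pot, other, rho):
        # Draw from the Gaussian proportional to
        # exp(-(u - mean_pot)^2 / (2 var_pot) - (u - other)^2 / (2 rho^2)).
        prec = 1.0 / var_pot + 1.0 / rho**2
        mean = (mean_pot / var_pot + other / rho**2) / prec
        return mean + rng.standard_normal() / np.sqrt(prec)

    x, z = 0.0, 0.0
    xs = np.empty(n_iter)
    for t in range(n_iter):
        x = sample_conditional(m1, s1**2, z, rho)   # sample x given z
        z = sample_conditional(m2, s2**2, x, rho)   # sample z given x
        xs[t] = x

    # Exact moments of pi(x) (product of two Gaussians) for comparison
    prec_true = 1.0 / s1**2 + 1.0 / s2**2
    mean_true = (m1 / s1**2 + m2 / s2**2) / prec_true
    print(f"SGS mean {xs.mean():.3f} vs exact {mean_true:.3f}")
    print(f"SGS var  {xs.var():.3f} vs exact {1.0 / prec_true:.3f}")

Shrinking rho tightens the coupling between x and z and reduces the bias of the x-marginal, at the cost of more strongly correlated conditionals; quantifying the convergence behavior of this kind of scheme is precisely what the paper's analysis addresses.
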
Original language: English
Pages (from-to): 1–69
Number of pages: 69
Journal: Journal of Machine Learning Research
Issue number: 25
Publication status: Published - 3 Feb 2022

Keywords

  • ADMM, Approximate Bayesian inference, convergence rates, Markov chain Monte Carlo, splitting

