# High-dimensional Bayesian inference via the Unadjusted Langevin Algorithm

Abstract: We consider in this paper the problem of sampling a high-dimensional probability distribution $\pi$ having a density with respect to the Lebesgue measure on $\mathbb{R}^d$, known up to a normalisation factor $x \mapsto \mathrm{e}^{-U(x)} / \int_{\mathbb{R}^d} \mathrm{e}^{-U(y)}\,\mathrm{d}y$. Such a problem naturally occurs, for example, in Bayesian inference and machine learning. Under the assumption that $U$ is continuously differentiable, $\nabla U$ is globally Lipschitz and $U$ is strongly convex, we obtain non-asymptotic bounds for the convergence to stationarity, in Wasserstein distance of order $2$ and in total variation distance, of the sampling method based on the Euler discretization of the Langevin stochastic differential equation, for both constant and decreasing step sizes. The dependence of the obtained bounds on the dimension of the state space is studied to demonstrate the applicability of this method. The convergence of an appropriately weighted empirical measure is also investigated, and bounds for the mean square error and an exponential deviation inequality are reported for functions which are either Lipschitz continuous or measurable and bounded. An illustration on Bayesian inference for binary regression is presented.
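The sampling method the abstract refers to, the Unadjusted Langevin Algorithm (ULA), is the Euler discretization of the Langevin SDE $\mathrm{d}X_t = -\nabla U(X_t)\,\mathrm{d}t + \sqrt{2}\,\mathrm{d}B_t$, i.e. the recursion $X_{k+1} = X_k - \gamma \nabla U(X_k) + \sqrt{2\gamma}\, Z_{k+1}$ with $Z_{k+1}$ standard Gaussian. A minimal sketch with a constant step size, using a standard Gaussian target ($U(x) = \|x\|^2/2$, which is strongly convex with Lipschitz gradient) purely as an illustrative assumption, not an example from the paper:

```python
import numpy as np

# Illustrative target (an assumption for this sketch): standard Gaussian
# on R^d, U(x) = ||x||^2 / 2, so grad U(x) = x.
def grad_U(x):
    return x

def ula(grad_U, x0, step, n_iter, rng):
    """Unadjusted Langevin Algorithm with constant step size.

    Iterates X_{k+1} = X_k - step * grad_U(X_k) + sqrt(2*step) * Z_{k+1}
    and returns the full trajectory, shape (n_iter + 1, d).
    """
    d = x0.shape[0]
    xs = np.empty((n_iter + 1, d))
    xs[0] = x0
    x = x0
    for k in range(n_iter):
        x = x - step * grad_U(x) + np.sqrt(2.0 * step) * rng.standard_normal(d)
        xs[k + 1] = x
    return xs

rng = np.random.default_rng(0)
d = 10
samples = ula(grad_U, np.full(d, 5.0), step=0.1, n_iter=5000, rng=rng)

# After a burn-in period, the empirical mean should be close to 0,
# the mean of the Gaussian target (up to the discretization bias that
# the paper quantifies for constant step sizes).
post = samples[1000:]
print(np.abs(post.mean(axis=0)).max())
```

Note that, as the paper's title indicates, no Metropolis-Hastings accept/reject correction is applied, so with a constant step size the chain targets a biased approximation of $\pi$; the non-asymptotic bounds in the paper control this bias in Wasserstein and total variation distances.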
Document type: Preprint, working paper
2016
Cited literature [39 references]

https://hal.archives-ouvertes.fr/hal-01304430
Contributor: Alain Durmus
Submitted on: Friday, December 9, 2016 - 03:24:34
Last modified on: Thursday, May 10, 2018 - 02:04:21
Document(s) archived on: Thursday, March 23, 2017 - 09:04:00

### File

main.pdf
Files produced by the author(s)

### Identifiers

• HAL Id : hal-01304430, version 2

### Citation

Alain Durmus, Eric Moulines. High-dimensional Bayesian inference via the Unadjusted Langevin Algorithm. 2016. 〈hal-01304430v2〉

### Metrics

Record views: 549

File downloads