
Convergence of Langevin MCMC in KL-divergence

lib:3d4a3e9385cdd3bf (v1.0.0)

Authors: Xiang Cheng, Peter Bartlett
arXiv: 1705.09048
Abstract URL: http://arxiv.org/abs/1705.09048v2


Langevin diffusion is a commonly used tool for sampling from a given distribution. In this work, we establish that when the target density $p^*$ is such that $\log p^*$ is $L$-smooth and $m$-strongly convex, discrete Langevin diffusion produces a distribution $p$ with $KL(p\|p^*)\leq \epsilon$ in $\tilde{O}(\frac{d}{\epsilon})$ steps, where $d$ is the dimension of the sample space. We also study the convergence rate when the strong-convexity assumption is absent. By considering the Langevin diffusion as a gradient flow in the space of probability distributions, we obtain an elegant analysis that applies to the stronger property of convergence in KL-divergence and gives a conceptually simpler proof of the best-known convergence results in weaker metrics.
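The discrete Langevin diffusion in the abstract is commonly known as the unadjusted Langevin algorithm (ULA): with $U = -\log p^*$, step size $\eta$, and i.i.d. noise $\xi_k \sim \mathcal{N}(0, I_d)$, it iterates $x_{k+1} = x_k - \eta \nabla U(x_k) + \sqrt{2\eta}\,\xi_k$. The gradient-flow view alluded to above is that the continuous-time diffusion's Fokker-Planck equation, $\partial_t p_t = \nabla \cdot (p_t \nabla U) + \Delta p_t$, is the Wasserstein-2 gradient flow of $KL(p\|p^*)$. Below is a minimal Python/NumPy sketch of ULA, assuming a hypothetical Gaussian target $\mathcal{N}(0, \Sigma)$ (for which $U$ is $L$-smooth and $m$-strongly convex with $L = \lambda_{\max}(\Sigma^{-1})$, $m = \lambda_{\min}(\Sigma^{-1})$); the name ula_sample and all parameter values are illustrative, not taken from the paper.

    import numpy as np

    def ula_sample(grad_U, x0, step_size, n_steps, rng):
        """Run ULA: x_{k+1} = x_k - eta * grad_U(x_k) + sqrt(2*eta) * xi_k."""
        x = np.array(x0, dtype=float)
        d = x.shape[0]
        for _ in range(n_steps):
            noise = rng.standard_normal(d)  # xi_k ~ N(0, I_d)
            x = x - step_size * grad_U(x) + np.sqrt(2.0 * step_size) * noise
        return x

    # Illustrative target: N(0, Sigma) with Sigma = diag(1, 4),
    # so U(x) = 0.5 * x^T Sigma^{-1} x (up to an additive constant).
    Sigma_inv = np.diag([1.0, 0.25])
    grad_U = lambda x: Sigma_inv @ x  # gradient of the negative log-density

    rng = np.random.default_rng(0)
    samples = np.stack([
        ula_sample(grad_U, x0=np.zeros(2), step_size=0.05, n_steps=500, rng=rng)
        for _ in range(2000)
    ])
    print(samples.var(axis=0))  # roughly diag(Sigma) = (1, 4) for small eta

The empirical per-coordinate variances should approach $\mathrm{diag}(\Sigma)$ up to a discretization bias that vanishes as the step size $\eta \to 0$, consistent with the paper's guarantee that the law of the iterates converges to $p^*$ in KL-divergence.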

Relevant initiatives  

Related knowledge about this paper:
- Reproduced results (crowd-benchmarking and competitions)
- Artifact and reproducibility checklists
- Common formats for research projects and shared artifacts
- Reproducibility initiatives
