Two popular classes of methods for approximate inference are Markov chain Monte Carlo (MCMC) and variational inference. MCMC tends to be accurate if run for a long enough time, while variational inference tends to give better approximations at shorter time horizons. However, the amount of time needed for MCMC to exceed the performance of variational methods can be quite long, motivating more fine-grained tradeoffs. This paper derives a distribution over variational parameters, designed to minimize a bound on the divergence between the resulting marginal distribution and the target, and gives an example of how to sample from this distribution in a way that interpolates between the behavior of existing methods based on Langevin dynamics and stochastic gradient variational inference (SGVI).
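To make the two endpoints the abstract interpolates between concrete, here is a minimal sketch on a toy 1-D Gaussian target: an unadjusted Langevin step on the latent variable, a single-sample reparameterized SGVI step on Gaussian variational parameters, and a crude noisy update on the variational mean with a knob `beta`. This is an illustration only, not the paper's divergence-bound construction; the target, step sizes, and the `beta` knob are all invented for the example.

import numpy as np

rng = np.random.default_rng(0)

def grad_logp(z, mu_p=2.0, sigma_p=1.5):
    # Gradient of log N(z | mu_p, sigma_p^2), a toy target (illustrative).
    return -(z - mu_p) / sigma_p**2

def langevin_step(z, eps=0.05):
    # Endpoint 1: unadjusted Langevin dynamics on the latent variable z.
    return z + 0.5 * eps * grad_logp(z) + np.sqrt(eps) * rng.standard_normal()

def sgvi_step(mu, log_sig, lr=0.05):
    # Endpoint 2: SGVI with q(z) = N(mu, exp(log_sig)^2), using a
    # single-sample reparameterized ELBO gradient, z = mu + sig * xi.
    xi = rng.standard_normal()
    sig = np.exp(log_sig)
    g = grad_logp(mu + sig * xi)
    mu_new = mu + lr * g
    # Gaussian entropy contributes d/dlog_sig H[q] = 1.
    log_sig_new = log_sig + lr * (g * sig * xi + 1.0)
    return mu_new, log_sig_new

def hybrid_mean_step(mu, sig, eps=0.05, beta=0.5):
    # A crude interpolation (my illustration, not the paper's method):
    # beta = 0 gives a deterministic SGVI-like mean update; beta = 1 with
    # sig -> 0 behaves like a Langevin step on z.
    z = mu + sig * rng.standard_normal()
    return mu + 0.5 * eps * grad_logp(z) + beta * np.sqrt(eps) * rng.standard_normal()

# Example usage: run each endpoint from the same initialization.
z = 0.0
for _ in range(1000):
    z = langevin_step(z)            # samples roughly follow p(z)
mu, log_sig = 0.0, 0.0
for _ in range(1000):
    mu, log_sig = sgvi_step(mu, log_sig)  # converges near (2.0, log 1.5)

The point of the sketch is only the contrast: Langevin dynamics produces a stream of correlated samples from (approximately) the target, while SGVI converges to a single set of variational parameters; the paper's contribution is a principled distribution over such parameters whose marginal is controlled by a divergence bound.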
Justin Domke (University of Massachusetts, Amherst)
Related Events
2017 Talk: A Divergence Bound for Hybrids of MCMC and Variational Inference and an Application to Langevin Dynamics and SGVI
Tue Aug 8th 04:06 -- 04:24 AM, Room C4.9 & C4.10