Poster
Quasi-Monte Carlo Variational Inference
Alexander Buchholz · Florian Wenzel · Stephan Mandt

Wed Jul 11 09:15 AM -- 12:00 PM (PDT) @ Hall B #54

Many machine learning problems involve Monte Carlo gradient estimators. As a prominent example, we focus on Monte Carlo variational inference (MCVI) in this paper. The performance of MCVI crucially depends on the variance of its stochastic gradients. We propose variance reduction by means of Quasi-Monte Carlo (QMC) sampling. QMC replaces N i.i.d. samples from a uniform probability distribution by a deterministic sequence of samples of length N. This sequence covers the underlying random variable space more evenly than i.i.d. draws, reducing the variance of the gradient estimator. With our novel approach, both the score function and the reparameterization gradient estimators lead to much faster convergence. We also propose a new algorithm for Monte Carlo objectives, where we operate with a constant learning rate and increase the number of QMC samples per iteration. We prove that, in this way, our algorithm converges asymptotically at a faster rate than SGD. We furthermore provide theoretical guarantees on QMC for Monte Carlo objectives that go beyond MCVI, and support our findings by several experiments on large-scale data sets from various domains.
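To make the core idea concrete, here is a minimal sketch (not the authors' code) of swapping i.i.d. uniform draws for a low-discrepancy sequence in a reparameterization-style estimator. It uses SciPy's scrambled Sobol generator as the QMC source; the integrand f, and the values of mu, sigma, dim, and n are placeholder choices for illustration, assuming a Gaussian variational distribution reparameterized via the inverse CDF.

```python
# Minimal sketch: plain Monte Carlo vs. randomized Quasi-Monte Carlo
# for estimating E[f(mu + sigma * z)], z ~ N(0, I), as in the
# reparameterization trick. Placeholder integrand and parameters.
import numpy as np
from scipy.stats import norm, qmc

def f(x):
    # Stand-in for the per-sample term of a variational objective.
    return np.sum(x ** 2, axis=-1)

dim, n = 5, 64          # n is a power of 2, as Sobol sequences prefer
mu, sigma = 0.5, 1.0    # true value of the expectation: dim * (mu^2 + sigma^2) = 6.25

# Plain Monte Carlo: n i.i.d. uniform draws.
u_mc = np.random.rand(n, dim)

# QMC: a low-discrepancy sequence of the same length. Scrambling
# randomizes the points while preserving their even coverage.
u_qmc = qmc.Sobol(d=dim, scramble=True).random(n)

# Map uniforms to Gaussians via the inverse CDF, then reparameterize.
est_mc = f(mu + sigma * norm.ppf(u_mc)).mean()
est_qmc = f(mu + sigma * norm.ppf(u_qmc)).mean()
print(f"MC estimate:  {est_mc:.4f}")
print(f"QMC estimate: {est_qmc:.4f}")  # typically closer to 6.25
```

Scrambling the Sobol points (rather than using the raw deterministic sequence) keeps the estimator unbiased while retaining the variance reduction from even coverage; the same substitution applies wherever a gradient estimator consumes uniform random numbers.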

Author Information

Alexander Buchholz (ENSAE-CREST Paris)
Florian Wenzel (University of Kaiserslautern)
Stephan Mandt (UC Irvine)

I am a research scientist at Disney Research Pittsburgh, where I lead the statistical machine learning group. From 2014 to 2016 I was a postdoctoral researcher with David Blei at Columbia University, and from 2012 to 2014 a PCCM Postdoctoral Fellow at Princeton University. I did my Ph.D. with Achim Rosch at the Institute for Theoretical Physics at the University of Cologne, where I was supported by the German National Merit Scholarship.
