Oral
Quasi-Monte Carlo Variational Inference
Alexander Buchholz · Florian Wenzel · Stephan Mandt

Wed Jul 11 07:50 AM -- 08:00 AM (PDT) @ A4

Many machine learning problems involve Monte Carlo gradient estimators. As a prominent example, we focus on Monte Carlo variational inference (MCVI) in this paper. The performance of MCVI crucially depends on the variance of its stochastic gradients. We propose variance reduction by means of Quasi-Monte Carlo (QMC) sampling. QMC replaces N i.i.d. samples from a uniform probability distribution by a deterministic sequence of samples of length N. This sequence covers the underlying random variable space more evenly than i.i.d. draws, reducing the variance of the gradient estimator. With our novel approach, both the score function and the reparameterization gradient estimators lead to much faster convergence. We also propose a new algorithm for Monte Carlo objectives, where we operate with a constant learning rate and increase the number of QMC samples per iteration. We prove that this way, our algorithm can converge asymptotically at a faster rate than SGD. We furthermore provide theoretical guarantees on QMC for Monte Carlo objectives that go beyond MCVI, and support our findings by several experiments on large-scale data sets from various domains.
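To make the core idea concrete, here is a minimal sketch (not the authors' implementation) of swapping N i.i.d. uniforms for a scrambled Sobol QMC sequence inside a reparameterization gradient estimator for a diagonal-Gaussian variational family, using SciPy's scipy.stats.qmc module. The function name reparam_gradient and the grad_log_joint callback are illustrative assumptions, not names from the paper.

    # Illustrative sketch only: QMC variance reduction for the
    # reparameterization gradient of q(z) = N(mu, diag(sigma^2)).
    import numpy as np
    from scipy.stats import norm, qmc

    def reparam_gradient(grad_log_joint, mu, log_sigma, n_samples=64, use_qmc=True):
        """Estimate ELBO gradients w.r.t. (mu, log_sigma).

        grad_log_joint: callable returning d/dz log p(x, z) at a point z
        (a hypothetical user-supplied model gradient).
        """
        d = mu.shape[0]
        if use_qmc:
            # Deterministic low-discrepancy sequence: covers [0,1)^d more
            # evenly than i.i.d. draws (scrambling keeps points off 0 and 1).
            u = qmc.Sobol(d=d, scramble=True).random(n_samples)
        else:
            u = np.random.rand(n_samples, d)  # plain Monte Carlo baseline
        eps = norm.ppf(u)                     # inverse CDF: uniforms -> N(0, I)
        sigma = np.exp(log_sigma)
        z = mu + sigma * eps                  # reparameterization trick
        g = np.stack([grad_log_joint(zi) for zi in z])
        grad_mu = g.mean(axis=0)
        # d/d(log sigma): model term plus the Gaussian entropy gradient (= 1).
        grad_log_sigma = (g * eps).mean(axis=0) * sigma + 1.0
        return grad_mu, grad_log_sigma

    # Tiny check against a standard-normal target, where grad log p(z) = -z:
    # both gradients should be near zero at the optimum mu=0, log_sigma=0.
    mu0, ls0 = np.zeros(2), np.zeros(2)
    print(reparam_gradient(lambda z: -z, mu0, ls0))

Toggling use_qmc in this sketch lets one compare the spread of repeated gradient estimates, which is the variance-reduction effect the abstract claims; the paper's constant-learning-rate algorithm additionally grows n_samples across iterations.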

Author Information

Alexander Buchholz (ENSAE-CREST Paris)
Florian Wenzel (University of Kaiserslautern)
Stephan Mandt (UC Irvine)

I am a research scientist at Disney Research Pittsburgh, where I lead the statistical machine learning group. From 2014 to 2016 I was a postdoctoral researcher with David Blei at Columbia University, and a PCCM Postdoctoral Fellow at Princeton University from 2012 to 2014. I did my Ph.D. with Achim Rosch at the Institute for Theoretical Physics at the University of Cologne, where I was supported by the German National Merit Scholarship.
