Poster
Adaptive Antithetic Sampling for Variance Reduction
Hongyu Ren · Shengjia Zhao · Stefano Ermon
Pacific Ballroom #205
Keywords: [ Approximate Inference ] [ Bayesian Deep Learning ] [ Bayesian Methods ] [ Deep Generative Models ] [ Monte Carlo Methods ]
Variance reduction is crucial in stochastic estimation and optimization problems. Antithetic sampling reduces the variance of a Monte Carlo estimator by drawing correlated, rather than independent, samples. However, designing an effective correlation structure is challenging and application specific, thus limiting the practical applicability of these methods. In this paper, we propose a general-purpose adaptive antithetic sampling framework. We provide gradient-based and gradient-free methods to train the samplers such that they reduce variance while ensuring that the underlying Monte Carlo estimator is provably unbiased. We demonstrate the effectiveness of our approach on Bayesian inference and generative model training, where it reduces variance and improves task performance with little computational overhead.
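To make the abstract's starting point concrete, below is a minimal sketch of classic (non-adaptive) antithetic sampling, the baseline idea the paper generalizes. The integrand f and the Gaussian proposal are illustrative assumptions, not the authors' setup; the paper's contribution is learning the correlation structure adaptively while keeping the estimator provably unbiased.

```python
import numpy as np

def f(x):
    # Example integrand: estimate E[exp(x)] for x ~ N(0, 1); the true value is e^{0.5}.
    return np.exp(x)

rng = np.random.default_rng(0)
n = 10_000  # number of sample pairs

# Plain Monte Carlo: 2n i.i.d. draws.
x_iid = rng.standard_normal(2 * n)
est_iid = f(x_iid).mean()

# Antithetic sampling: draw n samples and pair each draw x with its negation -x.
# Since x and -x are negatively correlated and f is monotone, averaging f(x) and
# f(-x) lowers the variance of the estimate without introducing bias.
x = rng.standard_normal(n)
est_anti = (0.5 * (f(x) + f(-x))).mean()

print(f"true value           : {np.exp(0.5):.4f}")
print(f"i.i.d. estimate      : {est_iid:.4f}")
print(f"antithetic estimate  : {est_anti:.4f}")
```

The hand-designed pairing (x, -x) only helps for particular integrands, which is the application-specific limitation the abstract points to; the proposed framework instead trains the sampler to choose the correlation structure.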