

Poster

Oops I Took A Gradient: Scalable Sampling for Discrete Distributions

Will Grathwohl · Kevin Swersky · Milad Hashemi · David Duvenaud · Chris Maddison

Virtual

Keywords: [ Generative Models ]

Outstanding Paper Honorable Mention

Abstract:

We propose a general and scalable approximate sampling strategy for probabilistic models with discrete variables. Our approach uses gradients of the likelihood function with respect to its discrete inputs to propose updates in a Metropolis-Hastings sampler. We show empirically that this approach outperforms generic samplers in a number of difficult settings including Ising models, Potts models, restricted Boltzmann machines, and factorial hidden Markov models. We also demonstrate our improved sampler for training deep energy-based models on high-dimensional discrete image data. This approach outperforms variational auto-encoders and existing energy-based models. Finally, we give bounds showing that our approach is near-optimal in the class of samplers that propose local updates.
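To make the proposal mechanism concrete, the following is a minimal sketch (not the authors' reference implementation) of one gradient-informed Metropolis-Hastings step on a binary model, written in JAX. A first-order Taylor estimate of the change in log-probability from flipping each bit is computed from the gradient, a flip index is sampled in proportion to those estimates, and the move is accepted or rejected with the standard MH correction. The energy function `ising_energy`, the helper names `log_prob` and `gwg_step`, the chain length, and all constants (coupling `J`, 16 sites, 100 steps) are illustrative assumptions, not values from the paper.

```python
import jax
import jax.numpy as jnp

def ising_energy(x, J=0.5):
    # Energy (negative log-probability up to a constant) of a 1-D Ising chain
    # with coupling J; x holds {0, 1} values stored as floats so the energy
    # can be differentiated with respect to them.
    s = 2.0 * x - 1.0                      # map {0, 1} -> {-1, +1}
    return -J * jnp.sum(s[:-1] * s[1:])

def log_prob(x):
    return -ising_energy(x)

def gwg_step(key, x):
    """One gradient-informed MH step: sample a bit to flip with probability
    proportional to exp(estimated change in log_prob / 2), then accept or
    reject the flip with the usual Metropolis-Hastings correction."""
    key_i, key_u = jax.random.split(key)

    grad = jax.grad(log_prob)(x)
    # Taylor estimate of the change in log_prob from flipping bit i:
    # (1 - 2 x_i) * grad_i, since x_i -> 1 - x_i changes x_i by (1 - 2 x_i).
    d_fwd = (1.0 - 2.0 * x) * grad
    logits_fwd = d_fwd / 2.0
    i = jax.random.categorical(key_i, logits_fwd)

    x_prop = x.at[i].set(1.0 - x[i])

    # Reverse-proposal logits: probability of picking index i again from x_prop.
    grad_prop = jax.grad(log_prob)(x_prop)
    logits_rev = (1.0 - 2.0 * x_prop) * grad_prop / 2.0

    log_q_fwd = jax.nn.log_softmax(logits_fwd)[i]
    log_q_rev = jax.nn.log_softmax(logits_rev)[i]
    log_alpha = log_prob(x_prop) - log_prob(x) + log_q_rev - log_q_fwd

    accept = jnp.log(jax.random.uniform(key_u)) < log_alpha
    return jnp.where(accept, x_prop, x)

# Usage sketch: run a short chain on a 16-site chain.
key = jax.random.PRNGKey(0)
x = jnp.zeros(16)
for t in range(100):
    key, sub = jax.random.split(key)
    x = gwg_step(sub, x)
```

Because the flip index is chosen from a softmax over the gradient-estimated log-probability changes, the proposal concentrates on the dimensions where a move is most likely to be accepted, which is what distinguishes this scheme from a uniform single-site Gibbs or random-flip proposal.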
