Variational Autoencoders (VAEs) are expressive latent variable models that can be used to learn complex probability distributions from training data. However, the quality of the resulting model crucially depends on the expressiveness of the inference model. We introduce Adversarial Variational Bayes (AVB), a technique for training Variational Autoencoders with arbitrarily expressive inference models. We achieve this by introducing an auxiliary discriminative network that allows us to rephrase the maximum-likelihood problem as a two-player game, thereby establishing a principled connection between VAEs and Generative Adversarial Networks (GANs). We show that in the nonparametric limit our method yields an exact maximum-likelihood assignment for the parameters of the generative model, as well as the exact posterior distribution over the latent variables given an observation. In contrast to competing approaches that combine VAEs with GANs, our approach has a clear theoretical justification, retains most of the advantages of standard Variational Autoencoders, and is easy to implement.
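To make the two-player game concrete, below is a minimal PyTorch sketch of one AVB training step. All network architectures, sizes, and hyperparameters here are illustrative assumptions, not the authors' reference implementation. The inference model is implicit: the encoder maps an input together with auxiliary noise to a latent code, so its density q(z|x) is never evaluated. Instead, a discriminator T(x, z) is trained to separate pairs (x, z ~ q(z|x)) from (x, z ~ p(z)); at its optimum T(x, z) = log q(z|x) - log p(z), so it can stand in for the intractable KL term of the ELBO.

```python
# Minimal AVB sketch (hypothetical shapes and names, not the authors' code).
import torch
import torch.nn as nn
import torch.nn.functional as F

x_dim, z_dim, eps_dim, h = 784, 8, 8, 128  # illustrative dimensions

# Implicit encoder q(z|x): pushes (x, noise) through a network to get z.
encoder = nn.Sequential(nn.Linear(x_dim + eps_dim, h), nn.ReLU(), nn.Linear(h, z_dim))
decoder = nn.Sequential(nn.Linear(z_dim, h), nn.ReLU(), nn.Linear(h, x_dim))
# Discriminator T(x, z): estimates log q(z|x) - log p(z) at optimality.
disc = nn.Sequential(nn.Linear(x_dim + z_dim, h), nn.ReLU(), nn.Linear(h, 1))

opt_vae = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-4)
opt_disc = torch.optim.Adam(disc.parameters(), lr=1e-4)

def avb_step(x):
    b = x.size(0)
    # Sample from the implicit posterior q(z|x) and from the prior p(z).
    eps = torch.randn(b, eps_dim)
    z_q = encoder(torch.cat([x, eps], dim=1))
    z_p = torch.randn(b, z_dim)

    # Player 1: train T(x, z) to classify (x, z~q(z|x)) vs. (x, z~p(z)).
    # z_q is detached so this step updates only the discriminator.
    t_q = disc(torch.cat([x, z_q.detach()], dim=1))
    t_p = disc(torch.cat([x, z_p], dim=1))
    loss_d = (F.binary_cross_entropy_with_logits(t_q, torch.ones_like(t_q))
              + F.binary_cross_entropy_with_logits(t_p, torch.zeros_like(t_p)))
    opt_disc.zero_grad()
    loss_d.backward()
    opt_disc.step()

    # Player 2: train encoder/decoder on the ELBO, with T(x, z) standing in
    # for the KL term log q(z|x) - log p(z).
    kl_est = disc(torch.cat([x, z_q], dim=1)).mean()
    recon = F.binary_cross_entropy_with_logits(decoder(z_q), x, reduction='sum') / b
    loss_vae = kl_est + recon
    opt_vae.zero_grad()
    loss_vae.backward()
    opt_vae.step()
    return loss_d.item(), loss_vae.item()

# Usage on a batch of (flattened, [0,1]-valued) inputs:
# x = torch.rand(64, x_dim); d_loss, vae_loss = avb_step(x)
```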
Author Information
Lars Mescheder (MPI Tübingen)
Sebastian Nowozin (Microsoft Research)
Andreas Geiger (MPI Tübingen)
Related Events (a corresponding poster, oral, or spotlight)
- 2017 Talk: Adversarial Variational Bayes: Unifying Variational Autoencoders and Generative Adversarial Networks
Mon. Aug 7th 01:24 -- 01:42 AM Room Parkside 1
More from the Same Authors
- 2018 Poster: Which Training Methods for GANs do actually Converge?
  Lars Mescheder · Andreas Geiger · Sebastian Nowozin
- 2018 Oral: Which Training Methods for GANs do actually Converge?
  Lars Mescheder · Andreas Geiger · Sebastian Nowozin