Talk in Workshop: INNF+: Invertible Neural Networks, Normalizing Flows, and Explicit Likelihood Models

Invited talk 5: Adversarial Learning of Prescribed Generative Models

Adji Bousso Dieng


Abstract:

Parameterizing latent variable models with deep neural networks has become a major approach to probabilistic modeling. The usual way of fitting these deep latent-variable models is maximum likelihood, which gives rise to variational autoencoders (VAEs): they jointly learn an approximate posterior distribution over the latent variables and the model parameters by maximizing a lower bound on the log marginal likelihood of the data. In this talk, I will present an alternative approach to fitting the parameters of deep latent-variable models. The idea is to marry adversarial learning with entropy regularization. The family of models fit with this procedure is called Prescribed Generative Adversarial Networks (PresGANs). I will describe PresGANs and discuss how they generate samples with high perceptual quality while avoiding the mode collapse issue that is ubiquitous in GANs.
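For orientation, here is a minimal sketch of the two objectives the abstract contrasts. The first line is the standard VAE evidence lower bound; the second is schematic, based only on the abstract's description of marrying adversarial learning with entropy regularization (the entropy weight \lambda and the notation \mathcal{L}_{\text{adv}} are assumptions here; the exact PresGAN loss and its entropy estimator are specified in the accompanying paper):

\log p_\theta(x) \;\ge\; \mathbb{E}_{q_\phi(z \mid x)}\!\big[\log p_\theta(x \mid z)\big] \;-\; \mathrm{KL}\!\big(q_\phi(z \mid x)\,\|\,p(z)\big) \quad \text{(VAE: maximize this lower bound)}

\mathcal{L}_{\text{PresGAN}}(\theta) \;=\; \mathcal{L}_{\text{adv}}(\theta) \;-\; \lambda\,\mathcal{H}\big(p_\theta(x)\big), \qquad \mathcal{H}(p_\theta) \;=\; -\,\mathbb{E}_{p_\theta(x)}\big[\log p_\theta(x)\big] \quad \text{(adversarial loss plus entropy regularizer)}

Intuitively, a generator that collapses onto a few modes produces a low-entropy distribution, so penalizing low entropy is what lets this objective target mode collapse.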
