Poster
Semi-Amortized Variational Autoencoders
Yoon Kim · Sam Wiseman · Andrew Miller · David Sontag · Alexander Rush

Fri Jul 13 09:15 AM -- 12:00 PM (PDT) @ Hall B #134

Amortized variational inference (AVI) replaces instance-specific local inference with a global inference network. While AVI has enabled efficient training of deep generative models such as variational autoencoders (VAEs), recent empirical work suggests that inference networks can produce suboptimal variational parameters. We propose a hybrid approach that uses AVI to initialize the variational parameters and then runs stochastic variational inference (SVI) to refine them. Crucially, the local SVI procedure is itself differentiable, so the inference network and generative model can be trained end-to-end with gradient-based optimization. This semi-amortized approach enables the use of rich generative models without experiencing the posterior-collapse phenomenon common when training VAEs for problems like text generation. Experiments show this approach outperforms strong autoregressive and variational baselines on standard text and image datasets.
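The refinement procedure described in the abstract lends itself to a compact illustration. Below is a minimal PyTorch sketch of one semi-amortized objective, assuming a diagonal Gaussian variational posterior and hypothetical encoder (returning mu, logvar) and decoder (returning log p(x|z) per example) modules; this is a simplified reading of the abstract, not the paper's exact implementation, which uses additional techniques for efficiency.

import torch

def elbo_estimate(x, mu, logvar, decoder):
    # Single-sample reparameterized ELBO with a standard normal prior.
    z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
    log_px_z = decoder(x, z)  # hypothetical: returns log p(x | z) per example
    kl = 0.5 * (mu.pow(2) + logvar.exp() - 1.0 - logvar).sum(-1)
    return (log_px_z - kl).mean()

def semi_amortized_elbo(x, encoder, decoder, svi_steps=5, svi_lr=1.0):
    # 1) Amortized initialization: the inference network proposes mu, logvar.
    mu, logvar = encoder(x)  # hypothetical module
    # 2) Local SVI refinement: a few gradient-ascent steps on the ELBO with
    #    respect to the variational parameters themselves. create_graph=True
    #    keeps the steps differentiable, so the inference network can be
    #    trained end-to-end by backpropagating through the refinement.
    for _ in range(svi_steps):
        elbo = elbo_estimate(x, mu, logvar, decoder)
        g_mu, g_logvar = torch.autograd.grad(
            elbo, (mu, logvar), create_graph=True
        )
        mu = mu + svi_lr * g_mu
        logvar = logvar + svi_lr * g_logvar
    # 3) Final objective at the refined parameters; negate for a
    #    standard minimizing optimizer.
    return elbo_estimate(x, mu, logvar, decoder)

In a training loop one would compute loss = -semi_amortized_elbo(x, encoder, decoder) and call loss.backward(), so that gradients reach both the generative model and, through the unrolled SVI steps, the inference network.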

Author Information

Yoon Kim (Harvard University)
Sam Wiseman (Harvard University)
Andrew Miller (Harvard University)
David Sontag (Massachusetts Institute of Technology)
Alexander Rush (Harvard University)
