

Poster

The Usual Suspects? Reassessing Blame for VAE Posterior Collapse

Bin Dai · Ziyu Wang · David Wipf

Keywords: [ Bayesian Methods ] [ Deep Generative Models ] [ Generative Models ] [ Autoencoders ] [ Probabilistic Inference - Models and Probabilistic Programming ]


Abstract:

In narrow asymptotic settings, Gaussian VAE models of continuous data have been shown to possess global optima aligned with ground-truth distributions. Even so, it is well known that poor solutions whereby the latent posterior collapses to an uninformative prior are sometimes obtained in practice. However, contrary to conventional wisdom that largely assigns blame for this phenomenon to the undue influence of KL-divergence regularization, we will argue that posterior collapse is, at least in part, a direct consequence of bad local minima inherent to the loss surface of deep autoencoder networks. In particular, we prove that even small nonlinear perturbations of affine VAE decoder models can produce such minima, and in deeper models, analogous minima can force the VAE to behave like an aggressive truncation operator, provably discarding information along all latent dimensions in certain circumstances. Regardless, the underlying message here is not meant to undercut valuable existing explanations of posterior collapse, but rather to refine the discussion and elucidate alternative risk factors that may have been previously underappreciated.
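For readers who want the objective being discussed made concrete, the following is a minimal PyTorch sketch of the standard Gaussian VAE negative ELBO with its closed-form KL term, kept per latent dimension so that posterior collapse is directly visible (a collapsed dimension has per-dimension KL near zero, meaning its posterior has matched the uninformative prior). The function name and the toy usage are illustrative assumptions, not code from the paper.

```python
import torch
import torch.nn.functional as F

def gaussian_vae_loss(x, x_recon, mu, logvar):
    """Negative ELBO for a Gaussian VAE with a N(0, I) prior.

    x, x_recon: (batch, d) data and decoder mean
    mu, logvar: (batch, k) parameters of q(z|x) = N(mu, diag(exp(logvar)))
    """
    # Reconstruction term (fixed-variance Gaussian decoder, up to constants)
    recon = F.mse_loss(x_recon, x, reduction="sum")
    # Closed-form KL(q(z|x) || N(0, I)), kept separate for each latent dimension
    kl_per_dim = 0.5 * (mu.pow(2) + logvar.exp() - 1.0 - logvar)
    return recon + kl_per_dim.sum(), kl_per_dim

if __name__ == "__main__":
    # Collapse diagnostic: a latent dimension j is collapsed when its average
    # KL over the batch is ~0, i.e. q(z_j|x) carries no information about x.
    torch.manual_seed(0)
    x = torch.randn(32, 10)
    mu = torch.zeros(32, 4)      # a fully collapsed posterior: q(z|x) = N(0, I)
    logvar = torch.zeros(32, 4)
    loss, kl = gaussian_vae_loss(x, x, mu, logvar)
    print(kl.mean(dim=0))        # all ~0 -> every dimension is collapsed
```

Note that this diagnostic is agnostic to the cause of collapse; the paper's point is that such near-zero KL states can arise as bad local minima of the autoencoder loss surface itself, not only from the KL regularizer overpowering reconstruction.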
