Inference Suboptimality in Variational Autoencoders
Chris Cremer · Xuechen Li · David Duvenaud

Wed Jul 11 09:15 AM -- 12:00 PM (PDT) @ Hall B #176

Amortized inference allows latent-variable models trained via variational learning to scale to large datasets. The quality of approximate inference is determined by two factors: a) the capacity of the variational distribution to match the true posterior and b) the ability of the recognition network to produce good variational parameters for each datapoint. We examine approximate inference in variational autoencoders in terms of these factors. We find that divergence from the true posterior is often due to imperfect recognition networks, rather than the limited complexity of the approximating distribution. We show that this is due partly to the generator learning to accommodate the choice of approximation. Furthermore, we show that the parameters used to increase the expressiveness of the approximation play a role in generalizing inference rather than simply improving the complexity of the approximation.
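The two factors above correspond to two additive gaps in the evidence lower bound: an approximation gap (the variational family cannot reach the true posterior) and an amortization gap (the recognition network produces suboptimal parameters for a given datapoint). This is not the paper's experimental setup, but a minimal sketch of the decomposition in a toy conjugate-Gaussian model, where the ELBO, the log marginal likelihood, and both gaps are all computable in closed form (the crude encoder mu = x is a deliberately suboptimal stand-in for a recognition network):

```python
import numpy as np

def elbo(x, mu, s2, sigma2):
    """Closed-form ELBO for p(z)=N(0,1), p(x|z)=N(z, sigma2), q(z)=N(mu, s2)."""
    exp_ll = -((x - mu) ** 2 + s2) / (2 * sigma2) - 0.5 * np.log(2 * np.pi * sigma2)
    exp_prior = -0.5 * (mu ** 2 + s2) - 0.5 * np.log(2 * np.pi)
    entropy = 0.5 * np.log(2 * np.pi * np.e * s2)
    return exp_ll + exp_prior + entropy

def log_marginal(x, sigma2):
    # Exact log p(x): marginally, x ~ N(0, 1 + sigma2) in this toy model.
    var = 1.0 + sigma2
    return -x ** 2 / (2 * var) - 0.5 * np.log(2 * np.pi * var)

sigma2, x = 0.5, 2.0

# (b) Amortized inference with a deliberately crude "encoder": mu = x, s2 = sigma2.
elbo_amortized = elbo(x, mu=x, s2=sigma2, sigma2=sigma2)

# (a) Best member of the Gaussian family. Here the family contains the true
# posterior N(x/(1+sigma2), sigma2/(1+sigma2)), so the approximation gap is zero.
mu_star = x / (1.0 + sigma2)
s2_star = sigma2 / (1.0 + sigma2)
elbo_optimal = elbo(x, mu_star, s2_star, sigma2=sigma2)

amortization_gap = elbo_optimal - elbo_amortized          # caused by the encoder
approximation_gap = log_marginal(x, sigma2) - elbo_optimal  # caused by the family
```

In this toy model the entire shortfall in the bound is an amortization gap (amortization_gap > 0 while approximation_gap = 0), mirroring the paper's finding that imperfect recognition networks, rather than a limited variational family, often dominate the divergence from the true posterior.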

Author Information

Chris Cremer (University of Toronto)
Xuechen Li (University of Toronto)
David Duvenaud (University of Toronto)
