

Oral

Variational Laplace Autoencoders

Yookoon Park · Chris Kim · Gunhee Kim

Abstract:

Variational autoencoders employ an amortized inference model to predict the approximate posterior of latent variables. However, such amortized variational inference (AVI) faces two challenges: 1) the limited expressiveness of the fully-factorized Gaussian posterior assumption and 2) the amortization error of the inference model. We propose an extended model named Variational Laplace Autoencoders that overcomes both challenges and improves the training of deep generative models. Specifically, we start from a class of neural networks with rectified linear activations and Gaussian output, and make a connection to probabilistic PCA. From this, we derive iterative update equations that find the mode of the posterior and define a local full-covariance Gaussian approximation centered at that mode. From the perspective of the Laplace approximation, we then present a generalization to differentiable output distributions and activation functions. Empirical results on MNIST, OMNIGLOT, FashionMNIST, SVHN and CIFAR10 show that the proposed approach significantly outperforms other amortized or iterative inference methods.
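The idea sketched in the abstract can be illustrated compactly: a ReLU decoder is piecewise linear, so around any point the generative model behaves like a linear-Gaussian model. The posterior mode can then be sought with fixed-point iterations of the probabilistic-PCA posterior mean, and a full-covariance Gaussian follows from the negative Hessian of the log posterior at that mode (the Laplace approximation). The snippet below is a minimal sketch of this idea, not the paper's exact algorithm; the decoder architecture, latent and output dimensions, iteration count, and the output variance `sigma2` are all assumptions made for the example.

```python
import torch

# Hypothetical decoder: a ReLU MLP with Gaussian output p(x|z) = N(f(z), sigma2 * I).
decoder = torch.nn.Sequential(
    torch.nn.Linear(2, 64), torch.nn.ReLU(), torch.nn.Linear(64, 8)
)
sigma2 = 0.1            # assumed output variance
x = torch.randn(8)      # a single (synthetic) observation

# Mode-seeking: a ReLU decoder is locally linear, f(z) ~= A z + b, so each
# iteration solves the linear-Gaussian (probabilistic-PCA) posterior mean
# under a standard normal prior on z.
z = torch.zeros(2)
for _ in range(4):                                            # a few fixed-point steps
    A = torch.autograd.functional.jacobian(decoder, z)        # local weight matrix
    b = decoder(z) - A @ z                                    # local bias
    z = torch.linalg.solve(A.T @ A + sigma2 * torch.eye(2), A.T @ (x - b))

# Laplace step: full-covariance Gaussian centered at the mode. For a piecewise
# linear decoder the negative Hessian of log p(z|x) is J^T J / sigma2 + I.
A = torch.autograd.functional.jacobian(decoder, z)
precision = A.T @ A / sigma2 + torch.eye(2)
cov = torch.linalg.inv(precision)
print("posterior mode:", z, "\nLaplace covariance:\n", cov)
```

Note the contrast with standard AVI: instead of an encoder network predicting a diagonal Gaussian in one shot, the approximate posterior here is computed per data point by iteration, and its covariance is full rather than factorized.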
