Variational Laplace Autoencoders
Yookoon Park · Chris Kim · Gunhee Kim

Thu Jun 13 09:20 AM -- 09:25 AM (PDT) @ Hall A

Variational autoencoders employ an amortized inference model to predict the approximate posterior of latent variables. However, such amortized variational inference (AVI) faces two challenges: 1) the limited expressiveness of the fully-factorized Gaussian posterior assumption and 2) the amortization error of the inference model. We propose an extended model, named Variational Laplace Autoencoders, that overcomes both challenges to improve the training of deep generative models. Specifically, we start from a class of rectified linear activation neural networks with Gaussian output and make a connection to probabilistic PCA. As a result, we derive iterative update equations that discover the mode of the posterior and define a local full-covariance Gaussian approximation centered at the mode. From the perspective of Laplace approximation, a generalization to a differentiable class of output distributions and activation functions is presented. Empirical results on MNIST, OMNIGLOT, FashionMNIST, SVHN and CIFAR10 show that the proposed approach significantly outperforms other amortized or iterative methods.
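To make the probabilistic PCA connection concrete, here is a minimal sketch (our own illustration, not the authors' code) of the special case the abstract builds on: with a linear Gaussian decoder x = Wz + b + ε, ε ~ N(0, σ²I) and prior z ~ N(0, I), the latent posterior is exactly Gaussian with full covariance, and the Laplace approximation at the mode recovers it in closed form. All variable names below are ours; the paper's method iterates a local version of this update for ReLU decoders.

```python
import numpy as np

# Assumed toy dimensions (illustrative only).
rng = np.random.default_rng(0)
d_x, d_z, sigma2 = 5, 2, 0.1
W = rng.normal(size=(d_x, d_z))   # decoder weights
b = rng.normal(size=d_x)          # decoder bias
x = rng.normal(size=d_x)          # an observation

# Log joint: -||x - Wz - b||^2 / (2 sigma2) - ||z||^2 / 2 + const.
# Its Hessian in z is -(W^T W / sigma2 + I), so the Laplace (here exact)
# posterior covariance is the inverse:
Sigma = np.linalg.inv(np.eye(d_z) + W.T @ W / sigma2)

# Setting the gradient to zero gives the posterior mode (= mean):
mu = Sigma @ W.T @ (x - b) / sigma2

# Sanity check: the gradient of the log joint vanishes at the mode.
grad = W.T @ (x - b - W @ mu) / sigma2 - mu
print(np.allclose(grad, 0.0))  # True
```

Note the covariance is a full d_z × d_z matrix, in contrast to the diagonal (fully-factorized) Gaussian that standard amortized inference assumes.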

Author Information

Yookoon Park (Seoul National University)
Chris Kim (Seoul National University)
Gunhee Kim (Seoul National University)
