Poster
On Implicit Regularization in $\beta$-VAEs
Abhishek Kumar · Ben Poole
Tue Jul 14 07:00 AM -- 07:45 AM & Tue Jul 14 06:00 PM -- 06:45 PM (PDT) @ Virtual
While the impact of variational inference (VI) on posterior inference in a fixed generative model is well-characterized, its role in regularizing a learned generative model when used in variational autoencoders (VAEs) is poorly understood.
We study the regularizing effects of variational distributions on learning in generative models from two perspectives. First, we analyze the role that the choice of variational family plays in imparting uniqueness to the learned model by restricting the set of optimal generative models. Second, we study the regularization effect of the variational family on the local geometry of the decoding model. This analysis uncovers the regularizer implicit in the $\beta$-VAE objective, and leads to an approximation consisting of a deterministic autoencoding objective plus analytic regularizers that depend on the Hessian or Jacobian of the decoding model, unifying VAEs with recent heuristics proposed for training regularized autoencoders. We empirically verify these findings, observing that the proposed deterministic objective exhibits similar behavior to the $\beta$-VAE in terms of objective value and sample quality.
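The deterministic approximation described in the abstract can be illustrated with a toy sketch: for a linear decoder, the expected reconstruction term of the β-VAE objective decomposes exactly into a deterministic reconstruction at the encoder mean plus a variance-weighted penalty on the decoder Jacobian. The setup below (linear decoder, hand-picked encoder parameters, function names) is purely illustrative and is not the paper's code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (illustrative, not from the paper): linear decoder g(z) = W z.
d_x, d_z = 5, 2
W = rng.normal(size=(d_x, d_z))
x = rng.normal(size=d_x)

beta = 4.0
mu = rng.normal(size=d_z)              # encoder mean of q(z|x) = N(mu, diag(sigma^2))
log_sigma = 0.1 * rng.normal(size=d_z)
sigma2 = np.exp(2.0 * log_sigma)

def neg_elbo_mc(n_samples=100_000):
    """Monte Carlo estimate of the beta-VAE objective (negative ELBO,
    unit-variance Gaussian decoder): E_q[||x - g(z)||^2]/2 + beta * KL."""
    z = mu + np.sqrt(sigma2) * rng.normal(size=(n_samples, d_z))
    recon = 0.5 * np.mean(np.sum((x - z @ W.T) ** 2, axis=1))
    kl = 0.5 * np.sum(sigma2 + mu**2 - 1.0 - 2.0 * log_sigma)
    return recon + beta * kl

def deterministic_objective():
    """Deterministic autoencoding objective plus an analytic Jacobian
    regularizer; exact here because the decoder is linear."""
    recon = 0.5 * np.sum((x - W @ mu) ** 2)
    J = W                                        # Jacobian of g at z = mu
    jac_penalty = 0.5 * np.sum(sigma2 * np.sum(J**2, axis=0))
    kl = 0.5 * np.sum(sigma2 + mu**2 - 1.0 - 2.0 * log_sigma)
    return recon + jac_penalty + beta * kl

print(neg_elbo_mc(), deterministic_objective())
```

For a nonlinear decoder the expansion is second-order rather than exact, which is where the Hessian-dependent terms mentioned in the abstract enter.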
Author Information
Abhishek Kumar (Google Brain)
Ben Poole (Google Brain)
More from the Same Authors
- 2023: Saving a Split for Last-layer Retraining can Improve Group Robustness without Group Annotations »
  Tyler LaBonte · Vidya Muthukumar · Abhishek Kumar
- 2023: To Aggregate or Not? Learning with Separate Noisy Labels »
  Jiaheng Wei · Zhaowei Zhu · Tianyi Luo · Ehsan Amid · Abhishek Kumar · Yang Liu
- 2021 Poster: Implicit rate-constrained optimization of non-decomposable objectives »
  Abhishek Kumar · Harikrishna Narasimhan · Andrew Cotter
- 2021 Spotlight: Implicit rate-constrained optimization of non-decomposable objectives »
  Abhishek Kumar · Harikrishna Narasimhan · Andrew Cotter
- 2020 Poster: Weakly-Supervised Disentanglement Without Compromises »
  Francesco Locatello · Ben Poole · Gunnar Ratsch · Bernhard Schölkopf · Olivier Bachem · Michael Tschannen
- 2019: Poster Session I »
  Nicholas Rhinehart · Yunhao Tang · Vinay Prabhu · Dian Ang Yap · Alexander Wang · Marc Finzi · Manoj Kumar · You Lu · Abhishek Kumar · Qi Lei · Michael Przystupa · Nicola De Cao · Polina Kirichenko · Pavel Izmailov · Andrew Wilson · Jakob Kruse · Diego Mesquita · Mario Lezcano Casado · Thomas Müller · Keir Simmons · Andrei Atanov
- 2019 Poster: On Variational Bounds of Mutual Information »
  Ben Poole · Sherjil Ozair · Aäron van den Oord · Alexander Alemi · George Tucker
- 2019 Oral: On Variational Bounds of Mutual Information »
  Ben Poole · Sherjil Ozair · Aäron van den Oord · Alexander Alemi · George Tucker
- 2018 Poster: Fixing a Broken ELBO »
  Alexander Alemi · Ben Poole · Ian Fischer · Joshua V Dillon · Rif Saurous · Kevin Murphy
- 2018 Oral: Fixing a Broken ELBO »
  Alexander Alemi · Ben Poole · Ian Fischer · Joshua V Dillon · Rif Saurous · Kevin Murphy
- 2017 Poster: Continual Learning Through Synaptic Intelligence »
  Friedemann Zenke · Ben Poole · Surya Ganguli
- 2017 Talk: Continual Learning Through Synaptic Intelligence »
  Friedemann Zenke · Ben Poole · Surya Ganguli
- 2017 Poster: On the Expressive Power of Deep Neural Networks »
  Maithra Raghu · Ben Poole · Surya Ganguli · Jon Kleinberg · Jascha Sohl-Dickstein
- 2017 Talk: On the Expressive Power of Deep Neural Networks »
  Maithra Raghu · Ben Poole · Surya Ganguli · Jon Kleinberg · Jascha Sohl-Dickstein