We develop a generalisation of disentanglement in variational auto-encoders (VAEs)—decomposition of the latent representation—characterising it as the fulfilment of two factors: a) the latent encodings of the data having an appropriate level of overlap, and b) the aggregate encoding of the data conforming to a desired structure, represented through the prior. Decomposition permits disentanglement, i.e. explicit independence between latents, as a special case, but also allows for a much richer class of properties to be imposed on the learnt representation, such as sparsity, clustering, independent subspaces, or even intricate hierarchical dependency relationships. We show that the β-VAE varies from the standard VAE predominantly in its control of latent overlap and that for the standard choice of an isotropic Gaussian prior, its objective is invariant to rotations of the latent representation. Viewed from the decomposition perspective, breaking this invariance with simple manipulations of the prior can yield better disentanglement with little or no detriment to reconstructions. We further demonstrate how other choices of prior can assist in producing different decompositions and introduce an alternative training objective that allows the control of both decomposition factors in a principled manner.
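The rotational invariance claimed in the abstract can be checked numerically: with a diagonal-or-full Gaussian encoder and an isotropic Gaussian prior, the KL term of the (β-)VAE objective is unchanged when the latent mean and covariance are rotated. The sketch below is an illustrative check (not code from the paper), using the closed-form KL between a Gaussian and the standard normal:

```python
import numpy as np

def kl_to_standard_normal(mu, Sigma):
    """Closed-form KL( N(mu, Sigma) || N(0, I) )."""
    d = mu.shape[0]
    return 0.5 * (np.trace(Sigma) + mu @ mu - d - np.log(np.linalg.det(Sigma)))

rng = np.random.default_rng(0)
d = 4
mu = rng.normal(size=d)
A = rng.normal(size=(d, d))
Sigma = A @ A.T + d * np.eye(d)  # an arbitrary positive-definite covariance

# A random rotation R, obtained from the QR decomposition of a Gaussian matrix.
R, _ = np.linalg.qr(rng.normal(size=(d, d)))

kl_before = kl_to_standard_normal(mu, Sigma)
kl_after = kl_to_standard_normal(R @ mu, R @ Sigma @ R.T)

# trace, squared norm, and determinant are all rotation-invariant,
# so the KL (and hence the beta-weighted objective) is unchanged.
print(abs(kl_before - kl_after))
```

Since the reconstruction term depends on the latent code only through the decoder, which can absorb the rotation, the whole objective is invariant; a non-isotropic prior breaks this symmetry, which is what the paper exploits.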
Author Information
Emile Mathieu (University of Oxford)
Tom Rainforth (University of Oxford)
N Siddharth (University of Oxford)
Yee-Whye Teh (University of Oxford and DeepMind)
Related Events (a corresponding poster, oral, or spotlight)
-
2019 Poster: Disentangling Disentanglement in Variational Autoencoders »
Thu. Jun 13th 01:30 -- 04:00 AM Room Pacific Ballroom #5
More from the Same Authors
-
2021 : Continual Learning via Function-Space Variational Inference: A Unifying View »
Tim G. J. Rudner · Freddie Bickford Smith · Qixuan Feng · Yee-Whye Teh · Yarin Gal -
2021 : Active Learning under Pool Set Distribution Shift and Noisy Data »
Andreas Kirsch · Tom Rainforth · Yarin Gal -
2022 : Challenges and Opportunities in Offline Reinforcement Learning from Visual Observations »
Cong Lu · Philip Ball · Tim G. J. Rudner · Jack Parker-Holder · Michael A Osborne · Yee-Whye Teh -
2023 : Synthetic Experience Replay »
Cong Lu · Philip Ball · Yee-Whye Teh · Jack Parker-Holder -
2023 Poster: Modality-Agnostic Variational Compression of Implicit Neural Representations »
Jonathan Richard Schwarz · Jihoon Tack · Yee-Whye Teh · Jaeho Lee · Jinwoo Shin -
2023 Poster: Learning Instance-Specific Augmentations by Capturing Local Invariances »
Ning Miao · Tom Rainforth · Emile Mathieu · Yann Dubois · Yee-Whye Teh · Adam Foster · Hyunjik Kim -
2023 Poster: CO-BED: Information-Theoretic Contextual Optimization via Bayesian Experimental Design »
Desi Ivanova · Joel Jennings · Tom Rainforth · Cheng Zhang · Adam Foster -
2023 Poster: Drug Discovery under Covariate Shift with Domain-Informed Prior Distributions over Functions »
Leo Klarner · Tim G. J. Rudner · Michael Reutlinger · Torsten Schindler · Garrett Morris · Charlotte Deane · Yee-Whye Teh -
2022 Poster: Continual Learning via Sequential Function-Space Variational Inference »
Tim G. J. Rudner · Freddie Bickford Smith · Qixuan Feng · Yee-Whye Teh · Yarin Gal -
2022 Spotlight: Continual Learning via Sequential Function-Space Variational Inference »
Tim G. J. Rudner · Freddie Bickford Smith · Qixuan Feng · Yee-Whye Teh · Yarin Gal -
2021 : Active Learning under Pool Set Distribution Shift and Noisy Data »
Yarin Gal · Tom Rainforth · Andreas Kirsch -
2021 : Continual Learning via Function-Space Variational Inference: A Unifying View »
Yarin Gal · Yee-Whye Teh · Qixuan Feng · Freddie Bickford Smith · Tim G. J. Rudner -
2021 Poster: Equivariant Learning of Stochastic Fields: Gaussian Processes and Steerable Conditional Neural Processes »
Peter Holderrieth · Michael Hutchinson · Yee-Whye Teh -
2021 Spotlight: Equivariant Learning of Stochastic Fields: Gaussian Processes and Steerable Conditional Neural Processes »
Peter Holderrieth · Michael Hutchinson · Yee-Whye Teh -
2021 Test Of Time: Bayesian Learning via Stochastic Gradient Langevin Dynamics »
Yee Teh · Max Welling -
2021 Poster: Active Testing: Sample-Efficient Model Evaluation »
Jannik Kossen · Sebastian Farquhar · Yarin Gal · Tom Rainforth -
2021 Poster: Deep Adaptive Design: Amortizing Sequential Bayesian Experimental Design »
Adam Foster · Desi Ivanova · Ilyas Malik · Tom Rainforth -
2021 Poster: On Signal-to-Noise Ratio Issues in Variational Inference for Deep Gaussian Processes »
Tim G. J. Rudner · Oscar Key · Yarin Gal · Tom Rainforth -
2021 Spotlight: Active Testing: Sample-Efficient Model Evaluation »
Jannik Kossen · Sebastian Farquhar · Yarin Gal · Tom Rainforth -
2021 Oral: Deep Adaptive Design: Amortizing Sequential Bayesian Experimental Design »
Adam Foster · Desi Ivanova · Ilyas Malik · Tom Rainforth -
2021 Spotlight: On Signal-to-Noise Ratio Issues in Variational Inference for Deep Gaussian Processes »
Tim G. J. Rudner · Oscar Key · Yarin Gal · Tom Rainforth -
2021 Poster: Probabilistic Programs with Stochastic Conditioning »
David Tolpin · Yuan Zhou · Tom Rainforth · Hongseok Yang -
2021 Spotlight: Probabilistic Programs with Stochastic Conditioning »
David Tolpin · Yuan Zhou · Tom Rainforth · Hongseok Yang -
2021 Poster: LieTransformer: Equivariant Self-Attention for Lie Groups »
Michael Hutchinson · Charline Le Lan · Sheheryar Zaidi · Emilien Dupont · Yee-Whye Teh · Hyunjik Kim -
2021 Spotlight: LieTransformer: Equivariant Self-Attention for Lie Groups »
Michael Hutchinson · Charline Le Lan · Sheheryar Zaidi · Emilien Dupont · Yee-Whye Teh · Hyunjik Kim -
2020 : "Designing Bayesian-Optimal Experiments with Stochastic Gradients" »
Tom Rainforth -
2020 Poster: MetaFun: Meta-Learning with Iterative Functional Updates »
Jin Xu · Jean-Francois Ton · Hyunjik Kim · Adam Kosiorek · Yee-Whye Teh -
2020 Poster: Divide, Conquer, and Combine: a New Inference Strategy for Probabilistic Programs with Stochastic Support »
Yuan Zhou · Hongseok Yang · Yee-Whye Teh · Tom Rainforth -
2020 Poster: Fractional Underdamped Langevin Dynamics: Retargeting SGD with Momentum under Heavy-Tailed Gradient Noise »
Umut Simsekli · Lingjiong Zhu · Yee-Whye Teh · Mert Gurbuzbalaban -
2020 Poster: Uncertainty Estimation Using a Single Deep Deterministic Neural Network »
Joost van Amersfoort · Lewis Smith · Yee-Whye Teh · Yarin Gal -
2019 Oral: Hybrid Models with Deep and Invertible Features »
Eric Nalisnick · Akihiro Matsukawa · Yee-Whye Teh · Dilan Gorur · Balaji Lakshminarayanan -
2019 Poster: Hybrid Models with Deep and Invertible Features »
Eric Nalisnick · Akihiro Matsukawa · Yee-Whye Teh · Dilan Gorur · Balaji Lakshminarayanan -
2019 Poster: Set Transformer: A Framework for Attention-based Permutation-Invariant Neural Networks »
Juho Lee · Yoonho Lee · Jungtaek Kim · Adam Kosiorek · Seungjin Choi · Yee-Whye Teh -
2019 Poster: Amortized Monte Carlo Integration »
Adam Golinski · Frank Wood · Tom Rainforth -
2019 Oral: Set Transformer: A Framework for Attention-based Permutation-Invariant Neural Networks »
Juho Lee · Yoonho Lee · Jungtaek Kim · Adam Kosiorek · Seungjin Choi · Yee-Whye Teh -
2019 Oral: Amortized Monte Carlo Integration »
Adam Golinski · Frank Wood · Tom Rainforth -
2018 Poster: On Nesting Monte Carlo Estimators »
Tom Rainforth · Rob Cornish · Hongseok Yang · Andrew Warrington · Frank Wood -
2018 Poster: Progress & Compress: A scalable framework for continual learning »
Jonathan Richard Schwarz · Wojciech Czarnecki · Jelena Luketina · Agnieszka Grabska-Barwinska · Yee Teh · Razvan Pascanu · Raia Hadsell -
2018 Poster: Mix & Match - Agent Curricula for Reinforcement Learning »
Wojciech Czarnecki · Siddhant Jayakumar · Max Jaderberg · Leonard Hasenclever · Yee Teh · Nicolas Heess · Simon Osindero · Razvan Pascanu -
2018 Oral: On Nesting Monte Carlo Estimators »
Tom Rainforth · Rob Cornish · Hongseok Yang · Andrew Warrington · Frank Wood -
2018 Oral: Progress & Compress: A scalable framework for continual learning »
Jonathan Richard Schwarz · Wojciech Czarnecki · Jelena Luketina · Agnieszka Grabska-Barwinska · Yee Teh · Razvan Pascanu · Raia Hadsell -
2018 Oral: Mix & Match - Agent Curricula for Reinforcement Learning »
Wojciech Czarnecki · Siddhant Jayakumar · Max Jaderberg · Leonard Hasenclever · Yee Teh · Nicolas Heess · Simon Osindero · Razvan Pascanu -
2018 Poster: Conditional Neural Processes »
Marta Garnelo · Dan Rosenbaum · Chris Maddison · Tiago Ramalho · David Saxton · Murray Shanahan · Yee Teh · Danilo J. Rezende · S. M. Ali Eslami -
2018 Poster: Tighter Variational Bounds are Not Necessarily Better »
Tom Rainforth · Adam Kosiorek · Tuan Anh Le · Chris Maddison · Maximilian Igl · Frank Wood · Yee-Whye Teh -
2018 Oral: Tighter Variational Bounds are Not Necessarily Better »
Tom Rainforth · Adam Kosiorek · Tuan Anh Le · Chris Maddison · Maximilian Igl · Frank Wood · Yee-Whye Teh -
2018 Oral: Conditional Neural Processes »
Marta Garnelo · Dan Rosenbaum · Chris Maddison · Tiago Ramalho · David Saxton · Murray Shanahan · Yee Teh · Danilo J. Rezende · S. M. Ali Eslami