Poster
Disentangling Disentanglement in Variational Autoencoders
Emile Mathieu · Tom Rainforth · N Siddharth · Yee-Whye Teh
Pacific Ballroom #5
Keywords: [ Bayesian Methods ] [ Deep Generative Models ] [ Generative Models ] [ Representation Learning ]
Abstract:
We develop a generalisation of disentanglement in variational autoencoders (VAEs)---decomposition of the latent representation---characterising it as the fulfilment of two factors: a) the latent encodings of the data having an appropriate level of overlap, and b) the aggregate encoding of the data conforming to a desired structure, represented through the prior. Decomposition permits disentanglement, i.e. explicit independence between latents, as a special case, but also allows for a much richer class of properties to be imposed on the learnt representation, such as sparsity, clustering, independent subspaces, or even intricate hierarchical dependency relationships. We show that the $\beta$-VAE varies from the standard VAE predominantly in its control of latent overlap and that for the standard choice of an isotropic Gaussian prior, its objective is invariant to rotations of the latent representation. Viewed from the decomposition perspective, breaking this invariance with simple manipulations of the prior can yield better disentanglement with little or no detriment to reconstructions. We further demonstrate how other choices of prior can assist in producing different decompositions and introduce an alternative training objective that allows the control of both decomposition factors in a principled manner.
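To make the two decomposition factors concrete, below is a minimal PyTorch sketch of a beta-VAE-style objective in which the KL regulariser controls latent overlap and the prior encodes the desired aggregate structure. The encoder/decoder architecture, beta value, and anisotropic prior scales are illustrative assumptions for exposition only, not the paper's architecture or its proposed alternative objective.

```python
import torch
import torch.nn as nn
import torch.distributions as dist

# Illustrative assumptions (not from the paper): tiny linear encoder/decoder,
# beta = 4, latent_dim = 10, Bernoulli likelihood over binary data.
latent_dim, data_dim, beta = 10, 784, 4.0

class SmallEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Linear(data_dim, 2 * latent_dim)
    def forward(self, x):
        mu, log_sigma = self.net(x).chunk(2, dim=-1)
        return mu, log_sigma

encoder = SmallEncoder()
decoder = nn.Linear(latent_dim, data_dim)   # outputs Bernoulli logits

# An anisotropic (axis-aligned, non-identical variance) Gaussian prior is one
# simple manipulation that breaks the rotational invariance left by an
# isotropic N(0, I) prior; the specific scales here are arbitrary.
prior = dist.Normal(torch.zeros(latent_dim),
                    torch.linspace(0.5, 1.5, latent_dim))

def beta_vae_objective(x):
    mu, log_sigma = encoder(x)
    q_z = dist.Normal(mu, log_sigma.exp())       # per-datapoint posterior q(z|x)
    z = q_z.rsample()                            # reparameterised sample
    log_px = dist.Bernoulli(logits=decoder(z)).log_prob(x).sum(-1)  # reconstruction
    kl = dist.kl_divergence(q_z, prior).sum(-1)  # regulariser: controls encoding
                                                 # overlap and match to the prior
    return (log_px - beta * kl).mean()           # beta > 1 increases latent overlap

x = torch.rand(8, data_dim).bernoulli()          # toy binary data batch
loss = -beta_vae_objective(x)
loss.backward()
```

In this reading, swapping `prior` for a sparse, clustered, or hierarchically structured distribution changes the structure imposed on the aggregate encoding, while `beta` (or a separate weighting, as in the paper's alternative objective) controls the overlap factor.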