The performance of Beta-Variational-Autoencoders and their variants on learning semantically meaningful, disentangled representations is unparalleled. On the other hand, there are theoretical arguments suggesting the impossibility of unsupervised disentanglement. In this work, we shed light on the inductive bias responsible for the success of VAE-based architectures. We show that in classical datasets the structure of variance, induced by the generating factors, is conveniently aligned with the latent directions fostered by the VAE objective. This builds the pivotal bias on which the disentangling abilities of VAEs rely. By small, elaborate perturbations of existing datasets, we hide the convenient correlation structure that is easily exploited by a variety of architectures. To demonstrate this, we construct modified versions of standard datasets in which (i) the generative factors are perfectly preserved; (ii) each image undergoes a mild transformation causing a small change of variance; (iii) the leading VAE-based disentanglement architectures fail to produce disentangled representations whilst the performance of a non-variational method remains unchanged.
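The abstract refers to the VAE-based objective whose latent directions the studied inductive bias aligns with. As a minimal sketch, the β-VAE objective combines a reconstruction term with a β-weighted KL divergence between the Gaussian encoder posterior and a standard normal prior; the function name and the mean-squared-error reconstruction term below are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def beta_vae_loss(x, x_recon, mu, logvar, beta=4.0):
    """Sketch of the beta-VAE objective (to be minimized).

    Assumes a Gaussian encoder q(z|x) = N(mu, diag(exp(logvar)))
    and a standard normal prior p(z) = N(0, I).
    """
    # MSE as a stand-in reconstruction term (a common but hypothetical choice;
    # actual architectures use their own decoder likelihoods).
    recon = np.mean((x - x_recon) ** 2)
    # Closed-form KL divergence KL(q(z|x) || N(0, I)).
    kl = 0.5 * np.sum(np.exp(logvar) + mu ** 2 - 1.0 - logvar)
    return recon + beta * kl

# With a perfect reconstruction and mu = 0, logvar = 0, both terms vanish.
x = np.ones(8)
loss = beta_vae_loss(x, x, np.zeros(2), np.zeros(2), beta=4.0)
```

Setting β > 1 strengthens the pressure toward the factorized prior, which is the mechanism commonly credited for disentanglement and whose reliance on dataset-specific variance structure the paper examines.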
Author Information
Dominik Zietlow (Max Planck Institute for Intelligent Systems)
Michal Rolinek (Max Planck Institute for Intelligent Systems)
Georg Martius (Max Planck Institute for Intelligent Systems)
Related Events (a corresponding poster, oral, or spotlight)
- 2021 Poster: Demystifying Inductive Biases for (Beta-)VAE Based Architectures
  Fri. Jul 23rd, 04:00 -- 06:00 AM, Room: Virtual
More from the Same Authors
- 2021: Planning from Pixels in Environments with Combinatorially Hard Search Spaces
  Marco Bagatella · Miroslav Olšák · Michal Rolinek · Georg Martius
- 2021: Oral Presentation: Planning from Pixels in Environments with Combinatorially Hard Search Spaces
  Georg Martius · Marco Bagatella
- 2021 Poster: CombOptNet: Fit the Right NP-Hard Problem by Learning Integer Programming Constraints
  Anselm Paulus · Michal Rolinek · Vit Musil · Brandon Amos · Georg Martius
- 2021 Spotlight: CombOptNet: Fit the Right NP-Hard Problem by Learning Integer Programming Constraints
  Anselm Paulus · Michal Rolinek · Vit Musil · Brandon Amos · Georg Martius
- 2021 Poster: Neuro-algorithmic Policies Enable Fast Combinatorial Generalization
  Marin Vlastelica · Michal Rolinek · Georg Martius
- 2021 Spotlight: Neuro-algorithmic Policies Enable Fast Combinatorial Generalization
  Marin Vlastelica · Michal Rolinek · Georg Martius
- 2018 Poster: Learning equations for extrapolation and control
  Subham S Sahoo · Christoph H. Lampert · Georg Martius
- 2018 Oral: Learning equations for extrapolation and control
  Subham S Sahoo · Christoph H. Lampert · Georg Martius