Disentangled generative models map a latent code vector to a target space while enforcing that a subset of the learned latent codes are interpretable and associated with distinct properties of the target distribution. Recent advances have been dominated by Variational AutoEncoder (VAE)-based methods, while training disentangled generative adversarial networks (GANs) remains challenging. In this work, we show that the dominant challenges facing disentangled GANs can be mitigated through the use of self-supervision. We make two main contributions. First, we design a novel approach for training disentangled GANs with self-supervision: a contrastive regularizer inspired by a natural notion of disentanglement, namely latent traversal. This approach achieves higher disentanglement scores than state-of-the-art VAE- and GAN-based methods. Second, we propose an unsupervised model selection scheme called ModelCentrality, which uses generated synthetic samples to compute the medoid (a multi-dimensional generalization of the median) of a collection of models. Perhaps surprisingly, this unsupervised scheme selects models that outperform those chosen with existing supervised hyper-parameter selection techniques. Combining contrastive regularization with ModelCentrality, we obtain state-of-the-art disentanglement scores by a substantial margin, without requiring supervised hyper-parameter selection.
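To make the two contributions concrete, below are two minimal, hypothetical Python sketches. First, the latent-traversal idea behind the contrastive regularizer: a pair of images is generated from latent codes that differ in exactly one disentangled dimension, and an auxiliary classifier is trained to recover which dimension was traversed. The module interfaces (`generator`, `cr_classifier`) and the exact pairing scheme are illustrative assumptions, not the paper's precise formulation.

```python
import torch
import torch.nn.functional as F

def contrastive_regularizer(generator, cr_classifier, batch_size, c_dim, z_dim):
    """Self-supervision from latent traversal (illustrative sketch).

    Sample a base latent code, resample exactly one of the c_dim
    disentangled dimensions, and ask an auxiliary classifier to
    recover which dimension changed from the generated image pair.
    """
    # Disentangled codes in [-1, 1] plus incompressible Gaussian noise.
    c = torch.rand(batch_size, c_dim) * 2 - 1
    z = torch.randn(batch_size, z_dim)

    # Pick one dimension per sample and resample it to form the traversal pair.
    k = torch.randint(0, c_dim, (batch_size,))
    c_pair = c.clone()
    c_pair[torch.arange(batch_size), k] = torch.rand(batch_size) * 2 - 1

    x1 = generator(torch.cat([c, z], dim=1))
    x2 = generator(torch.cat([c_pair, z], dim=1))

    # The classifier sees the pair and predicts the traversed dimension;
    # minimizing this loss pushes each code dimension to control a
    # visually identifiable factor of variation.
    logits = cr_classifier(x1, x2)  # shape: (batch_size, c_dim)
    return F.cross_entropy(logits, k)
```

Second, the medoid computation at the heart of ModelCentrality: given a pairwise distance between trained models (the paper estimates this from each model's generated synthetic samples; a random placeholder matrix stands in below), the medoid is simply the model whose total distance to all others is smallest.

```python
import numpy as np

def model_centrality(distances: np.ndarray) -> int:
    """Return the index of the medoid model: the one minimizing the
    summed distance to all other models in the collection."""
    return int(np.argmin(distances.sum(axis=1)))

# Usage sketch with a random placeholder for a sample-based distance.
rng = np.random.default_rng(0)
n = 5
d = rng.random((n, n))
d = (d + d.T) / 2          # distances should be symmetric
np.fill_diagonal(d, 0.0)   # zero self-distance
print("Selected model index:", model_centrality(d))
```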
Author Information
Zinan Lin (Carnegie Mellon University)
Kiran Thekumparampil (University of Illinois at Urbana-Champaign)
Giulia Fanti (Carnegie Mellon University)
Sewoong Oh (University of Washington)
More from the Same Authors
- 2021: Multistage stepsize schedule in Federated Learning: Bridging Theory and Practice
  Charlie Hou · Kiran Thekumparampil
- 2022 Poster: MAML and ANIL Provably Learn Representations
  Liam Collins · Aryan Mokhtari · Sewoong Oh · Sanjay Shakkottai
- 2022 Spotlight: MAML and ANIL Provably Learn Representations
  Liam Collins · Aryan Mokhtari · Sewoong Oh · Sanjay Shakkottai
- 2022 Poster: De novo mass spectrometry peptide sequencing with a transformer model
  Melih Yilmaz · William Fondrie · Wout Bittremieux · Sewoong Oh · William Noble
- 2022 Spotlight: De novo mass spectrometry peptide sequencing with a transformer model
  Melih Yilmaz · William Fondrie · Wout Bittremieux · Sewoong Oh · William Noble
- 2021 Poster: Pareto GAN: Extending the Representational Power of GANs to Heavy-Tailed Distributions
  Todd Huster · Jeremy Cohen · Zinan Lin · Kevin Chan · Charles Kamhoua · Nandi O. Leslie · Cho-Yu Chiang · Vyas Sekar
- 2021 Spotlight: Pareto GAN: Extending the Representational Power of GANs to Heavy-Tailed Distributions
  Todd Huster · Jeremy Cohen · Zinan Lin · Kevin Chan · Charles Kamhoua · Nandi O. Leslie · Cho-Yu Chiang · Vyas Sekar
- 2021 Poster: Defense against backdoor attacks via robust covariance estimation
  Jonathan Hayase · Weihao Kong · Raghav Somani · Sewoong Oh
- 2021 Spotlight: Defense against backdoor attacks via robust covariance estimation
  Jonathan Hayase · Weihao Kong · Raghav Somani · Sewoong Oh
- 2021 Poster: KO codes: inventing nonlinear encoding and decoding for reliable wireless communication via deep-learning
  Ashok Vardhan Makkuva · Xiyang Liu · Mohammad Vahid Jamali · Hessam Mahdavifar · Sewoong Oh · Pramod Viswanath
- 2021 Spotlight: KO codes: inventing nonlinear encoding and decoding for reliable wireless communication via deep-learning
  Ashok Vardhan Makkuva · Xiyang Liu · Mohammad Vahid Jamali · Hessam Mahdavifar · Sewoong Oh · Pramod Viswanath
- 2020 Poster: Optimal transport mapping via input convex neural networks
  Ashok Vardhan Makkuva · Amirhossein Taghvaei · Sewoong Oh · Jason Lee
- 2020 Poster: Meta-learning for Mixed Linear Regression
  Weihao Kong · Raghav Somani · Zhao Song · Sham Kakade · Sewoong Oh