Tempered Adversarial Networks
Generative adversarial networks (GANs) have been shown to produce realistic samples from high-dimensional distributions, but training them is considered hard. A possible explanation for training instabilities is the inherent imbalance between the networks: while the discriminator is trained directly on both real and fake samples, the generator only has control over the fake samples it produces, since the real data distribution is fixed by the choice of dataset. We propose a simple modification that gives the generator control over the real samples, leading to a tempered learning process for both generator and discriminator. The real data distribution passes through a lens before being revealed to the discriminator, balancing the two networks by gradually revealing the more detailed features necessary to produce high-quality results. The proposed module automatically adjusts the learning process to the current strength of the networks, yet it is generic and easy to add to any GAN variant. In a number of experiments, we show that this can improve quality, stability, and/or convergence speed across a range of GAN architectures (DCGAN, LSGAN, WGAN-GP).
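The core mechanism described above is a change in data flow: the discriminator never sees raw real samples, only a lensed version whose distortion fades as training progresses. The sketch below illustrates only this data flow on toy 1-D data. The mean-smoothing lens and the fixed annealing schedule are our own simplifications for illustration; in the paper the lens is itself a trained network, so this is a minimal sketch, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def lens(x, step, total_steps):
    # Toy lens (illustration only): early in training, smooth real samples
    # heavily toward their mean; late in training, pass them through
    # almost unchanged. alpha is the annealed lens strength.
    alpha = 1.0 - step / total_steps
    return (1.0 - alpha) * x + alpha * x.mean()

def discriminator_inputs(x_real, x_fake, step, total_steps):
    # Key point of the method: the discriminator sees the lensed real
    # batch rather than the raw one, so real and fake inputs are closer
    # together early on, tempering the learning signal for both networks.
    return lens(x_real, step, total_steps), x_fake

x_real = rng.normal(3.0, 1.0, size=8)   # samples from the "real" distribution
x_fake = rng.normal(0.0, 1.0, size=8)   # samples from an untrained generator

early_real, _ = discriminator_inputs(x_real, x_fake, step=0, total_steps=100)
late_real, _ = discriminator_inputs(x_real, x_fake, step=100, total_steps=100)

# Early on, the lensed real batch is collapsed toward its mean (lower
# variance); at the end the lens reduces to the identity, revealing the
# full real distribution to the discriminator.
print(early_real.std() < x_real.std())   # True
print(np.allclose(late_real, x_real))    # True
```

The module is easy to bolt onto existing GANs precisely because it only intercepts the real branch of the discriminator's input; the generator and discriminator losses are left untouched.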
Author Information
Mehdi S. M. Sajjadi (Max Planck Institute for Intelligent Systems)
Giambattista Parascandolo (Max Planck Institute for Intelligent Systems and ETH Zurich)
Arash Mehrjou (Max Planck Institute for Intelligent Systems)
Bernhard Schölkopf (MPI for Intelligent Systems Tübingen, Germany)
Bernhard Schölkopf received degrees in mathematics (London) and physics (Tübingen), and a doctorate in computer science from the Technical University Berlin. He has researched at AT&T Bell Labs, at GMD FIRST, Berlin, at the Australian National University, Canberra, and at Microsoft Research Cambridge (UK). In 2001, he was appointed scientific member of the Max Planck Society and director at the MPI for Biological Cybernetics; in 2010 he founded the Max Planck Institute for Intelligent Systems. For further information, see www.kyb.tuebingen.mpg.de/~bs.
Related Events (a corresponding poster, oral, or spotlight)
- 2018 Oral: Tempered Adversarial Networks
  Thu Jul 12th 11:50 AM -- 12:00 PM, Room A7
More from the Same Authors
- 2020 Workshop: Inductive Biases, Invariances and Generalization in Reinforcement Learning
  Anirudh Goyal · Rosemary Nan Ke · Stefan Bauer · Jane Wang · Theophane Weber · Fabio Viola · Bernhard Schölkopf
- 2020 Poster: Weakly-Supervised Disentanglement Without Compromises
  Francesco Locatello · Ben Poole · Gunnar Rätsch · Bernhard Schölkopf · Olivier Bachem · Michael Tschannen
- 2019 Poster: Robustly Disentangled Causal Mechanisms: Validating Deep Representations for Interventional Robustness
  Raphael Suter · Djordje Miladinović · Bernhard Schölkopf · Stefan Bauer
- 2019 Oral: Robustly Disentangled Causal Mechanisms: Validating Deep Representations for Interventional Robustness
  Raphael Suter · Djordje Miladinović · Bernhard Schölkopf · Stefan Bauer
- 2019 Poster: Kernel Mean Matching for Content Addressability of GANs
  Wittawat Jitkrittum · Patsorn Sangkloy · Muhammad Waleed Gondal · Amit Raj · James Hays · Bernhard Schölkopf
- 2019 Oral: Kernel Mean Matching for Content Addressability of GANs
  Wittawat Jitkrittum · Patsorn Sangkloy · Muhammad Waleed Gondal · Amit Raj · James Hays · Bernhard Schölkopf
- 2019 Poster: First-Order Adversarial Vulnerability of Neural Networks and Input Dimension
  Carl-Johann Simon-Gabriel · Yann Ollivier · Léon Bottou · Bernhard Schölkopf · David Lopez-Paz
- 2019 Poster: Challenging Common Assumptions in the Unsupervised Learning of Disentangled Representations
  Francesco Locatello · Stefan Bauer · Mario Lucic · Gunnar Rätsch · Sylvain Gelly · Bernhard Schölkopf · Olivier Bachem
- 2019 Oral: First-Order Adversarial Vulnerability of Neural Networks and Input Dimension
  Carl-Johann Simon-Gabriel · Yann Ollivier · Léon Bottou · Bernhard Schölkopf · David Lopez-Paz
- 2019 Oral: Challenging Common Assumptions in the Unsupervised Learning of Disentangled Representations
  Francesco Locatello · Stefan Bauer · Mario Lucic · Gunnar Rätsch · Sylvain Gelly · Bernhard Schölkopf · Olivier Bachem
- 2018 Poster: Detecting non-causal artifacts in multivariate linear regression models
  Dominik Janzing · Bernhard Schölkopf
- 2018 Poster: On Matching Pursuit and Coordinate Descent
  Francesco Locatello · Anant Raj · Sai Praneeth Reddy Karimireddy · Gunnar Rätsch · Bernhard Schölkopf · Sebastian Stich · Martin Jaggi
- 2018 Oral: Detecting non-causal artifacts in multivariate linear regression models
  Dominik Janzing · Bernhard Schölkopf
- 2018 Oral: On Matching Pursuit and Coordinate Descent
  Francesco Locatello · Anant Raj · Sai Praneeth Reddy Karimireddy · Gunnar Rätsch · Bernhard Schölkopf · Sebastian Stich · Martin Jaggi
- 2018 Poster: Differentially Private Database Release via Kernel Mean Embeddings
  Matej Balog · Ilya Tolstikhin · Bernhard Schölkopf
- 2018 Oral: Differentially Private Database Release via Kernel Mean Embeddings
  Matej Balog · Ilya Tolstikhin · Bernhard Schölkopf
- 2018 Poster: Learning Independent Causal Mechanisms
  Giambattista Parascandolo · Niki Kilbertus · Mateo Rojas-Carulla · Bernhard Schölkopf
- 2018 Oral: Learning Independent Causal Mechanisms
  Giambattista Parascandolo · Niki Kilbertus · Mateo Rojas-Carulla · Bernhard Schölkopf
- 2017 Invited Talk: Causal Learning
  Bernhard Schölkopf