Generative adversarial networks (GANs) are a class of deep generative models that aim to learn a target distribution in an unsupervised fashion. While they have been successfully applied to many problems, training a GAN is notoriously challenging and requires a significant amount of hyperparameter tuning, neural architecture engineering, and a non-trivial number of "tricks". The success in many practical applications, coupled with the lack of a measure to quantify the failure modes of GANs, has resulted in a plethora of proposed losses, regularization and normalization schemes, and neural architectures. In this work we take a sober view of the current state of GANs from a practical perspective. We discuss and evaluate common pitfalls and reproducibility issues, open-source our code on GitHub, and provide pre-trained models on TensorFlow Hub.
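For readers unfamiliar with the basic training loop the study builds on, below is a minimal sketch of a GAN training step with the non-saturating loss, one of the losses the study compares. Everything here (the tiny dense architectures, noise dimension, learning rates, and the train_step helper) is illustrative and assumed for the example; the study's actual architectures, losses, and the regularizers and normalizations it compares (e.g., gradient penalty, spectral norm) are in the open-sourced code.

```python
import tensorflow as tf

# Illustrative toy architectures (flattened 28x28 images); the study's
# actual models are convolutional and defined in the open-sourced repo.
generator = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu", input_shape=(64,)),
    tf.keras.layers.Dense(784, activation="tanh"),
])
discriminator = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
    tf.keras.layers.Dense(1),  # unnormalized logit
])

g_opt = tf.keras.optimizers.Adam(1e-4)  # illustrative hyperparameters
d_opt = tf.keras.optimizers.Adam(1e-4)
bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)

@tf.function
def train_step(real_images):
    noise = tf.random.normal([tf.shape(real_images)[0], 64])
    with tf.GradientTape() as g_tape, tf.GradientTape() as d_tape:
        fake_images = generator(noise, training=True)
        real_logits = discriminator(real_images, training=True)
        fake_logits = discriminator(fake_images, training=True)
        # Discriminator: classify real as 1, fake as 0.
        d_loss = (bce(tf.ones_like(real_logits), real_logits)
                  + bce(tf.zeros_like(fake_logits), fake_logits))
        # Generator: non-saturating loss -- push fakes toward the "real" label.
        g_loss = bce(tf.ones_like(fake_logits), fake_logits)
    d_opt.apply_gradients(zip(
        d_tape.gradient(d_loss, discriminator.trainable_variables),
        discriminator.trainable_variables))
    g_opt.apply_gradients(zip(
        g_tape.gradient(g_loss, generator.trainable_variables),
        generator.trainable_variables))
    return d_loss, g_loss
```

In this framing, the design choices the study evaluates are drop-in modifications to the step above: swapping the loss (non-saturating, least-squares, Wasserstein), adding a regularizer such as a gradient penalty to d_loss, or applying a normalization such as spectral norm to the discriminator's layers.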
Author Information
Karol Kurach (Google Brain)
Mario Lucic (Google Brain)
Xiaohua Zhai (Google Brain)
Marcin Michalski (Google Brain)
Sylvain Gelly (Google Brain)
Related Events (a corresponding poster, oral, or spotlight)
- 2019 Oral: A Large-Scale Study on Regularization and Normalization in GANs »
  Wed. Jun 12th 07:10 -- 07:15 PM, Room Hall A
More from the Same Authors
- 2022: SI-Score »
  Jessica Yung · Rob Romijnders · Alexander Kolesnikov · Lucas Beyer · Josip Djolonga · Neil Houlsby · Sylvain Gelly · Mario Lucic · Xiaohua Zhai
- 2019 Poster: Parameter-Efficient Transfer Learning for NLP »
  Neil Houlsby · Andrei Giurgiu · Stanislaw Jastrzebski · Bruna Morrone · Quentin de Laroussilhe · Andrea Gesmundo · Mona Attariyan · Sylvain Gelly
- 2019 Poster: Breaking the Softmax Bottleneck via Learnable Monotonic Pointwise Non-linearities »
  Octavian-Eugen Ganea · Sylvain Gelly · Gary Becigneul · Aliaksei Severyn
- 2019 Oral: Breaking the Softmax Bottleneck via Learnable Monotonic Pointwise Non-linearities »
  Octavian-Eugen Ganea · Sylvain Gelly · Gary Becigneul · Aliaksei Severyn
- 2019 Oral: Parameter-Efficient Transfer Learning for NLP »
  Neil Houlsby · Andrei Giurgiu · Stanislaw Jastrzebski · Bruna Morrone · Quentin de Laroussilhe · Andrea Gesmundo · Mona Attariyan · Sylvain Gelly
- 2019 Poster: Challenging Common Assumptions in the Unsupervised Learning of Disentangled Representations »
  Francesco Locatello · Stefan Bauer · Mario Lucic · Gunnar Ratsch · Sylvain Gelly · Bernhard Schölkopf · Olivier Bachem
- 2019 Poster: High-Fidelity Image Generation With Fewer Labels »
  Mario Lucic · Michael Tschannen · Marvin Ritter · Xiaohua Zhai · Olivier Bachem · Sylvain Gelly
- 2019 Oral: High-Fidelity Image Generation With Fewer Labels »
  Mario Lucic · Michael Tschannen · Marvin Ritter · Xiaohua Zhai · Olivier Bachem · Sylvain Gelly
- 2019 Oral: Challenging Common Assumptions in the Unsupervised Learning of Disentangled Representations »
  Francesco Locatello · Stefan Bauer · Mario Lucic · Gunnar Ratsch · Sylvain Gelly · Bernhard Schölkopf · Olivier Bachem
- 2017 Poster: Distributed and Provably Good Seedings for k-Means in Constant Rounds »
  Olivier Bachem · Mario Lucic · Andreas Krause
- 2017 Poster: Uniform Deviation Bounds for k-Means Clustering »
  Olivier Bachem · Mario Lucic · Hamed Hassani · Andreas Krause
- 2017 Talk: Uniform Deviation Bounds for k-Means Clustering »
  Olivier Bachem · Mario Lucic · Hamed Hassani · Andreas Krause
- 2017 Talk: Distributed and Provably Good Seedings for k-Means in Constant Rounds »
  Olivier Bachem · Mario Lucic · Andreas Krause