Generative Adversarial Networks (GANs) have achieved remarkable results in the task of generating realistic natural images. In most successful applications, GAN models share two common aspects: solving a challenging saddle point optimization problem, interpreted as an adversarial game between a generator and a discriminator function; and parameterizing the generator and the discriminator as deep convolutional neural networks. The goal of this paper is to disentangle the contribution of these two factors to the success of GANs. In particular, we introduce Generative Latent Optimization (GLO), a framework to train deep convolutional generators using simple reconstruction losses. Across a variety of experiments, we show that GLO enjoys many of the desirable properties of GANs: synthesizing visually appealing samples, interpolating meaningfully between samples, and performing linear arithmetic with noise vectors; all of this without the adversarial optimization scheme.
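In essence, GLO jointly optimizes one learnable latent vector per training image together with the generator weights, minimizing a reconstruction loss between each generated image and its target. Below is a minimal PyTorch sketch of this training loop, not the authors' code: the tiny Generator, the MSE loss, and the train_glo helper are illustrative assumptions (the paper uses a DCGAN-style convolutional generator and a Laplacian-pyramid L1 loss, and projects the latent codes onto the unit ball after each update).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustrative stand-in for the paper's DCGAN-style generator: any
# decoder mapping a latent vector to an image works here.
class Generator(nn.Module):
    def __init__(self, dim_z=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim_z, 128), nn.ReLU(),
            nn.Linear(128, 32 * 32 * 3), nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z).view(-1, 3, 32, 32)

def train_glo(images, dim_z=64, epochs=10, lr=1e-2):
    """Jointly optimize one latent code per image and the generator
    weights under a reconstruction loss (MSE here for simplicity;
    the paper uses a Laplacian-pyramid L1 loss).
    `images` is an (n, 3, 32, 32) tensor scaled to [-1, 1]."""
    n = images.size(0)
    g = Generator(dim_z)
    # One learnable latent vector per training image, randomly initialized.
    z = nn.Parameter(F.normalize(torch.randn(n, dim_z), dim=1))
    opt = torch.optim.SGD(
        [{"params": g.parameters()}, {"params": [z]}], lr=lr
    )
    for _ in range(epochs):
        opt.zero_grad()
        loss = F.mse_loss(g(z), images)
        loss.backward()
        opt.step()  # updates both the generator and the latent codes
        # Project the latents back onto the unit ball, as in the paper.
        with torch.no_grad():
            z.copy_(z / z.norm(dim=1, keepdim=True).clamp(min=1.0))
    return g, z.detach()
```

After training, samples can be drawn by feeding new latent vectors (e.g. drawn from a Gaussian fit to the learned codes) through the generator, and interpolation or vector arithmetic is done directly on the learned codes.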
Author Information
Piotr Bojanowski (Facebook)
Armand Joulin (Facebook)
David Lopez-Paz (Facebook AI Research)
Arthur Szlam (Facebook)
Related Events (a corresponding poster, oral, or spotlight)
- 2018 Poster: Optimizing the Latent Space of Generative Networks
  Thu. Jul 12th 04:15 -- 07:00 PM, Room Hall B #5
More from the Same Authors
- 2023: Cross-Risk Minimization: Inferring Groups Information for Improved Generalization
  Mohammad Pezeshki · Diane Bouchacourt · Mark Ibrahim · Nicolas Ballas · Pascal Vincent · David Lopez-Paz
- 2023: A Closer Look at In-Context Learning under Distribution Shifts
  Kartik Ahuja · David Lopez-Paz
- 2023 Poster: Model Ratatouille: Recycling Diverse Models for Out-of-Distribution Generalization
  Alexandre Rame · Kartik Ahuja · Jianyu Zhang · Matthieu Cord · Leon Bottou · David Lopez-Paz
- 2023 Oral: Why does Throwing Away Data Improve Worst-Group Error?
  Kamalika Chaudhuri · Kartik Ahuja · Martin Arjovsky · David Lopez-Paz
- 2023 Poster: Why does Throwing Away Data Improve Worst-Group Error?
  Kamalika Chaudhuri · Kartik Ahuja · Martin Arjovsky · David Lopez-Paz
- 2022: Invited talks I, Q/A
  Bernhard Schölkopf · David Lopez-Paz
- 2022: Invited Talks 1, Bernhard Schölkopf and David Lopez-Paz
  Bernhard Schölkopf · David Lopez-Paz
- 2022 Poster: Rich Feature Construction for the Optimization-Generalization Dilemma
  Jianyu Zhang · David Lopez-Paz · Léon Bottou
- 2022 Spotlight: Rich Feature Construction for the Optimization-Generalization Dilemma
  Jianyu Zhang · David Lopez-Paz · Léon Bottou
- 2021 Poster: CURI: A Benchmark for Productive Concept Learning Under Uncertainty
  Shanmukha Ramakrishna Vedantam · Arthur Szlam · Maximilian Nickel · Ari Morcos · Brenden Lake
- 2021 Spotlight: CURI: A Benchmark for Productive Concept Learning Under Uncertainty
  Shanmukha Ramakrishna Vedantam · Arthur Szlam · Maximilian Nickel · Ari Morcos · Brenden Lake
- 2021 Poster: Not All Memories are Created Equal: Learning to Forget by Expiring
  Sainbayar Sukhbaatar · Da JU · Spencer Poff · Stephen Roller · Arthur Szlam · Jason Weston · Angela Fan
- 2021 Oral: Not All Memories are Created Equal: Learning to Forget by Expiring
  Sainbayar Sukhbaatar · Da JU · Spencer Poff · Stephen Roller · Arthur Szlam · Jason Weston · Angela Fan
- 2020: Collaboration in Situated Instruction Following Q&A
  Yoav Artzi · Arthur Szlam
- 2020: Collaborative Construction and Communication in Minecraft Q&A
  Julia Hockenmaier · Arthur Szlam
- 2020 Workshop: Workshop on Learning in Artificial Open Worlds
  Arthur Szlam · Katja Hofmann · Ruslan Salakhutdinov · Noboru Kuno · William Guss · Kavya Srinet · Brandon Houghton
- 2020 Workshop: Workshop on Continual Learning
  Haytham Fayek · Arslan Chaudhry · David Lopez-Paz · Eugene Belilovsky · Jonathan Richard Schwarz · Marc Pickett · Rahaf Aljundi · Sayna Ebrahimi · Razvan Pascanu · Puneet Dokania
- 2020 Poster: Fast Adaptation to New Environments via Policy-Dynamics Value Functions
  Roberta Raileanu · Max Goldstein · Arthur Szlam · Rob Fergus
- 2019 Poster: Manifold Mixup: Better Representations by Interpolating Hidden States
  Vikas Verma · Alex Lamb · Christopher Beckham · Amir Najafi · Ioannis Mitliagkas · David Lopez-Paz · Yoshua Bengio
- 2019 Poster: First-Order Adversarial Vulnerability of Neural Networks and Input Dimension
  Carl-Johann Simon-Gabriel · Yann Ollivier · Leon Bottou · Bernhard Schölkopf · David Lopez-Paz
- 2019 Oral: Manifold Mixup: Better Representations by Interpolating Hidden States
  Vikas Verma · Alex Lamb · Christopher Beckham · Amir Najafi · Ioannis Mitliagkas · David Lopez-Paz · Yoshua Bengio
- 2019 Oral: First-Order Adversarial Vulnerability of Neural Networks and Input Dimension
  Carl-Johann Simon-Gabriel · Yann Ollivier · Leon Bottou · Bernhard Schölkopf · David Lopez-Paz
- 2018 Poster: Composable Planning with Attributes
  Amy Zhang · Sainbayar Sukhbaatar · Adam Lerer · Arthur Szlam · Rob Fergus
- 2018 Oral: Composable Planning with Attributes
  Amy Zhang · Sainbayar Sukhbaatar · Adam Lerer · Arthur Szlam · Rob Fergus
- 2017 Poster: Efficient softmax approximation for GPUs
  Edouard Grave · Armand Joulin · Moustapha Cisse · David Grangier · Herve Jegou
- 2017 Poster: Parseval Networks: Improving Robustness to Adversarial Examples
  Moustapha Cisse · Piotr Bojanowski · Edouard Grave · Yann Dauphin · Nicolas Usunier
- 2017 Talk: Efficient softmax approximation for GPUs
  Edouard Grave · Armand Joulin · Moustapha Cisse · David Grangier · Herve Jegou
- 2017 Talk: Parseval Networks: Improving Robustness to Adversarial Examples
  Moustapha Cisse · Piotr Bojanowski · Edouard Grave · Yann Dauphin · Nicolas Usunier
- 2017 Poster: Unsupervised Learning by Predicting Noise
  Piotr Bojanowski · Armand Joulin
- 2017 Talk: Unsupervised Learning by Predicting Noise
  Piotr Bojanowski · Armand Joulin