Generative adversarial networks (GANs) are powerful generative models based on providing feedback to a generative network via a discriminator network. However, the discriminator usually assesses individual samples. This prevents the discriminator from accessing global distributional statistics of generated samples, and often leads to mode dropping: the generator models only part of the target distribution. We propose to feed the discriminator with mixed batches of true and fake samples, and train it to predict the ratio of true samples in the batch. The latter score does not depend on the order of samples in a batch. Rather than learning this invariance, we introduce a generic permutation-invariant discriminator architecture. This architecture is provably a universal approximator of all symmetric functions. Experimentally, our approach reduces mode collapse in GANs on two synthetic datasets, and obtains good results on the CIFAR10 and CelebA datasets, both qualitatively and quantitatively.
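The key architectural idea in the abstract can be illustrated with a minimal sketch: embed each sample in the batch independently, pool the embeddings with a symmetric operation (here, a mean), and map the pooled vector to a predicted ratio of true samples. This is a toy Deep-Sets-style construction, not the authors' exact architecture; the weight names and sizes are illustrative placeholders.

```python
import numpy as np

# Toy permutation-invariant "batch discriminator" (illustrative only):
# per-sample embedding -> symmetric mean pooling -> scalar ratio head.
rng = np.random.default_rng(0)
d_in, d_hid = 4, 8
W1 = rng.standard_normal((d_in, d_hid))   # per-sample embedding weights
W2 = rng.standard_normal((d_hid, d_hid))  # second embedding layer
w_out = rng.standard_normal(d_hid)        # ratio-prediction head

def predict_true_ratio(batch):
    """batch: (n, d_in) array of mixed true/fake samples -> estimate in (0, 1)."""
    h = np.tanh(batch @ W1)                 # embed each sample independently
    pooled = np.tanh(h @ W2).mean(axis=0)   # symmetric pooling over the batch
    return 1.0 / (1.0 + np.exp(-pooled @ w_out))  # sigmoid -> ratio estimate

batch = rng.standard_normal((16, d_in))
perm = rng.permutation(16)
r1 = predict_true_ratio(batch)
r2 = predict_true_ratio(batch[perm])
# Because mean pooling is symmetric, the prediction is (up to floating-point
# summation order) independent of the order of samples in the batch.
```

Since the target (the ratio of true samples in the batch) is itself order-independent, building the invariance into the architecture spares the discriminator from having to learn it.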
Author Information
Thomas LUCAS (Inria)
Corentin Tallec (INRIA)
Yann Ollivier (Facebook Artificial Intelligence Research)
Jakob Verbeek (INRIA)
Related Events (a corresponding poster, oral, or spotlight)
2018 Poster: Mixed batches and symmetric discriminators for GAN training »
Fri. Jul 13th 04:15 -- 07:00 PM Room Hall B #120
More from the Same Authors
2019 Poster: Understanding Priors in Bayesian Neural Networks at the Unit Level »
Mariia Vladimirova · Jakob Verbeek · Pablo Mesejo · Julyan Arbel
2019 Oral: Understanding Priors in Bayesian Neural Networks at the Unit Level »
Mariia Vladimirova · Jakob Verbeek · Pablo Mesejo · Julyan Arbel
2019 Poster: First-Order Adversarial Vulnerability of Neural Networks and Input Dimension »
Carl-Johann Simon-Gabriel · Yann Ollivier · Leon Bottou · Bernhard Schölkopf · David Lopez-Paz
2019 Poster: White-box vs Black-box: Bayes Optimal Strategies for Membership Inference »
Alexandre Sablayrolles · Douze Matthijs · Cordelia Schmid · Yann Ollivier · Herve Jegou
2019 Poster: Making Deep Q-learning methods robust to time discretization »
Corentin Tallec · Leonard Blier · Yann Ollivier
2019 Poster: Separable value functions across time-scales »
Joshua Romoff · Peter Henderson · Ahmed Touati · Yann Ollivier · Joelle Pineau · Emma Brunskill
2019 Oral: White-box vs Black-box: Bayes Optimal Strategies for Membership Inference »
Alexandre Sablayrolles · Douze Matthijs · Cordelia Schmid · Yann Ollivier · Herve Jegou
2019 Oral: Separable value functions across time-scales »
Joshua Romoff · Peter Henderson · Ahmed Touati · Yann Ollivier · Joelle Pineau · Emma Brunskill
2019 Oral: Making Deep Q-learning methods robust to time discretization »
Corentin Tallec · Leonard Blier · Yann Ollivier
2019 Oral: First-Order Adversarial Vulnerability of Neural Networks and Input Dimension »
Carl-Johann Simon-Gabriel · Yann Ollivier · Leon Bottou · Bernhard Schölkopf · David Lopez-Paz