Generative adversarial networks (GANs) are powerful generative models based on providing feedback to a generative network via a discriminator network. However, the discriminator usually assesses individual samples. This prevents the discriminator from accessing global distributional statistics of generated samples, and often leads to mode dropping: the generator models only part of the target distribution. We propose to feed the discriminator with mixed batches of true and fake samples, and train it to predict the ratio of true samples in the batch. The latter score does not depend on the order of samples in a batch. Rather than learning this invariance, we introduce a generic permutation-invariant discriminator architecture. This architecture is provably a universal approximator of all symmetric functions. Experimentally, our approach reduces mode collapse in GANs on two synthetic datasets, and obtains good results on the CIFAR10 and CelebA datasets, both qualitatively and quantitatively.
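The core mechanism described in the abstract can be sketched in a few lines: build a mixed batch of true and fake samples, embed each sample independently, and pool the embeddings with a symmetric operation (here mean pooling, one common choice) so the predicted true-sample ratio cannot depend on sample order. This is a minimal illustrative sketch, not the paper's actual architecture; all shapes, weights, and function names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def permutation_invariant_score(batch, w_embed, w_out):
    """Toy batch-level discriminator head (illustrative only).

    Each sample is embedded independently, then mean-pooled across the
    batch; any function of the pooled embedding is invariant to the
    order of samples. The sigmoid output plays the role of a predicted
    ratio of true samples in the batch.
    """
    h = np.tanh(batch @ w_embed)        # (n, d_hidden): per-sample features
    pooled = h.mean(axis=0)             # (d_hidden,): symmetric pooling step
    return float(1 / (1 + np.exp(-pooled @ w_out)))

# Hypothetical toy data: 2-D samples, mixed batch of 6 "true" + 2 "fake".
true_samples = rng.normal(0.0, 1.0, size=(6, 2))
fake_samples = rng.normal(3.0, 1.0, size=(2, 2))
batch = np.concatenate([true_samples, fake_samples])

# Random (untrained) weights, purely to demonstrate the invariance.
w_embed = rng.normal(size=(2, 8))
w_out = rng.normal(size=(8,))

score = permutation_invariant_score(batch, w_embed, w_out)
shuffled = rng.permutation(batch)       # reorder samples within the batch
score_shuffled = permutation_invariant_score(shuffled, w_embed, w_out)
assert np.isclose(score, score_shuffled)  # order of samples does not matter
```

Because the pooling step is symmetric by construction, invariance holds exactly rather than having to be learned, which is the design point the abstract makes about the proposed discriminator.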
Author Information
Thomas Lucas (Inria)
Corentin Tallec (Inria)
Yann Ollivier (Facebook Artificial Intelligence Research)
Jakob Verbeek (Inria)
Related Events (a corresponding poster, oral, or spotlight)
- 2018 Oral: Mixed batches and symmetric discriminators for GAN training »
  Fri. Jul 13th 07:30 -- 07:50 AM Room A7
More from the Same Authors
- 2019 Poster: Understanding Priors in Bayesian Neural Networks at the Unit Level »
  Mariia Vladimirova · Jakob Verbeek · Pablo Mesejo · Julyan Arbel
- 2019 Oral: Understanding Priors in Bayesian Neural Networks at the Unit Level »
  Mariia Vladimirova · Jakob Verbeek · Pablo Mesejo · Julyan Arbel
- 2019 Poster: First-Order Adversarial Vulnerability of Neural Networks and Input Dimension »
  Carl-Johann Simon-Gabriel · Yann Ollivier · Leon Bottou · Bernhard Schölkopf · David Lopez-Paz
- 2019 Poster: White-box vs Black-box: Bayes Optimal Strategies for Membership Inference »
  Alexandre Sablayrolles · Matthijs Douze · Cordelia Schmid · Yann Ollivier · Herve Jegou
- 2019 Poster: Making Deep Q-learning methods robust to time discretization »
  Corentin Tallec · Leonard Blier · Yann Ollivier
- 2019 Poster: Separable value functions across time-scales »
  Joshua Romoff · Peter Henderson · Ahmed Touati · Yann Ollivier · Joelle Pineau · Emma Brunskill
- 2019 Oral: White-box vs Black-box: Bayes Optimal Strategies for Membership Inference »
  Alexandre Sablayrolles · Matthijs Douze · Cordelia Schmid · Yann Ollivier · Herve Jegou
- 2019 Oral: Separable value functions across time-scales »
  Joshua Romoff · Peter Henderson · Ahmed Touati · Yann Ollivier · Joelle Pineau · Emma Brunskill
- 2019 Oral: Making Deep Q-learning methods robust to time discretization »
  Corentin Tallec · Leonard Blier · Yann Ollivier
- 2019 Oral: First-Order Adversarial Vulnerability of Neural Networks and Input Dimension »
  Carl-Johann Simon-Gabriel · Yann Ollivier · Leon Bottou · Bernhard Schölkopf · David Lopez-Paz