We introduce a new algorithm named WGAN, an alternative to traditional GAN training. In this new model, we show that we can improve the stability of learning, get rid of problems like mode collapse, and provide meaningful learning curves useful for debugging and hyperparameter searches. Furthermore, we show that the corresponding optimization problem is sound, and provide extensive theoretical work highlighting the deep connections to different distances between distributions.
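The soundness claim rests on the Kantorovich-Rubinstein duality: the WGAN critic f approximates the Wasserstein-1 distance W(P_r, P_θ) = sup over 1-Lipschitz f of E_{x∼P_r}[f(x)] − E_{x∼P_θ}[f(x)], with the Lipschitz constraint enforced in the paper by clipping the critic's weights. Below is a minimal, hypothetical PyTorch sketch of that training loop in the spirit of the paper's Algorithm 1; `generator`, `critic`, and `real_loader` are assumed placeholders, reusing one real batch across critic steps is a simplification, and the defaults (n_critic=5, clipping constant 0.01, RMSProp with lr 5e-5) follow the paper's reported settings.

```python
import torch

def train_wgan(generator, critic, real_loader, z_dim=100,
               n_critic=5, clip_c=0.01, lr=5e-5, epochs=10):
    # RMSProp, as used in the paper's experiments.
    opt_c = torch.optim.RMSprop(critic.parameters(), lr=lr)
    opt_g = torch.optim.RMSprop(generator.parameters(), lr=lr)
    for _ in range(epochs):
        for real in real_loader:  # assumed to yield batches of real samples
            # Critic step: maximize E[f(x_real)] - E[f(g(z))],
            # i.e. minimize the negated difference.
            for _ in range(n_critic):
                z = torch.randn(real.size(0), z_dim)
                fake = generator(z).detach()  # don't backprop into the generator
                loss_c = critic(fake).mean() - critic(real).mean()
                opt_c.zero_grad()
                loss_c.backward()
                opt_c.step()
                # Weight clipping keeps the critic (roughly) Lipschitz.
                for p in critic.parameters():
                    p.data.clamp_(-clip_c, clip_c)
            # Generator step: minimize -E[f(g(z))].
            z = torch.randn(real.size(0), z_dim)
            loss_g = -critic(generator(z)).mean()
            opt_g.zero_grad()
            loss_g.backward()
            opt_g.step()
```

In this sketch, -loss_c is an estimate of the Wasserstein distance between the data and model distributions, which is what makes the learning curves meaningful for debugging as the abstract describes.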
Martin Arjovsky (New York University)
Soumith Chintala (Facebook)
Léon Bottou (Facebook)
Related Events
2017 Talk: Wasserstein Generative Adversarial Networks
Mon Aug 7th, 04:42 – 05:00 AM, Room Parkside 1