Oral in Workshop: 2nd ICML Workshop on New Frontiers in Adversarial Machine Learning
Tunable Dual-Objective GANs for Stable Training
Monica Welfert · Kyle Otstot · Gowtham Kurri · Lalitha Sankar
Keywords: training instabilities, dual objectives, Generative Adversarial Networks
Abstract:
In an effort to address the training instabilities of GANs, we introduce a class of dual-objective GANs with different value functions (objectives) for the generator (G) and discriminator (D). In particular, we model each objective using α-loss, a tunable classification loss, to obtain (α_D, α_G)-GANs, parameterized by the pair (α_D, α_G). For a sufficiently large number of samples and sufficient capacities for G and D, we show that the resulting non-zero-sum game simplifies to minimizing an f-divergence under appropriate conditions on (α_D, α_G). We highlight the value of tuning (α_D, α_G) in alleviating training instabilities for the synthetic 2D Gaussian mixture ring and the Celeb-A and LSUN Classroom image datasets.
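To make the construction concrete, below is a minimal PyTorch sketch of the tunable α-loss and one way it could parameterize separate discriminator and generator objectives. The α-loss form follows the standard tunable classification loss definition (α = 1 recovers log-loss; α → ∞ approaches a soft 0-1 loss); the specific discriminator/generator pairing and the helper names (alpha_loss, discriminator_objective, generator_objective) are illustrative assumptions, not the paper's exact (α_D, α_G)-GAN value functions.

# A minimal sketch, assuming the standard alpha-loss form; the exact
# (alpha_D, alpha_G)-GAN objectives are defined in the paper.
import torch

def alpha_loss(p_true: torch.Tensor, alpha: float, eps: float = 1e-7) -> torch.Tensor:
    """alpha-loss of the probability assigned to the true label.

    p_true: probability the classifier assigns to the correct class, in (0, 1).
    alpha = 1 recovers log-loss; alpha -> infinity approaches 1 - p_true.
    """
    p_true = p_true.clamp(eps, 1.0 - eps)
    if alpha == 1.0:
        return -torch.log(p_true)
    return (alpha / (alpha - 1.0)) * (1.0 - p_true.pow((alpha - 1.0) / alpha))

def discriminator_objective(d_real: torch.Tensor, d_fake: torch.Tensor, alpha_d: float) -> torch.Tensor:
    # D treats real samples as class 1 and generated samples as class 0,
    # and minimizes its own alpha_D-loss on both batches.
    return alpha_loss(d_real, alpha_d).mean() + alpha_loss(1.0 - d_fake, alpha_d).mean()

def generator_objective(d_fake: torch.Tensor, alpha_g: float) -> torch.Tensor:
    # One non-saturating-style choice (an assumption here): G minimizes the
    # alpha_G-loss of D classifying its samples as real.
    return alpha_loss(d_fake, alpha_g).mean()

# Usage: d_real and d_fake are sigmoid outputs of the discriminator.
if __name__ == "__main__":
    d_real = torch.rand(8)
    d_fake = torch.rand(8)
    print(discriminator_objective(d_real, d_fake, alpha_d=1.2).item())
    print(generator_objective(d_fake, alpha_g=0.8).item())

Using two different tuning knobs, alpha_d for the discriminator loss and alpha_g for the generator loss, is what makes the game non-zero-sum: the two players no longer optimize the negation of the same value function.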