Tunable Dual-Objective GANs for Stable Training
Monica Welfert · Kyle Otstot · Gowtham Kurri · Lalitha Sankar
Event URL: https://openreview.net/forum?id=w5wvlU2G0e
In an effort to address the training instabilities of GANs, we introduce a class of dual-objective GANs with different value functions (objectives) for the generator (G) and discriminator (D). In particular, we model each objective using $\alpha$-loss, a tunable classification loss, to obtain $(\alpha_D,\alpha_G)$-GANs, parameterized by $(\alpha_D,\alpha_G)\in (0,\infty]^2$. For a sufficiently large number of samples and sufficiently large capacities for G and D, we show that the resulting non-zero-sum game simplifies to minimizing an $f$-divergence under appropriate conditions on $(\alpha_D,\alpha_G)$. We highlight the value of tuning $(\alpha_D,\alpha_G)$ in alleviating training instabilities on a synthetic 2D Gaussian mixture ring as well as the Celeb-A and LSUN Classroom datasets.
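The sketch below illustrates how a tunable $\alpha$-loss can parameterize the two objectives described in the abstract, assuming a PyTorch-style setup. It is not the authors' released code: the helper names (`alpha_loss`, `discriminator_loss`, `generator_loss`) are hypothetical, and the exact sign/offset conventions of the paper's value functions may differ (e.g., a saturating vs. non-saturating generator form).

```python
# Minimal sketch (assumptions noted above): alpha-loss and dual-objective GAN
# losses built from it. D is treated as a soft binary classifier that outputs
# the probability a sample is real.
import torch


def alpha_loss(p_true: torch.Tensor, alpha: float) -> torch.Tensor:
    """alpha-loss of the probability assigned to the true label.

    Recovers log-loss (-log p) in the limit alpha -> 1 and the soft 0-1 loss
    (1 - p) as alpha -> infinity.
    """
    eps = 1e-7
    p = p_true.clamp(eps, 1.0)
    if alpha == float("inf"):
        return 1.0 - p
    if abs(alpha - 1.0) < 1e-6:  # limiting case: cross-entropy
        return -torch.log(p)
    return (alpha / (alpha - 1.0)) * (1.0 - p ** ((alpha - 1.0) / alpha))


def discriminator_loss(d_real: torch.Tensor, d_fake: torch.Tensor,
                       alpha_d: float) -> torch.Tensor:
    # D minimizes its alpha_D-loss as a classifier: real samples carry label 1
    # (probability of the true label is D(x)), fakes carry label 0 (1 - D(x)).
    return alpha_loss(d_real, alpha_d).mean() + alpha_loss(1.0 - d_fake, alpha_d).mean()


def generator_loss(d_fake: torch.Tensor, alpha_g: float) -> torch.Tensor:
    # G is scored with its own alpha_G-loss on the generated samples; for
    # alpha_g = 1 this reduces to the saturating vanilla-GAN generator objective.
    return -alpha_loss(1.0 - d_fake, alpha_g).mean()
```

With $\alpha_D = \alpha_G = 1$ this recovers the standard (log-loss) GAN objectives; tuning the two parameters separately is what gives the non-zero-sum game described in the abstract.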

Author Information

Monica Welfert (Arizona State University)
Kyle Otstot (Arizona State University)
Gowtham Kurri (Arizona State University)
Lalitha Sankar (Arizona State University)
