Tunable Dual-Objective GANs for Stable Training
Monica Welfert · Kyle Otstot · Gowtham Kurri · Lalitha Sankar
Fri Jul 28 01:20 PM -- 01:30 PM (PDT)
Event URL: https://openreview.net/forum?id=w5wvlU2G0e
In an effort to address the training instabilities of GANs, we introduce a class of dual-objective GANs with different value functions (objectives) for the generator (G) and discriminator (D). In particular, we model each objective using $\alpha$-loss, a tunable classification loss, to obtain $(\alpha_D,\alpha_G)$-GANs, parameterized by $(\alpha_D,\alpha_G)\in (0,\infty]^2$. For a sufficiently large number of samples and sufficiently large capacities of G and D, we show that the resulting non-zero-sum game simplifies to minimizing an $f$-divergence under appropriate conditions on $(\alpha_D,\alpha_G)$. We highlight the value of tuning $(\alpha_D,\alpha_G)$ in alleviating training instabilities for the synthetic 2D Gaussian mixture ring, the Celeb-A, and the LSUN Classroom datasets.
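To make the abstract concrete, the sketch below shows how the two tunable objectives could be wired up in PyTorch. It is a minimal illustration assuming the standard $\alpha$-loss form from the $\alpha$-loss literature (log-loss as $\alpha \to 1$, a soft 0-1 loss as $\alpha \to \infty$); the helper names (`alpha_loss`, `discriminator_objective`, `generator_objective`) and the non-saturating generator form are illustrative assumptions, not the authors' released code.

```python
import torch

def alpha_loss(p_correct, alpha):
    # Assumed alpha-loss on the probability assigned to the correct label:
    # recovers log-loss as alpha -> 1 and a soft 0-1 loss as alpha -> infinity.
    if alpha == 1.0:
        return -torch.log(p_correct)
    return (alpha / (alpha - 1.0)) * (1.0 - p_correct ** ((alpha - 1.0) / alpha))

def discriminator_objective(d_real, d_fake, alpha_d):
    # D faces a binary classification problem: real samples carry label 1,
    # generated samples label 0; D minimizes its alpha_D-loss on both.
    return alpha_loss(d_real, alpha_d).mean() + alpha_loss(1.0 - d_fake, alpha_d).mean()

def generator_objective(d_fake, alpha_g):
    # Illustrative non-saturating generator objective: G minimizes the
    # alpha_G-loss of D classifying its samples as real.
    return alpha_loss(d_fake, alpha_g).mean()

# Usage sketch: d_real and d_fake are D's sigmoid outputs on real and
# generated batches; alpha_d = alpha_g = 1 recovers the vanilla GAN losses.
d_real = torch.rand(8).clamp(1e-6, 1 - 1e-6)
d_fake = torch.rand(8).clamp(1e-6, 1 - 1e-6)
loss_d = discriminator_objective(d_real, d_fake, alpha_d=1.2)
loss_g = generator_objective(d_fake, alpha_g=0.8)
```

Choosing $\alpha_D \ne \alpha_G$ is what makes the game non-zero-sum and, per the abstract, what provides the extra knob for stabilizing training.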
Author Information
Monica Welfert (Arizona State University)
Kyle Otstot (Arizona State University)
Gowtham Kurri (Arizona State University)
Lalitha Sankar (Arizona State University)
Related Events (a corresponding poster, oral, or spotlight)
- 2023: Tunable Dual-Objective GANs for Stable Training
More from the Same Authors
- 2021: Neural Network-based Estimation of the MMSE
  Mario Diaz · Peter Kairouz · Lalitha Sankar
- 2021: Realizing GANs via a Tunable Loss Function
  Gowtham Raghunath Kurri · Tyler Sypherd · Lalitha Sankar
- 2022: Fair Universal Representations using Adversarial Models
  Monica Welfert · Peter Kairouz · Jiachun Liao · Chong Huang · Lalitha Sankar
- 2022: AugLoss: A Robust, Reliable Methodology for Real-World Corruptions
  Kyle Otstot · John Kevin Cava · Tyler Sypherd · Lalitha Sankar
- 2023 Poster: The Saddle-Point Method in Differential Privacy
  Wael Alghamdi · Felipe Gomez · Shahab Asoodeh · Flavio Calmon · Oliver Kosut · Lalitha Sankar
- 2022 Poster: Being Properly Improper
  Tyler Sypherd · Richard Nock · Lalitha Sankar
- 2022 Spotlight: Being Properly Improper
  Tyler Sypherd · Richard Nock · Lalitha Sankar
- 2021 Invited Talk: Lalitha Sankar