Realizing GANs via a Tunable Loss Function
Gowtham Raghunath Kurri · Tyler Sypherd · Lalitha Sankar
We introduce a tunable GAN, called $\alpha$-GAN, parameterized by $\alpha \in (0,\infty]$, which interpolates between various $f$-GANs and Integral Probability Metric based GANs (under a constrained discriminator set). We construct $\alpha$-GAN using a supervised loss function, namely $\alpha$-loss, a tunable loss function that captures several canonical losses. We show that $\alpha$-GAN is intimately related to the Arimoto divergence, first proposed by Österreicher (1996) and later studied by Liese and Vajda (2006). We posit that the holistic understanding $\alpha$-GAN introduces will have the practical benefit of addressing both vanishing gradients and mode collapse.
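As a concrete illustration of the tunable supervised loss the abstract refers to, the following is a minimal sketch of $\alpha$-loss evaluated on the probability assigned to the true class, assuming the standard form $\frac{\alpha}{\alpha-1}\left(1 - p^{(\alpha-1)/\alpha}\right)$; the function name and interface are ours, not from the paper.

```python
import math

def alpha_loss(p: float, alpha: float) -> float:
    """alpha-loss of the probability p assigned to the true label.

    Assumes the form (alpha / (alpha - 1)) * (1 - p ** (1 - 1/alpha)).
    The limit alpha -> 1 recovers log-loss, and alpha -> infinity
    recovers the soft 0-1 loss 1 - p, illustrating how a single
    parameter interpolates between canonical losses.
    """
    if alpha == 1:           # limiting case: log-loss
        return -math.log(p)
    if math.isinf(alpha):    # limiting case: soft 0-1 loss
        return 1.0 - p
    return (alpha / (alpha - 1)) * (1.0 - p ** (1.0 - 1.0 / alpha))
```

For example, at $\alpha = 2$ and $p = 0.25$ the loss is $2\,(1 - \sqrt{0.25}) = 1$; sweeping $\alpha$ trades off the heavy penalties of log-loss on low-confidence predictions against the bounded behavior of the soft 0-1 loss.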
Author Information
Gowtham Raghunath Kurri (Arizona State University)
Tyler Sypherd (Arizona State University)
Lalitha Sankar (Arizona State University)
More from the Same Authors
- 2021 : Neural Network-based Estimation of the MMSE »
  Mario Diaz · Peter Kairouz · Lalitha Sankar
- 2022 : Fair Universal Representations using Adversarial Models »
  Monica Welfert · Peter Kairouz · Jiachun Liao · Chong Huang · Lalitha Sankar
- 2022 : AugLoss: A Robust, Reliable Methodology for Real-World Corruptions »
  Kyle Otstot · John Kevin Cava · Tyler Sypherd · Lalitha Sankar
- 2023 : Tunable Dual-Objective GANs for Stable Training »
  Monica Welfert · Kyle Otstot · Gowtham Kurri · Lalitha Sankar
- 2023 Poster: The Saddle-Point Method in Differential Privacy »
  Wael Alghamdi · Felipe Gomez · Shahab Asoodeh · Flavio Calmon · Oliver Kosut · Lalitha Sankar
- 2022 Poster: Being Properly Improper »
  Tyler Sypherd · Richard Nock · Lalitha Sankar
- 2022 Spotlight: Being Properly Improper »
  Tyler Sypherd · Richard Nock · Lalitha Sankar
- 2021 : Invited Talk: Lalitha Sankar »
  Lalitha Sankar