
On Characterizing GAN Convergence Through Proximal Duality Gap
Sahil Sidheekh · Aroof Aimen · Narayanan Chatapuram Krishnan

Tue Jul 20 09:00 PM -- 11:00 PM (PDT) @ Virtual

Despite the accomplishments of Generative Adversarial Networks (GANs) in modeling data distributions, training them remains a challenging task. A contributing factor to this difficulty is the non-intuitive nature of GAN loss curves, which necessitates a subjective evaluation of the generated output to infer training progress. Recently, motivated by game theory, the duality gap has been proposed as a domain-agnostic measure for monitoring GAN training. However, it is restricted to settings in which the GAN converges to a Nash equilibrium, and GANs need not converge to a Nash equilibrium to model the data distribution. In this work, we extend the notion of the duality gap to the proximal duality gap, which is applicable in the general context of GAN training, where Nash equilibria may not exist. We show theoretically that the proximal duality gap can monitor the convergence of GANs to a broader spectrum of equilibria that subsumes Nash equilibria. We also theoretically establish the relationship between the proximal duality gap and the divergence between the real and generated data distributions for different GAN formulations. Our results provide new insights into the nature of GAN convergence. Finally, we experimentally validate the usefulness of the proximal duality gap for monitoring and influencing GAN training.
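The idea of monitoring a minimax game through a duality gap can be illustrated on a toy two-player zero-sum game. The sketch below is an assumption-laden illustration, not the paper's formulation: it uses a toy payoff f(u, v) = u·v + 0.1u² − 0.1v² (unique Nash equilibrium at (0, 0)), estimates the classical gap DG(u, v) = max_v' f(u, v') − min_u' f(u', v) by gradient search, and adds an optional quadratic proximity penalty (weight `lam`) as a stand-in for the proximal modification; the function names and the exact penalty form are illustrative.

```python
# Hedged sketch of duality-gap estimation on a toy zero-sum game.
# The payoff, hyperparameters, and the proximal-penalty form are
# illustrative assumptions, not the paper's exact construction.

def f(u, v):
    """Toy payoff f(u, v) = u*v + 0.1*u**2 - 0.1*v**2 (Nash at (0, 0))."""
    return u * v + 0.1 * u**2 - 0.1 * v**2

def grad_u(u, v):
    return v + 0.2 * u   # df/du

def grad_v(u, v):
    return u - 0.2 * v   # df/dv

def duality_gap(u0, v0, lam=0.0, lr=1.0, steps=500):
    """Estimate DG(u0, v0) = max_v f(u0, v) - min_u f(u, v0).

    With lam > 0, each worst-case search is damped by a proximal
    penalty (lam/2)*(x - x0)**2 that keeps it near the current
    iterate (an assumed form of the proximal modification).
    """
    # Worst-case "discriminator": gradient ascent in v from v0.
    v = v0
    for _ in range(steps):
        v += lr * (grad_v(u0, v) - lam * (v - v0))
    # Worst-case "generator": gradient descent in u from u0.
    u = u0
    for _ in range(steps):
        u -= lr * (grad_u(u, v0) + lam * (u - u0))
    upper = f(u0, v) - 0.5 * lam * (v - v0) ** 2
    lower = f(u, v0) + 0.5 * lam * (u - u0) ** 2
    return upper - lower
```

For this toy game the gap has the closed form 2.6·(u² + v²), so `duality_gap(1.0, 1.0)` is about 5.2 and the gap vanishes exactly at the equilibrium (0, 0); setting `lam > 0` shrinks the worst-case search radius while keeping the estimate non-negative. In an actual GAN, u and v would be the generator and discriminator parameters and the inner searches would be a few optimizer steps on held-out copies of each network.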

Author Information

Sahil Sidheekh (Indian Institute of Technology Ropar)
Aroof Aimen (Indian Institute of Technology Ropar)
Narayanan Chatapuram Krishnan (Indian Institute of Technology Ropar)
