
Pareto GAN: Extending the Representational Power of GANs to Heavy-Tailed Distributions
Todd Huster · Jeremy Cohen · Zinan Lin · Kevin Chan · Charles Kamhoua · Nandi O. Leslie · Cho-Yu Chiang · Vyas Sekar

Thu Jul 22 05:25 PM -- 05:30 PM (PDT)

Generative adversarial networks (GANs) are often billed as "universal distribution learners", but precisely which distributions they can represent and learn remains an open question. Heavy-tailed distributions are prevalent in many domains, such as financial risk assessment, physics, and epidemiology. We observe that existing GAN architectures do a poor job of matching the asymptotic behavior of heavy-tailed distributions, a problem that we show stems from their construction. Additionally, common loss functions produce unstable or near-zero gradients when faced with the infinite moments and large distances between outlier points characteristic of heavy-tailed distributions. We address these problems with the Pareto GAN. A Pareto GAN leverages extreme value theory and the functional properties of neural networks to learn a distribution that matches the asymptotic behavior of the marginal distributions of the features. We identify issues with standard loss functions and propose the use of alternative metric spaces that enable stable and efficient learning. Finally, we evaluate our proposed approach on a variety of heavy-tailed datasets.
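To make the tail-matching idea concrete, the sketch below illustrates (with NumPy) the two ingredients the abstract describes: estimating a tail index from data via the classical Hill estimator from extreme value theory, then mapping noise through a Pareto quantile function so the generated samples reproduce that asymptotic tail. This is an illustrative sketch only, not the paper's architecture; the function names and the choice of the Hill estimator are our assumptions, and the actual Pareto GAN composes such a tail transform with a learned neural network generator.

```python
import numpy as np

def hill_tail_index(x, k):
    """Hill estimator of the tail index alpha from the k largest samples.

    A standard extreme-value-theory estimator: the reciprocal of the mean
    log-ratio of the top-k order statistics to the (k+1)-th.
    """
    xs = np.sort(np.abs(x))[::-1]          # order statistics, descending
    logs = np.log(xs[:k] / xs[k])
    return 1.0 / np.mean(logs)

def pareto_transform(u, alpha, x_min=1.0):
    """Map uniform(0, 1) noise through the Pareto(alpha) quantile function.

    The output has survival function (x / x_min)^(-alpha), i.e. a power-law
    tail with the estimated index alpha.
    """
    return x_min * (1.0 - u) ** (-1.0 / alpha)

rng = np.random.default_rng(0)
# Synthetic heavy-tailed "real" data: Pareto with true tail index 2.5.
data = rng.pareto(2.5, size=100_000) + 1.0
alpha_hat = hill_tail_index(data, k=500)
# "Generator" output whose tail matches the data asymptotically.
samples = pareto_transform(rng.uniform(size=10_000), alpha_hat)
```

A distribution with tail index 2.5 has infinite third moment, which is exactly the regime where the abstract notes that standard GAN losses break down; matching the tail index explicitly sidesteps having to learn it from gradients.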

Author Information

Todd Huster (Perspecta Labs)
Jeremy Cohen (Perspecta Labs / Peraton Labs)
Zinan Lin (Carnegie Mellon University)
Kevin Chan (US Army)
Charles Kamhoua (Army Research Lab)
Nandi O. Leslie (Army Research Laboratory)
Cho-Yu Chiang (Perspecta Labs)
Vyas Sekar (Carnegie Mellon University)
