Poster
Variational Annealing of GANs: A Langevin Perspective
Chenyang Tao · Shuyang Dai · Liqun Chen · Ke Bai · Junya Chen · Chang Liu · Ruiyi (Roy) Zhang · Georgiy Bobashev · Lawrence Carin

Wed Jun 12 06:30 PM -- 09:00 PM (PDT) @ Pacific Ballroom #10

The generative adversarial network (GAN) has recently received considerable attention as a model for data synthesis that requires no explicit specification of a likelihood function. There has been commensurate interest in leveraging likelihood estimates to improve GAN training. To enrich the understanding of this fast-growing yet almost exclusively heuristic-driven subject, we elucidate the theoretical roots of some of the empirical attempts to stabilize and improve GAN training through the introduction of likelihoods. We highlight new insights from the variational theory of diffusion processes to derive a likelihood-based regularizing scheme for GAN training, and present a novel approach to training GANs with an unnormalized distribution instead of empirical samples. To substantiate our claims, we provide experimental evidence showing how our theoretically inspired algorithms improve upon current practice.
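
As a rough illustration of the "unnormalized distribution" setting the abstract refers to: Langevin dynamics can draw samples from a density known only up to a normalizing constant, using only the score (the gradient of the log-density). Below is a minimal sketch of unadjusted Langevin dynamics on a toy one-dimensional Gaussian mixture; the target density, step size, and step count are illustrative assumptions, not the paper's algorithm or experimental setup.

```python
import numpy as np

def log_unnormalized_density(x):
    # Unnormalized log-density of a two-component 1-D Gaussian mixture
    # with modes at -2 and +2 (illustrative toy target, not from the paper).
    return np.logaddexp(-0.5 * (x - 2.0) ** 2, -0.5 * (x + 2.0) ** 2)

def score(x, eps=1e-5):
    # Score function: gradient of the log-density, here via central
    # finite differences. Note the normalizing constant cancels out.
    return (log_unnormalized_density(x + eps)
            - log_unnormalized_density(x - eps)) / (2.0 * eps)

def langevin_sample(n_steps=2000, step_size=1e-2, x0=0.0, rng=None):
    # Unadjusted Langevin update:
    #   x_{t+1} = x_t + (eta / 2) * score(x_t) + sqrt(eta) * N(0, 1)
    rng = np.random.default_rng() if rng is None else rng
    x = x0
    for _ in range(n_steps):
        x = (x + 0.5 * step_size * score(x)
             + np.sqrt(step_size) * rng.standard_normal())
    return x

# Draw independent chains and check rough moments of the sampled target.
samples = np.array([langevin_sample() for _ in range(500)])
print(f"sample mean: {samples.mean():+.3f}, sample std: {samples.std():.3f}")
```

In practice the score of an unnormalized target would be computed analytically or via automatic differentiation rather than finite differences, and the step size would typically be annealed over iterations rather than held fixed.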

Author Information

Chenyang Tao (Duke University)
Shuyang Dai (Duke University)
Liqun Chen (Duke University)
Ke Bai (Duke University)
Junya Chen (Duke University)
Chang Liu (Tsinghua University)
Ruiyi (Roy) Zhang (Duke University)
Georgiy Bobashev (RTI International)
Lawrence Carin (Duke University)
