Building on the success of deep learning, two prominent modern approaches to learning a probability model from data are Generative Adversarial Networks (GANs) and Variational AutoEncoders (VAEs). VAEs posit an explicit probability model for the data and compute a generative distribution by maximizing a variational lower bound on the log-likelihood function. GANs, by contrast, compute a generative model by minimizing a distance between the observed and generated probability distributions, without positing an explicit model for the observed data. The absence of an explicit probability model in GANs prohibits the computation of sample likelihoods within their framework and limits their use in statistical inference problems. In this work, we resolve this issue by constructing an explicit probability model that can be used to compute sample likelihood statistics in GANs. In particular, we prove that, under this probability model, a family of Wasserstein GANs with entropy regularization can be viewed as a generative model that maximizes a variational lower bound on average sample log-likelihoods, the very approach on which VAEs are based. This result establishes a principled connection between two modern generative models, namely GANs and VAEs. Beyond these theoretical results, we compute likelihood statistics for GANs trained on Gaussian, MNIST, SVHN, CIFAR-10, and LSUN datasets. Our numerical results are consistent with the proposed theory.
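The entropy-regularized Wasserstein distance at the heart of this connection can be illustrated with a minimal Sinkhorn computation between two empirical samples. The sketch below is illustrative only and is not the paper's training objective: it assumes uniform marginals, a squared-Euclidean cost, and a fixed regularization strength `reg` (all choices made here for exposition).

```python
import numpy as np

def entropic_wasserstein(x, y, reg=1.0, n_iters=200):
    """Entropy-regularized Wasserstein cost between two empirical samples
    x (n, d) and y (m, d), via plain Sinkhorn iterations (illustrative)."""
    n, m = len(x), len(y)
    # Pairwise squared-Euclidean cost matrix, shape (n, m).
    C = np.sum((x[:, None, :] - y[None, :, :]) ** 2, axis=-1)
    K = np.exp(-C / reg)                              # Gibbs kernel
    a = np.full(n, 1.0 / n)                           # uniform marginal on x
    b = np.full(m, 1.0 / m)                           # uniform marginal on y
    u, v = np.ones(n), np.ones(m)
    for _ in range(n_iters):                          # Sinkhorn fixed point
        u = a / (K @ v)
        v = b / (K.T @ u)
    P = u[:, None] * K * v[None, :]                   # entropic transport plan
    return np.sum(P * C)                              # transport cost under P

rng = np.random.default_rng(0)
real = rng.normal(0.0, 1.0, size=(64, 2))
# A "generator" whose samples sit far from the data incurs a larger cost
# than one matching the data distribution.
d_far = entropic_wasserstein(real, rng.normal(3.0, 1.0, size=(64, 2)))
d_near = entropic_wasserstein(real, rng.normal(0.0, 1.0, size=(64, 2)))
print(d_far > d_near)
```

Minimizing such an entropic transport cost over generator parameters is the family of objectives the paper relates to a variational lower bound on sample log-likelihoods; practical implementations typically use log-domain Sinkhorn updates for numerical stability.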
Author Information
Yogesh Balaji (University of Maryland)
Hamed Hassani (University of Pennsylvania)

I am an assistant professor in the Department of Electrical and Systems Engineering (as of July 2017). I hold a secondary appointment in the Department of Computer and Information Science. I am also a faculty affiliate of the Warren Center for Network and Data Sciences. Before joining Penn, I was a research fellow at the Simons Institute, UC Berkeley (program: Foundations of Machine Learning). Prior to that, I was a postdoctoral scholar and lecturer in the Institute for Machine Learning at ETH Zürich. I received my Ph.D. degree in Computer and Communication Sciences from EPFL.
Rama Chellappa (University of Maryland)
Soheil Feizi (University of Maryland)
Related Events (a corresponding poster, oral, or spotlight)
- 2019 Poster: Entropic GANs meet VAEs: A Statistical Approach to Compute Sample Likelihoods in GANs »
  Wed. Jun 12th, 01:30 -- 04:00 AM, Room: Pacific Ballroom #17
More from the Same Authors
- 2021 : Minimax Optimization: The Case of Convex-Submodular »
  Arman Adibi · Aryan Mokhtari · Hamed Hassani
- 2021 : Out-of-Distribution Robustness in Deep Learning Compression »
  Eric Lei · Hamed Hassani
- 2022 : Towards Better Understanding of Self-Supervised Representations »
  Neha Mukund Kalibhat · Kanika Narang · Hamed Firooz · Maziar Sanjabi · Soheil Feizi
- 2022 : Toward Certified Robustness Against Real-World Distribution Shifts »
  Haoze Wu · TERUHIRO TAGOMORI · Alex Robey · Fengjun Yang · Nikolai Matni · George J. Pappas · Hamed Hassani · Corina Pasareanu · Clark Barrett
- 2022 : Improved Certified Defenses against Data Poisoning with (Deterministic) Finite Aggregation »
  Wenxiao Wang · Alexander Levine · Soheil Feizi
- 2022 : Certifiably Robust Multi-Agent Reinforcement Learning against Adversarial Communication »
  Yanchao Sun · Ruijie Zheng · Parisa Hassanzadeh · Yongyuan Liang · Soheil Feizi · Sumitra Ganesh · Furong Huang
- 2023 : Text + Sketch: Image Compression at Ultra Low Rates »
  Eric Lei · Yigit Berkay Uslu · Hamed Hassani · Shirin Bidokhti
- 2023 : Adversarial Training Should Be Cast as a Non-Zero-Sum Game »
  Alex Robey · Fabian Latorre · George J. Pappas · Hamed Hassani · Volkan Cevher
- 2023 Poster: Fundamental Limits of Two-layer Autoencoders, and Achieving Them with Gradient Methods »
  Aleksandr Shevchenko · Kevin Kögler · Hamed Hassani · Marco Mondelli
- 2023 Poster: Run-off Election: Improved Provable Defense against Data Poisoning Attacks »
  Keivan Rezaei · Kiarash Banihashem · Atoosa Malemir Chegini · Soheil Feizi
- 2023 Oral: Fundamental Limits of Two-layer Autoencoders, and Achieving Them with Gradient Methods »
  Aleksandr Shevchenko · Kevin Kögler · Hamed Hassani · Marco Mondelli
- 2023 Poster: Identifying Interpretable Subspaces in Image Representations »
  Neha Mukund Kalibhat · Shweta Bhardwaj · C. Bayan Bruss · Hamed Firooz · Maziar Sanjabi · Soheil Feizi
- 2023 Poster: Text-To-Concept (and Back) via Cross-Model Alignment »
  Mazda Moayeri · Keivan Rezaei · Maziar Sanjabi · Soheil Feizi
- 2023 Poster: Demystifying Disagreement-on-the-Line in High Dimensions »
  Donghwan Lee · Behrad Moniri · Xinmeng Huang · Edgar Dobriban · Hamed Hassani
- 2022 : Panel discussion »
  Steffen Schneider · Aleksander Madry · Alexei Efros · Chelsea Finn · Soheil Feizi
- 2022 : Improved Certified Defenses against Data Poisoning with (Deterministic) Finite Aggregation »
  Wenxiao Wang · Alexander Levine · Soheil Feizi
- 2022 : Toward Efficient Robust Training against Union of Lp Threat Models »
  Gaurang Sriramanan · Maharshi Gor · Soheil Feizi
- 2022 Poster: Improved Certified Defenses against Data Poisoning with (Deterministic) Finite Aggregation »
  Wenxiao Wang · Alexander Levine · Soheil Feizi
- 2022 Poster: Probabilistically Robust Learning: Balancing Average- and Worst-case Performance »
  Alex Robey · Luiz F. O. Chamon · George J. Pappas · Hamed Hassani
- 2022 Poster: FOCUS: Familiar Objects in Common and Uncommon Settings »
  Priyatham Kattakinda · Soheil Feizi
- 2022 Spotlight: Probabilistically Robust Learning: Balancing Average- and Worst-case Performance »
  Alex Robey · Luiz F. O. Chamon · George J. Pappas · Hamed Hassani
- 2022 Spotlight: Improved Certified Defenses against Data Poisoning with (Deterministic) Finite Aggregation »
  Wenxiao Wang · Alexander Levine · Soheil Feizi
- 2022 Spotlight: FOCUS: Familiar Objects in Common and Uncommon Settings »
  Priyatham Kattakinda · Soheil Feizi
- 2021 : Minimax Optimization: The Case of Convex-Submodular »
  Hamed Hassani · Aryan Mokhtari · Arman Adibi
- 2021 : Contributed Talk #1 »
  Eric Lei · Hamed Hassani · Shirin Bidokhti
- 2021 : Invited Talk 6: Towards Understanding Foundations of Robust Learning »
  Soheil Feizi
- 2021 Poster: Exploiting Shared Representations for Personalized Federated Learning »
  Liam Collins · Hamed Hassani · Aryan Mokhtari · Sanjay Shakkottai
- 2021 Spotlight: Exploiting Shared Representations for Personalized Federated Learning »
  Liam Collins · Hamed Hassani · Aryan Mokhtari · Sanjay Shakkottai
- 2021 Poster: Improved, Deterministic Smoothing for L_1 Certified Robustness »
  Alexander Levine · Soheil Feizi
- 2021 Poster: Skew Orthogonal Convolutions »
  Sahil Singla · Soheil Feizi
- 2021 Spotlight: Skew Orthogonal Convolutions »
  Sahil Singla · Soheil Feizi
- 2021 Oral: Improved, Deterministic Smoothing for L_1 Certified Robustness »
  Alexander Levine · Soheil Feizi
- 2020 Poster: Curse of Dimensionality on Randomized Smoothing for Certifiable Robustness »
  Aounon Kumar · Alexander Levine · Tom Goldstein · Soheil Feizi
- 2020 Poster: Quantized Decentralized Stochastic Learning over Directed Graphs »
  Hossein Taheri · Aryan Mokhtari · Hamed Hassani · Ramtin Pedarsani
- 2020 Poster: Second-Order Provable Defenses against Adversarial Attacks »
  Sahil Singla · Soheil Feizi
- 2020 Poster: On Second-Order Group Influence Functions for Black-Box Predictions »
  Samyadeep Basu · Xuchen You · Soheil Feizi
- 2020 Tutorial: Submodular Optimization: From Discrete to Continuous and Back »
  Hamed Hassani · Amin Karbasi
- 2019 Poster: Understanding Impacts of High-Order Loss Approximations and Features in Deep Learning Interpretation »
  Sahil Singla · Eric Wallace · Shi Feng · Soheil Feizi
- 2019 Oral: Understanding Impacts of High-Order Loss Approximations and Features in Deep Learning Interpretation »
  Sahil Singla · Eric Wallace · Shi Feng · Soheil Feizi
- 2019 Poster: Hessian Aided Policy Gradient »
  Zebang Shen · Alejandro Ribeiro · Hamed Hassani · Hui Qian · Chao Mi
- 2019 Oral: Hessian Aided Policy Gradient »
  Zebang Shen · Alejandro Ribeiro · Hamed Hassani · Hui Qian · Chao Mi
- 2018 Poster: Decentralized Submodular Maximization: Bridging Discrete and Continuous Settings »
  Aryan Mokhtari · Hamed Hassani · Amin Karbasi
- 2018 Oral: Decentralized Submodular Maximization: Bridging Discrete and Continuous Settings »
  Aryan Mokhtari · Hamed Hassani · Amin Karbasi