Training normalizing flow generative models can be challenging due to the need to calculate computationally expensive determinants of Jacobians. This paper studies the likelihood-free training of flows and proposes the energy objective, an alternative sample-based loss based on proper scoring rules. The energy objective is determinant-free and supports flexible model architectures that are not easily compatible with maximum likelihood training, including semi-autoregressive energy flows, a novel model family that interpolates between fully autoregressive and non-autoregressive models. Energy flows feature competitive sample quality, posterior inference, and generation speed relative to likelihood-based flows; this performance is decorrelated from the quality of log-likelihood estimates, which are generally very poor. Our findings question the use of maximum likelihood as an objective or a metric, and contribute to a scientific study of its role in generative modeling. Code is available at https://github.com/ps789/SAEF.
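The "energy objective" in the abstract is a proper scoring rule computed purely from samples, so no Jacobian determinant is needed. As a rough illustration only (the function name, `beta` parameter, and plain-NumPy estimator below are illustrative choices, not the paper's actual implementation), a generalized energy-distance loss between a data batch and a batch of model samples can be sketched as:

```python
import numpy as np

def energy_loss(x, y, beta=1.0):
    """Sample-based generalized energy distance between a data batch x
    and a model-sample batch y; determinant-free, unlike max likelihood.

    x: (n, d) array of data samples; y: (m, d) array of model samples.
    The score is strictly proper for 0 < beta < 2.
    """
    def mean_pdist(a, b):
        # mean of ||a_i - b_j||^beta over all pairs
        d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
        return np.mean(d ** beta)

    # 2 E||X - Y||^b  -  E||Y - Y'||^b  -  E||X - X'||^b
    return 2 * mean_pdist(x, y) - mean_pdist(y, y) - mean_pdist(x, x)
```

In practice the loss would be computed on minibatches of reparameterized model samples and minimized by gradient descent; this estimator is zero when the two batches coincide and positive when the distributions differ.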
Author Information
Phillip Si (Cornell University)
Zeyi Chen (Tsinghua University)
Subham S Sahoo (Indian Institute of Technology)
I'm currently an undergraduate student at the Indian Institute of Technology Kharagpur, pursuing a major in Electrical Engineering and a minor in Computer Science.
Yair Schiff (Department of Computer Science, Cornell University)

I'm a second-year PhD student in the Computer Science department at Cornell University. I have also worked as a software engineer at IBM and collaborated with the Trusted AI department in IBM Research. Prior to joining IBM, I completed an MS in Computer Science at the Courant Institute at NYU and a BA in Economics at the University of Pennsylvania.
Volodymyr Kuleshov (Cornell Tech)
More from the Same Authors
- 2023 : Calibrated Propensities for Causal Effect Estimation »
  Shachi Deshpande · Volodymyr Kuleshov
- 2023 : Regularized Data Programming with Automated Bayesian Prior Selection »
  Jacqueline Maasch · Hao Zhang · Qian Yang · Fei Wang · Volodymyr Kuleshov
- 2023 Poster: InfoDiffusion: Representation Learning Using Information Maximizing Diffusion Models »
  Yingheng Wang · Yair Schiff · Aaron Gokaslan · Weishen Pan · Fei Wang · Chris De Sa · Volodymyr Kuleshov
- 2022 Poster: Calibrated and Sharp Uncertainties in Deep Learning via Density Estimation »
  Volodymyr Kuleshov · Shachi Deshpande
- 2022 Spotlight: Calibrated and Sharp Uncertainties in Deep Learning via Density Estimation »
  Volodymyr Kuleshov · Shachi Deshpande
- 2019 : Networking Lunch (provided) + Poster Session »
  Abraham Stanway · Alex Robson · Aneesh Rangnekar · Ashesh Chattopadhyay · Ashley Pilipiszyn · Benjamin LeRoy · Bolong Cheng · Ce Zhang · Chaopeng Shen · Christian Schroeder · Christian Clough · Clement DUHART · Clement Fung · Cozmin Ududec · Dali Wang · David Dao · di wu · Dimitrios Giannakis · Dino Sejdinovic · Doina Precup · Duncan Watson-Parris · Gege Wen · George Chen · Gopal Erinjippurath · Haifeng Li · Han Zou · Herke van Hoof · Hillary A Scannell · Hiroshi Mamitsuka · Hongbao Zhang · Jaegul Choo · James Wang · James Requeima · Jessica Hwang · Jinfan Xu · Johan Mathe · Jonathan Binas · Joonseok Lee · Kalai Ramea · Kate Duffy · Kevin McCloskey · Kris Sankaran · Lester Mackey · Letif Mones · Loubna Benabbou · Lynn Kaack · Matthew Hoffman · Mayur Mudigonda · Mehrdad Mahdavi · Michael McCourt · Mingchao Jiang · Mohammad Mahdi Kamani · Neel Guha · Niccolo Dalmasso · Nick Pawlowski · Nikola Milojevic-Dupont · Paulo Orenstein · Pedram Hassanzadeh · Pekka Marttinen · Ramesh Nair · Sadegh Farhang · Samuel Kaski · Sandeep Manjanna · Sasha Luccioni · Shuby Deshpande · Soo Kim · Soukayna Mouatadid · Sunghyun Park · Tao Lin · Telmo Felgueira · Thomas Hornigold · Tianle Yuan · Tom Beucler · Tracy Cui · Volodymyr Kuleshov · Wei Yu · yang song · Ydo Wexler · Yoshua Bengio · Zhecheng Wang · Zhuangfang Yi · Zouheir Malki
- 2019 : Towards a Sustainable Food Supply Chain Powered by Artificial Intelligence »
  Volodymyr Kuleshov
- 2019 Poster: Calibrated Model-Based Deep Reinforcement Learning »
  Ali Malik · Volodymyr Kuleshov · Jiaming Song · Danny Nemer · Harlan Seymour · Stefano Ermon
- 2019 Oral: Calibrated Model-Based Deep Reinforcement Learning »
  Ali Malik · Volodymyr Kuleshov · Jiaming Song · Danny Nemer · Harlan Seymour · Stefano Ermon
- 2018 Poster: Accurate Uncertainties for Deep Learning Using Calibrated Regression »
  Volodymyr Kuleshov · Nathan Fenner · Stefano Ermon
- 2018 Oral: Accurate Uncertainties for Deep Learning Using Calibrated Regression »
  Volodymyr Kuleshov · Nathan Fenner · Stefano Ermon