Normalizing flows are explicit likelihood models that use invertible neural networks to construct flexible probability distributions over high-dimensional data. Compared to other generative models, their main advantage is that they offer exact and efficient likelihood computation and data generation. Since their recent introduction, flow-based models have seen a significant resurgence of interest in the machine learning community. As a result, powerful flow-based models have been developed, with successes in density estimation, variational inference, and generative modeling of images, audio, and video.
This workshop is the second iteration of the ICML 2019 workshop on Invertible Neural Networks and Normalizing Flows. While the main goal of last year's workshop was to make flow-based models more accessible to the general machine learning community, we believe that, as the field moves forward, there is now a need to consolidate recent progress and connect ideas from related fields. In light of the interpretation of latent variable models and autoregressive models as flows, this year we expand the scope of the workshop and consider likelihood-based models more broadly, including flow-based models, latent variable models, and autoregressive models. We encourage researchers to use these models in conjunction, exploiting their complementary benefits, and to work together to resolve common issues of likelihood-based methods, such as mis-calibrated out-of-distribution uncertainty.
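To make the exact-likelihood property above concrete, here is a minimal NumPy sketch (our illustration, not part of any workshop material) of a single elementwise affine flow: density evaluation uses the change-of-variables formula and sampling is one forward pass. The parameters are placeholders and untrained.

```python
# Minimal sketch: exact likelihood and sampling with one affine flow layer.
import numpy as np

rng = np.random.default_rng(0)
d = 2                                   # data dimensionality
log_scale = rng.normal(size=d) * 0.1    # flow parameters (illustrative, untrained)
shift = rng.normal(size=d) * 0.1

def forward(z):
    """Map base samples z to data space: x = z * exp(log_scale) + shift."""
    return z * np.exp(log_scale) + shift

def inverse(x):
    """Invert the flow: z = (x - shift) * exp(-log_scale)."""
    return (x - shift) * np.exp(-log_scale)

def log_prob(x):
    """Exact log-density via change of variables:
    log p(x) = log N(f^{-1}(x); 0, I) + log |det d f^{-1}/dx|."""
    z = inverse(x)
    log_base = -0.5 * np.sum(z**2 + np.log(2 * np.pi), axis=-1)
    log_det_inv = -np.sum(log_scale)    # Jacobian of the inverse map is diagonal
    return log_base + log_det_inv

# Sampling is one forward pass; density evaluation is one inverse pass.
x = forward(rng.normal(size=(5, d)))
print(log_prob(x))
```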
Sat 2:25 a.m. - 2:30 a.m. | Opening remarks (Introduction)
Sat 2:30 a.m. - 3:05 a.m. | Invited talk 1: Unifying VAEs and Flows (talk) | Max Welling
VAEs and flows are two of the most popular methods for density estimation (well, except GANs I guess, but never mind... 😱). In this work we argue that they are really two sides of the same coin. A flow deterministically transforms an input density through an invertible transformation into a target density; if the transformation changes a volume element, we pick up a log-Jacobian term. After decomposing the ELBO in the only way that was not yet considered in the literature, we find that the log-Jacobian corresponds to log[p(x|z)/q(z|x)] of a VAE, where the maps q and p are now stochastic. This suggests a third possibility that bridges the gap between the two: a surjective map which is deterministic in one direction and probabilistic in the reverse direction. We find that these ideas unify many methods in the literature, such as dequantization and augmented flows, and we also add a few new methods of our own based on our SurVAE Flows framework. If time permits, I will also say a few words on a new type of flow based on the exponential map, which is trivially invertible and adds a new tool to the invertible-flows toolbox. Joint work with Didrik Nielsen, Priyank Jaini, and Emiel Hoogeboom.
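As a rough reading of the correspondence claimed in the abstract (our reconstruction, not the speaker's slides): for a deterministic, invertible flow the change-of-variables term occupies the slot of the log[p(x|z)/q(z|x)] term in a VAE's ELBO.

```latex
\text{Flow: } \log p(x) \;=\; \log p\!\left(f^{-1}(x)\right) \;+\; \log\left|\det \frac{\partial f^{-1}(x)}{\partial x}\right|
\qquad
\text{VAE: } \log p(x) \;\ge\; \mathbb{E}_{q(z\mid x)}\!\left[\log p(z) \;+\; \log \frac{p(x\mid z)}{q(z\mid x)}\right]
```

Comparing the two term by term, the log-Jacobian plays the role of log[p(x|z)/q(z|x)], and the inequality becomes an equality when the stochastic encoder/decoder pair collapses onto the deterministic bijection f.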
Sat 3:05 a.m. - 3:10 a.m. | Q&A with Max Welling (Q&A)
Sat 3:10 a.m. - 3:15 a.m. | Spotlight talk: Neural Manifold Ordinary Differential Equations (talk) | Invertible Workshop INNF
Sat 3:15 a.m. - 3:20 a.m. | Spotlight talk: The Convolution Exponential (talk) | Invertible Workshop INNF
Sat 3:20 a.m. - 3:25 a.m. | Spotlight talk: WaveNODE: A Continuous Normalizing Flow for Speech Synthesis (talk) | Invertible Workshop INNF
Sat 3:25 a.m. - 3:30 a.m. | Spotlight talk: Neural Ordinary Differential Equations on Manifolds (talk) | Invertible Workshop INNF
Sat 3:30 a.m. - 4:10 a.m. | Poster session 1 (poster)
Sat 4:10 a.m. - 4:35 a.m. | Invited talk 2: Detecting Distribution Shift with Deep Generative Models (talk) | Eric Nalisnick
Detecting distribution shift is crucial for ensuring the safety and integrity of autonomous systems and computational pipelines. Recent advances in deep generative models (DGMs) make them attractive for this use case. However, their application is not straightforward: DGMs fail to detect distribution shift when using naive likelihood thresholds. In this talk, I synthesize the recent literature on using DGMs for out-of-distribution detection. I categorize techniques into two broad classes: model-selection and omnibus methods. I close the talk by arguing that many real-world, safety-critical scenarios require the latter approach.
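For readers unfamiliar with the "naive likelihood threshold" baseline the talk argues against, here is a hedged sketch (our illustration; `model_log_prob` stands in for any trained DGM's log-density function): flag an input as out-of-distribution when its log-likelihood falls below a threshold calibrated on held-out in-distribution data.

```python
# Illustrative only: the naive likelihood-threshold OOD detector discussed above.
# `model_log_prob` is a placeholder for a trained deep generative model's log-density.
import numpy as np

def fit_threshold(model_log_prob, x_val, quantile=0.05):
    """Calibrate a threshold so ~5% of held-out in-distribution data is rejected."""
    return np.quantile(model_log_prob(x_val), quantile)

def is_out_of_distribution(model_log_prob, x, threshold):
    """Flag inputs whose log-likelihood falls below the calibrated threshold."""
    return model_log_prob(x) < threshold

# The talk's point: DGMs can assign *higher* likelihood to some OOD inputs than to
# in-distribution data, so this detector can fail badly despite looking reasonable.
```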
Sat 4:35 a.m. - 4:40 a.m. | Q&A with Eric Nalisnick (Q&A)
Sat 4:40 a.m. - 5:05 a.m. | Invited talk 3: Representational limitations of invertible models (talk) | Emilien Dupont
This talk will review recent work on the representational limitations of invertible models, both in the context of neural ODEs and normalizing flows. In particular, it has been shown that invertible neural networks are topology preserving and can therefore not map between spaces with different topologies. This has both theoretical and numerical consequences. In the context of normalizing flows, for example, the source and target densities often have different topologies, leading to numerically ill-posed models and training. On top of reviewing the theoretical and practical aspects of this, the talk will also cover several recent models, methods and ideas for alleviating some of these limitations.
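A standard worked example of the topology obstruction (our illustration, not necessarily the one used in the talk): a flow with a Gaussian base cannot exactly represent a density supported on an annulus.

```latex
\text{Base: } p_Z = \mathcal{N}(0, I_2), \text{ with support } \mathbb{R}^2 \text{ (simply connected).}\\
\text{Target: } p_X \text{ uniform on the annulus } A = \{x \in \mathbb{R}^2 : 1 \le \lVert x \rVert \le 2\} \text{ (not simply connected).}\\
\text{A flow } f \text{ is a homeomorphism, so } f(\mathbb{R}^2) \text{ is simply connected and cannot equal } A.\\
\text{In practice the model leaks mass into the hole or drives } \bigl|\det \partial f / \partial z\bigr| \text{ toward } 0 \text{ or } \infty.
```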
Sat 5:05 a.m. - 5:10 a.m. | Q&A with Emilien Dupont (Q&A)
Sat 5:10 a.m. - 5:15 a.m. | Spotlight talk: You say Normalizing Flows I see Bayesian Networks (talk) | Invertible Workshop INNF
Sat 5:15 a.m. - 5:20 a.m. | Spotlight talk: Variational Inference with Continuously-Indexed Normalizing Flows (talk) | Invertible Workshop INNF
Sat 5:20 a.m. - 5:25 a.m. | Spotlight talk: NOTAGAN: Flows for the data manifold (talk) | Invertible Workshop INNF
Sat 5:25 a.m. - 5:30 a.m. | Spotlight talk: Ordering Dimensions with Nested Dropout Normalizing Flows (talk) | Invertible Workshop INNF
Sat 5:30 a.m. - 5:35 a.m. | Spotlight talk: The Lipschitz Constant of Self-Attention (talk) | Invertible Workshop INNF
Sat 5:35 a.m. - 5:40 a.m. | Spotlight talk: Autoregressive flow-based causal discovery and inference (talk) | Invertible Workshop INNF
Sat 5:40 a.m. - 7:00 a.m. | Lunch break
Sat 7:00 a.m. - 7:25 a.m. | Invited talk 4: Divergence Measures in Variational Inference and How to Choose Them (talk) | Cheng Zhang
Variational inference (VI) plays an essential role in approximate Bayesian inference due to its computational efficiency and broad applicability. Crucial to the performance of VI is the selection of the associated divergence measure, as VI approximates the intractable distribution by minimizing this divergence. In this talk, I will first discuss variational inference with different divergence measures. Then, I will present a new meta-learning algorithm to learn the divergence metric suited for the task of interest, automating the design of VI methods.
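As background for "different divergence measures" (our summary, not the speaker's notation): the standard ELBO corresponds to minimizing the exclusive KL divergence, while Rényi's alpha-divergences, for example, give a family of variational bounds that interpolate between mode-seeking and mass-covering behaviour.

```latex
\text{KL-based ELBO: } \log p(x) \;\ge\; \mathbb{E}_{q(z)}\!\left[\log \frac{p(x, z)}{q(z)}\right]
\;=\; \log p(x) - \mathrm{KL}\!\left(q(z)\,\|\,p(z \mid x)\right)\\[4pt]
\text{R\'enyi bound } (\alpha \ne 1)\text{: } \mathcal{L}_\alpha
\;=\; \frac{1}{1-\alpha}\,\log \mathbb{E}_{q(z)}\!\left[\left(\frac{p(x, z)}{q(z)}\right)^{1-\alpha}\right],
\qquad \lim_{\alpha \to 1} \mathcal{L}_\alpha = \text{ELBO}.
```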
Sat 7:25 a.m. - 7:30 a.m. | Q&A with Cheng Zhang (Q&A)
Sat 7:30 a.m. - 7:55 a.m. | Invited talk 5: Adversarial Learning of Prescribed Generative Models (talk) | Adji Bousso Dieng
Parameterizing latent variable models with deep neural networks has become a major approach to probabilistic modeling. The usual way of fitting these deep latent variable models is to use maximum likelihood. This gives rise to variational autoencoders (VAEs). They jointly learn an approximate posterior distribution over the latent variables and the model parameters by maximizing a lower bound to the log-marginal likelihood of the data. In this talk, I will present an alternative approach to fitting parameters of deep latent-variable models. The idea is to marry adversarial learning and entropy regularization. The family of models fit with this procedure is called Prescribed Generative Adversarial Networks (PresGANs). I will describe PresGANs and discuss how they generate samples with high perceptual quality while avoiding the ubiquitous mode collapse issue of GANs.
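A schematic of the "adversarial learning plus entropy regularization" recipe described above (our paraphrase of the abstract, not the exact PresGAN objective):

```latex
\min_{\theta}\; \mathcal{L}_{\mathrm{adv}}(\theta) \;-\; \lambda\, \mathcal{H}\!\left(p_{\theta}\right), \qquad \lambda > 0,
```

where the first term is a standard GAN generator loss for the prescribed (noise-perturbed) generator distribution p_θ and the entropy term H(p_θ) penalizes collapsing onto a few modes.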
Sat 7:55 a.m. - 8:00 a.m. | Q&A with Adji Bousso Dieng (Q&A)
Sat 8:00 a.m. - 8:25 a.m. | Contributed talk: Modeling Continuous Stochastic Processes with Dynamic Normalizing Flows (talk) | Invertible Workshop INNF
Sat 8:25 a.m. - 8:30 a.m. | Q&A with authors of contributed talk (Q&A)
Sat 8:30 a.m. - 8:55 a.m. | Invited talk 6: Likelihood Models for Science (talk) | Kyle Cranmer
Statistical inference is at the heart of the scientific method, and the likelihood function is at the heart of statistical inference. However, many scientific theories are formulated as mechanistic models that do not admit a tractable likelihood. While traditional approaches to confronting this problem may seem somewhat naive, they reveal numerous other considerations in the scientific workflow beyond the approximation error of the likelihood. I will highlight how normalizing flows and other techniques from machine learning are impacting scientific practice, discuss current challenges for state-of-the-art methods, and identify promising new directions in this line of research.
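The basic pattern behind surrogate likelihoods for simulators, sketched below (our illustration). Everything here is a stand-in: the "simulator" is a toy Gaussian, and a conditional Gaussian fit takes the place of the conditional normalizing flow q(x | θ) one would actually train.

```python
# Minimal sketch of simulation-based inference with a learned surrogate likelihood.
import numpy as np

rng = np.random.default_rng(0)

def simulator(theta, n):
    """Mechanistic model with an intractable likelihood (toy stand-in)."""
    return theta + rng.normal(scale=0.5, size=n)

# 1) Generate a training set of (theta, x) pairs from the prior and the simulator.
thetas = rng.uniform(-2.0, 2.0, size=2000)
xs = np.array([simulator(t, 1)[0] for t in thetas])

# 2) "Train" the surrogate: here a conditional Gaussian with a linear mean.
#    A conditional flow would replace this step with maximum-likelihood training.
slope, intercept = np.polyfit(thetas, xs, deg=1)
resid_std = np.std(xs - (slope * thetas + intercept))

def surrogate_log_lik(x_obs, theta):
    mu = slope * theta + intercept
    return -0.5 * ((x_obs - mu) / resid_std) ** 2 - np.log(resid_std * np.sqrt(2 * np.pi))

# 3) Use the surrogate in place of the intractable likelihood, e.g. to scan an
#    (unnormalized, flat-prior) posterior over a grid of parameter values.
x_obs = simulator(theta=0.7, n=1)[0]
grid = np.linspace(-2, 2, 81)
log_post = np.array([surrogate_log_lik(x_obs, t) for t in grid])
print("MAP estimate on the grid:", grid[np.argmax(log_post)])
```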
Sat 8:55 a.m. - 9:00 a.m. | Q&A with Kyle Cranmer (Q&A)
Sat 9:00 a.m. - 9:25 a.m. | Invited talk 7: Flows in Probabilistic Modeling & Inference (talk) | Martin Jankowiak
I give an overview of the many uses of flows in probabilistic modeling and inference. I focus on settings in which flows are used to speed up or otherwise improve inference (i.e. settings in which flows are not part of the model specification), including applications to Optimal Experimental Design, Hamiltonian Monte Carlo, and Likelihood-Free Inference. I conclude with a brief discussion of how flows enter into probabilistic programming language (PPL) systems and suggest research directions that are important for improved PPL integration.
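One way flows speed up MCMC, sketched here (our illustration): reparameterize the target through an invertible map so the sampler runs in a better-conditioned base space, then push samples back through the map. A fixed affine map stands in for a trained flow, and random-walk Metropolis stands in for HMC; both are simplifying assumptions.

```python
# Illustrative "transport-reparameterized" MCMC: sample in the flow's base space.
import numpy as np

rng = np.random.default_rng(0)

def log_target(x):
    """Poorly scaled 2D Gaussian target (std 10 and 0.1)."""
    return -0.5 * (x[0] ** 2 / 100.0 + x[1] ** 2 / 0.01)

scale = np.array([10.0, 0.1])            # stand-in "trained flow": x = f(z) = scale * z

def log_target_base(z):
    # log p(f(z)) + log|det df/dz|; the constant log-det does not affect MCMC.
    return log_target(scale * z) + np.sum(np.log(scale))

def rw_metropolis(logp, steps=5000, step_size=0.5):
    z = np.zeros(2)
    samples = []
    for _ in range(steps):
        prop = z + step_size * rng.normal(size=2)
        if np.log(rng.uniform()) < logp(prop) - logp(z):
            z = prop
        samples.append(z)
    return np.array(samples)

z_samples = rw_metropolis(log_target_base)  # well-mixed: warped target is ~isotropic
x_samples = z_samples * scale                # map base-space samples back to x
print("posterior std estimates:", x_samples.std(axis=0))
```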
Sat 9:25 a.m. - 9:30 a.m. | Q&A with Martin Jankowiak (Q&A)
Sat 9:30 a.m. - 9:55 a.m. | Contributed talk: Learning normalizing flows from Entropy-Kantorovich potentials (talk) | Invertible Workshop INNF
Sat 9:55 a.m. - 10:00 a.m. | Q&A with authors of contributed talk (Q&A)
Sat 10:00 a.m. - 10:40 a.m. | Poster session 2 (poster)
- Poster presentation: Improving Sample Quality by Training and Sampling from Latent Energy (talk) | Invertible Workshop INNF
- Poster presentation: Exhaustive Neural Importance Sampling applied to Monte Carlo event generation (talk) | Invertible Workshop INNF
- Poster presentation: Stochastic Normalizing Flows (talk) | Invertible Workshop INNF
- Poster presentation: Quasi-Autoregressive Residual (QuAR) Flows (talk) | Invertible Workshop INNF
- Poster presentation: Time Series Decomposition with Slow Flows (talk) | Invertible Workshop INNF
- Poster presentation: Faster Orthogonal Parameterization with Householder Matrices (talk) | Invertible Workshop INNF
- Poster presentation: The Power Spherical distribution (talk) | Invertible Workshop INNF
- Poster presentation: Woodbury Transformations for Deep Generative Flows (talk) | Invertible Workshop INNF
- Poster presentation: Super-resolution Variational Auto-Encoders (talk) | Invertible Workshop INNF
- Poster presentation: Conditional Normalizing Flows for Low-Dose Computed Tomography Image Reconstruction (talk) | Invertible Workshop INNF
- Poster presentation: Why Normalizing Flows Fail to Detect Out-of-Distribution Data (talk) | Invertible Workshop INNF
- Poster presentation: Density Deconvolution with Normalizing Flows (talk) | Invertible Workshop INNF
- Poster presentation: Consistency Regularization for Variational Auto-encoders (talk) | Invertible Workshop INNF
- Poster presentation: Normalizing Flows with Multi-Scale Autoregressive Priors (talk) | Invertible Workshop INNF
- Poster presentation: Metropolized Flow: from Invertible Flow to MCMC (talk) | Invertible Workshop INNF
- Poster presentation: Robust model training and generalisation with Studentising flows (talk) | Invertible Workshop INNF
- Poster presentation: Scaling RBMs to High Dimensional Data with Invertible Neural Networks (talk) | Invertible Workshop INNF
- Poster presentation: On the Variational Posterior of Dirichlet Process Deep Latent Gaussian Mixture Models (talk) | Invertible Workshop INNF
- Poster presentation: A Fourier State Space Model for Bayesian ODE Filters (talk) | Invertible Workshop INNF
- Poster presentation: MoFlow: An Invertible Flow Model for Molecular Graph Generation (talk) | Invertible Workshop INNF
- Poster presentation: TraDE: Transformers for Density Estimation (talk) | Invertible Workshop INNF
- Poster presentation: WeakFlow: Iterative Invertible Distribution Transformations via Weak Destructive Flows (talk) | Invertible Workshop INNF
- Poster presentation: Flow-based SVDD for anomaly detection (talk) | Invertible Workshop INNF
- Poster presentation: Black-box Adversarial Example Generation with Normalizing Flows (talk) | Invertible Workshop INNF
- Poster presentation: Sequential Autoregressive Flow-Based Policies (talk) | Invertible Workshop INNF
- Poster presentation: Relative gradient optimization of the Jacobian term in unsupervised deep learning (talk) | Invertible Workshop INNF
- Poster presentation: Deep Generative Video Compression with Temporal Autoregressive Transforms (talk) | Invertible Workshop INNF
- Poster presentation: Normalizing Flows Across Dimensions (talk) | Invertible Workshop INNF
- Poster presentation: Differentially Private Normalizing Flows for Privacy-Preserving Density Estimation (talk) | Invertible Workshop INNF
- Poster presentation: Model-Agnostic Searches for New Physics with Normalizing Flows (talk) | Invertible Workshop INNF
- Link: Slack (link) — to join the Slack workspace, please use https://join.slack.com/t/innf2020/shared_invite/zt-fp5gvn7l-1XAZrKGL1xtIP03Fsd5ZQQ
- Link: Poster presentations and Zoom links (link) — authors' availability and Zoom links: https://docs.google.com/spreadsheets/d/1HJa8F0bMSlM2qQ9WNCJ3LO26hVmndhrFzcgDZPz_VtQ/edit?usp=sharing
Author Information
Chin-Wei Huang (MILA)
David Krueger (Université de Montréal)
Rianne Van den Berg (University of Amsterdam)
George Papamakarios (DeepMind)
Chris Cremer (University of Toronto)
Ricky T. Q. Chen (U of Toronto)
Danilo J. Rezende (DeepMind)

Danilo is a Senior Staff Research Scientist at Google DeepMind, where he works on probabilistic machine reasoning and learning algorithms. He has a BA in Physics and MSc in Theoretical Physics from Ecole Polytechnique (Palaiseau – France) and from the Institute of Theoretical Physics (SP – Brazil) and a Ph.D. in Computational Neuroscience at Ecole Polytechnique Federale de Lausanne, EPFL (Lausanne – Switzerland). His research focuses on scalable inference methods, generative models of complex data (such as images and video), applied probability, causal reasoning and unsupervised learning for decision-making.