Keywords: Generative Models, Invertible Neural Networks, Normalizing Flows, Likelihood-based Models, Latent Variable Models, Autoregressive Models
Normalizing flows are explicit likelihood models using invertible neural networks to construct flexible probability distributions of high-dimensional data. Compared to other generative models, the main advantage of normalizing flows is that they can offer exact and efficient likelihood computation and data generation. Since their recent introduction, flow-based models have seen a significant resurgence of interest in the machine learning community. As a result, powerful flow-based models have been developed, with successes in density estimation, variational inference, and generative modeling of images, audio and video.
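As background for the program below (a standard textbook identity, not a result from any particular talk): the exact likelihood computation of a flow rests on the change-of-variables formula. If an invertible network f maps data x to latents z = f(x) with a simple base density p_Z, the model density is

\log p_X(x) = \log p_Z(f(x)) + \log \left| \det \frac{\partial f(x)}{\partial x} \right|,

so maximum-likelihood training reduces to maximizing this expression, provided the Jacobian determinant of f is tractable.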
This workshop is the 2nd iteration of the ICML 2019 workshop on Invertible Neural Networks and Normalizing Flows. While the main goal of last year's workshop was to make flow-based models more accessible to the general machine learning community, we believe that, as the field moves forward, there is now a need to consolidate recent progress and connect ideas from related fields. In light of the interpretation of latent variable models and autoregressive models as flows, this year we expand the scope of the workshop and consider likelihood-based models more broadly, including flow-based models, latent variable models and autoregressive models. We encourage researchers to use these models in conjunction to exploit their complementary benefits, and to work together to resolve common issues of likelihood-based methods, such as mis-calibration of out-of-distribution uncertainty.
Sat 2:25 a.m. - 2:30 a.m. | Opening remarks (Introduction)
Sat 2:30 a.m. - 3:05 a.m. | Invited talk 1: Unifying VAEs and Flows (talk)
VAEs and Flows are two of the most popular methods for density estimation (well, except GANs I guess, but never mind... 😱). In this work we will argue they are really two sides of the same coin. A flow is based on deterministically transforming an input density through an invertible transformation to a target density. If the transformation changes a volume element we pick up a log-Jacobian term. After decomposing the ELBO in the only way that was not yet considered in the literature, we find that the log-Jacobian corresponds to log[p(x|z)/q(z|x)] of a VAE, where the maps q and p are now stochastic. This suggests a third possibility that bridges the gap between the two: a map which is deterministic and surjective in one direction, and probabilistic in the reverse direction. We find that these ideas unify many methods in the literature, such as dequantization and augmented flows, and we also add a few new methods of our own based on our SurVAE Flows framework. If time permits, I will also say a few words on a new type of flow based on the exponential map, which is trivially invertible and adds a new tool to the invertible flows toolbox. Joint work with Didrik Nielsen, Priyank Jaini, and Emiel Hoogeboom.
Max Welling
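As a rough sketch of the correspondence the abstract alludes to (the talk makes it precise; this only writes out the standard ELBO): a VAE maximizes

\mathcal{L}(x) = \mathbb{E}_{q(z|x)}\left[ \log p(x|z) + \log p(z) - \log q(z|x) \right],

and when the encoder and decoder collapse to a deterministic invertible map z = f(x), the ratio term \log p(x|z) - \log q(z|x) plays the role of the log-Jacobian \log \left| \det \partial f(x)/\partial x \right|, recovering the flow objective given in the overview above.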
Sat 3:05 a.m. - 3:10 a.m. | Q&A with Max Welling (Q&A)
Sat 3:10 a.m. - 3:15 a.m. | Spotlight talk: Neural Manifold Ordinary Differential Equations (talk) | Invertible Workshop INNF
Sat 3:15 a.m. - 3:20 a.m. | Spotlight talk: The Convolution Exponential (talk) | Invertible Workshop INNF
Sat 3:20 a.m. - 3:25 a.m. | Spotlight talk: WaveNODE: A Continuous Normalizing Flow for Speech Synthesis (talk) | Invertible Workshop INNF
Sat 3:25 a.m. - 3:30 a.m. | Spotlight talk: Neural Ordinary Differential Equations on Manifolds (talk) | Invertible Workshop INNF
Sat 3:30 a.m. - 4:10 a.m. | Poster session 1 (poster)
Sat 4:10 a.m. - 4:35 a.m. | Invited talk 2: Detecting Distribution Shift with Deep Generative Models (talk)
Detecting distribution shift is crucial for ensuring the safety and integrity of autonomous systems and computational pipelines. Recent advances in deep generative models (DGMs) make them attractive for this use case. However, their application is not straightforward: DGMs fail to detect distribution shift when using naive likelihood thresholds. In this talk, I synthesize the recent literature on using DGMs for out-of-distribution detection. I categorize techniques into two broad classes: model-selection and omnibus methods. I close the talk by arguing that many real-world, safety-critical scenarios require the latter approach.
Eric Nalisnick
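As a concrete illustration of the "naive likelihood thresholds" the abstract refers to, here is a minimal sketch of that baseline (illustrative only, not the speaker's method; the function name and the assumption that the fitted model exposes per-example log-likelihoods are ours):

import numpy as np

def likelihood_threshold_ood(log_probs_train, log_probs_test, alpha=0.05):
    """Naive baseline: flag a test point as out-of-distribution if its
    log-likelihood under the fitted generative model falls below the
    alpha-quantile of the training-set log-likelihoods."""
    threshold = np.quantile(log_probs_train, alpha)
    return log_probs_test < threshold  # boolean array; True = flagged as OOD

As the talk (and the poster "Why Normalizing Flows Fail to Detect Out-of-Distribution Data" below) discusses, this baseline can fail badly, since deep generative models sometimes assign higher likelihood to out-of-distribution inputs than to in-distribution ones.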
Sat 4:35 a.m. - 4:40 a.m. | Q&A with Eric Nalisnick (Q&A)
Sat 4:40 a.m. - 5:05 a.m. | Invited talk 3: Representational limitations of invertible models (talk)
This talk will review recent work on the representational limitations of invertible models, both in the context of neural ODEs and normalizing flows. In particular, it has been shown that invertible neural networks are topology preserving and can therefore not map between spaces with different topologies. This has both theoretical and numerical consequences. In the context of normalizing flows, for example, the source and target densities often have supports with different topologies, leading to numerically ill-posed models and training. On top of reviewing the theoretical and practical aspects of this, the talk will also cover several recent models, methods and ideas for alleviating some of these limitations.
Emilien Dupont
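A concrete instance of the topological constraint mentioned above (standard reasoning, not specific to this talk): an invertible network f with a continuous inverse is a homeomorphism, so the support of the pushforward density satisfies

\mathrm{supp}(f_{\#} p_Z) = f(\mathrm{supp}(p_Z)),

which has the same topology as \mathrm{supp}(p_Z). With a standard Gaussian base, whose support \mathbb{R}^d is connected, a flow therefore cannot exactly represent a target supported on two disjoint components; it can only approximate it by leaving a thin bridge of low but nonzero density between them.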
Sat 5:05 a.m. - 5:10 a.m. | Q&A with Emilien Dupont (Q&A)
Sat 5:10 a.m. - 5:15 a.m. | Spotlight talk: You say Normalizing Flows I see Bayesian Networks (talk) | Invertible Workshop INNF
Sat 5:15 a.m. - 5:20 a.m. | Spotlight talk: Variational Inference with Continuously-Indexed Normalizing Flows (talk) | Invertible Workshop INNF
Sat 5:20 a.m. - 5:25 a.m. | Spotlight talk: NOTAGAN: Flows for the data manifold (talk) | Invertible Workshop INNF
Sat 5:25 a.m. - 5:30 a.m. | Spotlight talk: Ordering Dimensions with Nested Dropout Normalizing Flows (talk) | Invertible Workshop INNF
Sat 5:30 a.m. - 5:35 a.m. | Spotlight talk: The Lipschitz Constant of Self-Attention (talk) | Invertible Workshop INNF
Sat 5:35 a.m. - 5:40 a.m. | Spotlight talk: Autoregressive flow-based causal discovery and inference (talk) | Invertible Workshop INNF
Sat 5:40 a.m. - 7:00 a.m. | Lunch break
Sat 7:00 a.m. - 7:25 a.m. | Invited talk 4: Divergence Measures in Variational Inference and How to Choose Them (talk)
Variational inference (VI) plays an essential role in approximate Bayesian inference due to its computational efficiency and broad applicability. Crucial to the performance of VI is the selection of the associated divergence measure, as VI approximates the intractable distribution by minimizing this divergence. In this talk, I will discuss variational inference with different divergence measures first. Then, I will present a new meta-learning algorithm to learn the divergence metric suited for the task of interest, automating the design of VI methods.
Cheng Zhang
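For context on the "different divergence measures" mentioned in the abstract (standard definitions, not the talk's contribution): besides the usual KL divergence, a common family in variational inference is the Rényi divergence

D_\alpha(q \,\|\, p) = \frac{1}{\alpha - 1} \log \int q(z)^{\alpha} \, p(z)^{1-\alpha} \, dz, \qquad \alpha > 0,\ \alpha \neq 1,

which recovers \mathrm{KL}(q \,\|\, p) in the limit \alpha \to 1; different choices of \alpha trade off mass-covering against mode-seeking behaviour of the resulting approximation.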
Sat 7:25 a.m. - 7:30 a.m. | Q&A with Cheng Zhang (Q&A)
Sat 7:30 a.m. - 7:55 a.m. | Invited talk 5: Adversarial Learning of Prescribed Generative Models (talk)
Parameterizing latent variable models with deep neural networks has become a major approach to probabilistic modeling. The usual way of fitting these deep latent variable models is to use maximum likelihood. This gives rise to variational autoencoders (VAEs). They jointly learn an approximate posterior distribution over the latent variables and the model parameters by maximizing a lower bound to the log-marginal likelihood of the data. In this talk, I will present an alternative approach to fitting parameters of deep latent-variable models. The idea is to marry adversarial learning and entropy regularization. The family of models fit with this procedure is called Prescribed Generative Adversarial Networks (PresGANs). I will describe PresGANs and discuss how they generate samples with high perceptual quality while avoiding the ubiquitous mode collapse issue of GANs.
Adji Bousso Dieng
Sat 7:55 a.m. - 8:00 a.m. | Q&A with Adji Bousso Dieng (Q&A)
Sat 8:00 a.m. - 8:25 a.m. | Contributed talk: Modeling Continuous Stochastic Processes with Dynamic Normalizing Flows (talk) | Invertible Workshop INNF
Sat 8:25 a.m. - 8:30 a.m. | Q&A with authors of contributed talk (Q&A)
Sat 8:30 a.m. - 8:55 a.m. | Invited talk 6: Likelihood Models for Science (talk)
Statistical inference is at the heart of the scientific method, and the likelihood function is at the heart of statistical inference. However, many scientific theories are formulated as mechanistic models that do not admit a tractable likelihood. While traditional approaches to confronting this problem may seem somewhat naive, they reveal numerous other considerations in the scientific workflow beyond the approximation error of the likelihood. I will highlight how normalizing flows and other techniques from machine learning are impacting scientific practice, discuss current challenges for state-of-the-art methods, and identify promising new directions in this line of research.
Kyle Cranmer
Sat 8:55 a.m. - 9:00 a.m. | Q&A with Kyle Cranmer (Q&A)
Sat 9:00 a.m. - 9:25 a.m. | Invited talk 7: Flows in Probabilistic Modeling & Inference (talk)
I give an overview of the many uses of flows in probabilistic modeling and inference. I focus on settings in which flows are used to speed up or otherwise improve inference (i.e. settings in which flows are not part of the model specification), including applications to Optimal Experimental Design, Hamiltonian Monte Carlo, and Likelihood-Free Inference. I conclude with a brief discussion of how flows enter into probabilistic programming language (PPL) systems and suggest research directions that are important for improved PPL integration.
Martin Jankowiak
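One representative pattern from the list above, written out as a worked equation (a standard construction used when flows are applied to speed up MCMC, not necessarily the talk's own method): to ease sampling from a difficult target density \pi(z), fit a flow f and run Hamiltonian Monte Carlo on the reparameterized target

\tilde{\pi}(u) = \pi(f(u)) \left| \det \frac{\partial f(u)}{\partial u} \right|,

whose geometry is closer to the flow's base distribution; pushing the resulting samples through z = f(u) yields samples distributed according to \pi.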
Sat 9:25 a.m. - 9:30 a.m. | Q&A with Martin Jankowiak (Q&A)
Sat 9:30 a.m. - 9:55 a.m. | Contributed talk: Learning normalizing flows from Entropy-Kantorovich potentials (talk) | Invertible Workshop INNF
Sat 9:55 a.m. - 10:00 a.m. | Q&A with authors of contributed talk (Q&A)
Sat 10:00 a.m. - 10:40 a.m. | Poster session 2 (poster)
- Poster presentation: Improving Sample Quality by Training and Sampling from Latent Energy (talk) | Invertible Workshop INNF
- Poster presentation: Exhaustive Neural Importance Sampling applied to Monte Carlo event generation (talk) | Invertible Workshop INNF
- Poster presentation: Stochastic Normalizing Flows (talk) | Invertible Workshop INNF
- Poster presentation: Quasi-Autoregressive Residual (QuAR) Flows (talk) | Invertible Workshop INNF
- Poster presentation: Time Series Decomposition with Slow Flows (talk) | Invertible Workshop INNF
- Poster presentation: Faster Orthogonal Parameterization with Householder Matrices (talk) | Invertible Workshop INNF
- Poster presentation: The Power Spherical distribution (talk) | Invertible Workshop INNF
- Poster presentation: Woodbury Transformations for Deep Generative Flows (talk) | Invertible Workshop INNF
- Poster presentation: Super-resolution Variational Auto-Encoders (talk) | Invertible Workshop INNF
- Poster presentation: Conditional Normalizing Flows for Low-Dose Computed Tomography Image Reconstruction (talk) | Invertible Workshop INNF
- Poster presentation: Why Normalizing Flows Fail to Detect Out-of-Distribution Data (talk) | Invertible Workshop INNF
- Poster presentation: Density Deconvolution with Normalizing Flows (talk) | Invertible Workshop INNF
- Poster presentation: Consistency Regularization for Variational Auto-encoders (talk) | Invertible Workshop INNF
- Poster presentation: Normalizing Flows with Multi-Scale Autoregressive Priors (talk) | Invertible Workshop INNF
- Poster presentation: Metropolized Flow: from Invertible Flow to MCMC (talk) | Invertible Workshop INNF
- Poster presentation: Robust model training and generalisation with Studentising flows (talk) | Invertible Workshop INNF
- Poster presentation: Scaling RBMs to High Dimensional Data with Invertible Neural Networks (talk) | Invertible Workshop INNF
- Poster presentation: On the Variational Posterior of Dirichlet Process Deep Latent Gaussian Mixture Models (talk) | Invertible Workshop INNF
- Poster presentation: A Fourier State Space Model for Bayesian ODE Filters (talk) | Invertible Workshop INNF
- Poster presentation: MoFlow: An Invertible Flow Model for Molecular Graph Generation (talk) | Invertible Workshop INNF
- Poster presentation: TraDE: Transformers for Density Estimation (talk) | Invertible Workshop INNF
- Poster presentation: WeakFlow: Iterative Invertible Distribution Transformations via Weak Destructive Flows (talk) | Invertible Workshop INNF
- Poster presentation: Flow-based SVDD for anomaly detection (talk) | Invertible Workshop INNF
- Poster presentation: Black-box Adversarial Example Generation with Normalizing Flows (talk) | Invertible Workshop INNF
- Poster presentation: Sequential Autoregressive Flow-Based Policies (talk) | Invertible Workshop INNF
- Poster presentation: Relative gradient optimization of the Jacobian term in unsupervised deep learning (talk) | Invertible Workshop INNF
- Poster presentation: Deep Generative Video Compression with Temporal Autoregressive Transforms (talk) | Invertible Workshop INNF
- Poster presentation: Normalizing Flows Across Dimensions (talk) | Invertible Workshop INNF
- Poster presentation: Differentially Private Normalizing Flows for Privacy-Preserving Density Estimation (talk) | Invertible Workshop INNF
- Poster presentation: Model-Agnostic Searches for New Physics with Normalizing Flows (talk) | Invertible Workshop INNF
- Link: Slack (link) | To join the Slack workspace, please use the link https://join.slack.com/t/innf2020/shared_invite/zt-fp5gvn7l-1XAZrKGL1xtIP03Fsd5ZQQ
- Link: Poster presentations and Zoom links (link) | Authors' availability and Zoom links: https://docs.google.com/spreadsheets/d/1HJa8F0bMSlM2qQ9WNCJ3LO26hVmndhrFzcgDZPz_VtQ/edit?usp=sharing