Workshop
INNF+: Invertible Neural Networks, Normalizing Flows, and Explicit Likelihood Models
Chin-Wei Huang · David Krueger · Rianne Van den Berg · George Papamakarios · Chris Cremer · Tian Qi Chen · Danilo J. Rezende

Sat Jul 18 02:25 AM -- 10:40 AM (PDT) @ None
Event URL: https://invertibleworkshop.github.io/

Normalizing flows are explicit likelihood models using invertible neural networks to construct flexible probability distributions of high-dimensional data. Compared to other generative models, the main advantage of normalizing flows is that they can offer exact and efficient likelihood computation and data generation. Since their recent introduction, flow-based models have seen a significant resurgence of interest in the machine learning community. As a result, powerful flow-based models have been developed, with successes in density estimation, variational inference, and generative modeling of images, audio and video.
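The "exact and efficient likelihood" property comes from the change-of-variables formula: for an invertible map f with z = f(x), log p(x) = log p_Z(f(x)) + log |det df/dx|. A minimal sketch of this, using a single scalar affine transform as a stand-in "flow" (all names here are illustrative, not from any particular library):

```python
import numpy as np

# Toy flow: an invertible affine map f(x) = (x - b) / a with a standard
# normal base distribution. Real flows stack many such invertible layers.

def log_prob_affine_flow(x, a, b):
    z = (x - b) / a                                 # map data to the base space
    log_base = -0.5 * (z**2 + np.log(2 * np.pi))    # standard normal log-density
    log_det = -np.log(np.abs(a))                    # log |dz/dx| = -log|a|
    return log_base + log_det                       # change-of-variables formula

def sample_affine_flow(a, b, n, rng):
    # Exact sampling: draw from the base distribution and invert the map.
    z = rng.standard_normal(n)
    return a * z + b

rng = np.random.default_rng(0)
xs = sample_affine_flow(2.0, 1.0, 5, rng)
# For this flow, the induced density is exactly N(b, a^2), so the flow
# log-likelihood must agree with the analytic Gaussian log-density.
analytic = (-0.5 * ((xs - 1.0) / 2.0) ** 2
            - np.log(2.0) - 0.5 * np.log(2 * np.pi))
assert np.allclose(log_prob_affine_flow(xs, 2.0, 1.0), analytic)
```

Both directions are cheap here, which is the appeal the paragraph describes: one invertible map gives exact densities and exact samples at once.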

This workshop is the second iteration of the ICML 2019 workshop on Invertible Neural Networks and Normalizing Flows. While the main goal of last year's workshop was to make flow-based models more accessible to the general machine learning community, we believe that, as the field moves forward, there is now a need to consolidate recent progress and connect ideas from related fields. In light of the interpretation of latent variable models and autoregressive models as flows, this year we expand the scope of the workshop and consider likelihood-based models more broadly, including flow-based, latent variable, and autoregressive models. We encourage researchers to use these models in conjunction, exploiting their complementary benefits, and to work together on common issues of likelihood-based methods, such as miscalibration of out-of-distribution uncertainty.

Sat 2:25 a.m. - 2:30 a.m. [iCal]
Opening remarks (Introduction)
Sat 2:30 a.m. - 3:05 a.m. [iCal]

VAEs and flows are two of the most popular methods for density estimation (well, except GANs I guess, but never mind... 😱). In this work we argue that they are really two sides of the same coin. A flow deterministically transforms an input density through an invertible transformation into a target density; when the transformation changes a volume element, we pick up a log-Jacobian term. After decomposing the ELBO in the only way not yet considered in the literature, we find that the log-Jacobian corresponds to the log[p(x|z)/q(z|x)] term of a VAE, where the maps q and p are now stochastic. This suggests a third possibility that bridges the gap between the two: a map which is deterministic and surjective in one direction, and stochastic in the reverse direction. These ideas unify many methods in the literature, such as dequantization and augmented flows, and we also add a few new methods of our own based on our SurVAE Flows framework. If time permits, I will also say a few words about a new type of flow based on the exponential map, which is trivially invertible and adds a new tool to the invertible-flows toolbox.
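Dequantization, one of the methods the abstract mentions, is a concrete example of such a surjection: flooring continuous values back to integers is deterministic and surjective, while its "inverse" lifts discrete data stochastically by adding uniform noise. A tiny illustrative sketch (my own toy demo, not code from the SurVAE paper):

```python
import numpy as np

def dequantize(x, rng):
    # Stochastic inverse direction: lift integers to continuous values
    # y = x + u with u ~ Uniform[0, 1).
    return x + rng.uniform(0.0, 1.0, size=x.shape)

def quantize(y):
    # Deterministic, surjective forward direction: floor back to integers.
    # Many distinct y map to the same x, so this direction loses no
    # discrete information but is not invertible as a bijection.
    return np.floor(y).astype(int)

rng = np.random.default_rng(0)
x = np.array([0, 3, 7, 2])
y = dequantize(x, rng)
assert np.array_equal(quantize(y), x)   # floor exactly undoes dequantization
```

The round trip is exact in one direction only, which is precisely the asymmetry the surjective-map view formalizes.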

Joint work with Didrik Nielsen, Priyank Jaini, Emiel Hoogeboom.

Max Welling
Sat 3:05 a.m. - 3:10 a.m. [iCal]
Q&A with Max Welling (Q&A)
Sat 3:10 a.m. - 3:15 a.m. [iCal]
Invertible Workshop INNF
Sat 3:15 a.m. - 3:20 a.m. [iCal]
Invertible Workshop INNF
Sat 3:20 a.m. - 3:25 a.m. [iCal]
Invertible Workshop INNF
Sat 3:25 a.m. - 3:30 a.m. [iCal]
Invertible Workshop INNF
Sat 3:30 a.m. - 4:10 a.m. [iCal]
Poster session 1 (poster)
Sat 4:10 a.m. - 4:35 a.m. [iCal]

Detecting distribution shift is crucial for ensuring the safety and integrity of autonomous systems and computational pipelines. Recent advances in deep generative models (DGMs) make them attractive for this use case. However, their application is not straightforward: DGMs fail to detect distribution shift when using naive likelihood thresholds. In this talk, I synthesize the recent literature on using DGMs for out-of-distribution detection. I categorize techniques into two broad classes: model-selection and omnibus methods. I close the talk by arguing that many real-world, safety-critical scenarios require the latter approach.
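The "naive likelihood threshold" the abstract refers to can be sketched in a few lines. The density model below is just a fitted Gaussian standing in for a deep generative model (a toy setup of mine, not from the talk); the point is the decision rule, which with real DGMs is known to fail in surprising ways:

```python
import numpy as np

rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, size=5000)   # stand-in for in-distribution data

# "Fit" a density model to the training data (a DGM would go here).
mu, sigma = train.mean(), train.std()

def log_lik(x):
    # Log-density of the fitted Gaussian model.
    return -0.5 * ((x - mu) / sigma) ** 2 - np.log(sigma) - 0.5 * np.log(2 * np.pi)

# Naive rule: flag anything below, say, the 1st percentile of training
# log-likelihoods as out-of-distribution.
threshold = np.percentile(log_lik(train), 1.0)

def is_ood(x):
    return log_lik(x) < threshold

assert bool(is_ood(8.0))        # far from the training distribution
assert not bool(is_ood(0.1))    # typical in-distribution point
```

This works for the toy Gaussian, but the talk's starting observation is that the same thresholding applied to deep generative models can assign *higher* likelihood to out-of-distribution inputs, motivating the model-selection and omnibus methods it surveys.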

Eric Nalisnick
Sat 4:35 a.m. - 4:40 a.m. [iCal]
Q&A with Eric Nalisnick (Q&A)
Sat 4:40 a.m. - 5:05 a.m. [iCal]

This talk will review recent work on the representational limitations of invertible models, both in the context of neural ODEs and normalizing flows. In particular, it has been shown that invertible neural networks are topology preserving and therefore cannot map between spaces with different topologies. This has both theoretical and numerical consequences: in normalizing flows, for example, the source and target densities often have different topologies, leading to numerically ill-posed models and training. In addition to reviewing the theoretical and practical aspects of this limitation, the talk will cover several recent models, methods, and ideas for alleviating it.

Emilien Dupont
Sat 5:05 a.m. - 5:10 a.m. [iCal]
Q&A with Emilien Dupont (Q&A)
Sat 5:10 a.m. - 5:15 a.m. [iCal]
Invertible Workshop INNF
Sat 5:15 a.m. - 5:20 a.m. [iCal]
Invertible Workshop INNF
Sat 5:20 a.m. - 5:25 a.m. [iCal]
Invertible Workshop INNF
Sat 5:25 a.m. - 5:30 a.m. [iCal]
Invertible Workshop INNF
Sat 5:30 a.m. - 5:35 a.m. [iCal]
Invertible Workshop INNF
Sat 5:35 a.m. - 5:40 a.m. [iCal]
Invertible Workshop INNF
Sat 5:40 a.m. - 7:00 a.m. [iCal]
Lunch break (break)
Sat 7:00 a.m. - 7:25 a.m. [iCal]

Variational inference (VI) plays an essential role in approximate Bayesian inference due to its computational efficiency and broad applicability. Crucial to the performance of VI is the choice of divergence measure, as VI approximates the intractable distribution by minimizing this divergence. In this talk, I will first discuss variational inference with different divergence measures. Then, I will present a new meta-learning algorithm that learns a divergence suited to the task of interest, automating the design of VI methods.
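The loop at the core of this picture can be sketched in a few lines. Here a Gaussian q(z) = N(m, s²) is fit to a known Gaussian target by stochastic gradient descent on a Monte Carlo estimate of KL(q‖p) via the reparameterization trick (a toy setup of mine, not the speaker's method); the divergence chosen at this step is exactly the design decision the talk addresses:

```python
import numpy as np

rng = np.random.default_rng(0)
target_mu, target_sigma = 2.0, 0.5      # "intractable" p(z), known here for checking

m, log_s = 0.0, 0.0                     # variational parameters (log_s keeps s > 0)
lr, n_mc = 0.05, 64

for _ in range(3000):
    eps = rng.standard_normal(n_mc)
    s = np.exp(log_s)
    z = m + s * eps                     # reparameterization trick: z ~ q(z)
    resid = (z - target_mu) / target_sigma**2
    grad_m = np.mean(resid)                         # dKL/dm, pathwise estimate
    grad_log_s = -1.0 + np.mean(resid * s * eps)    # dKL/dlog_s; -1 is the entropy term
    m -= lr * grad_m
    log_s -= lr * grad_log_s

# After optimization, q should closely match the target.
assert abs(m - target_mu) < 0.1
assert abs(np.exp(log_s) - target_sigma) < 0.1
```

Swapping the KL gradient for that of another divergence (alpha-divergences, f-divergences, ...) changes the character of the approximation, and learning which divergence to use is the meta-learning idea of the talk.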

Cheng Zhang
Sat 7:25 a.m. - 7:30 a.m. [iCal]
Q&A with Cheng Zhang (Q&A)
Sat 7:30 a.m. - 7:55 a.m. [iCal]

Parameterizing latent variable models with deep neural networks has become a major approach to probabilistic modeling. The usual way of fitting these deep latent variable models is to use maximum likelihood. This gives rise to variational autoencoders (VAEs). They jointly learn an approximate posterior distribution over the latent variables and the model parameters by maximizing a lower bound to the log-marginal likelihood of the data. In this talk, I will present an alternative approach to fitting parameters of deep latent-variable models. The idea is to marry adversarial learning and entropy regularization. The family of models fit with this procedure is called Prescribed Generative Adversarial Networks (PresGANs). I will describe PresGANs and discuss how they generate samples with high perceptual quality while avoiding the ubiquitous mode collapse issue of GANs.

Adji Bousso Dieng
Sat 7:55 a.m. - 8:00 a.m. [iCal]
Q&A with Adji Bousso Dieng (Q&A)
Sat 8:00 a.m. - 8:25 a.m. [iCal]
Invertible Workshop INNF
Sat 8:25 a.m. - 8:30 a.m. [iCal]
Q&A with authors of contributed talk (Q&A)
Sat 8:30 a.m. - 8:55 a.m. [iCal]

Statistical inference is at the heart of the scientific method, and the likelihood function is at the heart of statistical inference. However, many scientific theories are formulated as mechanistic models that do not admit a tractable likelihood. While traditional approaches to confronting this problem may seem somewhat naive, they reveal numerous other considerations in the scientific workflow beyond the approximation error of the likelihood. I will highlight how normalizing flows and other techniques from machine learning are impacting scientific practice, discuss current challenges for state-of-the-art methods, and identify promising new directions in this line of research.
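One of the "traditional approaches" alluded to is Approximate Bayesian Computation (ABC) by rejection: when the likelihood is intractable but the mechanistic model can be simulated, keep the prior draws whose simulated data look like the observations. A hedged sketch with a trivial stand-in simulator (my own toy example, not from the talk):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulator(theta, n=100):
    # Mechanistic model: we can sample from it, but pretend that
    # evaluating p(data | theta) is intractable.
    return rng.normal(theta, 1.0, size=n)

observed = simulator(1.5)               # data generated at the "true" theta = 1.5
obs_mean = observed.mean()              # summary statistic

# ABC rejection: accept prior draws whose simulated summary statistic
# falls within a tolerance epsilon of the observed one.
accepted = []
for _ in range(20000):
    theta = rng.uniform(-5.0, 5.0)      # draw from a flat prior
    sim_mean = simulator(theta).mean()
    if abs(sim_mean - obs_mean) < 0.1:  # tolerance epsilon
        accepted.append(theta)

posterior_mean = np.mean(accepted)
assert abs(posterior_mean - 1.5) < 0.3  # posterior concentrates near the truth
```

The naivety the abstract mentions is visible here: the method wastes almost every simulation and hinges on hand-picked summary statistics and tolerances, which is where flow-based neural density estimators for likelihood-free inference come in.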

Kyle Cranmer
Sat 8:55 a.m. - 9:00 a.m. [iCal]
Q&A with Kyle Cranmer (Q&A)
Sat 9:00 a.m. - 9:25 a.m. [iCal]

I give an overview of the many uses of flows in probabilistic modeling and inference. I focus on settings in which flows are used to speed up or otherwise improve inference (i.e. settings in which flows are not part of the model specification), including applications to Optimal Experimental Design, Hamiltonian Monte Carlo, and Likelihood-Free Inference. I conclude with a brief discussion of how flows enter into probabilistic programming language (PPL) systems and suggest research directions that are important for improved PPL integration.

Martin Jankowiak
Sat 9:25 a.m. - 9:30 a.m. [iCal]
Q&A with Martin Jankowiak (Q&A)
Sat 9:30 a.m. - 9:55 a.m. [iCal]
Invertible Workshop INNF
Sat 9:55 a.m. - 10:00 a.m. [iCal]
Q&A with authors of contributed talk (Q&A)
Sat 10:00 a.m. - 10:40 a.m. [iCal]
Poster session 2 (poster)
-
Poster presentation: Improving Sample Quality by Training and Sampling from Latent Energy (talk) [ Video ]
Invertible Workshop INNF
-
Poster presentation: Exhaustive Neural Importance Sampling applied to Monte Carlo event generation (talk) [ Video ]
Invertible Workshop INNF
-
Poster presentation: Stochastic Normalizing Flows (talk) [ Video ]
Invertible Workshop INNF
-
Poster presentation: Quasi-Autoregressive Residual (QuAR) Flows (talk) [ Video ]
Invertible Workshop INNF
-
Poster presentation: Time Series Decomposition with Slow Flows (talk) [ Video ]
Invertible Workshop INNF
-
Poster presentation: Faster Orthogonal Parameterization with Householder Matrices (talk) [ Video ]
Invertible Workshop INNF
-
Poster presentation: The Power Spherical distribution (talk) [ Video ]
Invertible Workshop INNF
-
Poster presentation: Woodbury Transformations for Deep Generative Flows (talk) [ Video ]
Invertible Workshop INNF
-
Poster presentation: Super-resolution Variational Auto-Encoders (talk) [ Video ]
Invertible Workshop INNF
-
Poster presentation: Conditional Normalizing Flows for Low-Dose Computed Tomography Image Reconstruction (talk) [ Video ]
Invertible Workshop INNF
-
Poster presentation: Why Normalizing Flows Fail to Detect Out-of-Distribution Data (talk) [ Video ]
Invertible Workshop INNF
-
Poster presentation: Density Deconvolution with Normalizing Flows (talk) [ Video ]
Invertible Workshop INNF
-
Poster presentation: Consistency Regularization for Variational Auto-encoders (talk) [ Video ]
Invertible Workshop INNF
-
Poster presentation: Normalizing Flows with Multi-Scale Autoregressive Priors (talk) [ Video ]
Invertible Workshop INNF
-
Poster presentation: Metropolized Flow: from Invertible Flow to MCMC (talk) [ Video ]
Invertible Workshop INNF
-
Poster presentation: Robust model training and generalisation with Studentising flows (talk) [ Video ]
Invertible Workshop INNF
-
Poster presentation: Scaling RBMs to High Dimensional Data with Invertible Neural Networks (talk) [ Video ]
Invertible Workshop INNF
-
Poster presentation: On the Variational Posterior of Dirichlet Process Deep Latent Gaussian Mixture Models (talk) [ Video ]
Invertible Workshop INNF
-
Poster presentation: A Fourier State Space Model for Bayesian ODE Filters (talk) [ Video ]
Invertible Workshop INNF
-
Poster presentation: MoFlow: An Invertible Flow Model for Molecular Graph Generation (talk) [ Video ]
Invertible Workshop INNF
-
Poster presentation: TraDE: Transformers for Density Estimation (talk) [ Video ]
Invertible Workshop INNF
-
Poster presentation: WeakFlow: Iterative Invertible Distribution Transformations via Weak Destructive Flows (talk) [ Video ]
Invertible Workshop INNF
-
Poster presentation: Flow-based SVDD for anomaly detection (talk) [ Video ]
Invertible Workshop INNF
-
Poster presentation: Black-box Adversarial Example Generation with Normalizing Flows (talk) [ Video ]
Invertible Workshop INNF
-
Poster presentation: Sequential Autoregressive Flow-Based Policies (talk) [ Video ]
Invertible Workshop INNF
-
Poster presentation: Relative gradient optimization of the Jacobian term in unsupervised deep learning (talk) [ Video ]
Invertible Workshop INNF
-
Poster presentation: Deep Generative Video Compression with Temporal Autoregressive Transforms (talk) [ Video ]
Invertible Workshop INNF
-
Poster presentation: Normalizing Flows Across Dimensions (talk) [ Video ]
Invertible Workshop INNF
-
Poster presentation: Differentially Private Normalizing Flows for Privacy-Preserving Density Estimation (talk) [ Video ]
Invertible Workshop INNF
-
Poster presentation: Model-Agnostic Searches for New Physics with Normalizing Flows (talk) [ Video ]
Invertible Workshop INNF
-

To join the Slack workspace, please use this link: https://join.slack.com/t/innf2020/shared_invite/zt-fp5gvn7l-1XAZrKGL1xtIP03Fsd5ZQQ

-

Authors' availability and Zoom links: https://docs.google.com/spreadsheets/d/1HJa8F0bMSlM2qQ9WNCJ3LO26hVmndhrFzcgDZPz_VtQ/edit?usp=sharing

Author Information

Chin-Wei Huang (MILA)
David Krueger (Université de Montréal)
Rianne Van den Berg (University of Amsterdam)
George Papamakarios (DeepMind)
Chris Cremer (University of Toronto)
Ricky T. Q. Chen (University of Toronto)
Danilo J. Rezende (DeepMind)
Danilo J. Rezende

Danilo is a Senior Staff Research Scientist at Google DeepMind, where he works on probabilistic machine reasoning and learning algorithms. He holds a BA in Physics and an MSc in Theoretical Physics from Ecole Polytechnique (Palaiseau, France) and the Institute of Theoretical Physics (SP, Brazil), and a PhD in Computational Neuroscience from Ecole Polytechnique Fédérale de Lausanne (EPFL, Switzerland). His research focuses on scalable inference methods, generative models of complex data (such as images and video), applied probability, causal reasoning, and unsupervised learning for decision-making.
