

Workshop

INNF+: Invertible Neural Networks, Normalizing Flows, and Explicit Likelihood Models

Chin-Wei Huang · David Krueger · Rianne Van den Berg · George Papamakarios · Ricky T. Q. Chen · Danilo J. Rezende

Normalizing flows are explicit likelihood models (ELMs) characterized by a flexible invertible reparameterization of high-dimensional probability distributions. Unlike other ELMs, they offer both exact and efficient likelihood computation and data generation. Since their introduction, flow-based models have seen a significant surge of interest in the machine learning community. As a result, powerful flow-based models have been developed, with successes in density estimation, variational inference, and generative modeling of images, audio and video.
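
To make the exact-likelihood property concrete: for an invertible map x = f(z) with base density p_Z, the change-of-variables formula gives log p_X(x) = log p_Z(f^{-1}(x)) + log |det J_{f^-1}(x)|. The snippet below is a minimal sketch of this principle (not code from the workshop), using a hypothetical elementwise affine flow x = exp(s) * z + b whose parameters s and b stand in for learned quantities.

import numpy as np

rng = np.random.default_rng(0)
dim = 3
s = rng.normal(size=dim)   # log-scales (hypothetical learned parameters)
b = rng.normal(size=dim)   # shifts (hypothetical learned parameters)

def sample(n):
    """Exact, efficient sampling: push base noise z through the invertible map."""
    z = rng.normal(size=(n, dim))
    return np.exp(s) * z + b

def log_prob(x):
    """Exact likelihood: invert the map and add the log |det Jacobian| term."""
    z = (x - b) / np.exp(s)                        # inverse map f^{-1}(x)
    log_base = -0.5 * (z**2 + np.log(2 * np.pi)).sum(axis=1)
    return log_base - s.sum()                      # log|det dz/dx| = -sum(s)

x = sample(5)
print(log_prob(x))   # exact log-densities of the generated samples

Both directions here cost O(dim) per sample; richer flows (coupling layers, autoregressive transforms) trade some of this efficiency for expressiveness while keeping the likelihood exact.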

As the field moves forward, the main goal of the workshop is to consolidate recent progress and connect ideas from related fields. Over the past few years, we’ve seen that normalizing flows are deeply connected to latent variable models, autoregressive models, and, more recently, diffusion-based generative models. This year, we would like to further push the frontier of these explicit likelihood models through the lens of invertible reparameterization. We encourage researchers to use these models in combination, exploiting their complementary benefits, and to work together to resolve common issues of likelihood-based methods, such as miscalibration of out-of-distribution uncertainty.

Timezone: America/Los_Angeles

Schedule