

Workshop

INNF+: Invertible Neural Networks, Normalizing Flows, and Explicit Likelihood Models

Chin-Wei Huang · David Krueger · Rianne Van den Berg · George Papamakarios · Chris Cremer · Ricky T. Q. Chen · Danilo J. Rezende

Keywords: Generative Models · Invertible Neural Networks · Normalizing Flows · Likelihood-based Models · Latent Variable Models · Autoregressive Models

Normalizing flows are explicit likelihood models that use invertible neural networks to construct flexible probability distributions over high-dimensional data. Compared to other generative models, their main advantage is that they offer exact and efficient likelihood computation and data generation. Flow-based models have recently seen a significant resurgence of interest in the machine learning community. As a result, powerful flow-based models have been developed, with successes in density estimation, variational inference, and generative modeling of images, audio and video.
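
To make the "exact and efficient likelihood" point concrete, the sketch below (not part of the workshop materials) evaluates the change-of-variables formula log p_X(x) = log p_Z(f(x)) + log |det df/dx| for a single hypothetical elementwise affine flow. The function names and the toy transformation are illustrative stand-ins for a deep invertible network; they are chosen only because the Jacobian determinant is trivial to compute.

```python
import numpy as np

def affine_flow_logprob(x, shift, log_scale):
    """Exact log-density of x: push x through the invertible map
    z = (x - shift) * exp(-log_scale), evaluate the standard-normal
    base density, and add the log-determinant of the Jacobian."""
    z = (x - shift) * np.exp(-log_scale)                          # f(x)
    log_base = -0.5 * np.sum(z**2 + np.log(2 * np.pi), axis=-1)   # log p_Z(z)
    log_det = -np.sum(log_scale)                                  # log |det df/dx|
    return log_base + log_det

def affine_flow_sample(rng, shift, log_scale, n):
    """Exact sampling: draw z from the base and apply the inverse map."""
    z = rng.standard_normal((n, shift.shape[0]))
    return z * np.exp(log_scale) + shift

# Toy usage: sample a few points and score them under the same flow.
rng = np.random.default_rng(0)
shift, log_scale = np.array([1.0, -2.0]), np.array([0.5, 0.0])
x = affine_flow_sample(rng, shift, log_scale, 4)
print(affine_flow_logprob(x, shift, log_scale))
```

In a real normalizing flow, the single affine map is replaced by a composition of invertible neural network layers, and the log-determinant terms of all layers are summed; the exactness of the likelihood and of sampling is unchanged.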

This workshop is the second iteration of the ICML 2019 workshop on Invertible Neural Networks and Normalizing Flows. While the main goal of last year's workshop was to make flow-based models more accessible to the general machine learning community, as the field moves forward we believe there is now a need to consolidate recent progress and connect ideas from related fields. In light of the interpretation of latent variable models and autoregressive models as flows, this year we expand the scope of the workshop and consider likelihood-based models more broadly, including flow-based models, latent variable models and autoregressive models. We encourage researchers to use these models in conjunction, exploiting their complementary benefits, and to work together to resolve common issues of likelihood-based methods, such as miscalibration of out-of-distribution uncertainty.

