Workshop
Invertible Neural Networks and Normalizing Flows
Chin-Wei Huang · David Krueger · Rianne Van den Berg · George Papamakarios · Aidan Gomez · Chris Cremer · Aaron Courville · Ricky T. Q. Chen · Danilo J. Rezende

Sat Jun 15 08:30 AM -- 06:00 PM (PDT) @ 103
Event URL: https://invertibleworkshop.github.io/

Invertible neural networks have been a significant thread of research in the ICML community for several years. Such transformations can offer a range of unique benefits:

(1) They preserve information, allowing perfect reconstruction (up to numerical limits) and obviating the need to store hidden activations in memory for backpropagation.
(2) They are often designed to track the change in probability density that the transformation induces (as in normalizing flows).
(3) Like autoregressive models, normalizing flows can be powerful generative models that allow exact likelihood computation; with the right architecture, they can also offer much cheaper sampling than autoregressive models (see the sketch after this list).
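
To make points (1) and (2) concrete, here is a minimal sketch, not taken from the workshop materials, of an affine coupling layer in the style of RealNVP: the transformation is invertible in closed form and its log-determinant is cheap to compute, so exact log-likelihoods follow from the change-of-variables formula. The names AffineCoupling and log_likelihood are illustrative, and the fixed random linear maps stand in for the conditioner network a real flow would learn.

```python
# A minimal sketch (not from the workshop materials) of an affine coupling layer.
import numpy as np

rng = np.random.default_rng(0)


class AffineCoupling:
    """Maps x = (x1, x2) to (x1, x2 * exp(s(x1)) + t(x1)); invertible in closed form."""

    def __init__(self, dim):
        self.d = dim // 2
        # Stand-ins for the usual conditioner network: fixed random linear maps
        # producing the log-scale s and shift t from the untouched half x1.
        self.Ws = 0.1 * rng.normal(size=(dim - self.d, self.d))
        self.Wt = 0.1 * rng.normal(size=(dim - self.d, self.d))

    def forward(self, x):
        x1, x2 = x[:, :self.d], x[:, self.d:]
        s, t = x1 @ self.Ws.T, x1 @ self.Wt.T
        y2 = x2 * np.exp(s) + t
        log_det = s.sum(axis=1)  # log |det Jacobian|: the Jacobian is triangular
        return np.concatenate([x1, y2], axis=1), log_det

    def inverse(self, y):
        y1, y2 = y[:, :self.d], y[:, self.d:]
        s, t = y1 @ self.Ws.T, y1 @ self.Wt.T
        return np.concatenate([y1, (y2 - t) * np.exp(-s)], axis=1)


def log_likelihood(layer, x):
    # Change of variables with a standard-normal base density:
    # log p(x) = log N(f(x); 0, I) + log |det df/dx|.
    z, log_det = layer.forward(x)
    log_base = -0.5 * (z ** 2 + np.log(2 * np.pi)).sum(axis=1)
    return log_base + log_det


layer = AffineCoupling(dim=4)
x = rng.normal(size=(3, 4))
y, _ = layer.forward(x)
print(np.allclose(layer.inverse(y), x))  # True: perfect reconstruction
print(log_likelihood(layer, x))          # exact log-density for each sample
```

Because the inverse is exact, a layer's inputs can be recomputed from its outputs during the backward pass instead of being stored, which is the memory saving mentioned in point (1).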

While many researchers are aware of these topics and intrigued by several high-profile papers, few are familiar enough with the technical details to easily follow new developments and contribute. Many may also be unaware of the wide range of applications of invertible neural networks, beyond generative modelling and variational inference.

Author Information

Chin-Wei Huang (MILA)
David Krueger (Université de Montréal)
Rianne Van den Berg (University of Amsterdam)
George Papamakarios (University of Edinburgh)
Aidan Gomez (University of Oxford)
Chris Cremer (University of Toronto)
Aaron Courville (Université de Montréal)
Ricky T. Q. Chen (University of Toronto)
Danilo J. Rezende (DeepMind)

Danilo is a Senior Staff Research Scientist at Google DeepMind, where he works on probabilistic machine reasoning and learning algorithms. He has a BA in Physics and an MSc in Theoretical Physics from École Polytechnique (Palaiseau, France) and the Institute of Theoretical Physics (SP, Brazil), and a PhD in Computational Neuroscience from École Polytechnique Fédérale de Lausanne (EPFL, Switzerland). His research focuses on scalable inference methods, generative models of complex data (such as images and video), applied probability, causal reasoning, and unsupervised learning for decision-making.
