We show that standard ResNet architectures can be made invertible, allowing the same model to be used for classification, density estimation, and generation. Typically, enforcing invertibility requires partitioning dimensions or restricting network architectures. In contrast, our approach only requires adding a simple normalization step during training, already available in standard frameworks. Invertible ResNets define a generative model which can be trained by maximum likelihood on unlabeled data. To compute likelihoods, we introduce a tractable approximation to the Jacobian log-determinant of a residual block. Our empirical evaluation shows that invertible ResNets perform competitively with both state-of-the-art image classifiers and flow-based generative models, something that has not been previously achieved with a single architecture.
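To make the two key ideas in the abstract concrete, here is a minimal sketch in PyTorch of (a) a residual block x ↦ x + g(x) kept invertible by constraining g to be contractive via spectral normalization, with the inverse recovered by fixed-point iteration, and (b) a stochastic estimate of the Jacobian log-determinant via the truncated power series log det(I + J_g) = Σ_{k≥1} (-1)^{k+1} tr(J_g^k)/k with Hutchinson trace estimates. All names (InvertibleResBlock, log_det_estimate), the fully-connected branch, and the specific coefficients are illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn as nn

class InvertibleResBlock(nn.Module):
    """Sketch of an invertible residual block: F(x) = x + g(x) with Lip(g) < 1."""

    def __init__(self, dim, hidden=128, coeff=0.9):
        super().__init__()
        # spectral_norm keeps each weight's spectral norm near 1; scaling the
        # branch by coeff < 1 (with 1-Lipschitz ELU) makes g a contraction.
        self.net = nn.Sequential(
            nn.utils.spectral_norm(nn.Linear(dim, hidden)),
            nn.ELU(),
            nn.utils.spectral_norm(nn.Linear(hidden, dim)),
        )
        self.coeff = coeff

    def g(self, x):
        return self.coeff * self.net(x)

    def forward(self, x):
        return x + self.g(x)

    def inverse(self, y, n_iters=100):
        # Banach fixed-point iteration x <- y - g(x), which converges
        # because g is a contraction.
        with torch.no_grad():
            x = y.clone()
            for _ in range(n_iters):
                x = y - self.g(x)
        return x


def log_det_estimate(block, x, n_terms=10):
    """Estimate log det(I + J_g(x)) per example by truncating the power
    series sum_k (-1)^{k+1} tr(J_g^k)/k, with each trace estimated by
    Hutchinson's trick tr(A) ~= E[v^T A v]."""
    x = x.requires_grad_(True)
    gx = block.g(x)
    v = torch.randn_like(x)  # probe vector for the trace estimator
    w = v
    logdet = torch.zeros(x.shape[0])
    for k in range(1, n_terms + 1):
        # vector-Jacobian product: w <- J_g^T w, so (w * v).sum() = v^T J_g^k v
        w = torch.autograd.grad(gx, x, grad_outputs=w, retain_graph=True)[0]
        logdet = logdet + (-1) ** (k + 1) / k * (w * v).sum(dim=1)
    return logdet


# Usage: invert a batch and estimate the per-example log-determinants.
block = InvertibleResBlock(dim=4)
x = torch.randn(8, 4)
y = block(x)
x_rec = block.inverse(y)
print((x - x_rec).abs().max())     # should be close to 0
print(log_det_estimate(block, x))  # stochastic log-det estimates
```

Under these assumptions, the "simple normalization step" is the spectral normalization applied to each layer of the residual branch, and the log-det estimate is what makes maximum-likelihood training tractable; training would pass the estimate through create_graph=True to backpropagate through it.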
Author Information
Jens Behrmann (University of Bremen)
Will Grathwohl (University of Toronto)
Ricky T. Q. Chen (University of Toronto)
David Duvenaud (University of Toronto)
Joern-Henrik Jacobsen (Vector Institute and University of Toronto)
Related Events (a corresponding poster, oral, or spotlight)
- 2019 Oral: Invertible Residual Networks
  Wed. Jun 12th, 09:00 -- 09:20 PM, Room Hall A
More from the Same Authors
- 2022 Poster: On Implicit Bias in Overparameterized Bilevel Optimization
  Paul Vicol · Jonathan Lorraine · Fabian Pedregosa · David Duvenaud · Roger Grosse
- 2022 Spotlight: On Implicit Bias in Overparameterized Bilevel Optimization
  Paul Vicol · Jonathan Lorraine · Fabian Pedregosa · David Duvenaud · Roger Grosse
- 2021: David Duvenaud
  David Duvenaud
- 2021 Workshop: INNF+: Invertible Neural Networks, Normalizing Flows, and Explicit Likelihood Models
  Chin-Wei Huang · David Krueger · Rianne Van den Berg · George Papamakarios · Ricky T. Q. Chen · Danilo J. Rezende
- 2021 Poster: Environment Inference for Invariant Learning
  Elliot Creager · Joern-Henrik Jacobsen · Richard Zemel
- 2021 Spotlight: Environment Inference for Invariant Learning
  Elliot Creager · Joern-Henrik Jacobsen · Richard Zemel
- 2021 Poster: Out-of-Distribution Generalization via Risk Extrapolation (REx)
  David Krueger · Ethan Caballero · Joern-Henrik Jacobsen · Amy Zhang · Jonathan Binas · Dinghuai Zhang · Remi Le Priol · Aaron Courville
- 2021 Oral: Out-of-Distribution Generalization via Risk Extrapolation (REx)
  David Krueger · Ethan Caballero · Joern-Henrik Jacobsen · Amy Zhang · Jonathan Binas · Dinghuai Zhang · Remi Le Priol · Aaron Courville
- 2021 Poster: Oops I Took A Gradient: Scalable Sampling for Discrete Distributions
  Will Grathwohl · Kevin Swersky · Milad Hashemi · David Duvenaud · Chris Maddison
- 2021 Poster: "Hey, that's not an ODE": Faster ODE Adjoints via Seminorms
  Patrick Kidger · Ricky T. Q. Chen · Terry Lyons
- 2021 Spotlight: "Hey, that's not an ODE": Faster ODE Adjoints via Seminorms
  Patrick Kidger · Ricky T. Q. Chen · Terry Lyons
- 2021 Oral: Oops I Took A Gradient: Scalable Sampling for Discrete Distributions
  Will Grathwohl · Kevin Swersky · Milad Hashemi · David Duvenaud · Chris Maddison
- 2020 Workshop: INNF+: Invertible Neural Networks, Normalizing Flows, and Explicit Likelihood Models
  Chin-Wei Huang · David Krueger · Rianne Van den Berg · George Papamakarios · Chris Cremer · Ricky T. Q. Chen · Danilo J. Rezende
- 2020 Poster: How to Train Your Neural ODE: the World of Jacobian and Kinetic Regularization
  Chris Finlay · Joern-Henrik Jacobsen · Levon Nurbekyan · Adam Oberman
- 2020 Poster: Fundamental Tradeoffs between Invariance and Sensitivity to Adversarial Perturbations
  Florian Tramer · Jens Behrmann · Nicholas Carlini · Nicolas Papernot · Joern-Henrik Jacobsen
- 2020 Poster: Learning the Stein Discrepancy for Training and Evaluating Energy-Based Models without Sampling
  Will Grathwohl · Kuan-Chieh Wang · Joern-Henrik Jacobsen · David Duvenaud · Richard Zemel
- 2019 Workshop: Invertible Neural Networks and Normalizing Flows
  Chin-Wei Huang · David Krueger · Rianne Van den Berg · George Papamakarios · Aidan Gomez · Chris Cremer · Aaron Courville · Ricky T. Q. Chen · Danilo J. Rezende
- 2019: Invertible Residual Networks and a Novel Perspective on Adversarial Examples
  Joern-Henrik Jacobsen
- 2019 Poster: Flexibly Fair Representation Learning by Disentanglement
  Elliot Creager · David Madras · Joern-Henrik Jacobsen · Marissa Weis · Kevin Swersky · Toniann Pitassi · Richard Zemel
- 2019 Oral: Flexibly Fair Representation Learning by Disentanglement
  Elliot Creager · David Madras · Joern-Henrik Jacobsen · Marissa Weis · Kevin Swersky · Toniann Pitassi · Richard Zemel
- 2018 Poster: Noisy Natural Gradient as Variational Inference
  Guodong Zhang · Shengyang Sun · David Duvenaud · Roger Grosse
- 2018 Oral: Noisy Natural Gradient as Variational Inference
  Guodong Zhang · Shengyang Sun · David Duvenaud · Roger Grosse
- 2018 Poster: Inference Suboptimality in Variational Autoencoders
  Chris Cremer · Xuechen Li · David Duvenaud
- 2018 Oral: Inference Suboptimality in Variational Autoencoders
  Chris Cremer · Xuechen Li · David Duvenaud