Training neural ODEs on large datasets has not been tractable because the adaptive numerical ODE solver must be allowed to refine its step size to very small values. In practice this leads to dynamics equivalent to many hundreds, or even thousands, of layers. In this paper, we overcome this apparent difficulty by introducing a theoretically grounded combination of optimal transport and stability regularizations, which encourage neural ODEs to prefer simpler dynamics among all the dynamics that solve a problem well. Simpler dynamics lead to faster convergence and to fewer solver discretization steps, considerably decreasing wall-clock time without loss in performance. Our approach allows us to train neural ODE-based generative models to the same performance as the unregularized dynamics, with significant reductions in training time. This brings neural ODEs closer to practical relevance in large-scale applications.
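To make the two regularizers concrete, the sketch below shows one common way to implement them: augment the ODE state with two scalar accumulators, the kinetic energy (the optimal-transport term) and a Hutchinson-style estimate of the squared Frobenius norm of the Jacobian (the stability term), then penalize their integrals in the loss. This is a minimal illustration, not the authors' released code; it assumes a PyTorch setup with the torchdiffeq solver API, and `RegularizedODEFunc`, `regularized_loss`, `lam_k`, and `lam_j` are hypothetical names.

```python
import torch
from torchdiffeq import odeint  # assumption: torchdiffeq-style solver


class RegularizedODEFunc(torch.nn.Module):
    """Wraps a dynamics network f(t, z) and integrates two extra scalar
    states alongside z:
      kinetic   = int ||f(z, t)||^2 dt      (optimal-transport term)
      frobenius = int ||df/dz||_F^2 dt      (stability term),
    the latter estimated with a Hutchinson-style trace estimator."""

    def __init__(self, f):
        super().__init__()
        self.f = f

    def forward(self, t, state):
        z, _, _ = state  # ignore the accumulated regularizer states
        with torch.enable_grad():
            if not z.requires_grad:
                z = z.requires_grad_(True)
            dz = self.f(t, z)
            # Vector-Jacobian product eps^T (d dz / d z): its squared norm
            # is an unbiased estimate of ||df/dz||_F^2 in expectation.
            eps = torch.randn_like(z)
            vjp = torch.autograd.grad(dz, z, eps, create_graph=True)[0]
        # Assumes z has shape (batch, ...); reduce over non-batch dims.
        kinetic = dz.flatten(1).pow(2).sum(dim=1)
        frobenius = vjp.flatten(1).pow(2).sum(dim=1)
        return dz, kinetic, frobenius


# Usage sketch: solve the augmented ODE, then penalize both integrals.
# f, task_loss, z0, lam_k, lam_j are placeholders for your own setup.
def regularized_loss(f, task_loss, z0, lam_k=0.01, lam_j=0.01):
    zeros = torch.zeros(z0.shape[0], device=z0.device)
    t = torch.tensor([0.0, 1.0], device=z0.device)
    zT, K, J = odeint(RegularizedODEFunc(f), (z0, zeros, zeros), t)
    return task_loss(zT[-1]) + lam_k * K[-1].mean() + lam_j * J[-1].mean()
```

Because both penalties are integrated by the same solver as the state itself, they add little overhead beyond the one extra vector-Jacobian product per function evaluation, which is what makes this style of regularization cheap relative to the speedup it buys.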
Author Information
Chris Finlay (McGill University)
Joern-Henrik Jacobsen (Apple Inc.)
Levon Nurbekyan (UCLA)
Adam Oberman (McGill University)
More from the Same Authors
- 2022: ImageNet-Cartoon and ImageNet-Drawing: two domain shift datasets for ImageNet
  Tiago Salvador · Adam Oberman
- 2021 Poster: Environment Inference for Invariant Learning
  Elliot Creager · Joern-Henrik Jacobsen · Richard Zemel
- 2021 Spotlight: Environment Inference for Invariant Learning
  Elliot Creager · Joern-Henrik Jacobsen · Richard Zemel
- 2021 Poster: Out-of-Distribution Generalization via Risk Extrapolation (REx)
  David Krueger · Ethan Caballero · Joern-Henrik Jacobsen · Amy Zhang · Jonathan Binas · Dinghuai Zhang · Remi Le Priol · Aaron Courville
- 2021 Oral: Out-of-Distribution Generalization via Risk Extrapolation (REx)
  David Krueger · Ethan Caballero · Joern-Henrik Jacobsen · Amy Zhang · Jonathan Binas · Dinghuai Zhang · Remi Le Priol · Aaron Courville
- 2020 Spotlight Talk 8: Calibrated Top-1 Uncertainty estimates for classification by score based models
  Adam Oberman
- 2020 Poster: Fundamental Tradeoffs between Invariance and Sensitivity to Adversarial Perturbations
  Florian Tramer · Jens Behrmann · Nicholas Carlini · Nicolas Papernot · Joern-Henrik Jacobsen
- 2020 Poster: Learning the Stein Discrepancy for Training and Evaluating Energy-Based Models without Sampling
  Will Grathwohl · Kuan-Chieh Wang · Joern-Henrik Jacobsen · David Duvenaud · Richard Zemel
- 2019: Invertible Residual Networks and a Novel Perspective on Adversarial Examples
  Joern-Henrik Jacobsen
- 2019 Poster: Flexibly Fair Representation Learning by Disentanglement
  Elliot Creager · David Madras · Joern-Henrik Jacobsen · Marissa Weis · Kevin Swersky · Toniann Pitassi · Richard Zemel
- 2019 Oral: Flexibly Fair Representation Learning by Disentanglement
  Elliot Creager · David Madras · Joern-Henrik Jacobsen · Marissa Weis · Kevin Swersky · Toniann Pitassi · Richard Zemel
- 2019 Poster: Invertible Residual Networks
  Jens Behrmann · Will Grathwohl · Ricky T. Q. Chen · David Duvenaud · Joern-Henrik Jacobsen
- 2019 Oral: Invertible Residual Networks
  Jens Behrmann · Will Grathwohl · Ricky T. Q. Chen · David Duvenaud · Joern-Henrik Jacobsen