Poster

Approximation Capabilities of Neural ODEs and Invertible Residual Networks

Han Zhang · Xi Gao · Jacob Unterman · Tomasz Arodz

Keywords: [ Supervised Learning ] [ Learning Theory ] [ Computational Learning Theory ] [ Architectures ]


Abstract: Recent interest in invertible models and normalizing flows has resulted in new architectures that guarantee invertibility of the network. Neural ODEs and i-ResNets are two such techniques, but it is unclear whether they can approximate any continuous invertible mapping. Here, we show that out of the box, both architectures are limited in their approximation capabilities; for instance, a Neural ODE acting on $\mathbb{R}$ cannot approximate the mapping $x \mapsto -x$, since trajectories of an ODE cannot cross. We then show how to overcome this limitation: we prove that any homeomorphism on a $p$-dimensional Euclidean space can be approximated by a Neural ODE or an i-ResNet operating on a $2p$-dimensional Euclidean space. We conclude by showing that capping a Neural ODE or an i-ResNet with a single linear layer is sufficient to turn the model into a universal approximator for non-invertible continuous functions.
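To make the construction concrete, below is a minimal PyTorch sketch, not the authors' code, of the augmented Neural ODE the abstract describes: a $p$-dimensional input is zero-padded to $2p$ dimensions, pushed through the flow of a learned vector field, and capped with a single linear layer. The class name `AugmentedNeuralODE` and the hyperparameters `hidden` and `steps` are illustrative assumptions, and fixed-step Euler integration stands in for an adaptive ODE solver.

```python
import torch
import torch.nn as nn

class AugmentedNeuralODE(nn.Module):
    """Illustrative sketch: a Neural ODE on R^{2p} used to
    approximate a map on R^p, per the paper's embedding argument."""

    def __init__(self, p: int, hidden: int = 64, steps: int = 100):
        super().__init__()
        self.steps = steps
        # f defines the learned vector field dz/dt = f(z) on R^{2p}.
        self.f = nn.Sequential(
            nn.Linear(2 * p, hidden),
            nn.Tanh(),
            nn.Linear(hidden, 2 * p),
        )
        # The single linear "cap" from the abstract: it turns the
        # invertible flow into an approximator for non-invertible targets.
        self.cap = nn.Linear(2 * p, p)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Embed R^p into R^{2p} by zero-padding p extra coordinates,
        # giving trajectories room to move past one another.
        z = torch.cat([x, torch.zeros_like(x)], dim=-1)
        # Fixed-step Euler integration of the flow from t = 0 to t = 1.
        h = 1.0 / self.steps
        for _ in range(self.steps):
            z = z + h * self.f(z)
        return self.cap(z)

model = AugmentedNeuralODE(p=2)
y = model(torch.randn(8, 2))  # maps a batch of points in R^2 to R^2
print(y.shape)                # torch.Size([8, 2])
```

Note that each Euler step $z \mapsto z + h f(z)$ is itself invertible whenever $h \cdot \mathrm{Lip}(f) < 1$, the same Lipschitz condition that makes an i-ResNet residual block invertible; the exact continuous-time flow is invertible by construction.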
