Workshop: INNF+: Invertible Neural Networks, Normalizing Flows, and Explicit Likelihood Models
Invited talk 3: Representational limitations of invertible models
Emilien Dupont
Abstract:
This talk will review recent work on the representational limitations of invertible models, both in the context of neural ODEs and normalizing flows. In particular, it has been shown that invertible neural networks are topology preserving and therefore cannot map between spaces with different topologies. This has both theoretical and numerical consequences. In normalizing flows, for example, the source and target densities often have supports with different topologies, which makes the resulting models and their training numerically ill-posed. Beyond reviewing the theoretical and practical aspects of this limitation, the talk will also cover several recent models, methods and ideas for alleviating it.
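The topology-preservation obstruction can be made concrete with a minimal sketch of the kind of invertible map the talk concerns: a RealNVP-style affine coupling layer. The conditioner below is a hypothetical toy stand-in for a neural network, not any particular model from the talk. Because the scale factor is strictly positive, the Jacobian determinant never vanishes, so the pushforward of a full-support Gaussian base also has full support — the flow cannot carve a hole or split the support, which is exactly the topological constraint described above.

```python
import numpy as np

def shift_scale(x1):
    # Toy conditioner standing in for a neural network (illustrative only).
    return np.tanh(x1), 0.5 * x1

def coupling_forward(x, conditioner):
    # Affine coupling layer: split x = (x1, x2) and transform x2
    # conditioned on x1. Invertible by construction.
    x1, x2 = x[:, :1], x[:, 1:]
    s, t = conditioner(x1)              # log-scale and shift computed from x1
    y2 = x2 * np.exp(s) + t
    log_det = s.sum(axis=1)             # log |det Jacobian| of the transform
    return np.concatenate([x1, y2], axis=1), log_det

def coupling_inverse(y, conditioner):
    # Exact inverse: x1 passes through unchanged, so s and t are recomputable.
    y1, y2 = y[:, :1], y[:, 1:]
    s, t = conditioner(y1)
    x2 = (y2 - t) * np.exp(-s)
    return np.concatenate([y1, x2], axis=1)

rng = np.random.default_rng(0)
x = rng.standard_normal((1000, 2))      # samples from a standard Gaussian base
y, log_det = coupling_forward(x, shift_scale)
x_rec = coupling_inverse(y, shift_scale)

# The map is exactly invertible (up to floating point), and since
# exp(s) > 0 everywhere, the model density p(y) = p_base(x) * exp(-log_det)
# is strictly positive wherever the base density is: the support's topology
# is preserved.
print(np.max(np.abs(x - x_rec)))        # near zero
```

A continuous bijection like this maps connected sets to connected sets, so no stack of such layers can transport a Gaussian exactly onto, say, an annulus or a disjoint union of modes — the mismatch the abstract describes.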