## Universal Joint Approximation of Manifolds and Densities by Simple Injective Flows

### Michael Puthawala · Matti Lassas · Ivan Dokmanic · Maarten de Hoop

##### Hall E #1411

Keywords: [ DL: Theory ] [ T: Everything Else ] [ T: Deep Learning ]

Wed 20 Jul 3:30 p.m. PDT — 5:30 p.m. PDT

Spotlight presentation: Theory
Wed 20 Jul 1:30 p.m. PDT — 3 p.m. PDT

Abstract:

We study the approximation of probability measures supported on n-dimensional manifolds embedded in R^m by injective flows: neural networks composed of invertible flows and injective layers. We show that, in general, injective flows between R^n and R^m universally approximate measures supported on images of extendable embeddings, which are a subset of standard embeddings: when the embedding dimension m is small, topological obstructions may preclude certain manifolds as admissible targets. When the embedding dimension is sufficiently large, m >= 3n+1, we use an argument from algebraic topology known as the clean trick to prove that the topological obstructions vanish and injective flows universally approximate any differentiable embedding. Along the way we show that the studied injective flows admit efficient projections on the range, and that their optimality can be established "in reverse," resolving a conjecture made in Brehmer & Cranmer (2020).
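As a concrete illustration of the architecture the abstract describes, the sketch below builds a toy injective flow R^n -> R^m as an invertible map on R^n, followed by a zero-padding injective layer, followed by an invertible map on R^m, and implements a projection onto its range by composing the flow with its left inverse. This is a hedged, minimal sketch with linear layers standing in for general invertible flows; the dimensions, layer choices, and the pseudo-inverse-style projection (invert, truncate the padding, re-apply the flow) are illustrative assumptions, not the paper's construction, and the projection shown is not necessarily the Euclidean nearest point on the range.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 2, 7  # m >= 3n + 1, the regime where the obstructions vanish

# Stand-ins for invertible flow layers: well-conditioned random matrices.
A = rng.standard_normal((n, n)) + 3 * np.eye(n)   # invertible map on R^n
B = rng.standard_normal((m, m)) + 3 * np.eye(m)   # invertible map on R^m

def pad(z):
    """Injective layer R^n -> R^m: append m - n zeros."""
    return np.concatenate([z, np.zeros(m - n)])

def f(x):
    """Injective flow R^n -> R^m: invertible map, pad, invertible map."""
    return B @ pad(A @ x)

def f_dagger(y):
    """Left inverse on the range: invert B, drop the padding, invert A."""
    return np.linalg.solve(A, np.linalg.solve(B, y)[:n])

def project(y):
    """Map any y in R^m to a point on the range of f (f o f_dagger)."""
    return f(f_dagger(y))

x = rng.standard_normal(n)
y = f(x)
assert np.allclose(f_dagger(y), x)   # exact recovery of points on the range
assert np.allclose(project(y), y)    # projection fixes points on the range
```

Because inverting the flow reduces the projection to two triangular-cost solves and a truncation, this composition is cheap to evaluate, which is the sense in which such flows admit efficient projections on the range.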
