Poster
Tue Jul 20 09:00 AM -- 11:00 AM (PDT) @ Virtual
Self Normalizing Flows
T. Anderson Keller · Jorn Peters · Priyank Jaini · Emiel Hoogeboom · Patrick Forré · Max Welling
[ Slides ]
[ Paper ]
Efficient gradient computation of the Jacobian determinant term is a core problem in many machine learning settings, and especially so in the normalizing flow framework. Most proposed flow models therefore either restrict themselves to a function class whose Jacobian determinant is easy to evaluate, or rely on an efficient estimator thereof. However, these restrictions limit the performance of such density models, frequently requiring significant depth to reach desired performance levels. In this work, we propose \emph{Self Normalizing Flows}, a flexible framework for training normalizing flows by replacing expensive terms in the gradient with learned approximate inverses at each layer. This reduces the computational complexity of each layer's exact update from $\mathcal{O}(D^3)$ to $\mathcal{O}(D^2)$, allowing for the training of flow architectures which were otherwise computationally infeasible, while also providing efficient sampling. We show experimentally that such models are remarkably stable and optimize to data likelihood values similar to those of their exact-gradient counterparts, while training more quickly and surpassing the performance of functionally constrained counterparts.
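
The per-layer idea in the abstract can be illustrated with a short sketch. Below is a minimal, hypothetical example of a single linear flow layer with weight W and a learned approximate inverse R: the exact gradient of $\log|\det W|$, which is $(W^{-1})^T$ and costs $\mathcal{O}(D^3)$, is swapped for $R^T$ via a custom autograd function, while a reconstruction penalty keeps R close to $W^{-1}$. The class names, the standard-normal base density, and the penalty weight `lam` are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn


class ApproxLogDet(torch.autograd.Function):
    """log|det W| whose gradient w.r.t. W is replaced by the learned
    approximate inverse transposed, R^T (O(D^2)), instead of the exact
    (W^{-1})^T (O(D^3))."""

    @staticmethod
    def forward(ctx, W, R):
        ctx.save_for_backward(R)
        # Exact value kept only so the loss is interpretable; it could be
        # skipped during training to stay O(D^2) throughout.
        return torch.slogdet(W)[1]

    @staticmethod
    def backward(ctx, grad_out):
        (R,) = ctx.saved_tensors
        # Surrogate gradient: R^T in place of W^{-T}. No gradient is
        # routed to R through this term.
        return grad_out * R.t(), None


class SelfNormLinear(nn.Module):
    """One self-normalizing linear layer: forward weight W plus a learned
    approximate inverse R, kept close to W^{-1} by a reconstruction penalty."""

    def __init__(self, dim):
        super().__init__()
        init = torch.eye(dim) + 0.01 * torch.randn(dim, dim)
        self.W = nn.Parameter(init.clone())
        self.R = nn.Parameter(torch.inverse(init))  # start at the exact inverse

    def forward(self, x):
        z = x @ self.W.t()
        logdet = ApproxLogDet.apply(self.W, self.R)
        # Reconstruction penalty ||R z - x||^2; with z detached, its
        # gradient trains R toward the current W^{-1}.
        recon = ((z.detach() @ self.R.t() - x) ** 2).sum(dim=1).mean()
        return z, logdet, recon


def loss_fn(x, layer, lam=1.0):
    z, logdet, recon = layer(x)
    # Negative log-likelihood under a standard-normal base density
    # (additive constants dropped), plus the weighted inverse penalty.
    nll = 0.5 * (z ** 2).sum(dim=1).mean() - logdet
    return nll + lam * recon


if __name__ == "__main__":
    torch.manual_seed(0)
    layer = SelfNormLinear(dim=8)
    opt = torch.optim.Adam(layer.parameters(), lr=1e-3)
    x = torch.randn(256, 8)
    for _ in range(200):
        opt.zero_grad()
        loss = loss_fn(x, layer)
        loss.backward()
        opt.step()
    print("final loss:", float(loss))
```

A deeper flow would stack layers of this kind, applying the same per-layer surrogate gradient the abstract describes; the learned inverses also provide the efficient sampling path mentioned above.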