

Spotlight

Hierarchical VAEs Know What They Don’t Know

Jakob D. Havtorn · Jes Frellsen · Søren Hauberg · Lars Maaløe

[ Livestream: Visit Deep Generative Model 1 ] [ Paper ]

Abstract:

Deep generative models have been demonstrated as state-of-the-art density estimators. Yet, recent work has found that they often assign a higher likelihood to data from outside the training distribution. This seemingly paradoxical behavior has caused concerns over the quality of the attained density estimates. In the context of hierarchical variational autoencoders, we provide evidence that this behavior is explained by out-of-distribution (OOD) data having in-distribution low-level features. We argue that this is both expected and desirable behavior. With this insight in hand, we develop a fast, scalable, and fully unsupervised likelihood-ratio score for OOD detection that requires data to be in-distribution across all feature levels. We benchmark the method on a vast set of data and model combinations and achieve state-of-the-art results on out-of-distribution detection.
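To make the abstract's idea concrete, the following is a minimal sketch of how a feature-level likelihood-ratio score of this kind could be written down for a hierarchical VAE with latents z_1, ..., z_L ordered from low-level to high-level. The notation (L, L^{>k}, LLR^{>k}), the latent ordering, and the exact form of the bounds are illustrative assumptions and are not quoted from the abstract.

% Illustrative sketch (assumed notation): standard ELBO of a hierarchical VAE
\mathcal{L}(x) = \mathbb{E}_{q(z_{1:L} \mid x)}\left[ \log \frac{p(x, z_{1:L})}{q(z_{1:L} \mid x)} \right]

% Variant bound in which the k lowest latents are drawn from the conditional prior
% rather than the approximate posterior, so only the higher, more semantic latents
% are inferred from x
\mathcal{L}^{>k}(x) = \mathbb{E}_{p(z_{1:k} \mid z_{k+1:L})\, q(z_{k+1:L} \mid x)}\left[ \log \frac{p(x, z_{1:L})}{p(z_{1:k} \mid z_{k+1:L})\, q(z_{k+1:L} \mid x)} \right]

% Likelihood-ratio style score: data is treated as in-distribution only if it is
% well explained at all levels, i.e. if \mathcal{L}^{>k}(x) stays close to \mathcal{L}(x)
\mathrm{LLR}^{>k}(x) = \mathcal{L}(x) - \mathcal{L}^{>k}(x)

Comparing the two bounds isolates how much of the likelihood depends on inferring the low-level latents directly from x; an input whose likelihood is driven only by in-distribution low-level features scores poorly under \mathcal{L}^{>k}, which is one way to operationalize the requirement that data be in-distribution across all feature levels.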
