

Poster

A Geometric Explanation of the Likelihood OOD Detection Paradox

Hamidreza Kamkari · Brendan Ross · Jesse Cresswell · Anthony Caterini · Rahul G. Krishnan · Gabriel Loaiza-Ganem


Abstract:

Likelihood-based deep generative models (DGMs) commonly exhibit a puzzling behaviour: when trained on a relatively complex dataset, they assign higher likelihood values to out-of-distribution (OOD) data from simpler sources. Adding to the mystery, OOD samples are never generated by these DGMs despite having higher likelihoods. This two-pronged paradox has yet to be conclusively explained, making likelihood-based OOD detection unreliable. Our primary observation is that high-likelihood regions will not be generated if they contain minimal probability mass. We demonstrate how this seeming contradiction of large densities yet low probability mass can occur around data confined to low-dimensional manifolds. We also show that this scenario can be identified through local intrinsic dimension (LID) estimation, and propose a method for OOD detection which pairs the likelihoods and LID estimates obtained from a pre-trained DGM. Our method can be applied to normalizing flows and score-based diffusion models - which we show are also afflicted by the paradox - and often obtains results which surpass state-of-the-art OOD detection benchmarks using the same DGM backbones.
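The core observation above — that a region can have high density yet negligible probability mass, so it is essentially never sampled — can be illustrated with a toy example. The following sketch (not the paper's method; the mixture weights and scales are illustrative assumptions) builds a 1-D Gaussian mixture in which a very narrow spike carries only 0.1% of the mass but attains the highest density, so points near it score high likelihoods while samples almost never land there:

```python
# Toy illustration of "high density, low mass" (assumed parameters, not from the paper).
import numpy as np

rng = np.random.default_rng(0)

def normal_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# 99.9% of the mass sits in a broad component at x=5; 0.1% in a tiny spike at x=0.
w_spike, mu_spike, sigma_spike = 1e-3, 0.0, 1e-4
w_bulk = 1 - w_spike
mu_bulk, sigma_bulk = 5.0, 1.0

def density(x):
    return (w_spike * normal_pdf(x, mu_spike, sigma_spike)
            + w_bulk * normal_pdf(x, mu_bulk, sigma_bulk))

print(density(0.0))  # density at the low-mass spike: ~3.99 (highest)
print(density(5.0))  # density at the bulk mode: ~0.40

# Yet sampling almost never visits the spike, since it carries only ~0.1% of the mass.
n = 100_000
from_spike = rng.random(n) < w_spike
samples = np.where(from_spike,
                   rng.normal(mu_spike, sigma_spike, n),
                   rng.normal(mu_bulk, sigma_bulk, n))
print(np.mean(np.abs(samples) < 0.01))  # fraction of samples near the spike: ~0.001
```

In the paper's setting the spike plays the role of a low-dimensional region (e.g. where simpler OOD data concentrate): its small effective volume means large densities contribute almost no mass, which is why high-likelihood OOD points are never generated.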
