Deep generative models (DGMs) seem a natural fit for detecting out-of-distribution (OOD) inputs, but such models have been shown to assign higher probabilities or densities to OOD images than images from the training distribution. In this work, we explain why this behavior should be attributed to model misestimation. We first prove that no method can guarantee performance beyond random chance without assumptions on which out-distributions are relevant. We then interrogate the typical set hypothesis, the claim that relevant out-distributions can lie in high likelihood regions of the data distribution, and that OOD detection should be defined based on the data distribution's typical set. We highlight the consequences implied by assuming support overlap between in- and out-distributions, as well as the arbitrariness of the typical set for OOD detection. Our results suggest that estimation error is a more plausible explanation than the misalignment between likelihood-based OOD detection and out-distributions of interest, and we illustrate how even minimal estimation error can lead to OOD detection failures, yielding implications for future work in deep generative modeling and OOD detection.
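To make the typical set observation concrete, here is a minimal sketch (not the paper's method; the standard Gaussian stands in for a deep generative model, and all names are illustrative): in high dimensions, the mode of the model density receives a far higher likelihood than any typical sample, so a detector that flags only low-likelihood inputs would happily accept an atypical constant input.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 784  # e.g. a flattened 28x28 image

def log_density(x):
    # Log density of a standard normal N(0, I_d), our stand-in "model".
    return -0.5 * (d * np.log(2 * np.pi) + np.sum(x**2, axis=-1))

# In-distribution samples: draws from the model itself.
x_in = rng.standard_normal((1000, d))

# An atypical point: the mode (the all-zeros "image"), which lies far
# from the typical set even though it maximizes the density.
x_mode = np.zeros((1, d))

print("best in-distribution log-likelihood:", np.max(log_density(x_in)))
print("mode log-likelihood:", log_density(x_mode)[0])
# The mode's log-likelihood exceeds that of every sampled point, so a
# "reject low likelihood" rule cannot flag it as out-of-distribution.
```

The squared norm of a draw from N(0, I_d) concentrates around d, so typical samples sit on a thin shell of radius about sqrt(d); the mode, despite having the highest density, is never observed. This is the sense in which high model likelihood alone need not indicate in-distribution data.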
Author Information
Lily Zhang (New York University)
Mark Goldstein (New York University)
Rajesh Ranganath (New York University)
Related Events (a corresponding poster, oral, or spotlight)
- 2021 Spotlight: Understanding Failures in Out-of-Distribution Detection with Deep Generative Models
  Wed. Jul 21st 12:45 -- 12:50 AM
More from the Same Authors
- 2023 Workshop: The Second Workshop on Spurious Correlations, Invariance and Stability
  Yoav Wald · Claudia Shi · Aahlad Puli · Amir Feder · Limor Gultchin · Mark Goldstein · Maggie Makar · Victor Veitch · Uri Shalit
- 2022 Workshop: Spurious correlations, Invariance, and Stability (SCIS)
  Aahlad Puli · Maggie Makar · Victor Veitch · Yoav Wald · Mark Goldstein · Limor Gultchin · Angela Zhou · Uri Shalit · Suchi Saria
- 2022 Poster: Set Norm and Equivariant Skip Connections: Putting the Deep in Deep Sets
  Lily Zhang · Veronica Tozzo · John Higgins · Rajesh Ranganath
- 2022 Spotlight: Set Norm and Equivariant Skip Connections: Putting the Deep in Deep Sets
  Lily Zhang · Veronica Tozzo · John Higgins · Rajesh Ranganath
- 2021 Poster: Offline Contextual Bandits with Overparameterized Models
  David Brandfonbrener · William Whitney · Rajesh Ranganath · Joan Bruna
- 2021 Spotlight: Offline Contextual Bandits with Overparameterized Models
  David Brandfonbrener · William Whitney · Rajesh Ranganath · Joan Bruna
- 2019 Poster: The Variational Predictive Natural Gradient
  Da Tang · Rajesh Ranganath
- 2019 Poster: Predicate Exchange: Inference with Declarative Knowledge
  Zenna Tavares · Javier Burroni · Edgar Minasyan · Armando Solar-Lezama · Rajesh Ranganath
- 2019 Oral: The Variational Predictive Natural Gradient
  Da Tang · Rajesh Ranganath
- 2019 Oral: Predicate Exchange: Inference with Declarative Knowledge
  Zenna Tavares · Javier Burroni · Edgar Minasyan · Armando Solar-Lezama · Rajesh Ranganath
- 2018 Poster: Noisin: Unbiased Regularization for Recurrent Neural Networks
  Adji Bousso Dieng · Rajesh Ranganath · Jaan Altosaar · David Blei
- 2018 Oral: Noisin: Unbiased Regularization for Recurrent Neural Networks
  Adji Bousso Dieng · Rajesh Ranganath · Jaan Altosaar · David Blei