

Poster in Workshop: Structured Probabilistic Inference and Generative Modeling

Benchmarking Uncertainty Disentanglement: Specialized Uncertainties for Specialized Tasks

Bálint Mucsányi · Michael Kirchhof · Seong Joon Oh

Keywords: [ Uncertainty Quantification ] [ Abstained Prediction ] [ Aleatoric Uncertainty ] [ Out-of-Distribution Detection ] [ Epistemic Uncertainty ] [ Uncertainty Disentanglement ]


Abstract:

Uncertainty quantification, once a singular task, has evolved into a spectrum of tasks, including abstained prediction, out-of-distribution detection, and aleatoric uncertainty quantification. The latest goal is disentanglement: the construction of multiple estimators that are each tailored to one and only one source of uncertainty. This paper evaluates a wide spectrum of Bayesian, evidential, and deterministic methods across various uncertainty tasks on ImageNet. We find that, despite promising theoretical endeavors, disentanglement is not yet achieved in practice. Further, we reveal which uncertainty estimators excel at which specific tasks, providing insights for practitioners and guiding future research toward task-centric and disentangled uncertainty estimation methods. Our code is available at https://anonymous.4open.science/r/bud-ED1B/.
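For readers unfamiliar with the disentanglement framing, the sketch below illustrates the standard information-theoretic decomposition of predictive uncertainty for a deep ensemble, which yields one estimator per uncertainty source (aleatoric vs. epistemic). This is a minimal, illustrative example of the general idea; it is not necessarily the exact set of estimators or methods benchmarked in the paper, and the function names are assumptions for this sketch.

```python
# Hedged sketch: entropy-based decomposition of predictive uncertainty for a
# deep ensemble into aleatoric (expected entropy) and epistemic (mutual
# information) components. Illustrative only; not the paper's specific method.
import numpy as np

def entropy(p, axis=-1, eps=1e-12):
    """Shannon entropy of categorical distributions along `axis`."""
    return -np.sum(p * np.log(p + eps), axis=axis)

def decompose_uncertainty(member_probs):
    """member_probs: array of shape (n_members, n_samples, n_classes)
    holding softmax outputs of each ensemble member.

    Returns per-sample (total, aleatoric, epistemic), where
      total     = H[ mean_m p_m(y|x) ]   (predictive entropy)
      aleatoric = mean_m H[ p_m(y|x) ]   (expected member entropy)
      epistemic = total - aleatoric      (mutual information)
    """
    mean_probs = member_probs.mean(axis=0)            # (n_samples, n_classes)
    total = entropy(mean_probs)                       # predictive entropy
    aleatoric = entropy(member_probs).mean(axis=0)    # expected entropy
    epistemic = total - aleatoric                     # BALD-style mutual information
    return total, aleatoric, epistemic

# Toy usage: 5 ensemble members, 3 inputs, 4 classes.
rng = np.random.default_rng(0)
logits = rng.normal(size=(5, 3, 4))
probs = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)
total, aleatoric, epistemic = decompose_uncertainty(probs)
print(total, aleatoric, epistemic)
```

Under the disentanglement goal described in the abstract, the "aleatoric" estimator should respond only to label noise and the "epistemic" estimator only to model uncertainty (e.g., out-of-distribution inputs); the benchmark evaluates how well such specialized estimators actually separate in practice.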
