Predictive Uncertainties Based on Proper Scoring Rules
Abstract
This paper presents a theoretical framework for understanding uncertainty through the lens of statistical risks. It introduces a method to distinguish aleatoric uncertainty, which stems from inherent data variability, from epistemic uncertainty, which reflects a lack of knowledge about the best model parameters. We explain how the pointwise risk can be decomposed into the Bayes risk and the excess risk, and show that the excess risk, linked to epistemic uncertainty, corresponds to a Bregman divergence. To convert these theoretical risk measures into practical uncertainty estimates, we propose a Bayesian approach that approximates the risks through posterior distributions. We validate our method on image datasets, assessing its ability to identify out-of-distribution and misclassified samples using the AUROC metric. Our findings demonstrate the efficacy of this approach and provide practical insights for estimating uncertainty in real-world scenarios.
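As an illustrative sketch of the decomposition described above (the choice of the logarithmic score and the notation are assumptions for exposition, not fixed by the abstract), a proper scoring rule splits the pointwise risk of a predictive distribution $q(\cdot \mid x)$ into a Bayes-risk term and an excess-risk term; for the log score these are the entropy of the true conditional (aleatoric) and the KL divergence (epistemic), the latter being the Bregman divergence generated by the negative Shannon entropy.

```latex
% Sketch of the pointwise risk decomposition for the logarithmic score
% (one example of a proper scoring rule; notation is illustrative).
\begin{align*}
\underbrace{\mathbb{E}_{y \sim p(\cdot \mid x)}\!\left[-\log q(y \mid x)\right]}_{\text{pointwise risk}}
  \;=\;
\underbrace{\mathrm{H}\!\big(p(\cdot \mid x)\big)}_{\text{Bayes risk (aleatoric)}}
  \;+\;
\underbrace{\mathrm{KL}\!\big(p(\cdot \mid x)\,\|\,q(\cdot \mid x)\big)}_{\text{excess risk (epistemic)}}
\end{align*}
% The KL term is the Bregman divergence generated by negative Shannon entropy.
```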