Poster in Workshop: 3rd Workshop on Interpretable Machine Learning in Healthcare (IMLH)
Better Calibration Error Estimation for Reliable Uncertainty Quantification
Shuman Peng · Parsa Alamzadeh · Martin Ester
Keywords: [ uncertainty quantification ] [ calibration error estimation ]
Abstract:
Reliable uncertainty quantification is crucial in high-stakes applications, such as healthcare. The $\text{ECE}_\text{EW}$ has been the most commonly used estimator for quantifying the calibration error (CE), but it is heavily biased and can significantly underestimate the true calibration error. While alternative estimators, such as $\text{ECE}_\text{DEBIASED}$ and $\text{ECE}_\text{SWEEP}$, achieve smaller estimation bias in comparison, they exhibit a trade-off between overestimation of the CE on uncalibrated models and underestimation on recalibrated models. To address this trade-off, we propose a new estimator based on K-Nearest Neighbors (KNN), called $\text{ECE}_\text{KNN}$, which constructs representative overlapping local neighbourhoods for improved CE estimation. Empirical results demonstrate that $\text{ECE}_\text{KNN}$ achieves near-zero underestimation of the CE on uncalibrated models while also exhibiting lower overestimation on recalibrated models.
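To make the contrast between binning-based and neighbourhood-based estimation concrete, the following is a minimal, hedged sketch in Python. It shows (a) the standard equal-width binning ECE described in the abstract and (b) a toy KNN-style local estimate that averages the gap between local accuracy and local mean confidence over overlapping neighbourhoods. The function names, the choice of `num_bins` and `k`, and the KNN variant itself are illustrative assumptions; this is not the authors' $\text{ECE}_\text{KNN}$ implementation.

```python
# Illustrative sketch (not the authors' implementation) of two calibration-error
# estimators on top-label confidences with binary correctness labels.
import numpy as np

def ece_equal_width(confidences, correct, num_bins=15):
    """Equal-width binning ECE: partition [0, 1] into equal-width bins and
    average |accuracy - mean confidence| weighted by the bin's sample share."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, num_bins + 1)
    n = len(confidences)
    ece = 0.0
    for i, (lo, hi) in enumerate(zip(edges[:-1], edges[1:])):
        # include the left edge only in the first bin
        in_bin = (confidences >= lo if i == 0 else confidences > lo) & (confidences <= hi)
        if in_bin.sum() == 0:
            continue
        acc = correct[in_bin].mean()
        conf = confidences[in_bin].mean()
        ece += (in_bin.sum() / n) * abs(acc - conf)
    return ece

def ece_knn_sketch(confidences, correct, k=50):
    """Toy KNN-style estimate: for each sample, form an overlapping neighbourhood
    of the k confidences closest to it and average |local accuracy - local mean
    confidence|. Only an illustration of the neighbourhood idea, not ECE_KNN."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    n = len(confidences)
    k = min(k, n)
    gaps = np.empty(n)
    for i in range(n):
        # indices of the k nearest confidences (distance in 1-D confidence space)
        nn = np.argsort(np.abs(confidences - confidences[i]))[:k]
        gaps[i] = abs(correct[nn].mean() - confidences[nn].mean())
    return gaps.mean()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    conf = rng.uniform(0.5, 1.0, size=2000)      # simulated top-label confidences
    correct = rng.random(2000) < conf ** 2       # overconfident model: accuracy below confidence
    print("ECE (equal width):", round(ece_equal_width(conf, correct), 4))
    print("ECE (KNN sketch): ", round(ece_knn_sketch(conf, correct), 4))
```

Because the KNN-style neighbourhoods overlap and adapt to where confidences concentrate, they avoid the empty or sparsely populated bins that drive the bias of fixed equal-width binning; the exact construction used by $\text{ECE}_\text{KNN}$ is described in the paper.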