

Poster

Individual Calibration with Randomized Forecasting

Shengjia Zhao · Tengyu Ma · Stefano Ermon

Keywords: [ Fairness, Equity and Justice ] [ Trustworthy Machine Learning ] [ Statistical Learning Theory ] [ Robust Statistics and Machine Learning ]


Abstract:

Machine learning applications often require calibrated predictions, e.g., a 90% credible interval should contain the true outcome 90% of the time. However, typical definitions of calibration only require this to hold on average and offer no guarantees for predictions on individual samples. Predictions can therefore be systematically over- or under-confident on certain subgroups, leading to fairness issues and potential vulnerabilities. We show that calibration for individual samples is possible in the regression setup if and only if the predictions are randomized, i.e., the model outputs randomized credible intervals. Randomization removes systematic bias by trading off bias with variance. We design a training objective to enforce individual calibration and use it to train randomized regression functions. The resulting models are better calibrated on arbitrarily chosen subgroups of the data and achieve higher utility in decision making against adversaries that exploit miscalibrated predictions.
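To illustrate the core idea of a randomized credible interval, the sketch below (our own minimal construction, not the paper's training objective; all function names are ours) draws a uniform quantile offset r and returns the credible set whose quantile range [r, r + c] wraps around modulo 1. For any fixed outcome y and any continuous forecast CDF, the probability over r that the set contains y is exactly c, which is the sense in which randomization yields per-sample calibration:

```python
from statistics import NormalDist  # stdlib Gaussian with cdf / inv_cdf

def randomized_credible_set(dist, c, r):
    """Level-c randomized credible set for the forecast CDF `dist`.

    The quantile interval [r, r + c] wraps modulo 1, so the result is
    either one interval or a union of two half-lines (a sketch of the
    randomized-forecast idea; assumes `dist` has a continuous,
    strictly increasing CDF).
    """
    lo, hi = r, r + c
    eps = 1e-12  # inv_cdf is only defined on the open interval (0, 1)
    if hi <= 1.0:
        return [(dist.inv_cdf(max(lo, eps)), dist.inv_cdf(min(hi, 1 - eps)))]
    # Wrap-around case: quantiles [r, 1] together with [0, r + c - 1].
    return [(dist.inv_cdf(lo), float("inf")),
            (float("-inf"), dist.inv_cdf(hi - 1.0))]

def covers(intervals, y):
    """True if the outcome y lies in the (possibly two-piece) set."""
    return any(a <= y <= b for a, b in intervals)

# Per-sample coverage check: sweep r over a fine grid and measure how
# often the set contains one fixed outcome y; it should be close to c
# even though the forecast N(0, 1) may be wrong for this sample.
c = 0.9
forecast = NormalDist(0.0, 1.0)
y = 1.7          # one arbitrary individual outcome
n = 2000
coverage = sum(
    covers(randomized_credible_set(forecast, c, (i + 0.5) / n), y)
    for i in range(n)
) / n
print(coverage)  # close to 0.9
```

Note that this per-sample guarantee holds for any forecast CDF, which is why randomization alone is not enough: the paper's training objective is what keeps the randomized intervals both calibrated and informative.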
