

Poster

When and How Mixup Improves Calibration

Linjun Zhang · Zhun Deng · Kenji Kawaguchi · James Zou

Hall E #523

Keywords: [ MISC: Supervised Learning ] [ SA: Trustworthy Machine Learning ] [ Theory ] [ MISC: Unsupervised and Semi-supervised Learning ] [ MISC: General Machine Learning Techniques ]


Abstract:

In many machine learning applications, it is important for the model to provide confidence scores that accurately capture its prediction uncertainty. Although modern learning methods have achieved great success in predictive accuracy, generating calibrated confidence scores remains a major challenge. Mixup, a popular yet simple data augmentation technique based on taking convex combinations of pairs of training examples, has been empirically found to significantly improve confidence calibration across diverse applications. However, when and how Mixup helps calibration is still a mystery. In this paper, we theoretically prove that Mixup improves calibration in high-dimensional settings by investigating natural statistical models. Interestingly, the calibration benefit of Mixup increases as the model capacity increases. We support our theories with experiments on common architectures and datasets. In addition, we study how Mixup improves calibration in semi-supervised learning. While incorporating unlabeled data can sometimes make the model less calibrated, adding Mixup training mitigates this issue and provably improves calibration. Our analysis provides new insights and a framework to understand Mixup and calibration.
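For concreteness, below is a minimal NumPy sketch of the two ingredients the abstract discusses: the Mixup augmentation (drawing a mixing weight from a Beta(alpha, alpha) distribution and forming convex combinations of paired inputs and one-hot labels, following the standard Mixup recipe) and a binned expected calibration error (ECE), a common metric for confidence calibration. The function names, the alpha=1.0 default, and the 15-bin ECE are illustrative choices, not details taken from the paper.

```python
import numpy as np

def mixup_batch(x, y, alpha=1.0, rng=None):
    """Mixup: replace each example with a convex combination of a random pair.

    x: (n, d) inputs; y: (n, k) one-hot labels.
    The mixing weight lambda is drawn from Beta(alpha, alpha),
    as in the standard Mixup recipe.
    """
    rng = np.random.default_rng() if rng is None else rng
    lam = rng.beta(alpha, alpha)            # mixing weight in [0, 1]
    perm = rng.permutation(x.shape[0])      # random pairing of examples
    x_mix = lam * x + (1 - lam) * x[perm]   # convex combination of inputs
    y_mix = lam * y + (1 - lam) * y[perm]   # same combination of labels
    return x_mix, y_mix

def expected_calibration_error(conf, correct, n_bins=15):
    """Binned ECE: sample-weighted gap between confidence and accuracy.

    conf: (n,) predicted confidence (e.g., max softmax probability);
    correct: (n,) 1 if the prediction was right, else 0.
    """
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (conf > lo) & (conf <= hi)
        if in_bin.any():
            gap = abs(conf[in_bin].mean() - correct[in_bin].mean())
            ece += in_bin.mean() * gap      # weight by fraction of samples
    return ece
```

With these two pieces, one can probe the abstract's claim empirically: train a model with and without `mixup_batch` applied to each training batch, then compare `expected_calibration_error` on held-out predictions.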
