Poster
Tue Jul 14 07:00 PM -- 07:45 PM & Wed Jul 15 04:00 AM -- 04:45 AM (PDT)
Unbiased Risk Estimators Can Mislead: A Case Study of Learning with Complementary Labels
Yu-Ting Chou · Gang Niu · Hsuan-Tien Lin · Masashi Sugiyama

In weakly supervised learning, the unbiased risk estimator (URE) is a powerful tool for training classifiers when training and test data are drawn from different distributions. Nevertheless, UREs lead to overfitting in many problem settings when the models are complex, such as deep networks. In this paper, we investigate the reasons for such overfitting by studying a weakly supervised problem called learning with complementary labels. We argue that the quality of gradient estimation matters more in risk minimization. Theoretically, we show that a URE gives an unbiased gradient estimator (UGE). Practically, however, UGEs may suffer from huge variance, which causes empirical gradients to be usually far away from the true gradients during minimization. To this end, we propose a novel surrogate complementary loss (SCL) framework that trades zero bias for reduced variance and makes empirical gradients better aligned with the true gradients in direction. Thanks to this characteristic, SCL successfully mitigates the overfitting issue and improves URE-based methods.
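To make the contrast concrete, below is a minimal NumPy sketch of the two loss families the abstract compares, under the common uniform complementary-label assumption. The URE form follows the standard risk rewrite from prior complementary-label work with cross-entropy as the base loss; `scl_nl_loss` is one plausible surrogate that directly penalizes probability mass on the complementary class. Function names and the specific surrogate variant are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def softmax(z):
    """Row-wise softmax over logits."""
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def ure_loss(logits, comp_labels, K):
    """Unbiased risk estimator under uniform complementary labels
    (standard rewrite from prior work, with cross-entropy as base loss):
        R_hat = mean[ -(K-1) * ell(f(x), y_bar) + sum_k ell(f(x), k) ].
    Unbiased in expectation, but high-variance per sample."""
    p = softmax(logits)
    n = logits.shape[0]
    ell_bar = -np.log(p[np.arange(n), comp_labels] + 1e-12)
    ell_all = (-np.log(p + 1e-12)).sum(axis=1)
    return np.mean(-(K - 1) * ell_bar + ell_all)

def scl_nl_loss(logits, comp_labels):
    """An illustrative surrogate complementary loss (negative-log form):
    penalize the probability assigned to the complementary label directly.
    Biased as a risk estimate, but with much lower gradient variance."""
    p = softmax(logits)
    n = logits.shape[0]
    return np.mean(-np.log(1.0 - p[np.arange(n), comp_labels] + 1e-12))
```

Note that on confident predictions the URE value can go negative (e.g. `ure_loss(np.array([[10., -10., 0.]]), np.array([1]), 3)` is below zero), a known symptom of the per-sample variance that drives the overfitting the paper analyzes; the surrogate loss stays nonnegative by construction.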