Oral
How does Disagreement Help Generalization against Label Corruption?
Xingrui Yu · Bo Han · Jiangchao Yao · Gang Niu · Ivor Tsang · Masashi Sugiyama

Wed Jun 12 04:00 PM -- 04:20 PM (PDT) @ Hall A

Learning with noisy labels is one of the most actively studied problems in weakly supervised learning. Based on the memorization effect of deep neural networks, training on small-loss samples is a promising way to handle noisy labels. This idea underlies the state-of-the-art approach "Co-teaching," which cross-trains two deep neural networks using the small-loss trick. However, as the number of epochs increases, the two networks gradually converge to a consensus, and Co-teaching reduces to the self-training MentorNet. To tackle this issue, we propose a robust learning paradigm called Co-teaching+, which bridges the "Update by Disagreement" strategy with the original Co-teaching. First, the two networks make predictions on all data and feed forward only the data on which their predictions disagree. Then, among such disagreement data, each network selects its small-loss data, but back-propagates the small-loss data selected by its peer network and updates its own parameters. Empirical results on noisy benchmark datasets demonstrate that Co-teaching+ is far superior to many state-of-the-art methods in the robustness of trained models.
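The abstract describes a two-stage procedure: disagreement filtering, then cross-update on small-loss data. Below is a minimal PyTorch sketch of a single Co-teaching+ mini-batch step following that description; the function name `coteaching_plus_step`, the `forget_rate` argument, and the two-optimizer setup are illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn.functional as F

def coteaching_plus_step(net1, net2, opt1, opt2, x, y, forget_rate):
    """One Co-teaching+ update on a mini-batch (x, y).

    forget_rate is the assumed fraction of corrupted labels to drop;
    in practice it is ramped up over the first few epochs.
    """
    logits1, logits2 = net1(x), net2(x)

    # Step 1: keep only the samples on which the two networks disagree.
    disagree = logits1.argmax(dim=1) != logits2.argmax(dim=1)
    if not disagree.any():
        return  # no disagreement in this batch; skip the update
    l1, l2, yd = logits1[disagree], logits2[disagree], y[disagree]

    # Step 2: per-sample losses on the disagreement data.
    loss1 = F.cross_entropy(l1, yd, reduction="none")
    loss2 = F.cross_entropy(l2, yd, reduction="none")

    # Step 3: each network selects its own small-loss samples,
    # treating them as likely clean.
    num_keep = max(1, int((1.0 - forget_rate) * len(yd)))
    keep1 = torch.argsort(loss1)[:num_keep]  # small-loss by net1
    keep2 = torch.argsort(loss2)[:num_keep]  # small-loss by net2

    # Step 4: cross-update -- each network back-propagates on the
    # samples selected by its peer and updates its own parameters.
    opt1.zero_grad()
    loss1[keep2].mean().backward()
    opt1.step()

    opt2.zero_grad()
    loss2[keep1].mean().backward()
    opt2.step()
```

A typical training loop would call this once per batch while gradually increasing `forget_rate` toward the estimated noise rate, mirroring the warm-up schedule used in the original Co-teaching.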

Author Information

Xingrui Yu (University of Technology Sydney)
Bo Han (RIKEN-AIP)
Jiangchao Yao (University of Technology Sydney)
Gang Niu (RIKEN)

Gang Niu is currently an indefinite-term senior research scientist at the RIKEN Center for Advanced Intelligence Project.

Ivor Tsang (University of Technology Sydney)
Masashi Sugiyama (RIKEN / The University of Tokyo)
