In contrast to the standard classification paradigm, where the true class is given to each training pattern, complementary-label learning uses training patterns that are each equipped only with a complementary label, which specifies one of the classes the pattern does not belong to. The goal of this paper is to derive a novel framework of complementary-label learning with an unbiased estimator of the classification risk for arbitrary losses and models, a goal that all existing methods have failed to achieve. Not only is this beneficial for the learning stage, but it also makes model and hyper-parameter selection (through cross-validation) possible without any ordinarily labeled validation data, while allowing any linear or non-linear model and any convex or non-convex loss function. We further improve the risk estimator with a non-negative correction and a gradient-ascent trick, and demonstrate its superiority through experiments.
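To make the estimator in the abstract concrete: with K classes and a complementary label ybar drawn uniformly from the K-1 classes other than the true label y (the paper's uniform assumption), the expectation over ybar of [sum_k loss(f(x), k) - (K-1) * loss(f(x), ybar)] equals loss(f(x), y) for any loss, so averaging that expression over complementarily labeled data estimates the ordinary classification risk without bias. Below is a minimal NumPy sketch of this rewritten risk; the softmax cross-entropy loss and all function names are illustrative choices, not the authors' released code.

import numpy as np

def per_class_losses(logits):
    # loss(f(x), k) for every class k; here the loss is softmax
    # cross-entropy, but the estimator is valid for any loss.
    shifted = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    return -log_probs  # shape (n, K); column k is the loss w.r.t. label k

def unbiased_complementary_risk(logits, comp_labels, num_classes):
    # Empirical version of
    #   R(f) = E[ sum_k loss(f(x), k) - (K - 1) * loss(f(x), ybar) ],
    # which matches the ordinary risk in expectation when ybar is
    # uniform over the K - 1 incorrect classes.
    losses = per_class_losses(logits)                        # (n, K)
    sum_over_classes = losses.sum(axis=1)                    # sum_k loss(f(x), k)
    loss_at_comp = losses[np.arange(len(logits)), comp_labels]
    return np.mean(sum_over_classes - (num_classes - 1) * loss_at_comp)

# Toy usage: 4 samples, K = 3 classes, complementary labels in {0, 1, 2}.
rng = np.random.default_rng(0)
logits = rng.normal(size=(4, 3))
ybar = np.array([2, 0, 1, 1])
print(unbiased_complementary_risk(logits, ybar, num_classes=3))

Note that the (K-1)-weighted term enters with a negative sign, so a finite-sample estimate of this risk can become negative and encourage overfitting; roughly speaking, the paper's non-negative correction keeps class-wise partial risks from going below zero, switching to gradient ascent when a mini-batch estimate does, which this sketch deliberately omits.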
Author Information
Takashi Ishida (The University of Tokyo / RIKEN)
Gang Niu (RIKEN)
Aditya Menon (Australian National University)
Masashi Sugiyama (RIKEN / The University of Tokyo)
Related Events (a corresponding poster, oral, or spotlight)
- 2019 Oral: Complementary-Label Learning for Arbitrary Losses and Models »
  Thu Jun 13th 04:25 -- 04:30 PM, Room 103
More from the Same Authors
- 2020 Poster: Few-shot Domain Adaptation by Causal Mechanism Transfer »
  Takeshi Teshima · Issei Sato · Masashi Sugiyama
- 2020 Poster: Do We Need Zero Training Loss After Achieving Zero Training Error? »
  Takashi Ishida · Ikko Yamane · Tomoya Sakai · Gang Niu · Masashi Sugiyama
- 2020 Poster: Progressive Identification of True Labels for Partial-Label Learning »
  Jiaqi Lv · Miao Xu · Lei Feng · Gang Niu · Xin Geng · Masashi Sugiyama
- 2020 Poster: Online Dense Subgraph Discovery via Blurred-Graph Feedback »
  Yuko Kuroki · Atsushi Miyauchi · Junya Honda · Masashi Sugiyama
- 2020 Poster: SIGUA: Forgetting May Make Learning with Noisy Labels More Robust »
  Bo Han · Gang Niu · Xingrui Yu · Quanming Yao · Miao Xu · Ivor Tsang · Masashi Sugiyama
- 2020 Poster: Unbiased Risk Estimators Can Mislead: A Case Study of Learning with Complementary Labels »
  Yu-Ting Chou · Gang Niu · Hsuan-Tien Lin · Masashi Sugiyama
- 2020 Poster: Attacks Which Do Not Kill Training Make Adversarial Learning Stronger »
  Jingfeng Zhang · Xilie Xu · Bo Han · Gang Niu · Lizhen Cui · Masashi Sugiyama · Mohan Kankanhalli
- 2020 Poster: Accelerating the diffusion-based ensemble sampling by non-reversible dynamics »
  Futoshi Futami · Issei Sato · Masashi Sugiyama
- 2020 Poster: Variational Imitation Learning with Diverse-quality Demonstrations »
  Voot Tangkaratt · Bo Han · Mohammad Emtiyaz Khan · Masashi Sugiyama
- 2020 Poster: Learning with Multiple Complementary Labels »
  Lei Feng · Takuo Kaneko · Bo Han · Gang Niu · Bo An · Masashi Sugiyama
- 2020 Poster: Searching to Exploit Memorization Effect in Learning with Noisy Labels »
  Quanming Yao · Hansi Yang · Bo Han · Gang Niu · James Kwok
- 2020 Poster: Normalized Flat Minima: Exploring Scale Invariant Definition of Flat Minima for Neural Networks Using PAC-Bayesian Analysis »
  Yusuke Tsuzuku · Issei Sato · Masashi Sugiyama
- 2019 Poster: Classification from Positive, Unlabeled and Biased Negative Data »
  Yu-Guan Hsieh · Gang Niu · Masashi Sugiyama
- 2019 Oral: Classification from Positive, Unlabeled and Biased Negative Data »
  Yu-Guan Hsieh · Gang Niu · Masashi Sugiyama
- 2019 Poster: How does Disagreement Help Generalization against Label Corruption? »
  Xingrui Yu · Bo Han · Jiangchao Yao · Gang Niu · Ivor Tsang · Masashi Sugiyama
- 2019 Oral: How does Disagreement Help Generalization against Label Corruption? »
  Xingrui Yu · Bo Han · Jiangchao Yao · Gang Niu · Ivor Tsang · Masashi Sugiyama
- 2019 Poster: Imitation Learning from Imperfect Demonstration »
  Yueh-Hua Wu · Nontawat Charoenphakdee · Han Bao · Voot Tangkaratt · Masashi Sugiyama
- 2019 Poster: On Symmetric Losses for Learning from Corrupted Labels »
  Nontawat Charoenphakdee · Jongyeong Lee · Masashi Sugiyama
- 2019 Oral: Imitation Learning from Imperfect Demonstration »
  Yueh-Hua Wu · Nontawat Charoenphakdee · Han Bao · Voot Tangkaratt · Masashi Sugiyama
- 2019 Oral: On Symmetric Losses for Learning from Corrupted Labels »
  Nontawat Charoenphakdee · Jongyeong Lee · Masashi Sugiyama
- 2018 Poster: Classification from Pairwise Similarity and Unlabeled Data »
  Han Bao · Gang Niu · Masashi Sugiyama
- 2018 Oral: Classification from Pairwise Similarity and Unlabeled Data »
  Han Bao · Gang Niu · Masashi Sugiyama
- 2018 Poster: Does Distributionally Robust Supervised Learning Give Robust Classifiers? »
  Weihua Hu · Gang Niu · Issei Sato · Masashi Sugiyama
- 2018 Oral: Does Distributionally Robust Supervised Learning Give Robust Classifiers? »
  Weihua Hu · Gang Niu · Issei Sato · Masashi Sugiyama
- 2018 Poster: Analysis of Minimax Error Rate for Crowdsourcing and Its Application to Worker Clustering Model »
  Hideaki Imamura · Issei Sato · Masashi Sugiyama
- 2018 Oral: Analysis of Minimax Error Rate for Crowdsourcing and Its Application to Worker Clustering Model »
  Hideaki Imamura · Issei Sato · Masashi Sugiyama
- 2017 Poster: Learning Discrete Representations via Information Maximizing Self-Augmented Training »
  Weihua Hu · Takeru Miyato · Seiya Tokui · Eiichi Matsumoto · Masashi Sugiyama
- 2017 Talk: Learning Discrete Representations via Information Maximizing Self-Augmented Training »
  Weihua Hu · Takeru Miyato · Seiya Tokui · Eiichi Matsumoto · Masashi Sugiyama
- 2017 Poster: Semi-Supervised Classification Based on Classification from Positive and Unlabeled Data »
  Tomoya Sakai · Marthinus C du Plessis · Gang Niu · Masashi Sugiyama
- 2017 Talk: Semi-Supervised Classification Based on Classification from Positive and Unlabeled Data »
  Tomoya Sakai · Marthinus C du Plessis · Gang Niu · Masashi Sugiyama