Partial-label learning (PLL) is a typical weakly supervised learning problem in which each training instance is equipped with a set of candidate labels, among which only one is the true label. Most existing methods elaborately design learning objectives as constrained optimizations that must be solved in specific manners, making their computational complexity a bottleneck for scaling up to big data. The goal of this paper is to propose a novel framework of PLL with flexibility on the model and optimization algorithm. More specifically, we propose a novel estimator of the classification risk, theoretically analyze its classifier consistency, and establish an estimation error bound. We then propose a progressive identification algorithm for approximately minimizing the proposed risk estimator, in which the update of the model and the identification of true labels are conducted in a seamless manner. The resulting algorithm is model-independent and loss-independent, and compatible with stochastic optimization. Thorough experiments demonstrate that it sets the new state of the art.
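To make the "seamless" update concrete, here is a minimal PyTorch-style sketch of one minibatch step. It assumes candidate sets are encoded as a 0/1 mask, per-instance label weights start uniform over each candidate set, and cross-entropy is the surrogate loss; the function name `proden_step` and all tensor names are illustrative and not the authors' reference implementation.

```python
import torch
import torch.nn.functional as F


def proden_step(model, optimizer, x, candidate_mask, weights):
    """One minibatch step of progressive identification (illustrative sketch).

    x:              (batch, ...) input features
    candidate_mask: (batch, num_classes) 0/1 tensor marking candidate labels
    weights:        (batch, num_classes) current label weights, initialized
                    uniformly over each instance's candidate set
    """
    # Weighted classification loss over the candidate labels only.
    log_probs = F.log_softmax(model(x), dim=1)
    loss = -(weights * log_probs).sum(dim=1).mean()

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    # Progressive identification: re-weight each candidate label by the
    # model's current confidence, renormalized within the candidate set.
    with torch.no_grad():
        probs = F.softmax(model(x), dim=1) * candidate_mask
        new_weights = probs / probs.sum(dim=1, keepdim=True).clamp_min(1e-12)

    return loss.item(), new_weights
```

Because the loss is an ordinary weighted classification loss and the weight update only reads the current predictions, any differentiable model, surrogate loss, and stochastic optimizer can be plugged in, which is the flexibility the abstract emphasizes.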
Author Information
Jiaqi Lv (Southeast University)
Miao Xu (University of Queensland/ RIKEN AIP)
Lei Feng (Nanyang Technological University)
Gang Niu (RIKEN)

Gang Niu is currently an indefinite-term senior research scientist at RIKEN Center for Advanced Intelligence Project.
Xin Geng (Southeast University)
Masashi Sugiyama (RIKEN / The University of Tokyo)
More from the Same Authors
2023 : Invited Talk 3: Masashi Sugiyama (RIKEN & UTokyo) - Data distribution shift »
Masashi Sugiyama -
2023 : Enriching Disentanglement: Definitions to Metrics »
Yivan Zhang · Masashi Sugiyama -
2023 Poster: FREDIS: A Fusion Framework of Refinement and Disambiguation for Unreliable Partial Label Learning »
Congyu Qiao · Ning Xu · Jiaqi Lyu · Yi Ren · Xin Geng -
2023 Poster: Mitigating Memorization of Noisy Labels by Clipping the Model Prediction »
Hongxin Wei · Huiping Zhuang · Renchunzi Xie · Lei Feng · Gang Niu · Bo An · Sharon Li -
2023 Poster: GAT: Guided Adversarial Training with Pareto-optimal Auxiliary Tasks »
Salah Ghamizi · Jingfeng Zhang · Maxime Cordy · Mike Papadakis · Masashi Sugiyama · Yves Le Traon -
2023 Poster: Revisiting Pseudo-Label for Single-Positive Multi-Label Learning »
Biao Liu · Ning Xu · Jiaqi Lyu · Xin Geng -
2023 Poster: Diversity-enhancing Generative Network for Few-shot Hypothesis Adaptation »
Ruijiang Dong · Feng Liu · Haoang Chi · Tongliang Liu · Mingming Gong · Gang Niu · Masashi Sugiyama · Bo Han -
2023 Poster: A Universal Unbiased Method for Classification from Aggregate Observations »
Zixi Wei · Lei Feng · Bo Han · Tongliang Liu · Gang Niu · Xiaofeng Zhu · Heng Tao Shen -
2023 Poster: Optimality of Thompson Sampling with Noninformative Priors for Pareto Bandits »
Jongyeong Lee · Junya Honda · Chao-Kai Chiang · Masashi Sugiyama -
2023 Poster: A Category-theoretical Meta-analysis of Definitions of Disentanglement »
Yivan Zhang · Masashi Sugiyama -
2023 Poster: Progressive Purification for Instance-Dependent Partial Label Learning »
Ning Xu · Biao Liu · Jiaqi Lyu · Congyu Qiao · Xin Geng -
2022 Poster: Estimating Instance-dependent Bayes-label Transition Matrix using a Deep Neural Network »
Shuo Yang · Erkun Yang · Bo Han · Yang Liu · Min Xu · Gang Niu · Tongliang Liu -
2022 Spotlight: Estimating Instance-dependent Bayes-label Transition Matrix using a Deep Neural Network »
Shuo Yang · Erkun Yang · Bo Han · Yang Liu · Min Xu · Gang Niu · Tongliang Liu -
2022 Poster: Mitigating Neural Network Overconfidence with Logit Normalization »
Hongxin Wei · Renchunzi Xie · Hao Cheng · Lei Feng · Bo An · Sharon Li -
2022 Poster: Adversarial Attack and Defense for Non-Parametric Two-Sample Tests »
Xilie Xu · Jingfeng Zhang · Feng Liu · Masashi Sugiyama · Mohan Kankanhalli -
2022 Poster: Adaptive Inertia: Disentangling the Effects of Adaptive Learning Rate and Momentum »
Zeke Xie · Xinrui Wang · Huishuai Zhang · Issei Sato · Masashi Sugiyama -
2022 Spotlight: Adversarial Attack and Defense for Non-Parametric Two-Sample Tests »
Xilie Xu · Jingfeng Zhang · Feng Liu · Masashi Sugiyama · Mohan Kankanhalli -
2022 Oral: Adaptive Inertia: Disentangling the Effects of Adaptive Learning Rate and Momentum »
Zeke Xie · Xinrui Wang · Huishuai Zhang · Issei Sato · Masashi Sugiyama -
2022 Spotlight: Mitigating Neural Network Overconfidence with Logit Normalization »
Hongxin Wei · Renchunzi Xie · Hao Cheng · Lei Feng · Bo An · Sharon Li -
2022 Poster: Fast and Reliable Evaluation of Adversarial Robustness with Minimum-Margin Attack »
Ruize Gao · Jiongxiao Wang · Kaiwen Zhou · Feng Liu · Binghui Xie · Gang Niu · Bo Han · James Cheng -
2022 Poster: To Smooth or Not? When Label Smoothing Meets Noisy Labels »
Jiaheng Wei · Hangyu Liu · Tongliang Liu · Gang Niu · Masashi Sugiyama · Yang Liu -
2022 Poster: Open-Sampling: Exploring Out-of-Distribution data for Re-balancing Long-tailed datasets »
Hongxin Wei · Lue Tao · Renchunzi Xie · Lei Feng · Bo An -
2022 Spotlight: Fast and Reliable Evaluation of Adversarial Robustness with Minimum-Margin Attack »
Ruize Gao · Jiongxiao Wang · Kaiwen Zhou · Feng Liu · Binghui Xie · Gang Niu · Bo Han · James Cheng -
2022 Spotlight: Open-Sampling: Exploring Out-of-Distribution data for Re-balancing Long-tailed datasets »
Hongxin Wei · Lue Tao · Renchunzi Xie · Lei Feng · Bo An -
2022 Oral: To Smooth or Not? When Label Smoothing Meets Noisy Labels »
Jiaheng Wei · Hangyu Liu · Tongliang Liu · Gang Niu · Masashi Sugiyama · Yang Liu -
2021 Poster: Provably End-to-end Label-noise Learning without Anchor Points »
Xuefeng Li · Tongliang Liu · Bo Han · Gang Niu · Masashi Sugiyama -
2021 Poster: Learning Diverse-Structured Networks for Adversarial Robustness »
Xuefeng Du · Jingfeng Zhang · Bo Han · Tongliang Liu · Yu Rong · Gang Niu · Junzhou Huang · Masashi Sugiyama -
2021 Poster: CIFS: Improving Adversarial Robustness of CNNs via Channel-wise Importance-based Feature Selection »
Hanshu Yan · Jingfeng Zhang · Gang Niu · Jiashi Feng · Vincent Tan · Masashi Sugiyama -
2021 Poster: Maximum Mean Discrepancy Test is Aware of Adversarial Attacks »
Ruize Gao · Feng Liu · Jingfeng Zhang · Bo Han · Tongliang Liu · Gang Niu · Masashi Sugiyama -
2021 Spotlight: CIFS: Improving Adversarial Robustness of CNNs via Channel-wise Importance-based Feature Selection »
Hanshu Yan · Jingfeng Zhang · Gang Niu · Jiashi Feng · Vincent Tan · Masashi Sugiyama -
2021 Spotlight: Provably End-to-end Label-noise Learning without Anchor Points »
Xuefeng Li · Tongliang Liu · Bo Han · Gang Niu · Masashi Sugiyama -
2021 Spotlight: Learning Diverse-Structured Networks for Adversarial Robustness »
Xuefeng Du · Jingfeng Zhang · Bo Han · Tongliang Liu · Yu Rong · Gang Niu · Junzhou Huang · Masashi Sugiyama -
2021 Spotlight: Maximum Mean Discrepancy Test is Aware of Adversarial Attacks »
Ruize Gao · Feng Liu · Jingfeng Zhang · Bo Han · Tongliang Liu · Gang Niu · Masashi Sugiyama -
2021 Poster: Mediated Uncoupled Learning: Learning Functions without Direct Input-output Correspondences »
Ikko Yamane · Junya Honda · Florian Yger · Masashi Sugiyama -
2021 Poster: Class2Simi: A Noise Reduction Perspective on Learning with Noisy Labels »
Songhua Wu · Xiaobo Xia · Tongliang Liu · Bo Han · Mingming Gong · Nannan Wang · Haifeng Liu · Gang Niu -
2021 Poster: Pointwise Binary Classification with Pairwise Confidence Comparisons »
Lei Feng · Senlin Shu · Nan Lu · Bo Han · Miao Xu · Gang Niu · Bo An · Masashi Sugiyama -
2021 Poster: Binary Classification from Multiple Unlabeled Datasets via Surrogate Set Classification »
Nan Lu · Shida Lei · Gang Niu · Issei Sato · Masashi Sugiyama -
2021 Poster: Learning from Similarity-Confidence Data »
Yuzhou Cao · Lei Feng · Yitian Xu · Bo An · Gang Niu · Masashi Sugiyama -
2021 Poster: Confidence Scores Make Instance-dependent Label-noise Learning Possible »
Antonin Berthon · Bo Han · Gang Niu · Tongliang Liu · Masashi Sugiyama -
2021 Poster: Learning Noise Transition Matrix from Only Noisy Labels via Total Variation Regularization »
Yivan Zhang · Gang Niu · Masashi Sugiyama -
2021 Spotlight: Learning from Similarity-Confidence Data »
Yuzhou Cao · Lei Feng · Yitian Xu · Bo An · Gang Niu · Masashi Sugiyama -
2021 Spotlight: Pointwise Binary Classification with Pairwise Confidence Comparisons »
Lei Feng · Senlin Shu · Nan Lu · Bo Han · Miao Xu · Gang Niu · Bo An · Masashi Sugiyama -
2021 Spotlight: Mediated Uncoupled Learning: Learning Functions without Direct Input-output Correspondences »
Ikko Yamane · Junya Honda · Florian Yger · Masashi Sugiyama -
2021 Spotlight: Binary Classification from Multiple Unlabeled Datasets via Surrogate Set Classification »
Nan Lu · Shida Lei · Gang Niu · Issei Sato · Masashi Sugiyama -
2021 Oral: Learning Noise Transition Matrix from Only Noisy Labels via Total Variation Regularization »
Yivan Zhang · Gang Niu · Masashi Sugiyama -
2021 Spotlight: Class2Simi: A Noise Reduction Perspective on Learning with Noisy Labels »
Songhua Wu · Xiaobo Xia · Tongliang Liu · Bo Han · Mingming Gong · Nannan Wang · Haifeng Liu · Gang Niu -
2021 Oral: Confidence Scores Make Instance-dependent Label-noise Learning Possible »
Antonin Berthon · Bo Han · Gang Niu · Tongliang Liu · Masashi Sugiyama -
2021 Poster: Lower-Bounded Proper Losses for Weakly Supervised Classification »
Shuhei M Yoshida · Takashi Takenouchi · Masashi Sugiyama -
2021 Poster: Classification with Rejection Based on Cost-sensitive Classification »
Nontawat Charoenphakdee · Zhenghang Cui · Yivan Zhang · Masashi Sugiyama -
2021 Poster: Label Distribution Learning Machine »
Jing Wang · Xin Geng -
2021 Spotlight: Classification with Rejection Based on Cost-sensitive Classification »
Nontawat Charoenphakdee · Zhenghang Cui · Yivan Zhang · Masashi Sugiyama -
2021 Spotlight: Lower-Bounded Proper Losses for Weakly Supervised Classification »
Shuhei M Yoshida · Takashi Takenouchi · Masashi Sugiyama -
2021 Oral: Label Distribution Learning Machine »
Jing Wang · Xin Geng -
2021 Poster: Large-Margin Contrastive Learning with Distance Polarization Regularizer »
Shuo Chen · Gang Niu · Chen Gong · Jun Li · Jian Yang · Masashi Sugiyama -
2021 Poster: Positive-Negative Momentum: Manipulating Stochastic Gradient Noise to Improve Generalization »
Zeke Xie · Li Yuan · Zhanxing Zhu · Masashi Sugiyama -
2021 Spotlight: Positive-Negative Momentum: Manipulating Stochastic Gradient Noise to Improve Generalization »
Zeke Xie · Li Yuan · Zhanxing Zhu · Masashi Sugiyama -
2021 Spotlight: Large-Margin Contrastive Learning with Distance Polarization Regularizer »
Shuo Chen · Gang Niu · Chen Gong · Jun Li · Jian Yang · Masashi Sugiyama -
2020 Poster: Few-shot Domain Adaptation by Causal Mechanism Transfer »
Takeshi Teshima · Issei Sato · Masashi Sugiyama -
2020 Poster: Do We Need Zero Training Loss After Achieving Zero Training Error? »
Takashi Ishida · Ikko Yamane · Tomoya Sakai · Gang Niu · Masashi Sugiyama -
2020 Poster: Online Dense Subgraph Discovery via Blurred-Graph Feedback »
Yuko Kuroki · Atsushi Miyauchi · Junya Honda · Masashi Sugiyama -
2020 Poster: SIGUA: Forgetting May Make Learning with Noisy Labels More Robust »
Bo Han · Gang Niu · Xingrui Yu · Quanming Yao · Miao Xu · Ivor Tsang · Masashi Sugiyama -
2020 Poster: Variational Label Enhancement »
Ning Xu · Jun Shu · Yun-Peng Liu · Xin Geng -
2020 Poster: Unbiased Risk Estimators Can Mislead: A Case Study of Learning with Complementary Labels »
Yu-Ting Chou · Gang Niu · Hsuan-Tien (Tien) Lin · Masashi Sugiyama -
2020 Poster: Attacks Which Do Not Kill Training Make Adversarial Learning Stronger »
Jingfeng Zhang · Xilie Xu · Bo Han · Gang Niu · Lizhen Cui · Masashi Sugiyama · Mohan Kankanhalli -
2020 Poster: Accelerating the diffusion-based ensemble sampling by non-reversible dynamics »
Futoshi Futami · Issei Sato · Masashi Sugiyama -
2020 Poster: Variational Imitation Learning with Diverse-quality Demonstrations »
Voot Tangkaratt · Bo Han · Mohammad Emtiyaz Khan · Masashi Sugiyama -
2020 Poster: Learning with Multiple Complementary Labels »
Lei Feng · Takuo Kaneko · Bo Han · Gang Niu · Bo An · Masashi Sugiyama -
2020 Poster: Searching to Exploit Memorization Effect in Learning with Noisy Labels »
Quanming Yao · Hansi Yang · Bo Han · Gang Niu · James Kwok -
2020 Poster: Normalized Flat Minima: Exploring Scale Invariant Definition of Flat Minima for Neural Networks Using PAC-Bayesian Analysis »
Yusuke Tsuzuku · Issei Sato · Masashi Sugiyama -
2019 : Spotlight »
Tyler Scott · Kiran Thekumparampil · Jonathan Aigrain · Rene Bidart · Priyadarshini Panda · Dian Ang Yap · Yaniv Yacoby · Raphael Gontijo Lopes · Alberto Marchisio · Erik Englesson · Wanqian Yang · Moritz Graule · Yi Sun · Daniel Kang · Mike Dusenberry · Min Du · Hartmut Maennel · Kunal Menda · Vineet Edupuganti · Luke Metz · David Stutz · Vignesh Srinivasan · Timo Sämann · Vineeth N Balasubramanian · Sina Mohseni · Rob Cornish · Judith Butepage · Zhangyang Wang · Bai Li · Bo Han · Honglin Li · Maksym Andriushchenko · Lukas Ruff · Meet P. Vadera · Yaniv Ovadia · Sunil Thulasidasan · Disi Ji · Gang Niu · Saeed Mahloujifar · Aviral Kumar · Sanghyuk Chun · Dong Yin · Joyce Xu Xu · Hugo Gomes · Raanan Rohekar -
2019 Poster: Classification from Positive, Unlabeled and Biased Negative Data »
Yu-Guan Hsieh · Gang Niu · Masashi Sugiyama -
2019 Poster: Complementary-Label Learning for Arbitrary Losses and Models »
Takashi Ishida · Gang Niu · Aditya Menon · Masashi Sugiyama -
2019 Oral: Complementary-Label Learning for Arbitrary Losses and Models »
Takashi Ishida · Gang Niu · Aditya Menon · Masashi Sugiyama -
2019 Oral: Classification from Positive, Unlabeled and Biased Negative Data »
Yu-Guan Hsieh · Gang Niu · Masashi Sugiyama -
2019 Poster: How does Disagreement Help Generalization against Label Corruption? »
Xingrui Yu · Bo Han · Jiangchao Yao · Gang Niu · Ivor Tsang · Masashi Sugiyama -
2019 Oral: How does Disagreement Help Generalization against Label Corruption? »
Xingrui Yu · Bo Han · Jiangchao Yao · Gang Niu · Ivor Tsang · Masashi Sugiyama -
2019 Poster: Imitation Learning from Imperfect Demonstration »
Yueh-Hua Wu · Nontawat Charoenphakdee · Han Bao · Voot Tangkaratt · Masashi Sugiyama -
2019 Poster: On Symmetric Losses for Learning from Corrupted Labels »
Nontawat Charoenphakdee · Jongyeong Lee · Masashi Sugiyama -
2019 Oral: Imitation Learning from Imperfect Demonstration »
Yueh-Hua Wu · Nontawat Charoenphakdee · Han Bao · Voot Tangkaratt · Masashi Sugiyama -
2019 Oral: On Symmetric Losses for Learning from Corrupted Labels »
Nontawat Charoenphakdee · Jongyeong Lee · Masashi Sugiyama -
2018 Poster: Classification from Pairwise Similarity and Unlabeled Data »
Han Bao · Gang Niu · Masashi Sugiyama -
2018 Oral: Classification from Pairwise Similarity and Unlabeled Data »
Han Bao · Gang Niu · Masashi Sugiyama -
2018 Poster: Does Distributionally Robust Supervised Learning Give Robust Classifiers? »
Weihua Hu · Gang Niu · Issei Sato · Masashi Sugiyama -
2018 Oral: Does Distributionally Robust Supervised Learning Give Robust Classifiers? »
Weihua Hu · Gang Niu · Issei Sato · Masashi Sugiyama -
2018 Poster: Analysis of Minimax Error Rate for Crowdsourcing and Its Application to Worker Clustering Model »
Hideaki Imamura · Issei Sato · Masashi Sugiyama -
2018 Oral: Analysis of Minimax Error Rate for Crowdsourcing and Its Application to Worker Clustering Model »
Hideaki Imamura · Issei Sato · Masashi Sugiyama -
2017 Poster: Learning Discrete Representations via Information Maximizing Self-Augmented Training »
Weihua Hu · Takeru Miyato · Seiya Tokui · Eiichi Matsumoto · Masashi Sugiyama -
2017 Talk: Learning Discrete Representations via Information Maximizing Self-Augmented Training »
Weihua Hu · Takeru Miyato · Seiya Tokui · Eiichi Matsumoto · Masashi Sugiyama -
2017 Poster: Semi-Supervised Classification Based on Classification from Positive and Unlabeled Data »
Tomoya Sakai · Marthinus C du Plessis · Gang Niu · Masashi Sugiyama -
2017 Talk: Semi-Supervised Classification Based on Classification from Positive and Unlabeled Data »
Tomoya Sakai · Marthinus C du Plessis · Gang Niu · Masashi Sugiyama