Raw training data often come with separate noisy labels collected from multiple imperfect annotators (e.g., via crowdsourcing). A typical way of using these labels is to first aggregate them into a single label and then apply standard training methods; the literature has also studied effective aggregation approaches extensively. This paper revisits this choice and aims to answer the question of whether one should aggregate separate noisy labels into single ones or use them separately as given. We theoretically analyze the performance of both approaches under the empirical risk minimization framework for a number of popular loss functions, including those designed specifically for the problem of learning with noisy labels. Our theorems conclude that label separation is preferred over label aggregation when the noise rates are high or the number of labelers/annotations is insufficient. Extensive empirical results validate our conclusions.
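The aggregation-versus-separation trade-off can be illustrated with a minimal toy sketch, not the paper's actual analysis: under symmetric binary label noise, majority-vote aggregation lowers the effective per-example noise rate but collapses the K annotations into one training sample, whereas separation keeps all n·K noisy (example, label) pairs in the empirical risk. The noise model, parameter values, and variable names below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary task: n examples, each annotated by K imperfect labelers
# who each flip the true label independently with probability `noise`.
n, K, noise = 10_000, 3, 0.4
y_true = rng.integers(0, 2, size=n)
flips = rng.random((n, K)) < noise
labels = np.where(flips, 1 - y_true[:, None], y_true[:, None])

# Option 1 -- aggregation: majority-vote the K labels into a single one.
y_agg = (labels.sum(axis=1) > K / 2).astype(int)

# Option 2 -- separation: keep all n*K (example, label) pairs as given,
# so each annotation contributes its own term to the empirical risk.
y_sep = labels.ravel()

agg_err = (y_agg != y_true).mean()                 # noise rate after voting
sep_err = (y_sep != np.repeat(y_true, K)).mean()   # raw annotation noise rate
print(f"label error with aggregation:    {agg_err:.3f}")
print(f"label error without aggregation: {sep_err:.3f}")
```

Majority voting reduces the label error here (roughly 0.35 versus 0.40 with these settings), but at the cost of training on n samples instead of n·K; the paper's theorems characterize when the larger separated sample outweighs its higher noise rate.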
Author Information
Jiaheng Wei (University of California, Santa Cruz)
Zhaowei Zhu (University of California, Santa Cruz)
Tianyi Luo (Amazon)
Ehsan Amid (Google DeepMind)
Abhishek Kumar (Google Brain)
Yang Liu (UC Santa Cruz/ByteDance Research)
More from the Same Authors
- 2020 : Contributed Talk: Incentives for Federated Learning: a Hypothesis Elicitation Approach »
  Yang Liu · Jiaheng Wei
- 2020 : Contributed Talk: Linear Models are Robust Optimal Under Strategic Behavior »
  Wei Tang · Chien-Ju Ho · Yang Liu
- 2021 : Linear Classifiers that Encourage Constructive Adaptation »
  Yatong Chen · Jialu Wang · Yang Liu
- 2021 : When Optimizing f-divergence is Robust with Label Noise »
  Jiaheng Wei · Yang Liu
- 2022 : Adaptive Data Debiasing Through Bounded Exploration »
  Yifan Yang · Yang Liu · Parinaz Naghizadeh
- 2023 : Understanding Unfairness via Training Concept Influence »
  Yuanshun Yao · Yang Liu
- 2023 : Towards an Efficient Algorithm for Time Series Forecasting with Anomalies »
  Hao Cheng · Qingsong Wen · Yang Liu · Liang Sun
- 2023 Workshop: DMLR Workshop: Data-centric Machine Learning Research »
  Ce Zhang · Praveen Paritosh · Newsha Ardalani · Nezihe Merve Gürel · William Gaviria Rojas · Yang Liu · Rotem Dror · Manil Maskey · Lilith Bat-Leah · Tzu-Sheng Kuo · Luis Oala · Max Bartolo · Ludwig Schmidt · Alicia Parrish · Daniel Kondermann · Najoung Kim
- 2023 Poster: Identifiability of Label Noise Transition Matrix »
  Yang Liu · Hao Cheng · Kun Zhang
- 2023 Poster: Weak Proxies are Sufficient and Preferable for Fairness with Missing Sensitive Attributes »
  Zhaowei Zhu · Yuanshun Yao · Jiankai Sun · Hang Li · Yang Liu
- 2023 Poster: Model Transferability with Responsive Decision Subjects »
  Yatong Chen · Zeyu Tang · Kun Zhang · Yang Liu
- 2022 : Model Transferability With Responsive Decision Subjects »
  Yang Liu · Yatong Chen · Zeyu Tang · Kun Zhang
- 2022 Poster: Estimating Instance-dependent Bayes-label Transition Matrix using a Deep Neural Network »
  Shuo Yang · Erkun Yang · Bo Han · Yang Liu · Min Xu · Gang Niu · Tongliang Liu
- 2022 Poster: Detecting Corrupted Labels Without Training a Model to Predict »
  Zhaowei Zhu · Zihao Dong · Yang Liu
- 2022 Poster: Understanding Instance-Level Impact of Fairness Constraints »
  Jialu Wang · Xin Eric Wang · Yang Liu
- 2022 Spotlight: Understanding Instance-Level Impact of Fairness Constraints »
  Jialu Wang · Xin Eric Wang · Yang Liu
- 2022 Spotlight: Estimating Instance-dependent Bayes-label Transition Matrix using a Deep Neural Network »
  Shuo Yang · Erkun Yang · Bo Han · Yang Liu · Min Xu · Gang Niu · Tongliang Liu
- 2022 Poster: Metric-Fair Classifier Derandomization »
  Jimmy Wu · Yatong Chen · Yang Liu
- 2022 Poster: Beyond Images: Label Noise Transition Matrix Estimation for Tasks with Lower-Quality Features »
  Zhaowei Zhu · Jialu Wang · Yang Liu
- 2022 Spotlight: Detecting Corrupted Labels Without Training a Model to Predict »
  Zhaowei Zhu · Zihao Dong · Yang Liu
- 2022 Spotlight: Metric-Fair Classifier Derandomization »
  Jimmy Wu · Yatong Chen · Yang Liu
- 2022 Spotlight: Beyond Images: Label Noise Transition Matrix Estimation for Tasks with Lower-Quality Features »
  Zhaowei Zhu · Jialu Wang · Yang Liu
- 2022 Poster: Public Data-Assisted Mirror Descent for Private Model Training »
  Ehsan Amid · Arun Ganesh · Rajiv Mathews · Swaroop Ramaswamy · Shuang Song · Thomas Steinke · Vinith Suriyakumar · Om Thakkar · Abhradeep Guha Thakurta
- 2022 Poster: To Smooth or Not? When Label Smoothing Meets Noisy Labels »
  Jiaheng Wei · Hangyu Liu · Tongliang Liu · Gang Niu · Masashi Sugiyama · Yang Liu
- 2022 Spotlight: Public Data-Assisted Mirror Descent for Private Model Training »
  Ehsan Amid · Arun Ganesh · Rajiv Mathews · Swaroop Ramaswamy · Shuang Song · Thomas Steinke · Vinith Suriyakumar · Om Thakkar · Abhradeep Guha Thakurta
- 2022 Oral: To Smooth or Not? When Label Smoothing Meets Noisy Labels »
  Jiaheng Wei · Hangyu Liu · Tongliang Liu · Gang Niu · Masashi Sugiyama · Yang Liu
- 2021 Poster: Clusterability as an Alternative to Anchor Points When Learning with Noisy Labels »
  Zhaowei Zhu · Yiwen Song · Yang Liu
- 2021 Spotlight: Clusterability as an Alternative to Anchor Points When Learning with Noisy Labels »
  Zhaowei Zhu · Yiwen Song · Yang Liu
- 2021 Poster: Understanding Instance-Level Label Noise: Disparate Impacts and Treatments »
  Yang Liu
- 2021 Oral: Understanding Instance-Level Label Noise: Disparate Impacts and Treatments »
  Yang Liu
- 2020 Workshop: Incentives in Machine Learning »
  Boi Faltings · Yang Liu · David Parkes · Goran Radanovic · Dawn Song
- 2020 Poster: Peer Loss Functions: Learning from Noisy Labels without Knowing Noise Rates »
  Yang Liu · Hongyi Guo
- 2019 Poster: Fairness without Harm: Decoupled Classifiers with Preference Guarantees »
  Berk Ustun · Yang Liu · David Parkes
- 2019 Oral: Fairness without Harm: Decoupled Classifiers with Preference Guarantees »
  Berk Ustun · Yang Liu · David Parkes