Self-supervised Contrastive Learning (CL) has recently been shown to be very effective in preventing deep networks from overfitting noisy labels. Despite this empirical success, theoretical understanding of how contrastive learning boosts robustness is very limited. In this work, we rigorously prove that the representation matrix learned by contrastive learning boosts robustness by having: (i) one prominent singular value corresponding to each sub-class in the data, with the remaining singular values significantly smaller; and (ii) a large alignment between the prominent singular vectors and the clean labels of each sub-class. These properties enable a linear layer trained on such representations to effectively learn the clean labels without overfitting the noise. We further show that the low-rank structure of the Jacobian of deep networks pre-trained with contrastive learning allows them to achieve superior initial performance when fine-tuned on noisy labels. Finally, we demonstrate that the initial robustness provided by contrastive learning enables robust training methods to achieve state-of-the-art performance under extreme noise levels, e.g., an average increase in accuracy of 27.18% on CIFAR-10 and 15.58% on CIFAR-100 with 80% symmetric label noise, and a 4.11% increase in accuracy on WebVision.
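The two spectral properties in the abstract can be illustrated with a small synthetic sketch. This is an assumed toy setup, not the paper's construction or experiments: we hand-build "representations" with one tight cluster per sub-class, mimicking the structure the paper proves contrastive learning induces, and then check (i) the spectrum and (ii) the alignment with clean sub-class labels. All sizes and variable names here are invented for the illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
k, n, d = 4, 100, 32                      # sub-classes, points per sub-class, feature dim

# One unit-norm direction per sub-class; each point is its sub-class mean plus small noise.
means = rng.normal(size=(k, d))
means /= np.linalg.norm(means, axis=1, keepdims=True)
Z = np.concatenate([m + 0.05 * rng.normal(size=(n, d)) for m in means])   # (k*n, d)

U, s, _ = np.linalg.svd(Z, full_matrices=False)

# (i) One prominent singular value per sub-class, then a much smaller tail.
print(s[:k].round(2), s[k:k + 3].round(2))

# (ii) The top-k left singular vectors nearly span the clean sub-class indicators,
# so a linear layer trained on these representations can separate the sub-classes.
y = np.repeat(np.eye(k), n, axis=0) / np.sqrt(n)    # unit-norm one-hot label columns
align = np.linalg.norm(U[:, :k].T @ y) ** 2 / k     # fraction of label energy captured
print(round(align, 3))                              # close to 1
```

With the cluster noise far smaller than the cluster means, the top `k` singular values sit well above the tail and the label energy captured by the top-`k` singular subspace approaches 1, which is exactly why a linear probe on such representations resists label noise.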
Author Information
Yihao Xue (UCLA)
Kyle Whitecross (UCLA)
UCLA undergraduate studying CS; ML intern at Kumo.ai.
Baharan Mirzasoleiman (UCLA)
Related Events (a corresponding poster, oral, or spotlight)
- 2022 Poster: Investigating Why Contrastive Learning Benefits Robustness against Label Noise
  Thu, Jul 21 through Fri, Jul 22, Hall E #506
More from the Same Authors
- 2021: CrossWalk: Fairness-enhanced Node Representation Learning
  Ahmad Khajehnejad · Moein Khajehnejad · Krishna Gummadi · Adrian Weller · Baharan Mirzasoleiman
- 2022: Investigating Why Contrastive Learning Benefits Robustness against Label Noise
  Yihao Xue · Kyle Whitecross · Baharan Mirzasoleiman
- 2023: Which Features are Learned by Contrastive Learning? On the Role of Simplicity Bias in Class Collapse and Feature Suppression
  Yihao Xue · Siddharth Joshi · Eric Gan · Pin-Yu Chen · Baharan Mirzasoleiman
- 2023: Mitigating Spurious Correlations in Multi-modal Models during Fine-tuning
  Yu Yang · Besmira Nushi · Hamid Palangi · Baharan Mirzasoleiman
- 2023: Robust Learning with Progressive Data Expansion Against Spurious Correlation
  Yihe Deng · Yu Yang · Baharan Mirzasoleiman · Quanquan Gu
- 2023: Data-Efficient Contrastive Self-supervised Learning: Most Beneficial Examples for Supervised Learning Contribute the Least
  Siddharth Joshi · Baharan Mirzasoleiman
- 2023 Workshop: 2nd ICML Workshop on New Frontiers in Adversarial Machine Learning
  Sijia Liu · Pin-Yu Chen · Dongxiao Zhu · Eric Wong · Kathrin Grosse · Baharan Mirzasoleiman · Sanmi Koyejo
- 2023 Poster: Towards Sustainable Learning: Coresets for Data-efficient Deep Learning
  Yu Yang · Hao Kang · Baharan Mirzasoleiman
- 2023 Poster: Which Features are Learnt by Contrastive Learning? On the Role of Simplicity Bias in Class Collapse and Feature Suppression
  Yihao Xue · Siddharth Joshi · Eric Gan · Pin-Yu Chen · Baharan Mirzasoleiman
- 2023 Poster: Mitigating Spurious Correlations in Multi-modal Models during Fine-tuning
  Yu Yang · Besmira Nushi · Hamid Palangi · Baharan Mirzasoleiman
- 2023 Oral: Which Features are Learnt by Contrastive Learning? On the Role of Simplicity Bias in Class Collapse and Feature Suppression
  Yihao Xue · Siddharth Joshi · Eric Gan · Pin-Yu Chen · Baharan Mirzasoleiman
- 2023 Poster: Data-Efficient Contrastive Self-supervised Learning: Most Beneficial Examples for Supervised Learning Contribute the Least
  Siddharth Joshi · Baharan Mirzasoleiman
- 2022: Less Data Can Be More!
  Baharan Mirzasoleiman
- 2022: Not All Poisons are Created Equal: Robust Training against Data Poisoning
  Yu Yang · Baharan Mirzasoleiman
- 2022 Poster: Adaptive Second Order Coresets for Data-efficient Machine Learning
  Omead Pooladzandi · David Davini · Baharan Mirzasoleiman
- 2022 Spotlight: Adaptive Second Order Coresets for Data-efficient Machine Learning
  Omead Pooladzandi · David Davini · Baharan Mirzasoleiman
- 2022 Poster: Not All Poisons are Created Equal: Robust Training against Data Poisoning
  Yu Yang · Tian Yu Liu · Baharan Mirzasoleiman
- 2022 Oral: Not All Poisons are Created Equal: Robust Training against Data Poisoning
  Yu Yang · Tian Yu Liu · Baharan Mirzasoleiman
- 2021: Data-efficient and Robust Learning from Massive Datasets
  Baharan Mirzasoleiman