

Poster in Workshop: The First Workshop on Pre-training: Perspectives, Pitfalls, and Paths Forward

Investigating Why Contrastive Learning Benefits Robustness against Label Noise

Yihao Xue · Kyle Whitecross · Baharan Mirzasoleiman


Abstract:

Self-supervised contrastive learning has recently been shown to be very effective in preventing deep networks from overfitting noisy labels. Despite its empirical success, the theoretical understanding of how contrastive learning boosts robustness is very limited. In this work, we rigorously prove that the learned representation matrix has certain desirable properties, in terms of its SVD, that benefit robustness against label noise. We further show that the low-rank structure of the Jacobian of deep networks pre-trained with contrastive learning allows them to achieve superior performance initially when fine-tuned on noisy labels. Finally, we demonstrate that the initial robustness provided by contrastive learning enables robust training methods to achieve state-of-the-art performance under extreme noise levels.
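As a minimal sketch of the kind of spectral analysis the abstract alludes to, the snippet below inspects the singular value spectrum of a representation matrix and reports its effective rank. It is not the authors' code: the representations here are synthetic (a low-rank signal plus small noise standing in for features from a contrastively pre-trained encoder), and the 95% energy threshold for effective rank is an illustrative choice.

```python
import numpy as np

# Hypothetical stand-in for representations from a contrastively pre-trained
# encoder: a low-rank signal plus small noise (synthetic, for illustration only).
rng = np.random.default_rng(0)
n, d, k = 1000, 128, 10                     # samples, feature dim, assumed signal rank
signal = rng.normal(size=(n, k)) @ rng.normal(size=(k, d))
noise = 0.05 * rng.normal(size=(n, d))
R = signal + noise                          # representation matrix (n x d)

# SVD of the representation matrix.
singular_values = np.linalg.svd(R, compute_uv=False)

# Effective rank: number of top singular values needed to capture 95% of the
# spectral energy; a sharply decaying spectrum indicates low-rank structure.
energy = np.cumsum(singular_values**2) / np.sum(singular_values**2)
effective_rank = int(np.searchsorted(energy, 0.95)) + 1

print(f"top-5 singular values: {np.round(singular_values[:5], 2)}")
print(f"effective rank (95% energy): {effective_rank} of {min(n, d)}")
```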
