Toward Understanding the Feature Learning Process of Self-supervised Contrastive Learning
Zixin Wen · Yuanzhi Li

Wed Jul 21 07:40 AM -- 07:45 AM (PDT)

We formally study how contrastive learning learns feature representations for neural networks by investigating its feature learning process. We consider the case where the data are composed of two types of features: sparse features, which we want to learn, and dense features, which we want to discard. Theoretically, we prove that contrastive learning using ReLU networks provably learns the desired features when proper augmentations are adopted. We present an underlying principle, called feature decoupling, to explain the effect of augmentations: we theoretically characterize how augmentations reduce the correlation of dense features between positive samples while keeping the correlation of sparse features intact, thereby forcing the neural network to learn from the self-supervision provided by the sparse features. Empirically, we verify that the feature decoupling principle matches the underlying mechanism of contrastive learning in practice.
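The mechanism described above can be illustrated with a minimal NumPy sketch (this is an illustrative toy, not the paper's construction; the data model, the `augment` function, and all parameter names here are hypothetical). Each sample has a sparse "signal" part and a dense "noise" part; the augmentation re-randomizes the dense part in each view, so the dense features decorrelate across the positive pair while the sparse features stay aligned, and a standard InfoNCE objective over the two views is then driven by the sparse features:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_sample(n_sparse=8, n_dense=32):
    """Toy data point: a one-hot sparse 'signal' plus Gaussian dense 'noise'."""
    sparse = np.zeros(n_sparse)
    sparse[rng.integers(n_sparse)] = 1.0   # one active sparse feature
    dense = rng.normal(size=n_dense)
    return sparse, dense

def augment(sparse, dense, strength=1.0):
    """Illustrative augmentation: keep the sparse part intact but mix fresh
    noise into the dense part, decorrelating it between the two views
    (the 'feature decoupling' effect described in the abstract)."""
    new_dense = (1 - strength) * dense + strength * rng.normal(size=dense.size)
    return np.concatenate([sparse, new_dense])

def info_nce(z1, z2, temperature=0.5):
    """Standard InfoNCE loss over a batch of paired views (rows of z1, z2)."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature           # pairwise view similarities
    idx = np.arange(len(z1))                   # positives sit on the diagonal
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[idx, idx].mean()

# Two augmented views of the same batch of samples.
samples = [make_sample() for _ in range(16)]
view1 = np.stack([augment(s, d) for s, d in samples])
view2 = np.stack([augment(s, d) for s, d in samples])
loss = info_nce(view1, view2)
```

With `strength=1.0` the dense parts of the two views are independent, so only the shared sparse coordinates make a positive pair more similar than a negative pair; this is the sense in which the augmentation "forces" learning from the sparse features.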

Author Information

Zixin Wen (UIBE)
Yuanzhi Li (CMU)
