Understanding self-supervised learning dynamics without contrastive pairs

Yuandong Tian · Xinlei Chen · Surya Ganguli


Keywords: Optimization for Deep Networks

Award: Outstanding Paper Honorable Mention
Poster: Wed 21 Jul, 9–11 p.m. PDT, Spot A3 in Virtual World
Oral: Wed 21 Jul, 5–6 p.m. PDT, Deep Learning Optimization session

Abstract: While contrastive approaches to self-supervised learning (SSL) learn representations by minimizing the distance between two augmented views of the same data point (positive pairs) and maximizing the distance between views from different data points (negative pairs), recent non-contrastive SSL methods (e.g., BYOL and SimSiam) show remarkable performance without negative pairs, using an extra learnable predictor and a stop-gradient operation. A fundamental question arises: why do they not collapse into trivial representations? In this paper, we answer this question via a simple theoretical study and propose a novel approach, DirectPred, that directly sets the linear predictor based on the statistics of its inputs, rather than training it with gradient updates. On ImageNet, it performs comparably with more complex two-layer non-linear predictors that employ BatchNorm, and outperforms a linear predictor by 2.5% in 300-epoch training (and 5% in 60-epoch training). DirectPred is motivated by our theoretical study of the nonlinear learning dynamics of non-contrastive SSL in simple linear networks. Our study yields conceptual insights into how non-contrastive SSL methods learn, how they avoid representational collapse, and how multiple factors, such as predictor networks, stop-gradients, exponential moving averages, and weight decay, all come into play. Our simple theory recapitulates the results of real-world ablation studies on both STL-10 and ImageNet. Code is released.
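To make the abstract's central idea concrete, below is a minimal PyTorch sketch of a DirectPred-style predictor: the linear predictor is not trained by gradient descent but is set directly from the eigendecomposition of a moving-average correlation matrix of its inputs, and the target branch receives a stop-gradient. The class name, the moving-average rate `rho`, the small-eigenvalue boost `eps`, and the exact normalization are illustrative assumptions, not the paper's verbatim recipe.

```python
import torch

class DirectPredictor:
    """Linear predictor W_p set from input statistics (no gradient updates).

    Sketch of the DirectPred idea; rho and eps values are assumed for
    illustration and would need tuning against the paper's recipe.
    """

    def __init__(self, dim: int, rho: float = 0.3, eps: float = 0.1):
        self.F = torch.zeros(dim, dim)  # moving-average correlation of inputs
        self.rho = rho                  # moving-average rate (assumed value)
        self.eps = eps                  # boost for small eigenvalues (assumed)

    @torch.no_grad()
    def update(self, f: torch.Tensor) -> None:
        # f: batch of online-network representations, shape (batch, dim).
        corr = f.t() @ f / f.shape[0]
        self.F = self.rho * self.F + (1 - self.rho) * corr

    @torch.no_grad()
    def weight(self) -> torch.Tensor:
        # Eigendecompose F = U diag(s) U^T, then set
        # W_p = U diag(sqrt(s / s_max) + eps) U^T.
        s, U = torch.linalg.eigh(self.F)
        s = s.clamp(min=0.0)
        p = (s / s.max()).sqrt() + self.eps
        return U @ torch.diag(p) @ U.t()

def ssl_loss(f_online: torch.Tensor, f_target: torch.Tensor,
             predictor: DirectPredictor) -> torch.Tensor:
    """BYOL/SimSiam-style regression loss with a stop-gradient target."""
    predictor.update(f_online.detach())    # statistics only, no gradients
    Wp = predictor.weight()                # predictor set, not learned
    pred = f_online @ Wp.t()               # predict target representation
    # detach() implements the stop-gradient on the target branch
    return ((pred - f_target.detach()) ** 2).sum(dim=1).mean()
```

Note the design point this sketch highlights: because `weight()` is computed under `no_grad` from input statistics, gradients flow only into the online representation `f_online`, which is exactly the "directly set the predictor instead of training it" idea the abstract describes.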
