Poster
in
Workshop: The First Workshop on Pre-training: Perspectives, Pitfalls, and Paths Forward

What Do We Maximize In Self-Supervised Learning?

Ravid Shwartz-Ziv · Randall Balestriero · Yann LeCun


Abstract:

This paper analyzes self-supervised learning (SSL) methods, VICReg in particular, to provide the first information-theoretic understanding of their construction. As a first step, we demonstrate how information-theoretic quantities can be obtained for a deterministic network, offering a possible alternative to prior works that rely on stochastic models. This enables us to (re-)discover VICReg from first principles and to make explicit its assumptions about the data distribution. We then validate these assumptions empirically, confirming our new understanding of VICReg. Finally, we believe that the derivation and insights we obtain generalize to many other SSL methods, opening new avenues for the theoretical and practical understanding of SSL and transfer learning.
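For readers unfamiliar with the method the abstract refers to, VICReg (Bardes et al., 2022) trains on two augmented views of each input and combines three terms: an invariance term pulling paired embeddings together, a variance term keeping each embedding dimension's standard deviation above a threshold, and a covariance term decorrelating dimensions. The sketch below is an illustrative NumPy rendering of that published objective, not code from this paper; the weights and the threshold `gamma` follow the defaults commonly quoted for VICReg and should be treated as assumptions.

```python
import numpy as np

def vicreg_loss(z_a, z_b, sim_w=25.0, var_w=25.0, cov_w=1.0,
                gamma=1.0, eps=1e-4):
    """Sketch of the VICReg objective for two batches of embeddings
    z_a, z_b with shape (N, D), one row per sample."""
    n, d = z_a.shape

    # Invariance: mean-squared error between the two views' embeddings.
    sim = np.mean((z_a - z_b) ** 2)

    # Variance: hinge loss pushing each dimension's std above gamma.
    std_a = np.sqrt(z_a.var(axis=0) + eps)
    std_b = np.sqrt(z_b.var(axis=0) + eps)
    var = 0.5 * (np.mean(np.maximum(0.0, gamma - std_a))
                 + np.mean(np.maximum(0.0, gamma - std_b)))

    # Covariance: penalize off-diagonal entries of each batch
    # covariance matrix to decorrelate embedding dimensions.
    za = z_a - z_a.mean(axis=0)
    zb = z_b - z_b.mean(axis=0)
    cov_a = (za.T @ za) / (n - 1)
    cov_b = (zb.T @ zb) / (n - 1)
    off_diag = lambda m: m - np.diag(np.diag(m))
    cov = (np.sum(off_diag(cov_a) ** 2)
           + np.sum(off_diag(cov_b) ** 2)) / d

    return sim_w * sim + var_w * var + cov_w * cov
```

Feeding the same batch as both views zeroes the invariance term, so the remaining loss comes only from the variance and covariance regularizers; this is a quick sanity check on the implementation.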