
Learning Structured Representations with Equivariant Contrastive Learning
Sharut Gupta · Joshua Robinson · Derek Lim · Soledad Villar · Stefanie Jegelka
Event URL: https://openreview.net/forum?id=zNuH8NMklR

Self-supervised learning converts raw perceptual data such as images into a compact space where simple Euclidean distances measure meaningful variations in the data. In this paper, we extend this formulation by adding geometric structure to the embedding space, enforcing that transformations of the input space correspond to simple (i.e., linear) transformations of the embedding space. Specifically, in the contrastive learning setting, we introduce an equivariance objective and theoretically prove that its minima force augmentations of the input space to correspond to rotations of the spherical embedding space. We show that merely combining our equivariant loss with a non-collapse term yields non-trivial representations, without requiring invariance to data augmentations. Optimal performance is achieved by additionally encouraging approximate invariance, where input augmentations correspond to small rotations. Our method, CARE (Contrastive Augmentation-induced Rotational Equivariance), improves performance on downstream tasks and ensures a sensitivity of the embedding space to important variations in the data (e.g., color) that standard contrastive methods do not achieve.
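To make the two ingredients of the abstract concrete, here is a minimal numpy sketch, not CARE's actual objective: a hypothetical equivariance term that asks the embedding of an augmented input to match a rotation of the original embedding, plus a standard uniformity-style non-collapse term that spreads embeddings over the sphere. The function names, the 2-D embedding dimension, and the specific loss forms are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def normalize(z):
    # Project embeddings onto the unit sphere, as in contrastive learning.
    return z / np.linalg.norm(z, axis=-1, keepdims=True)

def rotation_from_angle(theta):
    # Toy 2-D rotation standing in for the rotation induced by an
    # augmentation; real embedding spaces are much higher-dimensional.
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def equivariance_loss(z1, z2, R):
    # If the encoder is equivariant, the embedding z2 of the augmented
    # input should equal the rotated embedding R z1.
    return np.mean(np.sum((z1 @ R.T - z2) ** 2, axis=-1))

def uniformity_loss(z, t=2.0):
    # Hypothetical non-collapse term: log of the mean Gaussian-kernel
    # similarity between distinct embeddings; lower means more spread out.
    sq = np.sum((z[:, None, :] - z[None, :, :]) ** 2, axis=-1)
    mask = ~np.eye(z.shape[0], dtype=bool)
    return np.log(np.mean(np.exp(-t * sq[mask])))

z1 = normalize(rng.normal(size=(8, 2)))   # embeddings of a batch
R = rotation_from_angle(0.1)              # small rotation: approximate invariance
z2 = normalize(z1 @ R.T)                  # perfectly equivariant second view
total = equivariance_loss(z1, z2, R) + uniformity_loss(np.vstack([z1, z2]))
# equivariance term is 0 for this ideal encoder; only uniformity remains
```

In training, the encoder producing `z1` and `z2` would be optimized so that the equivariance term shrinks while the non-collapse term keeps the representation from degenerating to a single point.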

Author Information

Sharut Gupta (Massachusetts Institute of Technology)
Joshua Robinson (MIT)

I am Josh Robinson, a PhD student at MIT CSAIL & LIDS advised by Stefanie Jegelka and Suvrit Sra, and part of the MIT machine learning group. I want to understand how machines can learn useful representations of the world, and I am also interested in modeling diversity and its many applications in learning problems. Previously, I was an undergraduate at the University of Warwick, where I worked with Robert MacKay on probability theory.

Derek Lim (MIT)
Soledad Villar (Johns Hopkins)

Soledad Villar is an Assistant Professor in the Department of Applied Mathematics & Statistics and at the Mathematical Institute for Data Science, Johns Hopkins University. She received her PhD in mathematics from the University of Texas at Austin and was a research fellow at New York University as well as at the Simons Institute at the University of California, Berkeley. Her mathematical interests are in computational methods for extracting information from data. She studies optimization for data science, machine learning, equivariant representation learning, and graph neural networks. Soledad is originally from Uruguay.

Stefanie Jegelka (Massachusetts Institute of Technology)
