Recent progress has been made towards learning invariant or equivariant representations with self-supervised learning. While invariant methods are evaluated on large-scale datasets, equivariant ones are evaluated in smaller, more controlled settings. We aim to bridge the gap between the two in order to learn more diverse representations that are suitable for a wide range of tasks. We start by introducing a dataset called 3DIEBench, consisting of renderings of 3D models over 55 classes and more than 2.5 million images, where we have full control over the transformations applied to the objects. We further introduce a predictor architecture based on hypernetworks to learn equivariant representations with no possible collapse to invariance. We introduce SIE (Split Invariant-Equivariant), which combines the hypernetwork-based predictor with representations split in two parts, one invariant, the other equivariant, to learn richer representations. We demonstrate significant performance gains over existing methods on equivariance-related tasks from both a qualitative and quantitative point of view. We further analyze our introduced predictor and show how it steers the learned latent space. We hope that both our introduced dataset and approach will enable learning richer representations without supervision in more complex scenarios. Code and data are available at https://github.com/garridoq/SIE.
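To make the described architecture concrete, below is a minimal PyTorch sketch of the split representation and the hypernetwork-based predictor. All names, dimensions, and the plain MSE objectives are illustrative assumptions rather than the authors' implementation; see the repository linked above for the actual code.

import torch
import torch.nn as nn

class SIEPredictor(nn.Module):
    # Hypernetwork-based predictor: the parameters g of the transformation
    # relating two views (e.g. a rotation, encoded as a small vector) are
    # mapped to the weights of a linear map applied to the equivariant part
    # of the representation.
    def __init__(self, equi_dim, g_dim):
        super().__init__()
        self.equi_dim = equi_dim
        self.hypernet = nn.Sequential(
            nn.Linear(g_dim, 256),
            nn.ReLU(),
            nn.Linear(256, equi_dim * equi_dim),
        )

    def forward(self, z_equi, g):
        # One predictor weight matrix per sample in the batch.
        W = self.hypernet(g).view(-1, self.equi_dim, self.equi_dim)
        return torch.bmm(W, z_equi.unsqueeze(-1)).squeeze(-1)

# Embeddings of two views of the same object, split in two halves: the
# invariant part should match directly across views, the equivariant part
# only after applying the predictor conditioned on g.
z1, z2 = torch.randn(8, 512), torch.randn(8, 512)
g = torch.randn(8, 6)  # transformation parameters between the two views
z1_inv, z1_equi = z1.chunk(2, dim=-1)
z2_inv, z2_equi = z2.chunk(2, dim=-1)

predictor = SIEPredictor(equi_dim=256, g_dim=6)
loss_inv = nn.functional.mse_loss(z1_inv, z2_inv)
loss_equi = nn.functional.mse_loss(predictor(z1_equi, g), z2_equi)

Because the predictor's weights are generated from g, it cannot ignore the transformation, which is what rules out a collapse to invariance. A complete training objective would additionally include collapse-prevention regularization on the embeddings themselves; the sketch keeps only the structural split and the conditioned predictor.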
Author Information
Quentin Garrido (Meta AI - FAIR, Université Gustave Eiffel)
Laurent Najman (Université Gustave Eiffel - ESIEE Paris)

Laurent Najman received the Habilitation à Diriger les Recherches from the University of Marne-la-Vallée in 2006, a Ph.D. in applied mathematics from Paris-Dauphine University in 1994 with the highest honor (Félicitations du Jury), and an "Ingénieur" degree from the École des Mines de Paris in 1991. After earning his engineering degree, he worked for three years in the Central Research Laboratories of Thomson-CSF on problems of infrared image segmentation using mathematical morphology. In 1995, he joined the start-up company Animation Science as director of research and development. The particle-system technology for computer graphics and scientific visualization developed by the company under his technical leadership received several awards, including the "European Information Technology Prize 1997", awarded by the European Commission (Esprit program) and the European Council for Applied Science and Engineering, and the "Hottest Products of the Year 1996" award from the Computer Graphics World journal. In 1998, he joined Océ Print Logic Technologies as a senior scientist, where he worked on various problems of image analysis related to scanning and printing. In 2002, he joined the Computer Science Department of ESIEE Paris, where he is a full professor and the leader of the A3SI team of the Laboratoire d'Informatique Gaspard Monge, Université Gustave Eiffel. His current research interests include the study of the topology of discrete structures (such as graphs, hierarchies, and simplicial complexes) using discrete mathematical morphology and discrete optimization.
Yann LeCun (New York University)
More from the Same Authors
- 2022: Pre-Train Your Loss: Easy Bayesian Transfer Learning with Informative Prior
  Ravid Shwartz-Ziv · Micah Goldblum · Hossein Souri · Sanyam Kapoor · Chen Zhu · Yann LeCun · Andrew Wilson
- 2022: What Do We Maximize In Self-Supervised Learning?
  Ravid Shwartz-Ziv · Randall Balestriero · Yann LeCun
- 2023 Poster: RankMe: Assessing the Downstream Performance of Pretrained Self-Supervised Representations by Their Rank
  Quentin Garrido · Randall Balestriero · Laurent Najman · Yann LeCun
- 2023 Poster: The SSL Interplay: Augmentations, Inductive Bias, and Generalization
  Vivien Cabannnes · Bobak T Kiani · Randall Balestriero · Yann LeCun · Alberto Bietti
- 2023 Oral: RankMe: Assessing the Downstream Performance of Pretrained Self-Supervised Representations by Their Rank
  Quentin Garrido · Randall Balestriero · Laurent Najman · Yann LeCun
- 2023 Poster: A Generalization of ViT/MLP-Mixer to Graphs
  Xiaoxin He · Bryan Hooi · Thomas Laurent · Adam Perold · Yann LeCun · Xavier Bresson
- 2018 Poster: Adversarially Regularized Autoencoders
  Jake Zhao · Yoon Kim · Kelly Zhang · Alexander Rush · Yann LeCun
- 2018 Oral: Adversarially Regularized Autoencoders
  Jake Zhao · Yoon Kim · Kelly Zhang · Alexander Rush · Yann LeCun
- 2018 Poster: Comparing Dynamics: Deep Neural Networks versus Glassy Systems
  Marco Baity-Jesi · Levent Sagun · Mario Geiger · Stefano Spigler · Gerard Arous · Chiara Cammarota · Yann LeCun · Matthieu Wyart · Giulio Biroli
- 2018 Oral: Comparing Dynamics: Deep Neural Networks versus Glassy Systems
  Marco Baity-Jesi · Levent Sagun · Mario Geiger · Stefano Spigler · Gerard Arous · Chiara Cammarota · Yann LeCun · Matthieu Wyart · Giulio Biroli
- 2017 Poster: Tunable Efficient Unitary Neural Networks (EUNN) and their application to RNNs
  Li Jing · Yichen Shen · Tena Dubcek · John E Peurifoy · Scott Skirlo · Yann LeCun · Max Tegmark · Marin Soljačić
- 2017 Talk: Tunable Efficient Unitary Neural Networks (EUNN) and their application to RNNs
  Li Jing · Yichen Shen · Tena Dubcek · John E Peurifoy · Scott Skirlo · Yann LeCun · Max Tegmark · Marin Soljačić