Representation Learning Without Labels

S. M. Ali Eslami, Irina Higgins, Danilo J. Rezende

Mon 13 Jul 1 a.m. — 4 a.m. PDT
Mon 13 Jul 11 a.m. — 2 p.m. PDT

The field of representation learning without labels, also known as unsupervised or self-supervised learning, is making significant progress. New techniques now approach, and in some cases exceed, the performance of fully supervised methods on large-scale, competitive benchmarks such as image classification, while improving label-efficiency by multiple orders of magnitude. Representation learning without labels is therefore finally starting to address some of the major challenges in modern deep learning. To continue making progress, however, it is important to systematically understand the nature of the learnt representations and the learning objectives that give rise to them.

In this tutorial we will:

- Provide a unifying overview of the state of the art in representation learning without labels,
- Contextualise these methods through a number of theoretical lenses, including generative modelling, manifold learning and causality,
- Argue for the importance of careful and systematic evaluation of representations and provide an overview of the pros and cons of current evaluation methods.
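The most common evaluation method alluded to above is the linear probe: freeze the learnt encoder and fit only a linear classifier on its representations, so that accuracy reflects what the representation itself makes linearly accessible. The sketch below is a minimal, hedged illustration of that protocol, with toy Gaussian-blob data and a fixed random projection standing in for a real pretrained encoder (all names and hyperparameters here are illustrative, not from the tutorial).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy inputs: two Gaussian blobs in 8 dimensions, with labels 0 and 1.
X = np.vstack([rng.normal(-1.0, 0.5, (100, 8)),
               rng.normal(1.0, 0.5, (100, 8))])
y = np.array([0] * 100 + [1] * 100)

# Stand-in for a frozen pretrained encoder: a fixed random projection.
# In a real evaluation this would be a network trained without labels.
W_enc = rng.normal(size=(8, 16))
Z = np.tanh(X @ W_enc)  # frozen representations; never updated below

# Linear probe: logistic regression trained by gradient descent on Z only.
w = np.zeros(16)
b = 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(Z @ w + b)))  # sigmoid predictions
    w -= 0.5 * (Z.T @ (p - y) / len(y))     # gradient step on weights
    b -= 0.5 * np.mean(p - y)               # gradient step on bias

acc = np.mean(((Z @ w + b) > 0) == (y == 1))
print(f"linear probe accuracy: {acc:.2f}")
```

The key design choice is that gradients update only the probe parameters `w` and `b`; the encoder stays frozen, which is what distinguishes this protocol from fine-tuning.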
