The field of representation learning without labels, also known as unsupervised or self-supervised learning, has seen significant recent progress. New methods have been proposed that approach or even exceed the performance of fully supervised techniques on large-scale, competitive benchmarks such as image classification, while also improving label efficiency by multiple orders of magnitude. Representation learning without labels is therefore finally starting to address some of the major challenges in modern deep learning. To continue making progress, however, it is important to systematically understand the nature of the learnt representations and the learning objectives that give rise to them.
In this tutorial we will:

- Provide a unifying overview of the state of the art in representation learning without labels,
- Contextualise these methods through a number of theoretical lenses, including generative modelling, manifold learning and causality,
- Argue for the importance of careful and systematic evaluation of representations and provide an overview of the pros and cons of current evaluation methods.