

Talk in Workshop: Self-supervision in Audio and Speech

Invited Talk: Denoising and real-vs-corrupted classification as two fundamental paradigms in self-supervised learning

Aapo Hyvarinen


Abstract:

The basic idea in self-supervised learning (SSL) is to turn an unsupervised learning task into a supervised one and then solve it with well-known supervised methods. Even though the data initially has no labels or targets to enable supervised learning, we artificially define a "pretext" supervised task with labels or targets of our choosing. Here, I focus on two widely-used and fundamental paradigms for SSL. First, adding Gaussian noise to the data and then learning to denoise it is a special case of the more general SSL principle of corrupting the data and learning to repair it. Second, classification can be used for SSL by first corrupting the data and then learning to discriminate between the original data and the corrupted version; in the extreme case, this means learning to discriminate between the data and pure noise. While these are very intuitive principles, a sophisticated theoretical analysis is possible in both cases. In particular, deep connections to energy-based modelling and nonlinear independent component analysis can be shown.
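The two paradigms in the abstract can be sketched as loss functions on toy data. The snippet below is a minimal illustration, not the speaker's method: the dataset, the trivial identity "denoiser", and the one-parameter logistic discriminator are all hypothetical stand-ins for learned models. It shows (1) a denoising objective (mean squared error between a denoiser's output and the clean data) and (2) a real-vs-corrupted classification objective (logistic loss for discriminating data from pure noise, in the spirit of noise-contrastive estimation).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D "dataset": a Gaussian mixture standing in for real data.
x = np.concatenate([rng.normal(-2.0, 0.5, 500), rng.normal(2.0, 0.5, 500)])

# --- Paradigm 1: corrupt with Gaussian noise, learn to denoise ---
sigma = 0.3
x_noisy = x + sigma * rng.normal(size=x.shape)

def denoise_identity(u):
    # Trivial placeholder "denoiser" that returns its input unchanged;
    # a trained network would map noisy samples back toward the data.
    return u

def denoising_loss(denoiser, x_clean, x_corrupted):
    # Mean squared error between the denoised output and the clean data.
    return np.mean((denoiser(x_corrupted) - x_clean) ** 2)

loss_dn = denoising_loss(denoise_identity, x, x_noisy)

# --- Paradigm 2: discriminate real data from pure noise ---
noise = rng.normal(0.0, 3.0, size=x.shape)  # "corruption" here is pure noise

def discriminator(u, w=1.0, b=0.0):
    # Hypothetical one-parameter classifier: sigmoid of a linear score
    # on |u|; a learned model would parameterize this far more richly.
    return 1.0 / (1.0 + np.exp(-(w * np.abs(u) + b)))

def discrimination_loss(clf, x_real, x_noise):
    # Logistic (cross-entropy) loss: real samples labelled 1, noise 0.
    eps = 1e-12
    return (-np.mean(np.log(clf(x_real) + eps))
            - np.mean(np.log(1.0 - clf(x_noise) + eps)))

loss_clf = discrimination_loss(discriminator, x, noise)
print(f"denoising loss: {loss_dn:.3f}, discrimination loss: {loss_clf:.3f}")
```

With the identity denoiser, the denoising loss is simply the variance of the added noise (about sigma squared); any denoiser that exploits the data's structure must beat this baseline, which is what makes the pretext task informative.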

Link to the video: https://slideslive.com/38930735/denoising-and-realvscorrupted-classification-as-two-fundamental-paradigms-in-selfsupervised-learning
