

Workshop

Self-Supervised Learning for Reasoning and Perception

Pengtao Xie · Shanghang Zhang · Ishan Misra · Pulkit Agrawal · Katerina Fragkiadaki · Ruisi Zhang · Tassilo Klein · Asli Celikyilmaz · Mihaela van der Schaar · Eric Xing

Sat 24 Jul, 7:50 a.m. PDT

Self-supervised learning (SSL) is an unsupervised approach to representation learning that does not rely on human-provided labels. It creates auxiliary (pretext) tasks on unlabeled input data and learns representations by solving them. SSL has achieved great success on images, text, robotics, and more: on a wide variety of tasks, SSL reaches performance close to that of fully supervised approaches without using human-provided labels. However, existing SSL research focuses mostly on perception tasks such as image classification, speech recognition, and text classification; SSL for reasoning tasks (e.g., symbolic reasoning on graphs, relational reasoning in computer vision, multi-hop reasoning in NLP) remains largely unexplored. In this workshop, we aim to bridge this gap. We bring together SSL-interested researchers from various domains to discuss how to develop SSL methods for reasoning tasks, including how to design pretext tasks for symbolic reasoning, how to develop contrastive learning methods for relational reasoning, and how to build SSL approaches that connect reasoning and perception. Unlike previous SSL-related workshops, which focus on perception tasks, our workshop promotes SSL research for reasoning.
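To make the contrastive-learning idea mentioned above concrete, here is a minimal NumPy sketch of an InfoNCE-style objective, the kind of loss commonly used in SSL. The function name, shapes, and temperature value are illustrative assumptions, not taken from any specific workshop contribution: each unlabeled example yields two augmented "views", matching rows form positive pairs, and all other rows serve as negatives.

```python
import numpy as np

def info_nce_loss(z1, z2, temperature=0.5):
    """InfoNCE-style contrastive loss (illustrative sketch).

    z1, z2: (N, D) arrays of embeddings for two augmented views of
    the same N unlabeled examples. Row i of z1 and row i of z2 form
    a positive pair; all other rows act as negatives.
    """
    # L2-normalize so the dot product is cosine similarity
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature  # (N, N) similarity matrix
    # Cross-entropy of each row against its diagonal (positive) entry
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    idx = np.arange(len(z1))
    return -log_probs[idx, idx].mean()

# Sanity check: correctly paired views incur a lower loss than
# mismatched (cyclically shifted) pairings.
rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))
aligned = info_nce_loss(z, z + 0.01 * rng.normal(size=(8, 16)))
mismatched = info_nce_loss(z, np.roll(z, 1, axis=0))
print(aligned < mismatched)
```

Minimizing this loss pulls the two views of each example together in embedding space while pushing apart views of different examples; research questions raised in this workshop include how to adapt such objectives from perception to relational and symbolic reasoning.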

Timezone: America/Los_Angeles

Schedule