Sat Jul 24 07:50 AM -- 06:50 PM (PDT)
Self-Supervised Learning for Reasoning and Perception
Self-supervised learning (SSL) is an unsupervised approach to representation learning that does not rely on human-provided labels. Instead, it constructs auxiliary (pretext) tasks from unlabeled data and learns representations by solving them. SSL has achieved great success across images, text, speech, and robotics, and on a wide variety of tasks it approaches the performance of fully supervised methods. However, existing SSL research focuses mostly on perception tasks such as image classification, speech recognition, and text classification; SSL for reasoning tasks (e.g., symbolic reasoning on graphs, relational reasoning in computer vision, multi-hop reasoning in NLP) remains largely unexplored.

In this workshop, we aim to bridge this gap. We bring together SSL-interested researchers from diverse domains to discuss how to develop SSL methods for reasoning tasks: how to design pretext tasks for symbolic reasoning, how to develop contrastive learning methods for relational reasoning, how to build SSL approaches that bridge reasoning and perception, and more. Unlike previous SSL-related workshops, which focus on perception tasks, our workshop focuses on promoting SSL research for reasoning.
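To make the contrastive-learning idea mentioned above concrete, here is a minimal sketch of an InfoNCE-style contrastive objective in NumPy. It is an illustration only, not any specific method from the workshop: each anchor embedding is pulled toward the same-index "positive" embedding (e.g., another view of the same input) and pushed away from all other rows, which serve as negatives. The function name, shapes, and temperature value are all illustrative assumptions.

```python
import numpy as np

def info_nce_loss(anchors, positives, temperature=0.1):
    """Illustrative InfoNCE contrastive loss (not a specific published
    implementation). Row i of `positives` is the positive for row i of
    `anchors`; every other row acts as a negative."""
    # L2-normalize embeddings so the dot product is cosine similarity.
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature  # (N, N) similarity matrix
    # Cross-entropy with the diagonal (matching pairs) as the targets.
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

rng = np.random.default_rng(0)
z = rng.normal(size=(4, 8))
# Two slightly perturbed views of the same inputs yield a low loss...
loss_pos = info_nce_loss(z, z + 0.01 * rng.normal(size=z.shape))
# ...while unrelated embeddings yield a higher loss.
loss_rand = info_nce_loss(z, rng.normal(size=(4, 8)))
```

The key design choice in objectives of this kind is what counts as a "positive pair"; for perception that is typically two augmentations of one image, and a central question for the workshop is what the analogous pairing should be for reasoning structures such as graphs or multi-hop chains.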