Recent empirical work successfully uses unlabeled data to learn feature representations that are broadly useful in downstream classification tasks. Several of these methods are reminiscent of the well-known word2vec embedding algorithm: leveraging the availability of pairs of semantically "similar" data points and "negative samples", the learner forces the inner product of representations of similar pairs with each other to be higher on average than with negative samples. The current paper uses the term "contrastive learning" for such algorithms and presents a theoretical framework for understanding them, by introducing "latent classes" and hypothesizing that semantically similar points are sampled from the same latent class. This conceptual framework allows us to show provable guarantees on the performance of the learnt representation on downstream classification tasks, whose classes are assumed to be random samples from the same set of latent classes. Our generalization bound also shows that learnt representations can reduce the (labeled) sample complexity of downstream tasks. Controlled experiments in NLP and image domains support the theory.
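To make the objective concrete, below is a minimal sketch (in PyTorch; not the authors' implementation) of a contrastive loss of the kind the paper analyzes: the logistic loss log(1 + exp(-v)) applied to the gap v = <f(x), f(x+)> - <f(x), f(x-)> between the anchor's inner product with a similar point and with a negative sample. The function name, encoder stand-in, and tensor shapes are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(f_x, f_pos, f_neg):
    """Logistic contrastive loss with one negative sample (illustrative sketch).

    f_x, f_pos, f_neg: (batch, dim) representations of anchors,
    semantically similar points, and negative samples.
    """
    # Gap between the "similar" and "negative" inner products; larger is better.
    gap = (f_x * f_pos).sum(dim=1) - (f_x * f_neg).sum(dim=1)
    # softplus(-v) = log(1 + exp(-v)): the logistic loss, computed stably.
    return F.softplus(-gap).mean()

# Toy usage: a batch of 4 anchors with 8-dimensional representations.
f = lambda x: x  # stand-in for a learnt encoder
x, x_pos, x_neg = (torch.randn(4, 8) for _ in range(3))
loss = contrastive_loss(f(x), f(x_pos), f(x_neg))
```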
Author Information
Nikunj Umesh Saunshi (Princeton University)
Orestis Plevrakis (Princeton University)
Sanjeev Arora (Princeton University and Institute for Advanced Study)
Mikhail Khodak (CMU)
Hrishikesh Khandeparkar (Princeton University)
Related Events (a corresponding poster, oral, or spotlight)
- 2019 Poster: A Theoretical Analysis of Contrastive Unsupervised Representation Learning
  Fri. Jun 14th, 01:30 -- 04:00 AM, Room: Pacific Ballroom #174
More from the Same Authors
- 2022: Meta-Learning Adversarial Bandits
  Nina Balcan · Keegan Harris · Mikhail Khodak · Steven Wu
- 2023: Learning-augmented private algorithms for multiple quantile release
  Mikhail Khodak · Kareem Amin · Travis Dick · Sergei Vassilvitskii
- 2023: Fine-Tuning Language Models with Just Forward Passes
  Sadhika Malladi · Tianyu Gao · Eshaan Nichani · Jason Lee · Danqi Chen · Sanjeev Arora
- 2023: 🎤 Fine-Tuning Language Models with Just Forward Passes
  Sadhika Malladi · Tianyu Gao · Eshaan Nichani · Alex Damian · Jason Lee · Danqi Chen · Sanjeev Arora
- 2023: High-dimensional Optimization in the Age of ChatGPT, Sanjeev Arora
  Sanjeev Arora
- 2023 Oral: Cross-Modal Fine-Tuning: Align then Refine
  Junhong Shen · Liam Li · Lucio Dery · Corey Staten · Mikhail Khodak · Graham Neubig · Ameet Talwalkar
- 2023 Poster: Cross-Modal Fine-Tuning: Align then Refine
  Junhong Shen · Liam Li · Lucio Dery · Corey Staten · Mikhail Khodak · Graham Neubig · Ameet Talwalkar
- 2023 Poster: Learning-augmented private algorithms for multiple quantile release
  Mikhail Khodak · Kareem Amin · Travis Dick · Sergei Vassilvitskii
- 2023 Poster: Task-Specific Skill Localization in Fine-tuned Language Models
  Abhishek Panigrahi · Nikunj Saunshi · Haoyu Zhao · Sanjeev Arora
- 2023 Poster: A Kernel-Based View of Language Model Fine-Tuning
  Sadhika Malladi · Alexander Wettig · Dingli Yu · Danqi Chen · Sanjeev Arora
- 2022: On the SDEs and Scaling Rules for Adaptive Gradient Algorithms
  Sadhika Malladi · Kaifeng Lyu · Abhishek Panigrahi · Sanjeev Arora
- 2022: Implicit Bias of Gradient Descent on Reparametrized Models: On Equivalence to Mirror Descent
  Zhiyuan Li · Tianhao Wang · Jason Lee · Sanjeev Arora
- 2022 Poster: Understanding Contrastive Learning Requires Incorporating Inductive Biases
  Nikunj Umesh Saunshi · Jordan Ash · Surbhi Goel · Dipendra Kumar Misra · Cyril Zhang · Sanjeev Arora · Sham Kakade · Akshay Krishnamurthy
- 2022 Spotlight: Understanding Contrastive Learning Requires Incorporating Inductive Biases
  Nikunj Umesh Saunshi · Jordan Ash · Surbhi Goel · Dipendra Kumar Misra · Cyril Zhang · Sanjeev Arora · Sham Kakade · Akshay Krishnamurthy
- 2022 Poster: Understanding Gradient Descent on the Edge of Stability in Deep Learning
  Sanjeev Arora · Zhiyuan Li · Abhishek Panigrahi
- 2022 Spotlight: Understanding Gradient Descent on the Edge of Stability in Deep Learning
  Sanjeev Arora · Zhiyuan Li · Abhishek Panigrahi
- 2021 Poster: A Representation Learning Perspective on the Importance of Train-Validation Splitting in Meta-Learning
  Nikunj Umesh Saunshi · Arushi Gupta · Wei Hu
- 2021 Spotlight: A Representation Learning Perspective on the Importance of Train-Validation Splitting in Meta-Learning
  Nikunj Umesh Saunshi · Arushi Gupta · Wei Hu
- 2020: Lightning Talks Session 2
  Jichan Chung · Saurav Prakash · Mikhail Khodak · Ravi Rahman · Vaikkunth Mugunthan · xinwei zhang · Hossein Hosseini
- 2020: 2.7 A Simple Setting for Understanding Neural Architecture Search with Weight-Sharing
  Mikhail Khodak
- 2020 Poster: Provable Representation Learning for Imitation Learning via Bi-level Optimization
  Sanjeev Arora · Simon Du · Sham Kakade · Yuping Luo · Nikunj Umesh Saunshi
- 2020 Poster: InstaHide: Instance-hiding Schemes for Private Distributed Learning
  Yangsibo Huang · Zhao Song · Kai Li · Sanjeev Arora
- 2020 Poster: A Sample Complexity Separation between Non-Convex and Convex Meta-Learning
  Nikunj Umesh Saunshi · Yi Zhang · Mikhail Khodak · Sanjeev Arora
- 2019: Is Optimization a sufficient language to understand Deep Learning?
  Sanjeev Arora
- 2019 Poster: Provable Guarantees for Gradient-Based Meta-Learning
  Nina Balcan · Mikhail Khodak · Ameet Talwalkar
- 2019 Oral: Provable Guarantees for Gradient-Based Meta-Learning
  Nina Balcan · Mikhail Khodak · Ameet Talwalkar
- 2019 Poster: Fine-Grained Analysis of Optimization and Generalization for Overparameterized Two-Layer Neural Networks
  Sanjeev Arora · Simon Du · Wei Hu · Zhiyuan Li · Ruosong Wang
- 2019 Oral: Fine-Grained Analysis of Optimization and Generalization for Overparameterized Two-Layer Neural Networks
  Sanjeev Arora · Simon Du · Wei Hu · Zhiyuan Li · Ruosong Wang
- 2018 Poster: Stronger Generalization Bounds for Deep Nets via a Compression Approach
  Sanjeev Arora · Rong Ge · Behnam Neyshabur · Yi Zhang
- 2018 Oral: Stronger Generalization Bounds for Deep Nets via a Compression Approach
  Sanjeev Arora · Rong Ge · Behnam Neyshabur · Yi Zhang
- 2018 Poster: On the Optimization of Deep Networks: Implicit Acceleration by Overparameterization
  Sanjeev Arora · Nadav Cohen · Elad Hazan
- 2018 Oral: On the Optimization of Deep Networks: Implicit Acceleration by Overparameterization
  Sanjeev Arora · Nadav Cohen · Elad Hazan
- 2018 Tutorial: Toward Theoretical Understanding of Deep Learning
  Sanjeev Arora
- 2017 Poster: Generalization and Equilibrium in Generative Adversarial Nets (GANs)
  Sanjeev Arora · Rong Ge · Yingyu Liang · Tengyu Ma · Yi Zhang
- 2017 Talk: Generalization and Equilibrium in Generative Adversarial Nets (GANs)
  Sanjeev Arora · Rong Ge · Yingyu Liang · Tengyu Ma · Yi Zhang