One of the central problems in machine learning is domain adaptation. Unlike past theoretical work, we consider a new model of subpopulation shift in the input or representation space. In this work, we propose a provably effective framework for domain adaptation based on label propagation with an input consistency loss. Our analysis relies on a simple but realistic "expansion" assumption, proposed in \citet{wei2021theoretical}. We show that, starting from a teacher classifier trained on the source domain, the learned classifier not only propagates to the target domain but also improves upon the teacher. By leveraging existing generalization bounds, we also obtain end-to-end finite-sample guarantees for deep neural networks. In addition, we extend our theoretical framework to a more general setting of source-to-target transfer based on an additional unlabeled dataset, which can be easily applied to various learning scenarios. Inspired by our theory, we adapt consistency-based semi-supervised learning methods to domain adaptation settings and obtain significant improvements.
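For illustration, below is a minimal PyTorch-style sketch of the kind of consistency-regularized self-training objective the abstract describes: a frozen teacher (trained on the source domain) supplies pseudo-labels on unlabeled target inputs, and an input consistency term encourages the student to predict consistently on an input and a small perturbation of it. The names (student, teacher, perturb, lam) and the specific loss forms are illustrative assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn.functional as F

def self_training_consistency_loss(student, teacher, x_unlabeled, perturb, lam=1.0):
    """Sketch of a consistency-regularized self-training objective.

    - Pseudo-label term: fit the student to the frozen teacher's hard
      predictions on unlabeled (target-domain) inputs.
    - Input consistency term: the student's predictions should agree on an
      input and a nearby perturbed/augmented version of it.
    `perturb` is any callable x -> x' producing a nearby input (assumption).
    """
    with torch.no_grad():
        pseudo_labels = teacher(x_unlabeled).argmax(dim=1)  # teacher's hard labels

    logits = student(x_unlabeled)
    logits_perturbed = student(perturb(x_unlabeled))

    # Fit the teacher's pseudo-labels on the unlabeled data.
    pseudo_label_loss = F.cross_entropy(logits, pseudo_labels)

    # Penalize disagreement between predictions on x and on perturb(x).
    consistency_loss = F.kl_div(
        F.log_softmax(logits_perturbed, dim=1),
        F.softmax(logits, dim=1).detach(),
        reduction="batchmean",
    )
    return pseudo_label_loss + lam * consistency_loss
```

In this reading, the consistency term is what lets correct labels propagate across the target subpopulations under the expansion assumption, while the pseudo-label term anchors the student to the source-trained teacher.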
Author Information
Tianle Cai (Princeton University)
Ruiqi Gao (Princeton University)
Jason Lee (Princeton)
Qi Lei (Princeton University)
Related Events (a corresponding poster, oral, or spotlight)
- 2021 Spotlight: A Theory of Label Propagation for Subpopulation Shift »
  Thu. Jul 22nd 12:45 -- 12:50 AM Room
More from the Same Authors
- 2021 : Label Noise SGD Provably Prefers Flat Global Minimizers »
  Alex Damian · Tengyu Ma · Jason Lee
- 2021 : A Short Note on the Relationship of Information Gain and Eluder Dimension »
  Kaixuan Huang · Sham Kakade · Jason Lee · Qi Lei
- 2022 : Implicit Bias of Gradient Descent on Reparametrized Models: On Equivalence to Mirror Descent »
  Zhiyuan Li · Tianhao Wang · Jason Lee · Sanjeev Arora
- 2021 Poster: Towards Certifying L-infinity Robustness using Neural Networks with L-inf-dist Neurons »
  Bohang Zhang · Tianle Cai · Zhou Lu · Di He · Liwei Wang
- 2021 Spotlight: Towards Certifying L-infinity Robustness using Neural Networks with L-inf-dist Neurons »
  Bohang Zhang · Tianle Cai · Zhou Lu · Di He · Liwei Wang
- 2021 Poster: Near-Optimal Linear Regression under Distribution Shift »
  Qi Lei · Wei Hu · Jason Lee
- 2021 Poster: Solving Inverse Problems with a Flow-based Noise Model »
  Jay Whang · Qi Lei · Alexandros Dimakis
- 2021 Poster: How Important is the Train-Validation Split in Meta-Learning? »
  Yu Bai · Minshuo Chen · Pan Zhou · Tuo Zhao · Jason Lee · Sham Kakade · Huan Wang · Caiming Xiong
- 2021 Spotlight: Solving Inverse Problems with a Flow-based Noise Model »
  Jay Whang · Qi Lei · Alexandros Dimakis
- 2021 Spotlight: How Important is the Train-Validation Split in Meta-Learning? »
  Yu Bai · Minshuo Chen · Pan Zhou · Tuo Zhao · Jason Lee · Sham Kakade · Huan Wang · Caiming Xiong
- 2021 Spotlight: Near-Optimal Linear Regression under Distribution Shift »
  Qi Lei · Wei Hu · Jason Lee
- 2021 Poster: Bilinear Classes: A Structural Framework for Provable Generalization in RL »
  Simon Du · Sham Kakade · Jason Lee · Shachar Lovett · Gaurav Mahajan · Wen Sun · Ruosong Wang
- 2021 Oral: Bilinear Classes: A Structural Framework for Provable Generalization in RL »
  Simon Du · Sham Kakade · Jason Lee · Shachar Lovett · Gaurav Mahajan · Wen Sun · Ruosong Wang
- 2021 Poster: GraphNorm: A Principled Approach to Accelerating Graph Neural Network Training »
  Tianle Cai · Shengjie Luo · Keyulu Xu · Di He · Tie-Yan Liu · Liwei Wang
- 2021 Spotlight: GraphNorm: A Principled Approach to Accelerating Graph Neural Network Training »
  Tianle Cai · Shengjie Luo · Keyulu Xu · Di He · Tie-Yan Liu · Liwei Wang
- 2020 Poster: SGD Learns One-Layer Networks in WGANs »
  Qi Lei · Jason Lee · Alexandros Dimakis · Constantinos Daskalakis
- 2020 Poster: Optimal transport mapping via input convex neural networks »
  Ashok Vardhan Makkuva · Amirhossein Taghvaei · Sewoong Oh · Jason Lee