Existing unsupervised domain adaptation (UDA) algorithms adapt a model from a labeled source domain to an unlabeled target domain in a one-off way. While these algorithms have been applied widely, they struggle whenever the distribution distance between the source and the target is large. One natural idea to overcome this issue is to divide the original problem into smaller pieces so that each sub-problem only deals with a small shift. Following this idea and inspired by existing theory on gradual domain adaptation (GDA), we propose Generative Gradual Domain Adaptation with Optimal Transport (GOAT), a novel divide-and-conquer framework for UDA that automatically generates the intermediate domains connecting the source and the target, thereby reducing the original UDA problem to GDA. Concretely, we first determine a Wasserstein geodesic under the Euclidean metric between the source and target in an embedding space, and then generate embeddings of intermediate domains along the geodesic by solving an optimal transport problem. Given the sequence of generated intermediate domains, we then apply gradual self-training, a standard GDA algorithm, to adapt the source-learned classifier sequentially to the target. Empirically, by using embeddings from modern generative models, we show that our algorithmic framework can utilize the power of existing generative models for UDA, which we believe makes the proposed algorithm widely applicable. We also conduct experiments on modern UDA datasets such as Rotated CIFAR-10, Office-31, and Office-Home. The results show superior performance of GOAT over conventional UDA approaches, further demonstrating its effectiveness in addressing the large distribution shifts present in many UDA problems.
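To make the two-stage pipeline described in the abstract concrete, below is a minimal sketch of (1) generating intermediate domains by displacement interpolation along an optimal transport plan and (2) gradual self-training across them. This is not the authors' reference implementation: it assumes source and target embeddings are given as NumPy arrays, uses the POT library (`ot.dist`, `ot.emd`) to compute the transport plan, and the function names and the scikit-learn-style classifier interface are illustrative.

```python
# Sketch of GOAT-style intermediate-domain generation + gradual self-training.
# Assumptions: X_src, X_tgt are 2-D NumPy arrays of embeddings; clf is any
# classifier with fit/predict (scikit-learn style). Hypothetical helper names.
import numpy as np
import ot  # Python Optimal Transport (POT)


def generate_intermediate_domains(X_src, X_tgt, num_domains):
    """Generate embeddings of intermediate domains along the Wasserstein
    geodesic (Euclidean ground metric) between source and target."""
    n, m = len(X_src), len(X_tgt)
    a = np.full(n, 1.0 / n)               # uniform weights on source samples
    b = np.full(m, 1.0 / m)               # uniform weights on target samples
    M = ot.dist(X_src, X_tgt)             # squared-Euclidean cost matrix
    plan = ot.emd(a, b, M)                # optimal transport plan
    src_idx, tgt_idx = np.nonzero(plan)   # pairs carrying transport mass
    domains = []
    for k in range(1, num_domains + 1):
        t = k / (num_domains + 1)         # position along the geodesic
        # Displacement interpolation: move each matched source embedding
        # a fraction t of the way toward its matched target embedding.
        domains.append((1 - t) * X_src[src_idx] + t * X_tgt[tgt_idx])
    return domains


def gradual_self_train(clf, domains, X_tgt):
    """Adapt a source-trained classifier sequentially via pseudo-labels."""
    for X in domains + [X_tgt]:
        pseudo = clf.predict(X)           # pseudo-label the next domain
        clf.fit(X, pseudo)                # retrain on pseudo-labeled data
    return clf
```

Two simplifications to note: GOAT interpolates in an embedding space (e.g., one produced by a generative model) rather than raw input space, and the sketch treats each nonzero entry of the plan as a single matched pair, whereas a faithful geodesic interpolation would weight pairs by their transport mass.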
Author Information
Yifei He (University of Illinois Urbana-Champaign)
Haoxiang Wang (University of Illinois Urbana-Champaign)
A Ph.D. student at UIUC, working on machine learning with theoretical guarantees.
Han Zhao (University of Illinois Urbana-Champaign)
More from the Same Authors
- 2022 Poster: Provable Domain Generalization via Invariant-Feature Subspace Recovery
  Haoxiang Wang · Haozhe Si · Bo Li · Han Zhao
- 2022 Spotlight: Provable Domain Generalization via Invariant-Feature Subspace Recovery
  Haoxiang Wang · Haozhe Si · Bo Li · Han Zhao
- 2022 Poster: Understanding Gradual Domain Adaptation: Improved Analysis, Optimal Path and Beyond
  Haoxiang Wang · Bo Li · Han Zhao
- 2022 Spotlight: Understanding Gradual Domain Adaptation: Improved Analysis, Optimal Path and Beyond
  Haoxiang Wang · Bo Li · Han Zhao
- 2021 Poster: Understanding and Mitigating Accuracy Disparity in Regression
  Jianfeng Chi · Yuan Tian · Geoff Gordon · Han Zhao
- 2021 Poster: Bridging Multi-Task Learning and Meta-Learning: Towards Efficient Training and Effective Adaptation
  Haoxiang Wang · Han Zhao · Bo Li
- 2021 Spotlight: Understanding and Mitigating Accuracy Disparity in Regression
  Jianfeng Chi · Yuan Tian · Geoff Gordon · Han Zhao
- 2021 Spotlight: Bridging Multi-Task Learning and Meta-Learning: Towards Efficient Training and Effective Adaptation
  Haoxiang Wang · Han Zhao · Bo Li
- 2021 Poster: Information Obfuscation of Graph Neural Networks
  Peiyuan Liao · Han Zhao · Keyulu Xu · Tommi Jaakkola · Geoff Gordon · Stefanie Jegelka · Ruslan Salakhutdinov
- 2021 Spotlight: Information Obfuscation of Graph Neural Networks
  Peiyuan Liao · Han Zhao · Keyulu Xu · Tommi Jaakkola · Geoff Gordon · Stefanie Jegelka · Ruslan Salakhutdinov