
Poster in Workshop: Principles of Distribution Shift (PODS)

Generative Gradual Domain Adaptation with Optimal Transport

Yifei He · Haoxiang Wang · Han Zhao


Abstract:

Existing unsupervised domain adaptation (UDA) algorithms adapt a model from a labeled source domain to an unlabeled target domain in a one-off way. While these algorithms have been applied widely, they face a great challenge whenever the distribution distance between the source and the target is large. One natural idea to overcome this issue is to divide the original problem into smaller pieces so that each sub-problem only deals with a small shift. Following this idea and inspired by existing theory on gradual domain adaptation (GDA), we propose Generative Gradual Domain Adaptation with Optimal Transport (GOAT), a novel divide-and-conquer framework for UDA that automatically generates intermediate domains connecting the source and the target, thereby reducing the original UDA problem to GDA. Concretely, we first determine a Wasserstein geodesic under the Euclidean metric between the source and target in an embedding space, and then generate embeddings of intermediate domains along the geodesic by solving an optimal transport problem. Given the sequence of generated intermediate domains, we apply gradual self-training, a standard GDA algorithm, to adapt the source-learned classifier sequentially toward the target. Empirically, by using embeddings from modern generative models, we show that our algorithmic framework can harness the power of existing generative models for UDA, which we believe makes the proposed algorithm widely applicable. We also conduct experiments on modern UDA datasets such as Rotated CIFAR-10, Office-31, and Office-Home. The results show superior performance of GOAT over conventional UDA approaches, further demonstrating the effectiveness of GOAT in addressing the large distribution shifts present in many UDA problems.
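
To make the two stages concrete, here is a minimal sketch, assuming source and target samples already live in a shared embedding space. It uses the POT library (ot) to compute the optimal transport plan and scikit-learn's LogisticRegression as a stand-in classifier; the function names, uniform empirical weights, and the plain refitting in self-training are illustrative assumptions, not the authors' reference implementation.

```python
# A hedged sketch of GOAT's two stages: (1) generate intermediate-domain
# embeddings along the Wasserstein geodesic via displacement interpolation,
# (2) adapt the classifier with gradual self-training.
import numpy as np
import ot  # Python Optimal Transport: pip install pot
from sklearn.linear_model import LogisticRegression


def intermediate_domains(Xs, Xt, num_domains, rng):
    """Generate embeddings of intermediate domains along the Wasserstein
    geodesic (under the Euclidean metric) between source Xs and target Xt.

    For empirical distributions, the geodesic is traced by displacement
    interpolation: sample a pair (i, j) in proportion to the optimal
    transport plan's mass and move a fraction t of the way from Xs[i] to Xt[j].
    """
    n, m = len(Xs), len(Xt)
    a, b = np.full(n, 1.0 / n), np.full(m, 1.0 / m)  # uniform weights (assumed)
    M = ot.dist(Xs, Xt, metric="euclidean")          # ground cost matrix
    G = ot.emd(a, b, M)                              # optimal transport plan

    # Sample index pairs (i, j) proportionally to the plan.
    flat = G.ravel() / G.sum()
    pairs = rng.choice(n * m, size=n, p=flat)
    i, j = np.unravel_index(pairs, (n, m))

    domains = []
    for k in range(1, num_domains + 1):
        t = k / (num_domains + 1)                    # position on the geodesic
        domains.append((1 - t) * Xs[i] + t * Xt[j])  # displacement interpolation
    return domains


def gradual_self_train(Xs, ys, domains, Xt):
    """Gradual self-training: fit on labeled source data, then repeatedly
    pseudo-label the next intermediate domain and refit on those labels."""
    clf = LogisticRegression(max_iter=1000).fit(Xs, ys)
    for X in domains + [Xt]:
        pseudo = clf.predict(X)  # pseudo-labels from the current classifier
        clf = LogisticRegression(max_iter=1000).fit(X, pseudo)
    return clf


# Smoke test on synthetic embeddings (purely illustrative).
rng = np.random.default_rng(0)
Zs, Zt = rng.normal(0, 1, (200, 16)), rng.normal(2, 1, (200, 16))
ys = (Zs[:, 0] > 0).astype(int)
clf = gradual_self_train(Zs, ys, intermediate_domains(Zs, Zt, 4, rng), Zt)
```

In practice, gradual self-training typically keeps only high-confidence pseudo-labels at each step, and the classifier would be a neural network operating on the same embeddings; the sketch uses plain refitting to keep the control flow visible.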
