Unsupervised domain adaptation (UDA) aims to leverage the knowledge learned from a labeled source dataset to solve similar tasks in a new unlabeled domain. Prior UDA methods typically require access to the source data when adapting the model, making them risky and inefficient when the data are decentralized and private. This work tackles a novel setting where only a trained source model is available and investigates how to effectively utilize such a model, without any source data, to solve UDA problems. We propose a simple yet generic representation learning framework, named Source HypOthesis Transfer (SHOT). SHOT freezes the classifier module (hypothesis) of the source model and learns a target-specific feature extraction module by exploiting both information maximization and self-supervised pseudo-labeling to implicitly align representations from the target domain to the source hypothesis. To verify its versatility, we evaluate SHOT in a variety of adaptation cases, including closed-set, partial-set, and open-set domain adaptation. Experiments indicate that SHOT yields state-of-the-art results on multiple domain adaptation benchmarks.
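As a rough sketch (not the authors' released code), the information-maximization part of the SHOT objective can be illustrated as follows: under the frozen source classifier, target predictions should be individually confident (low per-sample entropy) yet globally diverse (high entropy of the mean prediction). The function names below are hypothetical.

```python
import numpy as np

def softmax(z, axis=-1):
    """Numerically stable softmax over logits."""
    e = np.exp(z - z.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def entropy(p, eps=1e-12):
    """Shannon entropy of a probability vector (last axis)."""
    return -(p * np.log(p + eps)).sum(axis=-1)

def information_maximization_loss(logits):
    """Information-maximization loss on target logits, shape (N, C):
    mean per-sample entropy (minimized -> confident predictions)
    minus entropy of the mean prediction (maximized -> diverse classes)."""
    p = softmax(logits)
    conditional_entropy = entropy(p).mean()   # each sample near one-hot
    marginal_entropy = entropy(p.mean(axis=0))  # classes used evenly overall
    return conditional_entropy - marginal_entropy
```

Minimizing this quantity with respect to the feature extractor (the classifier stays frozen) drives target features toward regions the source hypothesis classifies confidently, while the diversity term prevents collapse onto a single class; SHOT combines it with self-supervised pseudo-labeling.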
Author Information
Jian Liang (National University of Singapore)
Dapeng Hu (National University of Singapore)
Jiashi Feng (National University of Singapore)
More from the Same Authors
- 2021 Poster: CIFS: Improving Adversarial Robustness of CNNs via Channel-wise Importance-based Feature Selection »
  Hanshu YAN · Jingfeng Zhang · Gang Niu · Jiashi Feng · Vincent Tan · Masashi Sugiyama
- 2021 Spotlight: CIFS: Improving Adversarial Robustness of CNNs via Channel-wise Importance-based Feature Selection »
  Hanshu YAN · Jingfeng Zhang · Gang Niu · Jiashi Feng · Vincent Tan · Masashi Sugiyama
- 2021 Poster: Towards Better Laplacian Representation in Reinforcement Learning with Generalized Graph Drawing »
  Kaixin Wang · Kuangqi Zhou · Qixin Zhang · Jie Shao · Bryan Hooi · Jiashi Feng
- 2021 Spotlight: Towards Better Laplacian Representation in Reinforcement Learning with Generalized Graph Drawing »
  Kaixin Wang · Kuangqi Zhou · Qixin Zhang · Jie Shao · Bryan Hooi · Jiashi Feng
- 2018 Poster: Policy Optimization with Demonstrations »
  Bingyi Kang · Zequn Jie · Jiashi Feng
- 2018 Poster: WSNet: Compact and Efficient Networks Through Weight Sampling »
  Xiaojie Jin · Yingzhen Yang · Ning Xu · Jianchao Yang · Nebojsa Jojic · Jiashi Feng · Shuicheng Yan
- 2018 Oral: WSNet: Compact and Efficient Networks Through Weight Sampling »
  Xiaojie Jin · Yingzhen Yang · Ning Xu · Jianchao Yang · Nebojsa Jojic · Jiashi Feng · Shuicheng Yan
- 2018 Oral: Policy Optimization with Demonstrations »
  Bingyi Kang · Zequn Jie · Jiashi Feng
- 2018 Poster: Understanding Generalization and Optimization Performance of Deep CNNs »
  Pan Zhou · Jiashi Feng
- 2018 Oral: Understanding Generalization and Optimization Performance of Deep CNNs »
  Pan Zhou · Jiashi Feng