Oral
Transferable Adversarial Training: A General Approach to Adapting Deep Classifiers
Hong Liu · Mingsheng Long · Jianmin Wang · Michael Jordan

Wed Jun 12 02:40 PM -- 03:00 PM (PDT) @ Room 201

Domain adaptation enables knowledge transfer from a labeled source domain to an unlabeled target domain. A mainstream approach is adversarial feature adaptation, which learns domain-invariant representations by aligning the feature distributions of the two domains. However, a theoretical prerequisite of domain adaptation is adaptability, measured by the expected risk of an ideal joint hypothesis over the source and target domains. In this respect, adversarial feature adaptation may deteriorate adaptability, since it distorts the original feature distributions while suppressing domain-specific variations. To address this issue, we propose transferable adversarial training (TAT) to enable the adaptation of deep classifiers. The approach generates transferable examples to fill in the gap between the source and target domains, and adversarially trains the deep classifiers to make consistent predictions over these transferable examples. Without learning domain-invariant representations at the expense of distorting the feature distributions, the adaptability in the theoretical learning bound is algorithmically guaranteed. A series of experiments validates that our approach advances the state of the art on a variety of domain adaptation tasks in vision and NLP, including object recognition, synthetic-to-real learning, and sentiment classification.
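
For intuition, the following is a minimal PyTorch sketch of the idea described in the abstract, not the authors' reference implementation: the small feature extractor F, classifier C, domain discriminator D, the loss weights, the number of perturbation steps, and the consistency term are illustrative assumptions, and the perturbation rule is simplified relative to the paper.

import torch
import torch.nn as nn
import torch.nn.functional as nnf

torch.manual_seed(0)
feat_dim, num_classes = 32, 4
F = nn.Sequential(nn.Linear(16, feat_dim), nn.ReLU())   # feature extractor (toy)
C = nn.Linear(feat_dim, num_classes)                     # label classifier
D = nn.Linear(feat_dim, 1)                               # domain discriminator

def generate_transferable(features, true_domain, steps=5, step_size=1.0):
    # Gradient ascent on the domain loss w.r.t. the features: the perturbed
    # features fool D about their true domain, drifting toward the other
    # domain and thereby "filling in the gap" between source and target.
    f_adv = features.detach().clone().requires_grad_(True)
    target = true_domain.expand(f_adv.size(0))
    for _ in range(steps):
        d_loss = nnf.binary_cross_entropy_with_logits(D(f_adv).squeeze(1), target)
        grad, = torch.autograd.grad(d_loss, f_adv)
        f_adv = (f_adv + step_size * grad).detach().requires_grad_(True)
    return f_adv.detach()

# Toy batch: labeled source data, unlabeled target data.
x_s, y_s = torch.randn(8, 16), torch.randint(0, num_classes, (8,))
x_t = torch.randn(8, 16)

params = list(F.parameters()) + list(C.parameters()) + list(D.parameters())
opt = torch.optim.SGD(params, lr=0.01)

f_s, f_t = F(x_s), F(x_t)

# 1) Source classification loss; domain discriminator trained on detached
#    features (the features themselves are NOT adversarially aligned).
cls_loss = nnf.cross_entropy(C(f_s), y_s)
dom_logits = torch.cat([D(f_s.detach()), D(f_t.detach())]).squeeze(1)
dom_labels = torch.cat([torch.ones(8), torch.zeros(8)])
dom_loss = nnf.binary_cross_entropy_with_logits(dom_logits, dom_labels)

# 2) Transferable examples bridging the two domains in feature space.
f_s_adv = generate_transferable(f_s, torch.tensor(1.0))  # source (label 1) pushed toward target
f_t_adv = generate_transferable(f_t, torch.tensor(0.0))  # target (label 0) pushed toward source

# 3) Adversarial training of the classifier: correct labels on perturbed source
#    features, consistent predictions on target features and their perturbations.
adv_cls_loss = nnf.cross_entropy(C(f_s_adv), y_s)
consistency = nnf.kl_div(nnf.log_softmax(C(f_t_adv), dim=1),
                         nnf.softmax(C(f_t), dim=1).detach(),
                         reduction='batchmean')

loss = cls_loss + dom_loss + adv_cls_loss + consistency
opt.zero_grad()
loss.backward()
opt.step()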

Author Information

Hong Liu (Tsinghua University)
Mingsheng Long (Tsinghua University)
Jianmin Wang (Tsinghua University)
Michael Jordan (UC Berkeley)
