Domain adaptation enables knowledge transfer from a labeled source domain to an unlabeled target domain. A mainstream approach is adversarial feature adaptation, which learns domain-invariant representations by aligning the feature distributions of the two domains. However, a theoretical prerequisite of domain adaptation is adaptability, measured by the expected risk of an ideal joint hypothesis over the source and target domains. In this respect, adversarial feature adaptation may deteriorate the adaptability, since it distorts the original feature distributions when suppressing domain-specific variations. To address this problem, we propose transferable adversarial training (TAT) to enable the adaptation of deep classifiers. The approach generates transferable examples to fill in the gap between the source and target domains, and adversarially trains the deep classifiers to make consistent predictions over the transferable examples. Without learning domain-invariant representations at the expense of distorting the feature distributions, the adaptability in the theoretical learning bound is algorithmically guaranteed. A series of experiments validates that our approach advances the state of the art on a variety of domain adaptation tasks in vision and NLP, including object recognition, synthetic-to-real learning, and sentiment classification.
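The example-generation step described above can be illustrated with a minimal NumPy sketch. This is not the paper's full algorithm (TAT works in the feature space of a deep network and combines domain-discriminator and classifier losses): here we assume a fixed linear domain discriminator in 2-D, and the function name `make_transferable` and the toy setup are illustrative only. A point from the source domain is pushed across the domain gap by gradient ascent on the discriminator's source cross-entropy loss.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def make_transferable(x, w, b, steps=10, lr=0.5):
    """Toy 'transferable example' generator: gradient-ascend the domain
    discriminator's loss so a source point drifts toward the target
    domain, leaving the original feature distribution untouched."""
    x = x.copy()
    for _ in range(steps):
        p = sigmoid(x @ w + b)          # discriminator's P(source | x)
        # For L = -log p (source cross-entropy), dL/dx = -(1 - p) * w,
        # so ascending L moves x across the domain boundary.
        x -= lr * (1.0 - p) * w
    return x

# Fixed linear domain discriminator: positive scores = source side.
w, b = np.array([1.0, 0.0]), 0.0
x_src = np.array([2.0, 0.5])            # a point deep in the source domain

x_t = make_transferable(x_src, w, b)
p_before = sigmoid(x_src @ w + b)
p_after = sigmoid(x_t @ w + b)
print(p_before > 0.8, p_after < p_before)
```

In TAT proper, the classifier is then adversarially trained to predict consistently on each example and its transferable counterpart; only the generation step is sketched here.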
Author Information
Hong Liu (Tsinghua University)
Mingsheng Long (Tsinghua University)
Jianmin Wang (Tsinghua University)
Michael Jordan (UC Berkeley)
Related Events (a corresponding poster, oral, or spotlight)
- 2019 Poster: Transferable Adversarial Training: A General Approach to Adapting Deep Classifiers »
  Wed Jun 12th 06:30 -- 09:00 PM, Room: Pacific Ballroom
More from the Same Authors
- 2019 Poster: Bridging Theory and Algorithm for Domain Adaptation »
  Yuchen Zhang · Tianle Liu · Mingsheng Long · Michael Jordan
- 2019 Oral: Bridging Theory and Algorithm for Domain Adaptation »
  Yuchen Zhang · Tianle Liu · Mingsheng Long · Michael Jordan
- 2019 Poster: Towards Accurate Model Selection in Deep Unsupervised Domain Adaptation »
  Kaichao You · Ximei Wang · Mingsheng Long · Michael Jordan
- 2019 Poster: Transferability vs. Discriminability: Batch Spectral Penalization for Adversarial Domain Adaptation »
  Xinyang Chen · Sinan Wang · Mingsheng Long · Jianmin Wang
- 2019 Poster: A Dynamical Systems Perspective on Nesterov Acceleration »
  Michael Muehlebach · Michael Jordan
- 2019 Poster: Theoretically Principled Trade-off between Robustness and Accuracy »
  Hongyang Zhang · Yaodong Yu · Jiantao Jiao · Eric Xing · Laurent El Ghaoui · Michael Jordan
- 2019 Oral: A Dynamical Systems Perspective on Nesterov Acceleration »
  Michael Muehlebach · Michael Jordan
- 2019 Oral: Towards Accurate Model Selection in Deep Unsupervised Domain Adaptation »
  Kaichao You · Ximei Wang · Mingsheng Long · Michael Jordan
- 2019 Oral: Transferability vs. Discriminability: Batch Spectral Penalization for Adversarial Domain Adaptation »
  Xinyang Chen · Sinan Wang · Mingsheng Long · Jianmin Wang
- 2019 Oral: Theoretically Principled Trade-off between Robustness and Accuracy »
  Hongyang Zhang · Yaodong Yu · Jiantao Jiao · Eric Xing · Laurent El Ghaoui · Michael Jordan
- 2019 Poster: On Efficient Optimal Transport: An Analysis of Greedy and Accelerated Mirror Descent Algorithms »
  Tianyi Lin · Nhat Ho · Michael Jordan
- 2019 Poster: Rao-Blackwellized Stochastic Gradients for Discrete Distributions »
  Runjing Liu · Jeffrey Regier · Nilesh Tripuraneni · Michael Jordan · Jon McAuliffe
- 2019 Oral: Rao-Blackwellized Stochastic Gradients for Discrete Distributions »
  Runjing Liu · Jeffrey Regier · Nilesh Tripuraneni · Michael Jordan · Jon McAuliffe
- 2019 Oral: On Efficient Optimal Transport: An Analysis of Greedy and Accelerated Mirror Descent Algorithms »
  Tianyi Lin · Nhat Ho · Michael Jordan
- 2018 Poster: On the Theory of Variance Reduction for Stochastic Gradient Monte Carlo »
  Niladri S Chatterji · Nicolas Flammarion · Yian Ma · Peter Bartlett · Michael Jordan
- 2018 Poster: RLlib: Abstractions for Distributed Reinforcement Learning »
  Eric Liang · Richard Liaw · Robert Nishihara · Philipp Moritz · Roy Fox · Ken Goldberg · Joseph Gonzalez · Michael Jordan · Ion Stoica
- 2018 Oral: On the Theory of Variance Reduction for Stochastic Gradient Monte Carlo »
  Niladri S Chatterji · Nicolas Flammarion · Yian Ma · Peter Bartlett · Michael Jordan
- 2018 Oral: RLlib: Abstractions for Distributed Reinforcement Learning »
  Eric Liang · Richard Liaw · Robert Nishihara · Philipp Moritz · Roy Fox · Ken Goldberg · Joseph Gonzalez · Michael Jordan · Ion Stoica
- 2018 Poster: SAFFRON: an Adaptive Algorithm for Online Control of the False Discovery Rate »
  Aaditya Ramdas · Tijana Zrnic · Martin Wainwright · Michael Jordan
- 2018 Oral: SAFFRON: an Adaptive Algorithm for Online Control of the False Discovery Rate »
  Aaditya Ramdas · Tijana Zrnic · Martin Wainwright · Michael Jordan
- 2018 Poster: Learning to Explain: An Information-Theoretic Perspective on Model Interpretation »
  Jianbo Chen · Le Song · Martin Wainwright · Michael Jordan
- 2018 Poster: PredRNN++: Towards A Resolution of the Deep-in-Time Dilemma in Spatiotemporal Predictive Learning »
  Yunbo Wang · Zhifeng Gao · Mingsheng Long · Jianmin Wang · Philip Yu
- 2018 Oral: PredRNN++: Towards A Resolution of the Deep-in-Time Dilemma in Spatiotemporal Predictive Learning »
  Yunbo Wang · Zhifeng Gao · Mingsheng Long · Jianmin Wang · Philip Yu
- 2018 Oral: Learning to Explain: An Information-Theoretic Perspective on Model Interpretation »
  Jianbo Chen · Le Song · Martin Wainwright · Michael Jordan
- 2017 Poster: How to Escape Saddle Points Efficiently »
  Chi Jin · Rong Ge · Praneeth Netrapalli · Sham Kakade · Michael Jordan
- 2017 Talk: How to Escape Saddle Points Efficiently »
  Chi Jin · Rong Ge · Praneeth Netrapalli · Sham Kakade · Michael Jordan
- 2017 Poster: Deep Transfer Learning with Joint Adaptation Networks »
  Mingsheng Long · Han Zhu · Jianmin Wang · Michael Jordan
- 2017 Poster: Breaking Locality Accelerates Block Gauss-Seidel »
  Stephen Tu · Shivaram Venkataraman · Ashia Wilson · Alex Gittens · Michael Jordan · Benjamin Recht
- 2017 Talk: Deep Transfer Learning with Joint Adaptation Networks »
  Mingsheng Long · Han Zhu · Jianmin Wang · Michael Jordan
- 2017 Talk: Breaking Locality Accelerates Block Gauss-Seidel »
  Stephen Tu · Shivaram Venkataraman · Ashia Wilson · Alex Gittens · Michael Jordan · Benjamin Recht