Deep domain adaptation (DDA) approaches have recently been shown to outperform their shallow rivals, owing to greater modeling capacity on complex domains (e.g., images, structured data, and sequential data). The underlying idea is to learn domain-invariant representations on a latent space that can bridge the gap between source and target domains. Several theoretical studies have established insightful understanding of the benefit of learning domain-invariant features; however, they are usually limited to the case where there is no label shift, hence hindering their applicability. In this paper, we propose and study a new, challenging setting that allows us to use a Wasserstein distance (WS) not only to quantify the data shift but also to define the label shift directly. We further develop a theory demonstrating that minimizing the WS of the data shift closes the gap between the source and target data distributions on the latent space (e.g., an intermediate layer of a deep net), while still allowing the label shift to be quantified with respect to this latent space. Interestingly, our theory can consequently explain certain drawbacks of learning domain-invariant features on the latent space. Finally, grounded on the results and guidance of our developed theory, we propose the Label Matching Deep Domain Adaptation (LAMDA) approach, which outperforms baselines on real-world datasets for DA problems.
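The abstract's central quantity, the Wasserstein distance between two distributions, can be illustrated with a minimal sketch (this is not the paper's code, just an illustration of the metric): in one dimension, the 1-Wasserstein distance between two equal-size empirical samples reduces to the mean absolute difference of their sorted values, which makes the "cost of moving mass" interpretation of a distribution shift concrete.

```python
def wasserstein_1d(xs, ys):
    """1-Wasserstein (earth mover's) distance between two equal-size
    1-D empirical samples: mean absolute gap of the order statistics."""
    assert len(xs) == len(ys)
    xs, ys = sorted(xs), sorted(ys)
    return sum(abs(a - b) for a, b in zip(xs, ys)) / len(xs)

# A target sample shifted by a constant offset of 1.0 from the source:
source = [0.0, 1.0, 2.0]
target = [1.0, 2.0, 3.0]
print(wasserstein_1d(source, target))  # -> 1.0, the size of the shift
```

In the paper's setting this distance is applied between distributions on a latent space (and, via label-conditioned distributions, to the label shift), which requires the general multi-dimensional formulation rather than this 1-D shortcut.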
Author Information
Trung Le (Monash University)
Tuan Nguyen (Monash University)
Nhat Ho (University of Texas at Austin)
Hung Bui (VinAI Research)
Dinh Phung (Monash University, Australia)
Related Events (a corresponding poster, oral, or spotlight)
- 2021 Poster: LAMDA: Label Matching Deep Domain Adaptation
  Wed. Jul 21st 04:00 -- 06:00 AM
More from the Same Authors
- 2023 Poster: Vector Quantized Wasserstein Auto-Encoder
  Tung-Long Vuong · Trung Le · He Zhao · Chuanxia Zheng · Mehrtash Harandi · Jianfei Cai · Dinh Phung
- 2023 Poster: Neural Collapse in Deep Linear Networks: From Balanced to Imbalanced Data
  Hien Dang · Tho Tran · Tan Nguyen · Stanley Osher · Hung Tran-The · Nhat Ho
- 2023 Poster: Revisiting Over-smoothing and Over-squashing Using Ollivier-Ricci Curvature
  Khang Nguyen · Tan Nguyen · Nong Hieu · Vinh Nguyen · Nhat Ho · Stanley Osher
- 2023 Poster: On Excess Mass Behavior in Gaussian Mixture Models with Orlicz-Wasserstein Distances
  Aritra Guha · Nhat Ho · XuanLong Nguyen
- 2023 Poster: Self-Attention Amortized Distributional Projection Optimization for Sliced Wasserstein Point-Cloud Reconstruction
  Khai Nguyen · Dang Nguyen · Nhat Ho
- 2022 Poster: Entropic Gromov-Wasserstein between Gaussian Distributions
  Khang Le · Dung Le · Huy Nguyen · · Tung Pham · Nhat Ho
- 2022 Poster: Improving Transformers with Probabilistic Attention Keys
  Tam Nguyen · Tan Nguyen · Dung Le · Duy Khuong Nguyen · Viet-Anh Tran · Richard Baraniuk · Nhat Ho · Stanley Osher
- 2022 Spotlight: Improving Transformers with Probabilistic Attention Keys
  Tam Nguyen · Tan Nguyen · Dung Le · Duy Khuong Nguyen · Viet-Anh Tran · Richard Baraniuk · Nhat Ho · Stanley Osher
- 2022 Spotlight: Entropic Gromov-Wasserstein between Gaussian Distributions
  Khang Le · Dung Le · Huy Nguyen · · Tung Pham · Nhat Ho
- 2022 Poster: On Transportation of Mini-batches: A Hierarchical Approach
  Khai Nguyen · Dang Nguyen · Quoc Nguyen · Tung Pham · Hung Bui · Dinh Phung · Trung Le · Nhat Ho
- 2022 Poster: Architecture Agnostic Federated Learning for Neural Networks
  Disha Makhija · Xing Han · Nhat Ho · Joydeep Ghosh
- 2022 Poster: Improving Mini-batch Optimal Transport via Partial Transportation
  Khai Nguyen · Dang Nguyen · The-Anh Vu-Le · Tung Pham · Nhat Ho
- 2022 Spotlight: Architecture Agnostic Federated Learning for Neural Networks
  Disha Makhija · Xing Han · Nhat Ho · Joydeep Ghosh
- 2022 Spotlight: Improving Mini-batch Optimal Transport via Partial Transportation
  Khai Nguyen · Dang Nguyen · The-Anh Vu-Le · Tung Pham · Nhat Ho
- 2022 Spotlight: On Transportation of Mini-batches: A Hierarchical Approach
  Khai Nguyen · Dang Nguyen · Quoc Nguyen · Tung Pham · Hung Bui · Dinh Phung · Trung Le · Nhat Ho
- 2022 Poster: Refined Convergence Rates for Maximum Likelihood Estimation under Finite Mixture Models
  Tudor Manole · Nhat Ho
- 2022 Oral: Refined Convergence Rates for Maximum Likelihood Estimation under Finite Mixture Models
  Tudor Manole · Nhat Ho
- 2021 Poster: Temporal Predictive Coding For Model-Based Planning In Latent Space
  Tung Nguyen · Rui Shu · Tuan Pham · Hung Bui · Stefano Ermon
- 2021 Spotlight: Temporal Predictive Coding For Model-Based Planning In Latent Space
  Tung Nguyen · Rui Shu · Tuan Pham · Hung Bui · Stefano Ermon
- 2020 Poster: Predictive Coding for Locally-Linear Control
  Rui Shu · Tung Nguyen · Yinlam Chow · Tuan Pham · Khoat Than · Mohammad Ghavamzadeh · Stefano Ermon · Hung Bui
- 2020 Poster: On Unbalanced Optimal Transport: An Analysis of Sinkhorn Algorithm
  Khiem Pham · Khang Le · Nhat Ho · Tung Pham · Hung Bui
- 2020 Poster: Parameterized Rate-Distortion Stochastic Encoder
  Quan Hoang · Trung Le · Dinh Phung