Conventional domain adaptation (DA) techniques aim to improve domain transferability by learning domain-invariant representations, while concurrently preserving the task-discriminability knowledge gathered from the labeled source data. However, the requirement of simultaneous access to labeled source and unlabeled target data renders them unsuitable for the challenging source-free DA setting. The trivial solution of realizing an effective original-to-generic domain mapping improves transferability but degrades task discriminability. Upon analyzing the hurdles from both theoretical and empirical standpoints, we derive novel insights to show that a mixup between original and corresponding translated generic samples enhances the discriminability-transferability trade-off while duly respecting the privacy-oriented source-free setting. A simple but effective realization of the proposed insights on top of existing source-free DA approaches yields state-of-the-art performance with faster convergence. Beyond single-source DA, we also outperform multi-source prior arts across both classification and semantic segmentation benchmarks.
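The core operation described in the abstract, mixing an original sample with its domain-translated "generic" counterpart, can be illustrated with a minimal sketch. This is not the authors' implementation; the `mixup` helper, the fixed mixing coefficient `lam`, and the random arrays standing in for image tensors are all illustrative assumptions.

```python
import numpy as np

def mixup(original, generic, lam=0.7):
    """Convex combination of an original-domain sample and its
    translated generic-domain counterpart (illustrative sketch;
    lam controls how much of the original is retained)."""
    return lam * original + (1.0 - lam) * generic

rng = np.random.default_rng(0)
x_orig = rng.standard_normal((3, 32, 32))  # stand-in for an original image
x_gen = rng.standard_normal((3, 32, 32))   # stand-in for its generic translation
x_mix = mixup(x_orig, x_gen, lam=0.7)      # sample fed to the adaptation model
```

Intuitively, the mixed sample interpolates between the discriminability-rich original view and the transferability-rich generic view, which is the trade-off the abstract argues the mixup improves.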
Author Information
Jogendra Nath Kundu (Indian Institute of Science)
Akshay Kulkarni (Indian Institute of Science)
Suvaansh Bhambri (Indian Institute of Science)
Deepesh Mehta (Indian Institute of Science)
Shreyas Kulkarni (Indian Institute of Science)
Varun Jampani (Google Research)
Venkatesh Babu Radhakrishnan (Indian Institute of Science)
Related Events (a corresponding poster, oral, or spotlight)
- 2022 Poster: Balancing Discriminability and Transferability for Source-Free Domain Adaptation
  Tue, Jul 19 through Wed, Jul 20 · Room: Hall E #528
More from the Same Authors
- 2021: Towards Achieving Adversarial Robustness Beyond Perceptual Limits
  Sravanti Addepalli · Samyak Jain · Gaurang Sriramanan · Shivangi Khare · Venkatesh Babu Radhakrishnan
- 2022: Efficient and Effective Augmentation Strategy for Adversarial Training
  Sravanti Addepalli · Samyak Jain · Venkatesh Babu Radhakrishnan
- 2023: SelMix: Selective Mixup Fine Tuning for Optimizing Non-Decomposable Metrics
  Shrinivas Ramasubramanian · Harsh Rangwani · Sho Takemori · Kunal Samanta · Yuhei Umeda · Venkatesh Babu Radhakrishnan
- 2022 Poster: A Closer Look at Smoothness in Domain Adversarial Training
  Harsh Rangwani · Sumukh K Aithal · Mayank Mishra · Arihant Jain · Venkatesh Babu Radhakrishnan
- 2022 Spotlight: A Closer Look at Smoothness in Domain Adversarial Training
  Harsh Rangwani · Sumukh K Aithal · Mayank Mishra · Arihant Jain · Venkatesh Babu Radhakrishnan
- 2021 Poster: Implicit-PDF: Non-Parametric Representation of Probability Distributions on the Rotation Manifold
  Kieran Murphy · Carlos Esteves · Varun Jampani · Srikumar Ramalingam · Ameesh Makadia
- 2021 Spotlight: Implicit-PDF: Non-Parametric Representation of Probability Distributions on the Rotation Manifold
  Kieran Murphy · Carlos Esteves · Varun Jampani · Srikumar Ramalingam · Ameesh Makadia
- 2019 Poster: Zero-Shot Knowledge Distillation in Deep Networks
  Gaurav Kumar Nayak · Konda Reddy Mopuri · Vaisakh Shaj · Venkatesh Babu Radhakrishnan · Anirban Chakraborty
- 2019 Oral: Zero-Shot Knowledge Distillation in Deep Networks
  Gaurav Kumar Nayak · Konda Reddy Mopuri · Vaisakh Shaj · Venkatesh Babu Radhakrishnan · Anirban Chakraborty