Domain adversarial training is ubiquitous for learning invariant representations and is widely used for various domain adaptation tasks. Recently, methods that converge to smooth optima have shown improved generalization for supervised learning tasks such as classification. In this work, we analyze the effect of smoothness-enhancing formulations on domain adversarial training, whose objective is a combination of a task loss (e.g., classification, regression) and adversarial terms. We find that converging to a smooth minimum with respect to (w.r.t.) the task loss stabilizes adversarial training, leading to better performance on the target domain. In contrast to the task loss, our analysis shows that converging to a smooth minimum w.r.t. the adversarial loss leads to sub-optimal generalization on the target domain. Based on this analysis, we introduce the Smooth Domain Adversarial Training (SDAT) procedure, which effectively enhances the performance of existing domain adversarial methods for both classification and object detection tasks. Our analysis also provides insight into the community's extensive use of SGD over Adam for domain adversarial training.
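The abstract's key idea, applying a sharpness-aware (SAM-style) perturbation only to the task loss while taking the adversarial gradient at the unperturbed weights, can be sketched on a toy objective. Everything below (the quadratic stand-in for the task loss, the sinusoidal stand-in for the adversarial term, and the `sdat_step` helper) is a hypothetical illustration under these assumptions, not the paper's actual implementation:

```python
import numpy as np

def task_loss_grad(w):
    # toy task loss: 0.5 * ||w||^2, so its gradient is simply w
    # (stand-in for a classification/regression loss)
    return w

def adv_loss_grad(w):
    # toy stand-in for the gradient of the adversarial (domain
    # discriminator) term; this term is deliberately NOT smoothed
    return 0.1 * np.sin(w)

def sdat_step(w, lr=0.1, rho=0.05):
    """One SDAT-style update: the SAM ascent perturbation is applied
    only to the task loss; the adversarial gradient is evaluated at
    the unperturbed point."""
    g_task = task_loss_grad(w)
    # ascent direction of radius rho for the task loss only
    eps = rho * g_task / (np.linalg.norm(g_task) + 1e-12)
    g_task_smooth = task_loss_grad(w + eps)   # sharpness-aware task gradient
    g_adv = adv_loss_grad(w)                  # plain adversarial gradient
    return w - lr * (g_task_smooth + g_adv)

w = np.ones(4)
for _ in range(100):
    w = sdat_step(w)
# after training, w sits near the flat minimum at the origin
```

The point of the sketch is the asymmetry: only `g_task` gets the SAM perturbation, matching the finding that smoothing the adversarial loss hurts target-domain generalization.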
Author Information
Harsh Rangwani (Indian Institute of Science)
PhD student at the Video Analytics Lab, Indian Institute of Science. Supported by the Prime Minister's Research Fellowship.
Sumukh K Aithal (PES University)
Mayank Mishra (Indian Institute of Science, Bangalore)
Arihant Jain (Indian Institute of Science)
Venkatesh Babu Radhakrishnan (Indian Institute of Science)
Related Events (a corresponding poster, oral, or spotlight)
-
2022 Spotlight: A Closer Look at Smoothness in Domain Adversarial Training »
Tue. Jul 19th 03:25 -- 03:30 PM, Room 318 - 320
More from the Same Authors
-
2021 : Towards Achieving Adversarial Robustness Beyond Perceptual Limits »
Sravanti Addepalli · Samyak Jain · Gaurang Sriramanan · Shivangi Khare · Venkatesh Babu Radhakrishnan
-
2022 : Efficient and Effective Augmentation Strategy for Adversarial Training »
Sravanti Addepalli · Samyak Jain · Venkatesh Babu Radhakrishnan
-
2022 : Towards Domain Adversarial Methods to Mitigate Texture Bias »
Dhruva Kashyap · Sumukh K Aithal · Rakshith C · Natarajan Subramanyam
-
2023 : SelMix: Selective Mixup Fine Tuning for Optimizing Non-Decomposable Metrics »
Shrinivas Ramasubramanian · Harsh Rangwani · Sho Takemori · Kunal Samanta · Yuhei Umeda · Venkatesh Babu Radhakrishnan
-
2022 Poster: Balancing Discriminability and Transferability for Source-Free Domain Adaptation »
Jogendra Nath Kundu · Akshay Kulkarni · Suvaansh Bhambri · Deepesh Mehta · Shreyas Kulkarni · Varun Jampani · Venkatesh Babu Radhakrishnan
-
2022 Spotlight: Balancing Discriminability and Transferability for Source-Free Domain Adaptation »
Jogendra Nath Kundu · Akshay Kulkarni · Suvaansh Bhambri · Deepesh Mehta · Shreyas Kulkarni · Varun Jampani · Venkatesh Babu Radhakrishnan
-
2019 Poster: Zero-Shot Knowledge Distillation in Deep Networks »
Gaurav Kumar Nayak · Konda Reddy Mopuri · Vaisakh Shaj · Venkatesh Babu Radhakrishnan · Anirban Chakraborty
-
2019 Oral: Zero-Shot Knowledge Distillation in Deep Networks »
Gaurav Kumar Nayak · Konda Reddy Mopuri · Vaisakh Shaj · Venkatesh Babu Radhakrishnan · Anirban Chakraborty