
Adversarially Robust Models may not Transfer Better: Sufficient Conditions for Domain Transferability from the View of Regularization
Xiaojun Xu · Yibo Zhang · Evelyn Ma · Hyun Ho Son · Sanmi Koyejo · Bo Li

Wed Jul 20 11:40 AM -- 11:45 AM (PDT) @ Room 309

Machine learning (ML) robustness and domain generalization are fundamentally correlated: they essentially concern data distribution shifts under adversarial and natural settings, respectively. On one hand, recent studies show that more robust (adversarially trained) models are more generalizable. On the other hand, there is a lack of theoretical understanding of their fundamental connections. In this paper, we explore the relationship between regularization and domain transferability, considering different factors such as norm regularization and data augmentations (DA). We propose a general theoretical framework proving that factors involving model function class regularization are sufficient conditions for relative domain transferability. Our analysis implies that "robustness" is neither necessary nor sufficient for transferability; rather, regularization is a more fundamental perspective for understanding domain transferability. We then discuss popular DA protocols (including adversarial training) and show when they can be viewed as function class regularization under certain conditions and therefore improve generalization. We conduct extensive experiments to verify our theoretical findings and show several counterexamples where robustness and generalization are negatively correlated on different datasets.
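The view of adversarial training as a form of function class regularization has a well-known closed form in the linear case (a standard observation, not a result specific to this paper): for a linear model under an L-infinity attack of radius eps, the worst-case logistic loss equals the clean loss with the classification margin shrunk by eps times the L1 norm of the weights. A minimal numerical sketch of this equivalence (the setup and variable names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=5)   # linear model weights
x = rng.normal(size=5)   # one input example
y = 1.0                  # label in {-1, +1}
eps = 0.1                # L_inf attack radius

def logistic_loss(margin):
    return np.log1p(np.exp(-margin))

# Worst-case L_inf perturbation for a linear model: delta = -eps * y * sign(w)
delta = -eps * y * np.sign(w)
adv_loss = logistic_loss(y * w @ (x + delta))

# Equivalent "regularized" clean loss: margin reduced by eps * ||w||_1
reg_loss = logistic_loss(y * w @ x - eps * np.linalg.norm(w, 1))

print(np.isclose(adv_loss, reg_loss))  # the two losses coincide exactly
```

In this linear setting the adversarial objective is literally the clean objective plus an eps-scaled L1 penalty on the margin, which is the simplest instance of the regularization perspective the abstract describes; the paper's framework concerns more general function classes.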

Author Information

Xiaojun Xu (University of Illinois at Urbana-Champaign)
Yibo Zhang (University of Illinois at Urbana-Champaign)
Evelyn Ma (UIUC)
Hyun Ho Son (University of Illinois Urbana-Champaign)
Sanmi Koyejo (Google / Illinois)

Sanmi (Oluwasanmi) Koyejo is an Assistant Professor in the Department of Computer Science at the University of Illinois at Urbana-Champaign. Koyejo's research interests are in the development and analysis of probabilistic and statistical machine learning techniques motivated by, and applied to, various modern big data problems. He is particularly interested in the analysis of large-scale neuroimaging data. Koyejo completed his Ph.D. in Electrical Engineering at the University of Texas at Austin, advised by Joydeep Ghosh, and completed postdoctoral research at Stanford University focused on developing machine learning techniques for neuroimaging data. His postdoctoral research was primarily with Russell A. Poldrack and Pradeep Ravikumar. Koyejo has received several awards, including the outstanding NCE/ECE student award, a best student paper award from the Conference on Uncertainty in Artificial Intelligence (UAI), and a trainee award from the Organization for Human Brain Mapping (OHBM).

Bo Li (UIUC)
