
Understanding the Impact of Adversarial Robustness on Accuracy Disparity
Yuzheng Hu · Fan Wu · Hongyang Zhang · Han Zhao

Thu Jul 27 04:30 PM -- 06:00 PM (PDT) @ Exhibit Hall 1 #519

While it has long been empirically observed that adversarial robustness may be at odds with standard accuracy and may have further disparate impacts on different classes, it remains an open question to what extent such observations hold and what role class imbalance plays. In this paper, we attempt to understand this question of accuracy disparity by taking a closer look at linear classifiers under a Gaussian mixture model. We decompose the impact of adversarial robustness into two parts: an inherent effect that degrades the standard accuracy on all classes due to the robustness constraint, and another caused by the class imbalance ratio, which increases the accuracy disparity compared to standard training. Furthermore, we show that such effects extend beyond the Gaussian mixture model, by generalizing our data model to the general family of stable distributions. More specifically, we demonstrate that while the constraint of adversarial robustness consistently degrades the standard accuracy in the balanced class setting, the class imbalance ratio plays a fundamentally different role in accuracy disparity compared to the Gaussian case, due to the heavy tail of the stable distribution. We additionally perform experiments on both synthetic and real-world datasets to corroborate our theoretical findings. Our empirical results also suggest that the implications may extend to nonlinear models over real-world datasets. Our code is publicly available on GitHub at https://github.com/Accuracy-Disparity/AT-on-AD.
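The setting the abstract describes can be illustrated with a small simulation. The sketch below is an assumption on my part, not the paper's exact construction: it draws a two-class Gaussian mixture with class means at ±μ and an imbalance ratio `rho`, fits a simple least-squares linear classifier (with an intercept), and reports the per-class accuracies and their gap, i.e. the accuracy disparity that the paper analyzes. The names `rho`, `mu`, and the least-squares choice are illustrative stand-ins.

```python
import numpy as np

# Hypothetical sketch of the data model: a two-class Gaussian mixture
# in d dimensions, class means at +mu and -mu, with a fraction `rho`
# of the samples in the majority (+1) class.
rng = np.random.default_rng(0)
d, n, rho = 10, 20000, 0.8
mu = np.ones(d) / np.sqrt(d)

n_pos = int(rho * n)
n_neg = n - n_pos
X = np.vstack([
    rng.normal(loc=mu, scale=1.0, size=(n_pos, d)),
    rng.normal(loc=-mu, scale=1.0, size=(n_neg, d)),
])
y = np.concatenate([np.ones(n_pos), -np.ones(n_neg)])

# A least-squares linear classifier with an intercept column -- a
# simple stand-in for the linear models studied in the paper.
Xb = np.hstack([X, np.ones((n, 1))])
w, *_ = np.linalg.lstsq(Xb, y, rcond=None)
pred = np.sign(Xb @ w)

acc_pos = (pred[y == 1] == 1).mean()   # majority-class accuracy
acc_neg = (pred[y == -1] == -1).mean() # minority-class accuracy
print(f"majority acc = {acc_pos:.3f}")
print(f"minority acc = {acc_neg:.3f}")
print(f"accuracy disparity = {acc_pos - acc_neg:.3f}")
```

With class imbalance (`rho = 0.8`), the fitted intercept shifts the decision boundary toward the minority class, so the majority-class accuracy exceeds the minority-class accuracy — the standard-training accuracy disparity that the paper contrasts with its adversarially robust counterpart.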

Author Information

Yuzheng Hu (Peking University)
Fan Wu (University of Illinois Urbana-Champaign)
Hongyang Zhang (University of Waterloo)
Han Zhao (University of Illinois Urbana-Champaign)
