

Poster

Robustness and Accuracy Could Be Reconcilable by (Proper) Definition

Tianyu Pang · Min Lin · Xiao Yang · Jun Zhu · Shuicheng Yan

Hall E #905

Keywords: [ DL: Robustness ] [ SA: Trustworthy Machine Learning ]


Abstract:

The trade-off between robustness and accuracy has been widely studied in the adversarial literature. Although still controversial, the prevailing view is that this trade-off is inherent, either empirically or theoretically. We trace the origin of this trade-off in adversarial training and find that it may stem from an improperly defined robust error, which imposes an inductive bias of local invariance, an overcorrection towards smoothness. Given this, we advocate employing local equivariance to describe the ideal behavior of a robust model, leading to a self-consistent robust error named SCORE. By definition, SCORE facilitates the reconciliation between robustness and accuracy, while still handling worst-case uncertainty via robust optimization. By simply substituting the KL divergence with variants of distance metrics, SCORE can be efficiently minimized. Empirically, our models achieve top-ranked performance on RobustBench under AutoAttack. Moreover, SCORE provides instructive insights for explaining the overfitting phenomenon and the semantic input gradients observed on robust models.
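To make the "substituting KL divergence with variants of distance metrics" step concrete, below is a minimal PyTorch-style sketch of a TRADES-like training objective in which the inner KL term is swapped for an ℓ2 distance between probability vectors. This is an illustrative sketch only, not the authors' reference implementation: the function name `score_style_loss`, the hyperparameters (`beta`, `eps`, `step_size`, `steps`), the use of the clean prediction as the reference distribution, and the specific ℓ2 metric are assumptions made for illustration.

```python
import torch
import torch.nn.functional as F

def score_style_loss(model, x, y, beta=6.0, eps=8/255, step_size=2/255, steps=10):
    """Sketch of a TRADES-like objective where the inner KL divergence is
    replaced by an L2 distance between probability vectors, illustrating the
    distance-metric substitution described in the abstract."""
    model.eval()
    p_clean = F.softmax(model(x), dim=1).detach()  # reference distribution (assumed choice)

    # Inner maximization: find a perturbation that maximizes the distance term.
    delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        x_adv = torch.clamp(x + delta, 0.0, 1.0)
        p_adv = F.softmax(model(x_adv), dim=1)
        dist = torch.norm(p_adv - p_clean, p=2, dim=1).mean()  # L2 distance instead of KL
        grad, = torch.autograd.grad(dist, delta)
        delta = (delta + step_size * grad.sign()).clamp(-eps, eps).detach().requires_grad_(True)

    model.train()
    # Outer minimization: clean cross-entropy plus the worst-case distance term.
    logits_clean = model(x)
    p_ref = F.softmax(logits_clean, dim=1)
    x_adv = torch.clamp(x + delta.detach(), 0.0, 1.0)
    p_adv = F.softmax(model(x_adv), dim=1)
    robust_term = torch.norm(p_adv - p_ref, p=2, dim=1).mean()
    return F.cross_entropy(logits_clean, y) + beta * robust_term
```

The intent, as described in the abstract, is that a proper distance metric in the robust term lets the penalty vanish when the model already matches its reference, so the robustness objective no longer pulls the solution away from the accurate one; the exact reference distribution and metric used by the paper should be taken from the authors' released code rather than this sketch.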
