In practice, classification models that generalize well are often susceptible to adversarial perturbations. We illustrate a novel estimation-centric explanation of adversarial susceptibility using an overparameterized linear model on lifted Fourier features. We show that the minimum ℓ2-norm interpolator of the training data can be susceptible even to adversaries who can only perturb the low-dimensional inputs, not the high-dimensional lifted features directly. This adversarial vulnerability arises from a phenomenon we term spatial localization: the predictions of the learned model are markedly more sensitive in the vicinity of training points than elsewhere. This sensitivity is crucially a consequence of feature lifting and is reminiscent of the Gibbs and Runge phenomena from signal processing and functional analysis. Despite the adversarial susceptibility, we find that classification using spatially localized features can be "easier", i.e., less sensitive to the strength of the prior, than in independent-feature setups. Our findings are replicated theoretically for a random-feature setup that exhibits double descent and empirically for polynomial features.
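The central object in the abstract is the minimum 2-norm interpolator over lifted Fourier features and the sensitivity of its predictions to small perturbations of the low-dimensional input. The sketch below, in Python with NumPy, sets up that kind of experiment under simplifying assumptions: a 1-D input space, a generic Fourier feature map, toy labels, and an illustrative perturbation radius `eps`. It is not the paper's exact construction, only a minimal probe of the sensitivity near versus away from training points.

```python
import numpy as np

rng = np.random.default_rng(0)

def fourier_features(x, d):
    """Lift scalar inputs x in [0, 1) to d Fourier (cos/sin) features.
    This particular map and scaling are illustrative, not the paper's."""
    k = np.arange(1, d // 2 + 1)
    ang = 2 * np.pi * np.outer(x, k)
    return np.hstack([np.cos(ang), np.sin(ang)]) / np.sqrt(d)

n, d = 10, 200                                   # overparameterized: d >> n
x_train = np.sort(rng.uniform(size=n))           # low-dimensional (1-D) inputs
y_train = np.sign(np.sin(2 * np.pi * x_train))   # toy binary labels

Phi = fourier_features(x_train, d)
# Minimum 2-norm interpolator of the training data: alpha = Phi^+ y,
# i.e. the pseudo-inverse (least-norm) solution of Phi @ alpha = y.
alpha = np.linalg.pinv(Phi) @ y_train

def predict(x):
    return fourier_features(np.atleast_1d(x), d) @ alpha

# Sensitivity probe: compare the prediction change under a small input
# perturbation near a training point versus between training points.
eps = 1e-3
points = {
    "near a training point": x_train[0],
    "between training points": (x_train[0] + x_train[1]) / 2,
}
for name, x0 in points.items():
    delta = abs(predict(x0 + eps) - predict(x0))[0]
    print(f"{name}: |f(x + eps) - f(x)| = {delta:.4f}")
```

Under this kind of setup, one can sweep `eps` or the location of `x0` to visualize how the learned function behaves in the neighborhood of training points compared to elsewhere; the abstract's claim concerns exactly this contrast.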
Author Information
Adhyyan Narang (University of Washington)
Vidya Muthukumar (Georgia Institute of Technology)
Anant Sahai (UC Berkeley)
More from the Same Authors
- 2021: Benign Overfitting in Multiclass Classification: All Roads Lead to Interpolation. Ke Wang · Vidya Muthukumar · Christos Thrampoulidis
- 2021: Estimating Optimal Policy Value in Linear Contextual Bandits beyond Gaussianity. Jonathan Lee · Weihao Kong · Aldo Pacchiano · Vidya Muthukumar · Emma Brunskill
- 2022 Poster: Universal and data-adaptive algorithms for model selection in linear contextual bandits. Vidya Muthukumar · Akshay Krishnamurthy
- 2022 Spotlight: Universal and data-adaptive algorithms for model selection in linear contextual bandits. Vidya Muthukumar · Akshay Krishnamurthy