
Classification and Adversarial Examples in an Overparameterized Linear Model: A Signal-Processing Perspective
Adhyyan Narang · Vidya Muthukumar · Anant Sahai

In practice, classification models that generalize well are often susceptible to adversarial perturbations. We provide a novel estimation-centric explanation of adversarial susceptibility using an overparameterized linear model on lifted Fourier features. We show that the minimum 2-norm interpolator of the training data can be susceptible even to adversaries who can perturb only the low-dimensional inputs, not the high-dimensional lifted features directly. This vulnerability arises from a phenomenon we term spatial localization: the predictions of the learned model are markedly more sensitive in the vicinity of training points than elsewhere. This sensitivity is crucially a consequence of feature lifting and can have consequences reminiscent of the Gibbs and Runge phenomena from signal processing and functional analysis. Despite the adversarial susceptibility, we find that classification using spatially localized features can be “easier,” i.e., less sensitive to the strength of the prior, than in independent-feature setups. Our findings are replicated theoretically for a random-feature setup that exhibits double-descent behavior, and empirically for polynomial features.
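The spatial-localization effect described in the abstract can be sketched in a few lines. The following is a hypothetical illustration (not the authors' code): the training points, label rule, and feature count are arbitrary choices made here. It fits a minimum 2-norm interpolator on lifted Fourier features via the pseudoinverse and compares the prediction's sensitivity to a small input perturbation near a training point versus between training points.

```python
import numpy as np

def lift(x, K):
    """Lift scalar inputs to Fourier features [1, cos(x), sin(x), ..., cos(Kx), sin(Kx)]."""
    x = np.atleast_1d(x)
    feats = [np.ones_like(x)]
    for k in range(1, K + 1):
        feats.append(np.cos(k * x))
        feats.append(np.sin(k * x))
    return np.stack(feats, axis=-1)  # shape (len(x), 2K + 1)

n, K = 8, 64                          # n training points, 2K + 1 >> n features
x_train = np.linspace(-2.5, 2.5, n)   # low-dimensional (scalar) inputs
y_train = np.sign(np.sin(x_train))    # +/-1 labels from a smooth rule (illustrative)
Phi = lift(x_train, K)

# The Moore-Penrose pseudoinverse yields the least-norm coefficient vector
# among all solutions that exactly interpolate the training labels.
alpha = np.linalg.pinv(Phi) @ y_train

def f(x):
    return lift(x, K) @ alpha

# Perturb the *input* (not the lifted features) by a small eps and compare
# the change in the prediction near a training point vs. between points.
eps = 0.05
x_near = x_train[0]
x_far = 0.5 * (x_train[0] + x_train[1])   # midpoint between training points
sens_near = abs(f(x_near + eps) - f(x_near))[0]
sens_far = abs(f(x_far + eps) - f(x_far))[0]
print(f"sensitivity near training point: {sens_near:.3f}, between points: {sens_far:.3f}")
```

The learned function spikes at the training points (a Dirichlet-kernel-like profile), so a small input shift changes the prediction far more there than in the gaps between samples, which is the sensitivity asymmetry the abstract attributes to feature lifting.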

Author Information

Adhyyan Narang (University of Washington)
Vidya Muthukumar (Georgia Institute of Technology)
Anant Sahai (UC Berkeley)
