

Poster in Workshop: Over-parameterization: Pitfalls and Opportunities

Classification and Adversarial Examples in an Overparameterized Linear Model: A Signal-Processing Perspective

Adhyyan Narang · Vidya Muthukumar · Anant Sahai


Abstract:

In practice, classification models that generalize well are often susceptible to adversarial perturbations. We illustrate a novel, estimation-centric explanation of adversarial susceptibility using an overparameterized linear model on lifted Fourier features. We show that the minimum ℓ2-norm interpolator of the training data can be susceptible even to adversaries who can only perturb the low-dimensional inputs, not the high-dimensional lifted features directly. The adversarial vulnerability arises from a phenomenon we term spatial localization: the predictions of the learned model are markedly more sensitive in the vicinity of training points than elsewhere. This sensitivity is crucially a consequence of feature lifting and can have consequences reminiscent of the Gibbs and Runge phenomena from signal processing and functional analysis. Despite the adversarial susceptibility, we find that classification using spatially localized features can be “easier”, i.e., less sensitive to the strength of the prior, than in independent-feature setups. Our findings are replicated theoretically for a random-feature setup that exhibits double-descent behavior, and empirically for polynomial features.
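The sketch below is a minimal, illustrative rendering of the setup described in the abstract, not the paper's exact construction: it fits a minimum ℓ2-norm interpolator on Fourier-lifted scalar inputs and probes how much more sensitive the learned function is near training points than away from them. The sample size, number of frequencies, labels, and perturbation scale are all assumed for illustration.

```python
import numpy as np

def fourier_lift(x, num_freqs):
    """Lift scalar inputs in [0, 2*pi) to a high-dimensional Fourier feature vector."""
    x = np.atleast_1d(x)
    feats = [np.ones_like(x)]
    for k in range(1, num_freqs + 1):
        feats.append(np.cos(k * x))
        feats.append(np.sin(k * x))
    return np.stack(feats, axis=-1)  # shape: (len(x), 2*num_freqs + 1)

# Illustrative overparameterized regime: far more features than training points.
rng = np.random.default_rng(0)
n_train, num_freqs = 8, 50                 # 2*50 + 1 = 101 features >> 8 samples
x_train = np.sort(rng.uniform(0, 2 * np.pi, n_train))
y_train = np.sign(np.sin(x_train))         # simple +/-1 labels (assumed)

Phi = fourier_lift(x_train, num_freqs)
# Minimum l2-norm interpolator of the training data: alpha = pinv(Phi) @ y.
alpha = np.linalg.pinv(Phi) @ y_train

def predict(x):
    """Prediction of the learned linear model on lifted features."""
    return fourier_lift(x, num_freqs) @ alpha

# Probe sensitivity to small input perturbations (the low-dimensional attack surface),
# comparing points just next to training inputs with points far from them.
eps = 1e-3
near = x_train + 5 * eps                   # in the vicinity of training points
far = x_train + np.pi / 2                  # roughly between training points
sens_near = np.abs(predict(near + eps) - predict(near)) / eps
sens_far = np.abs(predict(far + eps) - predict(far)) / eps
print("mean |df/dx| near training points :", sens_near.mean())
print("mean |df/dx| away from training pts:", sens_far.mean())
```

Under these assumed settings, the finite-difference sensitivity is typically much larger next to the training points, which is the kind of spatial localization the abstract attributes to feature lifting.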