Adversarial examples from computational constraints
Sebastien Bubeck · Yin Tat Lee · Eric Price · Ilya Razenshteyn

Tue Jun 11 11:40 AM -- 12:00 PM (PDT) @ Grand Ballroom

Why are classifiers in high dimension vulnerable to “adversarial” perturbations? We show that this is likely not due to information-theoretic limitations, but may instead be due to computational constraints.

First we prove that, for a broad set of classification tasks, the mere existence of a robust classifier implies that it can be found by a possibly exponential-time algorithm with relatively few training examples. Then we give two particular classification tasks where learning a robust classifier is computationally intractable. More precisely, we construct two binary classification tasks in high-dimensional space which are (i) information-theoretically easy to learn robustly for large perturbations, (ii) efficiently learnable (non-robustly) by a simple linear separator, (iii) yet not efficiently robustly learnable, even for small perturbations. Specifically, for the first task hardness holds for any efficient algorithm in the statistical query (SQ) model, while for the second task we rule out any efficient algorithm under a cryptographic assumption. These examples give an exponential separation between classical learning and robust learning in the statistical query model or under a cryptographic assumption. This suggests that adversarial examples may be an unavoidable byproduct of computational limitations of learning algorithms.

Author Information

Sebastien Bubeck (Microsoft Research)
Yin Tat Lee (UW)
Eric Price (UT-Austin)
Ilya Razenshteyn (Microsoft Research Redmond)
