

Invited talk in Workshop: Over-parameterization: Pitfalls and Opportunities

Adversarial Examples in Random Deep Networks

Peter Bartlett


Abstract:

Because the phenomenon of adversarial examples in deep networks poses a serious barrier to the reliable and robust application of this methodology, there has been considerable interest in why it arises. We consider ReLU networks of constant depth with independent Gaussian parameters, and show that small perturbations of input vectors lead to large changes of outputs. Building on insights of Daniely and Schacham (2020) and of Bubeck et al. (2021), we show that adversarial examples arise in these networks because the functions they compute are very close to linear. The main result holds for constant-depth networks, but we also show that some constraint on depth is necessary for a result of this kind: there are suitably deep networks that, with constant probability, compute a function close to constant. Combined with results characterizing benign overfitting in linear regression, this suggests two potential mechanisms behind adversarial examples in overparameterized settings: one arising from label noise and the other from random initialization. Joint work with Sébastien Bubeck and Yeshwanth Cherapanamjeri.
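
The following is a minimal numerical sketch of the near-linearity phenomenon the abstract describes, not the construction from the talk: a constant-depth ReLU network with independent Gaussian weights, where a perturbation of norm about ||x||/sqrt(d) taken along the input gradient moves the scalar output roughly sqrt(d) times more than a random perturbation of the same norm. The width, depth, and 1/sqrt(fan-in) scaling below are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
d, width, depth = 500, 500, 3            # input dim, hidden width, number of hidden layers

# Independent Gaussian parameters, scaled so pre-activations stay O(1).
Ws = [rng.normal(size=(width, d)) / np.sqrt(d)]
Ws += [rng.normal(size=(width, width)) / np.sqrt(width) for _ in range(depth - 1)]
v = rng.normal(size=width) / np.sqrt(width)    # final linear layer -> scalar output

def forward(x):
    """Return the scalar output f(x) and the input gradient of the ReLU network."""
    h, jac = x, np.eye(d)
    for W in Ws:
        pre = W @ h
        mask = (pre > 0).astype(float)
        h = mask * pre                          # ReLU
        jac = (mask[:, None] * W) @ jac         # chain rule through the ReLU layer
    return v @ h, v @ jac                       # f(x), grad f(x)

x = rng.normal(size=d)
f0, g = forward(x)

# Perturbation of norm ||x|| / sqrt(d), aligned against the sign of the output.
eps = np.linalg.norm(x) / np.sqrt(d)
adv = -np.sign(f0) * eps * g / np.linalg.norm(g)
rand = eps * rng.normal(size=d) / np.sqrt(d)    # random direction of (approximately) the same norm

print("f(x)          =", f0)
print("f(x + adv)    =", forward(x + adv)[0])   # typically moves by an amount comparable to |f(x)|
print("f(x + random) =", forward(x + rand)[0])  # barely changes

Because the computed function is close to linear, the gradient direction concentrates almost all of the function's local variation, which is why the gradient-aligned step is so much more effective than the random one of equal norm.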