

Poster

First-Order Adversarial Vulnerability of Neural Networks and Input Dimension

Carl-Johann Simon-Gabriel · Yann Ollivier · Leon Bottou · Bernhard Schölkopf · David Lopez-Paz

Pacific Ballroom #62

Keywords: [ Optimization ] [ Computer Vision ] [ Architectures ] [ Adversarial Examples ]


Abstract:

Over the past few years, neural networks have been shown to be vulnerable to adversarial images: targeted but imperceptible image perturbations lead to drastically different predictions. We show that adversarial vulnerability increases with the gradients of the training objective when viewed as a function of the inputs. Surprisingly, vulnerability does not depend on network topology: for many standard network architectures, we prove that at initialization, the L1-norm of these gradients grows as the square root of the input dimension, leaving the networks increasingly vulnerable with growing image size. We empirically show that this dimension dependence persists after either usual or robust training, but is attenuated by stronger regularization.
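The square-root growth of the input-gradient L1-norm is easy to probe numerically. Below is a minimal sketch, assuming a simple fully-connected ReLU network with PyTorch's default initialization and a standard cross-entropy loss; it is not the paper's exact experimental setup, and the network width, depth, and batch size are illustrative choices only.

```python
import torch
import torch.nn as nn

def input_grad_l1_norm(d, width=256, n_samples=32, n_classes=10):
    """Average L1-norm of the loss gradient w.r.t. the input, for a
    freshly initialized fully-connected ReLU network with input dimension d."""
    net = nn.Sequential(
        nn.Linear(d, width), nn.ReLU(),
        nn.Linear(width, width), nn.ReLU(),
        nn.Linear(width, n_classes),
    )
    x = torch.randn(n_samples, d, requires_grad=True)   # random "images"
    y = torch.randint(0, n_classes, (n_samples,))       # random labels
    loss = nn.CrossEntropyLoss()(net(x), y)
    loss.backward()
    # L1-norm of the gradient w.r.t. the input, averaged over the batch
    return x.grad.abs().sum(dim=1).mean().item()

# If the L1-norm scales like sqrt(d), the printed ratio should stay roughly constant.
for d in [64, 256, 1024, 4096]:
    g = input_grad_l1_norm(d)
    print(f"d={d:5d}  ||grad||_1={g:.3f}  ratio to sqrt(d)={g / d**0.5:.3f}")
```

Under standard initializations that keep layer outputs of constant scale, each coordinate of the input gradient is of order 1/sqrt(d), so summing d such coordinates yields an L1-norm of order sqrt(d), which is the trend the loop above is meant to exhibit.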
