Poster
First-Order Adversarial Vulnerability of Neural Networks and Input Dimension
Carl-Johann Simon-Gabriel · Yann Ollivier · Léon Bottou · Bernhard Schölkopf · David Lopez-Paz

Tue Jun 11 06:30 PM -- 09:00 PM (PDT) @ Pacific Ballroom #62

Over the past few years, neural networks have been shown to be vulnerable to adversarial images: targeted but imperceptible image perturbations lead to drastically different predictions. We show that adversarial vulnerability increases with the gradients of the training objective when viewed as a function of the inputs. Surprisingly, vulnerability does not depend on network topology: for many standard network architectures, we prove that at initialization, the L1-norm of these gradients grows as the square root of the input dimension, leaving the networks increasingly vulnerable with growing image size. We empirically show that this dimension dependence persists after either standard or robust training, but is attenuated by higher regularization.
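The square-root growth stated above can be checked numerically. The sketch below is illustrative only and not the authors' code: the fully connected architecture, its width and depth, and the cross-entropy loss are assumptions. It estimates the mean L1-norm of the input gradient of a randomly initialized network for several input dimensions d; if the norm grows as the square root of d, quadrupling d should roughly double the measured value.

import torch
import torch.nn as nn
import torch.nn.functional as F

def mean_input_grad_l1(d, width=200, n_samples=100):
    # Two-hidden-layer ReLU network at (PyTorch default) initialization.
    net = nn.Sequential(
        nn.Linear(d, width), nn.ReLU(),
        nn.Linear(width, width), nn.ReLU(),
        nn.Linear(width, 10),
    )
    x = torch.randn(n_samples, d, requires_grad=True)
    y = torch.randint(0, 10, (n_samples,))
    # Summed loss so that x.grad holds the per-sample gradient dL_i/dx_i.
    loss = F.cross_entropy(net(x), y, reduction="sum")
    loss.backward()
    # Average L1-norm of the input gradient over the batch.
    return x.grad.abs().sum(dim=1).mean().item()

for d in [64, 256, 1024, 4096]:
    print(f"d = {d:5d}   mean ||dL/dx||_1 = {mean_input_grad_l1(d):.4f}")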

Author Information

Carl-Johann Simon-Gabriel (Max-Planck-Institute for Intelligent Systems)
Yann Ollivier (Facebook Artificial Intelligence Research)
Léon Bottou (Facebook)
Bernhard Schölkopf (MPI for Intelligent Systems Tübingen, Germany)

Bernhard Schölkopf received degrees in mathematics (London) and physics (Tübingen), and a doctorate in computer science from the Technical University Berlin. He has held research positions at AT&T Bell Labs, at GMD FIRST in Berlin, at the Australian National University in Canberra, and at Microsoft Research Cambridge (UK). In 2001, he was appointed scientific member of the Max Planck Society and director at the MPI for Biological Cybernetics; in 2010 he founded the Max Planck Institute for Intelligent Systems. For further information, see www.kyb.tuebingen.mpg.de/~bs.

David Lopez-Paz (Facebook AI Research)
