The reliability of deep learning algorithms is fundamentally challenged by the existence of adversarial examples: incorrectly classified inputs that lie extremely close to a correctly classified input. We explore the properties of adversarial examples for deep neural networks with random weights and biases, and prove that for any p ≥ 1, the ℓ^p distance of any given input from the classification boundary scales as the ℓ^p norm of the input divided by the square root of its dimension. The results are based on the recently proved equivalence between Gaussian processes and deep neural networks in the limit of infinite width of the hidden layers, and are validated with experiments on both random deep neural networks and deep neural networks trained on the MNIST and CIFAR10 datasets. The results constitute a fundamental advance in the theoretical understanding of adversarial examples, and open the way to a thorough theoretical characterization of the relation between network architecture and robustness to adversarial perturbations.
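The claimed scaling can be checked numerically. The sketch below is not the paper's method; it uses a one-hidden-layer ReLU network with He-scaled Gaussian weights (the width, the initialization scale, and the first-order estimate |f(x)| / ‖∇f(x)‖ of the ℓ² distance to the classification boundary are all assumptions made for illustration). For unit-norm random inputs, the estimated distance should shrink roughly like 1/√d, so the rescaled quantity √d · distance should stay roughly constant as d grows.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_relu_net(d, width=512):
    """One-hidden-layer ReLU net with He-scaled Gaussian weights; sign(f) is the class."""
    W1 = rng.normal(0.0, np.sqrt(2.0 / d), (width, d))
    w2 = rng.normal(0.0, np.sqrt(2.0 / width), width)
    f = lambda x: w2 @ np.maximum(W1 @ x, 0.0)
    # Gradient of f: the ReLU mask selects the active hidden units.
    grad = lambda x: (w2 * (W1 @ x > 0)) @ W1
    return f, grad

scaled = {}  # d -> sqrt(d) * mean first-order boundary-distance estimate
for d in (64, 256, 1024):
    dists = []
    for _ in range(50):
        f, grad = random_relu_net(d)
        x = rng.normal(0.0, 1.0, d)
        x /= np.linalg.norm(x)  # unit-norm random input
        # First-order estimate of the l2 distance to the decision boundary f = 0.
        dists.append(abs(f(x)) / np.linalg.norm(grad(x)))
    scaled[d] = np.sqrt(d) * np.mean(dists)
    print(d, np.mean(dists), scaled[d])
```

If the ‖x‖₂/√d scaling holds, the last printed column varies only mildly across the three dimensions, while the raw mean distance drops by roughly a factor of four each time d is quadrupled.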
Author Information
Giacomo De Palma (Scuola Normale Superiore)
I am a tenure-track assistant professor (RTDB) in mathematical physics at Scuola Normale Superiore. I was previously a postdoc at MIT and a Marie Curie fellow at the University of Copenhagen. My research covers all areas of quantum information theory, and my main result is the proof of an entropic inequality that had been a long-standing, ten-year-old conjecture in quantum communication theory. I currently work in both classical and quantum machine learning, with the goals of improving the theoretical understanding of the properties of deep neural networks and of finding new ways for quantum computers to help in machine learning tasks; toward the latter goal I am applying ideas from the theory of optimal mass transport.
Bobak T Kiani (MIT)
Seth Lloyd (MIT)
Related Events (a corresponding poster, oral, or spotlight)

2021 Spotlight: Adversarial Robustness Guarantees for Random Deep Neural Networks »
Thu. Jul 22nd, 12:20 - 12:25 PM
More from the Same Authors

2023 Oral: Equivariant Polynomials for Graph Neural Networks »
Omri Puny · Derek Lim · Bobak T Kiani · Haggai Maron · Yaron Lipman 
2023 Poster: Equivariant Polynomials for Graph Neural Networks »
Omri Puny · Derek Lim · Bobak T Kiani · Haggai Maron · Yaron Lipman 
2023 Poster: The SSL Interplay: Augmentations, Inductive Bias, and Generalization »
Vivien Cabannnes · Bobak T Kiani · Randall Balestriero · Yann LeCun · Alberto Bietti 
2022 Poster: Implicit Bias of Linear Equivariant Networks »
Hannah Lawrence · Bobak T Kiani · Kristian Georgiev · Andrew Dienes 
2022 Spotlight: Implicit Bias of Linear Equivariant Networks »
Hannah Lawrence · Bobak T Kiani · Kristian Georgiev · Andrew Dienes