The study of adversarial examples has so far largely focused on the ℓp setting. However, neural networks are also vulnerable to other, very natural classes of perturbations such as translations and rotations. Unfortunately, the standard methods that are effective against ℓp vulnerabilities are much less effective in this new regime.
With the goal of building robust classifiers, we thoroughly investigate the vulnerability of neural network-based classifiers to rotations and translations. We find that data augmentation alone offers little benefit, but that ideas from robust optimization, combined with test-time input aggregation, significantly improve robustness.
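Below is a minimal sketch of what such test-time input aggregation could look like: classify several randomly rotated and translated copies of the input and take a majority vote over the predictions. The PyTorch setting, the transformation budget (rotations up to 30 degrees, shifts up to 3 pixels), and the vote count are illustrative assumptions, not the paper's exact configuration.

```python
# Sketch: test-time input aggregation via majority vote over random
# rotations/translations. Assumes `model` is a PyTorch classifier in
# eval mode and `image` is a (C, H, W) float tensor.
import torch
import torchvision.transforms.functional as TF

def aggregated_predict(model, image, votes=10, max_angle=30.0, max_shift=3):
    preds = []
    for _ in range(votes):
        # Sample a random rotation angle and integer pixel shift.
        angle = (torch.rand(1).item() * 2 - 1) * max_angle
        dx = torch.randint(-max_shift, max_shift + 1, (1,)).item()
        dy = torch.randint(-max_shift, max_shift + 1, (1,)).item()
        transformed = TF.affine(image, angle=angle, translate=[dx, dy],
                                scale=1.0, shear=0.0)
        with torch.no_grad():
            logits = model(transformed.unsqueeze(0))
        preds.append(logits.argmax(dim=1).item())
    # Return the most common prediction across the transformed copies.
    return max(set(preds), key=preds.count)
```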
In our exploration we also find that, in contrast to the ℓp case, first-order methods cannot reliably find fooling inputs. This highlights fundamental differences between spatial and ℓp robustness, and suggests that a more comprehensive understanding of robustness in general is needed.
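Because rotations and translations form a low-dimensional parameter space, a natural alternative to gradient-based attacks is exhaustive grid search over candidate transformations. The sketch below illustrates this idea; the grid bounds and step sizes are illustrative assumptions rather than the paper's exact attack parameters.

```python
# Sketch: exhaustive grid-search spatial attack. Instead of following
# gradients, enumerate a grid of rotations and translations and return
# any transformation that changes the predicted label. Assumes the same
# PyTorch `model` / (C, H, W) tensor `image` setup as above.
import torch
import torchvision.transforms.functional as TF

def grid_attack(model, image, label,
                angles=range(-30, 31, 5), shifts=range(-3, 4, 1)):
    for angle in angles:
        for dx in shifts:
            for dy in shifts:
                candidate = TF.affine(image, angle=float(angle),
                                      translate=[dx, dy],
                                      scale=1.0, shear=0.0)
                with torch.no_grad():
                    pred = model(candidate.unsqueeze(0)).argmax(dim=1).item()
                if pred != label:
                    return angle, dx, dy  # fooling transformation found
    return None  # no fooling transformation on this grid
```

Unlike a first-order attack, this search cannot get trapped in a poor local optimum of the loss surface over transformations, which is why exhaustive enumeration is a reasonable strategy in this low-dimensional regime.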
Author Information
Logan Engstrom (MIT)
Brandon Tran (MIT)
Dimitris Tsipras (MIT)
Ludwig Schmidt (UC Berkeley)
Aleksander Madry (MIT)
Related Events (a corresponding poster, oral, or spotlight)
- 2019 Poster: Exploring the Landscape of Spatial Robustness
  Thu Jun 13th 01:30 -- 04:00 AM, Room: Pacific Ballroom
More from the Same Authors
- 2020 Poster: From ImageNet to Image Classification: Contextualizing Progress on Benchmarks
  Dimitris Tsipras · Shibani Santurkar · Logan Engstrom · Andrew Ilyas · Aleksander Madry
- 2020 Poster: Identifying Statistical Bias in Dataset Replication
  Logan Engstrom · Andrew Ilyas · Shibani Santurkar · Dimitris Tsipras · Jacob Steinhardt · Aleksander Madry
- 2019 Workshop: Identifying and Understanding Deep Learning Phenomena
  Hanie Sedghi · Samy Bengio · Kenji Hata · Aleksander Madry · Ari Morcos · Behnam Neyshabur · Maithra Raghu · Ali Rahimi · Ludwig Schmidt · Ying Xiao
- 2018 Poster: On the Limitations of First-Order Approximation in GAN Dynamics
  Jerry Li · Aleksander Madry · John Peebles · Ludwig Schmidt
- 2018 Oral: On the Limitations of First-Order Approximation in GAN Dynamics
  Jerry Li · Aleksander Madry · John Peebles · Ludwig Schmidt
- 2018 Poster: Black-box Adversarial Attacks with Limited Queries and Information
  Andrew Ilyas · Logan Engstrom · Anish Athalye · Jessy Lin
- 2018 Oral: Black-box Adversarial Attacks with Limited Queries and Information
  Andrew Ilyas · Logan Engstrom · Anish Athalye · Jessy Lin
- 2018 Poster: Synthesizing Robust Adversarial Examples
  Anish Athalye · Logan Engstrom · Andrew Ilyas · Kevin Kwok
- 2018 Poster: A Classification-Based Study of Covariate Shift in GAN Distributions
  Shibani Santurkar · Ludwig Schmidt · Aleksander Madry
- 2018 Oral: Synthesizing Robust Adversarial Examples
  Anish Athalye · Logan Engstrom · Andrew Ilyas · Kevin Kwok
- 2018 Oral: A Classification-Based Study of Covariate Shift in GAN Distributions
  Shibani Santurkar · Ludwig Schmidt · Aleksander Madry