Oral
Exploring the Landscape of Spatial Robustness
Logan Engstrom · Brandon Tran · Dimitris Tsipras · Ludwig Schmidt · Aleksander Madry

Wed Jun 12 04:20 PM -- 04:25 PM (PDT) @ Seaside Ballroom

The study of adversarial examples has so far largely focused on the ℓp setting. However, neural networks also turn out to be vulnerable to other, very natural classes of perturbations, such as translations and rotations. Unfortunately, the standard methods that are effective at remedying ℓp vulnerabilities are not as effective in this new regime.
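
To make the perturbation class concrete, here is a minimal sketch (not the authors' code) of applying a rotation combined with a translation to a batch of images as a differentiable affine warp in PyTorch; the helper name rotate_translate and the parameter values are illustrative assumptions.

import math
import torch
import torch.nn.functional as F

def rotate_translate(x: torch.Tensor, angle: float, dx: float, dy: float) -> torch.Tensor:
    """Rotate an NCHW batch by `angle` degrees and shift it by (dx, dy) pixels."""
    n, _, h, w = x.shape
    theta = math.radians(angle)
    cos, sin = math.cos(theta), math.sin(theta)
    # affine_grid expects translations in normalized [-1, 1] image coordinates.
    mat = torch.tensor([[cos, -sin, 2.0 * dx / w],
                        [sin,  cos, 2.0 * dy / h]], dtype=x.dtype, device=x.device)
    grid = F.affine_grid(mat.unsqueeze(0).expand(n, -1, -1), list(x.shape), align_corners=False)
    return F.grid_sample(x, grid, align_corners=False)

# Example: a 30-degree rotation combined with a 3-pixel horizontal shift.
images = torch.rand(8, 3, 32, 32)
perturbed = rotate_translate(images, angle=30.0, dx=3, dy=0)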

With the goal of improving classifier robustness, we thoroughly investigate the vulnerability of neural network-based classifiers to rotations and translations. We find that while data augmentation on its own helps very little, ideas from robust optimization and test-time input aggregation can significantly improve robustness.
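
As a rough illustration of the two ideas named above, the sketch below pairs a worst-of-k style training batch (a robust-optimization surrogate that trains on the most damaging of k random rotations/translations) with majority-vote aggregation over random transforms at test time. It reuses the hypothetical rotate_translate helper from the previous sketch; the function names, transform ranges, and the 10-class assumption are ours, not the paper's.

import random
import torch
import torch.nn.functional as F

def worst_of_k_batch(model, x, y, k=10, max_angle=30.0, max_shift=3):
    """Among k random rotations/translations, return the batch with the highest loss."""
    worst_x, worst_loss = x, None
    for _ in range(k):
        angle = random.uniform(-max_angle, max_angle)
        dx, dy = random.randint(-max_shift, max_shift), random.randint(-max_shift, max_shift)
        xt = rotate_translate(x, angle, dx, dy)  # hypothetical helper from the sketch above
        with torch.no_grad():
            loss = F.cross_entropy(model(xt), y)
        if worst_loss is None or loss > worst_loss:
            worst_x, worst_loss = xt, loss
    return worst_x  # train on this batch instead of the clean one

def aggregated_predict(model, x, votes=10, num_classes=10, max_angle=30.0, max_shift=3):
    """Majority vote over predictions on randomly transformed copies of x."""
    counts = torch.zeros(x.size(0), num_classes, device=x.device)
    for _ in range(votes):
        angle = random.uniform(-max_angle, max_angle)
        dx, dy = random.randint(-max_shift, max_shift), random.randint(-max_shift, max_shift)
        with torch.no_grad():
            preds = model(rotate_translate(x, angle, dx, dy)).argmax(dim=1)
        counts += F.one_hot(preds, num_classes).float()
    return counts.argmax(dim=1)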

In our exploration, we find that, in contrast to the ℓp case, first-order methods cannot reliably find fooling inputs. This highlights fundamental differences between spatial robustness and ℓp robustness, and suggests that we need a more comprehensive understanding of robustness in general.
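
One attack consistent with this finding is exhaustive search over a discretized grid of rotations and translations, which needs no gradients at all. Below is a minimal sketch of such a grid search, again built on the hypothetical rotate_translate helper defined earlier; the grid ranges are illustrative only.

import itertools
import torch

def grid_search_attack(model, x, y, angles=range(-30, 31, 3), shifts=range(-3, 4)):
    """Scan a discrete grid of rotations/translations for a transform of x that is misclassified."""
    for angle, dx, dy in itertools.product(angles, shifts, shifts):
        xt = rotate_translate(x, float(angle), dx, dy)
        with torch.no_grad():
            pred = model(xt).argmax(dim=1)
        if (pred != y).any():
            return xt  # a fooling rotation/translation was found
    return None  # no grid point fooled the model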

Author Information

Logan Engstrom (MIT)
Brandon Tran (MIT)
Dimitris Tsipras (MIT)
Ludwig Schmidt (UC Berkeley)
Aleksander Madry (MIT)
