Poster
Adversarial Examples Are a Natural Consequence of Test Error in Noise
Justin Gilmer · Nicolas Ford · Nicholas Carlini · Ekin Dogus Cubuk
Over the last few years, the phenomenon of adversarial examples (maliciously constructed inputs that fool trained machine learning models) has captured the attention of the research community, especially when the adversary is restricted to small modifications of a correctly handled input. Less surprisingly, image classifiers also lack human-level performance on randomly corrupted images, such as images with additive Gaussian noise. In this paper we provide both empirical and theoretical evidence that these are two manifestations of the same underlying phenomenon, and therefore that the adversarial robustness and corruption robustness research programs are closely related. This suggests that improving adversarial robustness should go hand in hand with improving performance in the presence of more general and realistic image corruptions. It also yields a computationally tractable evaluation metric for defenses to consider: test error on noisy image distributions.
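The proposed metric, test error under additive Gaussian noise, is cheap to compute. Below is a minimal NumPy sketch; it assumes a hypothetical model_predict callable that maps a batch of images with pixel values in [0, 1] to predicted class indices, so the name and interface are illustrative rather than taken from the paper.

    import numpy as np

    def gaussian_noise_error_rate(model_predict, images, labels, sigma=0.1, seed=0):
        """Estimate classifier test error on Gaussian-noised copies of a test set.

        model_predict: hypothetical function mapping a batch of images in
            [0, 1] to predicted class indices (assumed interface).
        images: float array of shape (N, H, W, C), pixel values in [0, 1].
        labels: int array of shape (N,), ground-truth class indices.
        sigma: standard deviation of the additive Gaussian noise.
        """
        rng = np.random.default_rng(seed)
        # Add i.i.d. Gaussian noise to every pixel, then clip back to the
        # valid range so the corrupted inputs remain legal images.
        noisy = np.clip(images + rng.normal(0.0, sigma, size=images.shape), 0.0, 1.0)
        preds = model_predict(noisy)
        # Test error in noise: fraction of noisy inputs the model misclassifies.
        return float(np.mean(preds != labels))

Sweeping sigma over a range of values gives an error-versus-noise curve, which the paper argues tracks a model's susceptibility to small adversarial perturbations.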
Author Information
Justin Gilmer (Google Brain)
Nicolas Ford (Google Brain)
Nicholas Carlini (Google)
Ekin Dogus Cubuk (Google Brain)
Related Events (a corresponding poster, oral, or spotlight)
- 2019 Oral: Adversarial Examples Are a Natural Consequence of Test Error in Noise »
  Wed. Jun 12th, 06:25 -- 06:30 PM, Room 104
More from the Same Authors
- 2023 : Counterfactual Memorization in Neural Language Models »
  Chiyuan Zhang · Daphne Ippolito · Katherine Lee · Matthew Jagielski · Florian Tramer · Nicholas Carlini
- 2023 : Predicting Properties of Amorphous Solids with Graph Network Potentials »
  Muratahan Aykol · Jennifer Wei · Simon Batzner · Amil Merchant · Ekin Dogus Cubuk
- 2023 Poster: Tied-Augment: Controlling Representation Similarity Improves Data Augmentation »
  Emirhan Kurtulus · Zichao Li · Yann Nicolas Dauphin · Ekin Dogus Cubuk
- 2023 Poster: Scaling Vision Transformers to 22 Billion Parameters »
  Mostafa Dehghani · Josip Djolonga · Basil Mustafa · Piotr Padlewski · Jonathan Heek · Justin Gilmer · Andreas Steiner · Mathilde Caron · Robert Geirhos · Ibrahim Alabdulmohsin · Rodolphe Jenatton · Lucas Beyer · Michael Tschannen · Anurag Arnab · Xiao Wang · Carlos Riquelme · Matthias Minderer · Joan Puigcerver · Utku Evci · Manoj Kumar · Sjoerd van Steenkiste · Gamaleldin Elsayed · Aravindh Mahendran · Fisher Yu · Avital Oliver · Fantine Huot · Jasmijn Bastings · Mark Collier · Alexey Gritsenko · Vighnesh N Birodkar · Cristina Vasconcelos · Yi Tay · Thomas Mensink · Alexander Kolesnikov · Filip Pavetic · Dustin Tran · Thomas Kipf · Mario Lucic · Xiaohua Zhai · Daniel Keysers · Jeremiah Harmsen · Neil Houlsby
- 2023 Oral: Scaling Vision Transformers to 22 Billion Parameters »
  Mostafa Dehghani · Josip Djolonga · Basil Mustafa · Piotr Padlewski · Jonathan Heek · Justin Gilmer · Andreas Steiner · Mathilde Caron · Robert Geirhos · Ibrahim Alabdulmohsin · Rodolphe Jenatton · Lucas Beyer · Michael Tschannen · Anurag Arnab · Xiao Wang · Carlos Riquelme · Matthias Minderer · Joan Puigcerver · Utku Evci · Manoj Kumar · Sjoerd van Steenkiste · Gamaleldin Elsayed · Aravindh Mahendran · Fisher Yu · Avital Oliver · Fantine Huot · Jasmijn Bastings · Mark Collier · Alexey Gritsenko · Vighnesh N Birodkar · Cristina Vasconcelos · Yi Tay · Thomas Mensink · Alexander Kolesnikov · Filip Pavetic · Dustin Tran · Thomas Kipf · Mario Lucic · Xiaohua Zhai · Daniel Keysers · Jeremiah Harmsen · Neil Houlsby
- 2021 Poster: Learn2Hop: Learned Optimization on Rough Landscapes »
  Amil Merchant · Luke Metz · Samuel Schoenholz · Ekin Dogus Cubuk
- 2021 Spotlight: Learn2Hop: Learned Optimization on Rough Landscapes »
  Amil Merchant · Luke Metz · Samuel Schoenholz · Ekin Dogus Cubuk
- 2020 : Keynote #5 Justin Gilmer »
  Justin Gilmer
- 2020 Poster: Fundamental Tradeoffs between Invariance and Sensitivity to Adversarial Perturbations »
  Florian Tramer · Jens Behrmann · Nicholas Carlini · Nicolas Papernot · Joern-Henrik Jacobsen
- 2019 Workshop: Uncertainty and Robustness in Deep Learning »
  Sharon Yixuan Li · Dan Hendrycks · Thomas Dietterich · Balaji Lakshminarayanan · Justin Gilmer
- 2019 Poster: Imperceptible, Robust, and Targeted Adversarial Examples for Automatic Speech Recognition »
  Yao Qin · Nicholas Carlini · Garrison Cottrell · Ian Goodfellow · Colin Raffel
- 2019 Oral: Imperceptible, Robust, and Targeted Adversarial Examples for Automatic Speech Recognition »
  Yao Qin · Nicholas Carlini · Garrison Cottrell · Ian Goodfellow · Colin Raffel
- 2018 Poster: Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV) »
  Been Kim · Martin Wattenberg · Justin Gilmer · Carrie Cai · James Wexler · Fernanda Viégas · Rory Sayres
- 2018 Oral: Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV) »
  Been Kim · Martin Wattenberg · Justin Gilmer · Carrie Cai · James Wexler · Fernanda Viégas · Rory Sayres
- 2017 Poster: Neural Message Passing for Quantum Chemistry »
  Justin Gilmer · Samuel Schoenholz · Patrick F Riley · Oriol Vinyals · George Dahl
- 2017 Talk: Neural Message Passing for Quantum Chemistry »
  Justin Gilmer · Samuel Schoenholz · Patrick F Riley · Oriol Vinyals · George Dahl
- 2017 Poster: Input Switched Affine Networks: An RNN Architecture Designed for Interpretability »
  Jakob Foerster · Justin Gilmer · Jan Chorowski · Jascha Sohl-Dickstein · David Sussillo
- 2017 Talk: Input Switched Affine Networks: An RNN Architecture Designed for Interpretability »
  Jakob Foerster · Justin Gilmer · Jan Chorowski · Jascha Sohl-Dickstein · David Sussillo