Adaptive defenses, which optimize at test time, promise to improve adversarial robustness. We categorize such adaptive test-time defenses, explain their potential benefits and drawbacks, and evaluate a representative variety of the latest adaptive defenses for image classification. Unfortunately, none significantly improve upon static defenses when subjected to our careful case study evaluation. Some even weaken the underlying static model while simultaneously increasing inference computation. While these results are disappointing, we still believe that adaptive test-time defenses are a promising avenue of research and, as such, we provide recommendations for their thorough evaluation. We extend the checklist of Carlini et al. (2019) by providing concrete steps specific to adaptive defenses.
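To make the notion of an adaptive test-time defense concrete, here is a minimal illustrative sketch (not any specific defense evaluated in the paper): before classifying, the input is refined by a few gradient steps that reduce the prediction entropy of a fixed model. The linear model `W`, the `purify` routine, and all hyperparameters are assumptions chosen only for illustration.

```python
import numpy as np

# Illustrative adaptive test-time defense (assumed setup, not the paper's
# method): refine the input x by gradient steps that minimize the
# prediction entropy of a fixed classifier, then classify the refined x.

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 5))  # hypothetical fixed linear classifier: 5 features -> 3 classes

def softmax(z):
    z = z - z.max()          # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def pred_entropy(x):
    p = softmax(W @ x)
    return -(p * np.log(p + 1e-12)).sum()

def purify(x, steps=20, lr=0.05, eps=1e-4):
    """Test-time optimization: nudge x toward lower prediction entropy
    using a finite-difference gradient (illustrative, not efficient)."""
    x = x.copy()
    for _ in range(steps):
        g = np.array([(pred_entropy(x + eps * e) - pred_entropy(x - eps * e)) / (2 * eps)
                      for e in np.eye(x.size)])
        x -= lr * g
    return x

x = rng.normal(size=5)       # a test input (clean or adversarial)
x_def = purify(x)            # the defense's inference-time optimization
pred = int(np.argmax(W @ x_def))
```

The extra optimization loop is exactly why such defenses increase inference cost, and why attacking them requires differentiating (or approximating) through the purification step.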
Author Information
Francesco Croce (University of Tübingen)
Sven Gowal (DeepMind)
Thomas Brunner (Technical University of Munich)
Evan Shelhamer (DeepMind)
Matthias Hein (University of Tübingen)
Taylan Cemgil (DeepMind)
Related Events (a corresponding poster, oral, or spotlight)
-
2022 Poster: Evaluating the Adversarial Robustness of Adaptive Test-time Defenses »
Tue. Jul 19 through Wed. Jul 20, Hall E #219
More from the Same Authors
-
2021 : An Empirical Investigation of Learning from Biased Toxicity Labels »
Neel Nanda · Jonathan Uesato · Sven Gowal -
2022 : Provably Adversarially Robust Detection of Out-of-Distribution Data (Almost) for Free »
Alexander Meinke · Julian Bitterwolf · Matthias Hein -
2022 : Sound randomized smoothing in floating-point arithmetics »
Václav Voráček · Matthias Hein -
2022 : Classifiers Should Do Well Even on Their Worst Classes »
Julian Bitterwolf · Alexander Meinke · Valentyn Boreiko · Matthias Hein -
2022 : Lost in Translation: Modern Image Classifiers still degrade even under simple Translations »
Leander Kurscheidt · Matthias Hein -
2022 : Back to the Source: Test-Time Diffusion-Driven Adaptation »
Jin Gao · Jialing Zhang · Xihui Liu · Trevor Darrell · Evan Shelhamer · Dequan Wang -
2022 : On the interplay of adversarial robustness and architecture components: patches, convolution and attention »
Francesco Croce · Matthias Hein -
2022 Workshop: Shift happens: Crowdsourcing metrics and test datasets beyond ImageNet »
Roland S. Zimmermann · Julian Bitterwolf · Evgenia Rusak · Steffen Schneider · Matthias Bethge · Wieland Brendel · Matthias Hein -
2022 Poster: Breaking Down Out-of-Distribution Detection: Many Methods Based on OOD Training Data Estimate a Combination of the Same Core Quantities »
Julian Bitterwolf · Alexander Meinke · Maximilian Augustin · Matthias Hein -
2022 Spotlight: Breaking Down Out-of-Distribution Detection: Many Methods Based on OOD Training Data Estimate a Combination of the Same Core Quantities »
Julian Bitterwolf · Alexander Meinke · Maximilian Augustin · Matthias Hein -
2022 Poster: Adversarial Robustness against Multiple and Single $l_p$-Threat Models via Quick Fine-Tuning of Robust Classifiers »
Francesco Croce · Matthias Hein -
2022 Poster: Provably Adversarially Robust Nearest Prototype Classifiers »
Václav Voráček · Matthias Hein -
2022 Poster: Hindering Adversarial Attacks with Implicit Neural Representations »
Andrei A Rusu · Dan Andrei Calian · Sven Gowal · Raia Hadsell -
2022 Spotlight: Adversarial Robustness against Multiple and Single $l_p$-Threat Models via Quick Fine-Tuning of Robust Classifiers »
Francesco Croce · Matthias Hein -
2022 Spotlight: Hindering Adversarial Attacks with Implicit Neural Representations »
Andrei A Rusu · Dan Andrei Calian · Sven Gowal · Raia Hadsell -
2022 Spotlight: Provably Adversarially Robust Nearest Prototype Classifiers »
Václav Voráček · Matthias Hein -
2021 : Discussion Panel #1 »
Hang Su · Matthias Hein · Liwei Wang · Sven Gowal · Jan Hendrik Metzen · Henry Liu · Yisen Wang -
2021 : Invited Talk #3 »
Matthias Hein -
2021 : Invited Talk #2 »
Sven Gowal -
2021 Poster: Mind the Box: $l_1$-APGD for Sparse Adversarial Attacks on Image Classifiers »
Francesco Croce · Matthias Hein -
2021 Spotlight: Mind the Box: $l_1$-APGD for Sparse Adversarial Attacks on Image Classifiers »
Francesco Croce · Matthias Hein -
2020 : Spotlight Talk 7: AutoAttack: reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks »
Francesco Croce -
2020 : Keynote #1 Matthias Hein »
Matthias Hein -
2020 Poster: Minimally distorted Adversarial Examples with a Fast Adaptive Boundary Attack »
Francesco Croce · Matthias Hein -
2020 Poster: Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks »
Francesco Croce · Matthias Hein -
2020 Poster: Being Bayesian, Even Just a Bit, Fixes Overconfidence in ReLU Networks »
Agustinus Kristiadi · Matthias Hein · Philipp Hennig -
2020 Poster: Confidence-Calibrated Adversarial Training: Generalizing to Unseen Attacks »
David Stutz · Matthias Hein · Bernt Schiele -
2019 Poster: Spectral Clustering of Signed Graphs via Matrix Power Means »
Pedro Mercado · Francesco Tudisco · Matthias Hein -
2019 Poster: Learning from Delayed Outcomes via Proxies with Applications to Recommender Systems »
Timothy Mann · Sven Gowal · András György · Huiyi Hu · Ray Jiang · Balaji Lakshminarayanan · Prav Srinivasan -
2019 Oral: Spectral Clustering of Signed Graphs via Matrix Power Means »
Pedro Mercado · Francesco Tudisco · Matthias Hein -
2019 Oral: Learning from Delayed Outcomes via Proxies with Applications to Recommender Systems »
Timothy Mann · Sven Gowal · András György · Huiyi Hu · Ray Jiang · Balaji Lakshminarayanan · Prav Srinivasan -
2019 Poster: Infinite Mixture Prototypes for Few-shot Learning »
Kelsey Allen · Evan Shelhamer · Hanul Shin · Josh Tenenbaum -
2019 Oral: Infinite Mixture Prototypes for Few-shot Learning »
Kelsey Allen · Evan Shelhamer · Hanul Shin · Josh Tenenbaum