Machine learning classifiers with high test accuracy often perform poorly under adversarial attacks. It is commonly believed that adversarial training alleviates this issue. In this paper, we demonstrate that, surprisingly, the opposite can be true for a natural class of perceptible perturbations: even though adversarial training helps when enough data is available, it may in fact hurt robust generalization in the small-sample-size regime. We first prove this phenomenon for a high-dimensional linear classification setting with noiseless observations. Guided by intuition from the proof, we then find perturbations on standard image datasets for which this behavior persists. Specifically, it occurs for perceptible attacks that effectively reduce class information, such as object occlusions or corruptions.
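To make the setting concrete, the following is a minimal, self-contained sketch of adversarial training for a linear classifier under an ℓ∞ perturbation budget, which is one standard formulation of the procedure the abstract discusses. It is not the paper's code, and the toy data, epsilon, and learning rate are illustrative choices; for a linear model f(x) = w·x the inner maximization of the logistic loss has the closed form x_adv = x − ε·y·sign(w).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary data: labels y in {-1, +1}, class signal in the first coordinate.
n, d = 50, 10
y = rng.choice([-1.0, 1.0], size=n)
X = rng.normal(0.0, 1.0, size=(n, d))
X[:, 0] += 2.0 * y  # informative feature

def adv_train(X, y, eps, lr=0.1, steps=200):
    """l-inf adversarial training of a linear classifier with logistic loss.

    For a linear model, the worst-case perturbation within ||delta||_inf <= eps
    is delta = -eps * y * sign(w), so the inner maximization is closed form.
    eps = 0 recovers standard (non-adversarial) training.
    """
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        # Attack each point with its closed-form worst-case perturbation.
        X_adv = X - eps * y[:, None] * np.sign(w)[None, :]
        margins = y * (X_adv @ w)
        # Gradient of the mean logistic loss log(1 + exp(-margin)) w.r.t. w.
        coef = -y / (1.0 + np.exp(margins))
        grad = (coef[:, None] * X_adv).mean(axis=0)
        w -= lr * grad
    return w

w_std = adv_train(X, y, eps=0.0)  # standard training
w_adv = adv_train(X, y, eps=0.5)  # adversarial training

clean_acc = np.mean(np.sign(X @ w_std) == y)
```

The paper's point is that the relative performance of the two resulting classifiers on *perturbed* test data can depend on the sample size n, with adversarial training potentially worse when n is small.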
Author Information
Jacob Clarysse (ETH Zürich)
Julia Hörrmann (ETH Zürich)
Fanny Yang (ETH Zürich)
More from the Same Authors
- 2021: Maximizing the robust margin provably overfits on noiseless data
  Fanny Yang · Reinhard Heckel · Michael Aerni · Alexandru Tifrea · Konstantin Donhauser
- 2021: Surprising benefits of ridge regularization for noiseless regression
  Konstantin Donhauser · Alexandru Tifrea · Michael Aerni · Reinhard Heckel · Fanny Yang
- 2021: Novel disease detection using ensembles with regularized disagreement
  Alexandru Tifrea · Eric Stavarache · Fanny Yang
- 2022: Provable Concept Learning for Interpretable Predictions Using Variational Autoencoders
  Armeen Taeb · Nicolò Ruggeri · Carina Schnuck · Fanny Yang
- 2022 Poster: Fast rates for noisy interpolation require rethinking the effect of inductive bias
  Konstantin Donhauser · Nicolò Ruggeri · Stefan Stojanovic · Fanny Yang
- 2022 Spotlight: Fast rates for noisy interpolation require rethinking the effect of inductive bias
  Konstantin Donhauser · Nicolò Ruggeri · Stefan Stojanovic · Fanny Yang
- 2021 Poster: How rotational invariance of common kernels prevents generalization in high dimensions
  Konstantin Donhauser · Mingqi Wu · Fanny Yang
- 2021 Spotlight: How rotational invariance of common kernels prevents generalization in high dimensions
  Konstantin Donhauser · Mingqi Wu · Fanny Yang
- 2020: QA for invited talk 3 (Yang)
  Fanny Yang
- 2020: Invited talk 3 (Yang)
  Fanny Yang
- 2020 Poster: Understanding and Mitigating the Tradeoff between Robustness and Accuracy
  Aditi Raghunathan · Sang Michael Xie · Fanny Yang · John Duchi · Percy Liang