Maximizing the robust margin provably overfits on noiseless data
Fanny Yang · Reinhard Heckel · Michael Aerni · Alexandru Tifrea · Konstantin Donhauser
Numerous recent works show that overparameterization implicitly reduces variance, suggesting vanishing benefits for explicit regularization in high dimensions. However, this narrative has been challenged by empirical observations indicating that adversarially trained deep neural networks suffer from robust overfitting. While existing explanations attribute this phenomenon to noise or problematic samples in the training data set, we prove that even on entirely noiseless data, achieving a vanishing adversarial logistic training loss is suboptimal compared to regularized counterparts.
Author Information
Fanny Yang (ETH Zurich)
Reinhard Heckel (TU Munich)
Michael Aerni (ETH Zurich)
Alexandru Tifrea (ETH Zurich)
Konstantin Donhauser (ETH Zurich)
More from the Same Authors
- 2021: Surprising benefits of ridge regularization for noiseless regression
  Konstantin Donhauser · Alexandru Tifrea · Michael Aerni · Reinhard Heckel · Fanny Yang
- 2021: Novel disease detection using ensembles with regularized disagreement
  Alexandru Tifrea · Eric Stavarache · Fanny Yang
- 2022: Why adversarial training can hurt robust accuracy
  Jacob Clarysse · Julia Hörrmann · Fanny Yang
- 2022: Provable Concept Learning for Interpretable Predictions Using Variational Autoencoders
  Armeen Taeb · Nicolò Ruggeri · Carina Schnuck · Fanny Yang
- 2022: Monotonic Risk Relationships under Distribution Shifts for Regularized Risk Minimization
  Daniel LeJeune · Jiayu Liu · Reinhard Heckel
- 2022 Poster: Fast rates for noisy interpolation require rethinking the effect of inductive bias
  Konstantin Donhauser · Nicolò Ruggeri · Stefan Stojanovic · Fanny Yang
- 2022 Spotlight: Fast rates for noisy interpolation require rethinking the effect of inductive bias
  Konstantin Donhauser · Nicolò Ruggeri · Stefan Stojanovic · Fanny Yang
- 2021 Poster: How rotational invariance of common kernels prevents generalization in high dimensions
  Konstantin Donhauser · Mingqi Wu · Fanny Yang
- 2021 Spotlight: How rotational invariance of common kernels prevents generalization in high dimensions
  Konstantin Donhauser · Mingqi Wu · Fanny Yang
- 2020: QA for invited talk 3 Yang
  Fanny Yang
- 2020: Invited talk 3 Yang
  Fanny Yang
- 2020 Poster: Understanding and Mitigating the Tradeoff between Robustness and Accuracy
  Aditi Raghunathan · Sang Michael Xie · Fanny Yang · John Duchi · Percy Liang