Spotlight
Fast rates for noisy interpolation require rethinking the effect of inductive bias
Konstantin Donhauser · Nicolò Ruggeri · Stefan Stojanovic · Fanny Yang
Thu Jul 21 08:50 AM -- 08:55 AM (PDT) @ Ballroom 3 & 4
Good generalization performance on high-dimensional data hinges crucially on the ground truth having simple structure and on the estimator having a correspondingly strong inductive bias. While this intuition is valid for regularized models, in this paper we caution against a strong inductive bias for interpolation in the presence of noise: a stronger inductive bias encourages a simpler structure that is better aligned with the ground truth, but it also amplifies the detrimental effect of noise. Specifically, for both linear regression and classification with a sparse ground truth, we prove that minimum $\ell_p$-norm and maximum $\ell_p$-margin interpolators achieve fast polynomial rates of order close to $1/n$ for $p > 1$, compared to a logarithmic rate for $p = 1$. Finally, we provide preliminary experimental evidence that this trade-off may also play a crucial role in understanding non-linear interpolating models used in practice.
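To make the estimator in the abstract concrete, below is a minimal sketch (not from the paper) of the minimum $\ell_p$-norm interpolator in a noisy sparse linear regression setting: among all parameter vectors that fit the data exactly, it picks the one of smallest $\ell_p$ norm. The dimensions, noise level, choice of cvxpy as solver, and the function name `min_lp_norm_interpolator` are all illustrative assumptions.

```python
# A minimal sketch, assuming numpy and cvxpy are available; the setup
# (n, d, the 1-sparse ground truth, noise level 0.1) is illustrative
# and not taken from the paper.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, d = 50, 500                      # overparameterized regime: d >> n
w_star = np.zeros(d)
w_star[0] = 1.0                     # sparse ground truth
X = rng.standard_normal((n, d))
y = X @ w_star + 0.1 * rng.standard_normal(n)   # noisy observations


def min_lp_norm_interpolator(X, y, p):
    """Return argmin_w ||w||_p subject to X @ w == y (exact interpolation)."""
    w = cp.Variable(X.shape[1])
    problem = cp.Problem(cp.Minimize(cp.norm(w, p)), [X @ w == y])
    problem.solve()
    return w.value


# The paper's rates are asymptotic statements in n; this toy run only
# illustrates the estimators being compared, not the rates themselves.
for p in (1.0, 1.1, 2.0):
    w_hat = min_lp_norm_interpolator(X, y, p)
    print(f"p = {p}: ||w_hat - w*||_2 = {np.linalg.norm(w_hat - w_star):.3f}")
```

As a sanity check on the solver output, note that for $p = 2$ the same interpolator coincides with the minimum-norm least-squares solution `np.linalg.pinv(X) @ y`.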
Author Information
Konstantin Donhauser (ETH Zurich)
Nicolò Ruggeri (ETH Zurich)
Stefan Stojanovic (ETH Zurich)
Fanny Yang (ETH Zurich)
Related Events (a corresponding poster, oral, or spotlight)
- 2022 Poster: Fast rates for noisy interpolation require rethinking the effect of inductive bias
  Thu. Jul 21st through Fri. Jul 22nd, Hall E #1109
More from the Same Authors
- 2021: Maximizing the robust margin provably overfits on noiseless data
  Fanny Yang · Reinhard Heckel · Michael Aerni · Alexandru Tifrea · Konstantin Donhauser
- 2021: Surprising benefits of ridge regularization for noiseless regression
  Konstantin Donhauser · Alexandru Tifrea · Michael Aerni · Reinhard Heckel · Fanny Yang
- 2021: Novel disease detection using ensembles with regularized disagreement
  Alexandru Tifrea · Eric Stavarache · Fanny Yang
- 2022: Why adversarial training can hurt robust accuracy
  Jacob Clarysse · Julia Hörrmann · Fanny Yang
- 2022: Provable Concept Learning for Interpretable Predictions Using Variational Autoencoders
  Armeen Taeb · Nicolò Ruggeri · Carina Schnuck · Fanny Yang
- 2021 Poster: How rotational invariance of common kernels prevents generalization in high dimensions
  Konstantin Donhauser · Mingqi Wu · Fanny Yang
- 2021 Spotlight: How rotational invariance of common kernels prevents generalization in high dimensions
  Konstantin Donhauser · Mingqi Wu · Fanny Yang
- 2020: QA for invited talk 3 (Yang)
  Fanny Yang
- 2020: Invited talk 3 (Yang)
  Fanny Yang
- 2020 Poster: Understanding and Mitigating the Tradeoff between Robustness and Accuracy
  Aditi Raghunathan · Sang Michael Xie · Fanny Yang · John Duchi · Percy Liang