A Universal Law of Robustness via Isoperimetry
Sebastien Bubeck · Mark Sellke
Classically, data interpolation with a parametrized model class is possible as long as the number of parameters is larger than the number of equations to be satisfied. A puzzling phenomenon in deep learning is that models are trained with many more parameters than what this classical theory would suggest. We propose a theoretical explanation for this phenomenon. We prove that for a broad class of data distributions and model classes, overparametrization is necessary if one wants to interpolate the data smoothly. Namely, we show that smooth interpolation requires $d$ times more parameters than mere interpolation, where $d$ is the ambient data dimension. We prove this universal law of robustness for any smoothly parametrized function class with polynomial-size weights, and any covariate distribution verifying isoperimetry (or a mixture thereof). In the case of two-layer neural networks and Gaussian covariates, this law was conjectured in prior work by Bubeck, Li and Nagaraj.
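The tradeoff described in the abstract can be stated schematically as follows (a paraphrase, not the paper's verbatim theorem statement): for $n$ samples in ambient dimension $d$, any model $f$ from the function class with $p$ parameters that fits the data below the noise level must satisfy, up to logarithmic factors,

$$\mathrm{Lip}(f) \;\gtrsim\; \sqrt{\frac{nd}{p}},$$

so achieving an $O(1)$ Lipschitz constant (smooth interpolation) forces $p \gtrsim nd$ parameters, a factor of $d$ more than the $p \approx n$ sufficient for mere interpolation.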
Author Information
Sebastien Bubeck (Microsoft Research)
Mark Sellke
More from the Same Authors
- 2021: Ranking Architectures by Feature Extraction Capabilities
  Debadeepta Dey · Shital Shah · Sebastien Bubeck
- 2021: A Universal Law of Robustness via Isoperimetry
  Sebastien Bubeck · Mark Sellke
- 2022 Poster: Data Augmentation as Feature Manipulation
  Ruoqi Shen · Sebastien Bubeck · Suriya Gunasekar
- 2022 Spotlight: Data Augmentation as Feature Manipulation
  Ruoqi Shen · Sebastien Bubeck · Suriya Gunasekar
- 2020 Poster: Statistically Preconditioned Accelerated Gradient Method for Distributed Optimization
  Hadrien Hendrikx · Lin Xiao · Sebastien Bubeck · Francis Bach · Laurent Massoulié
- 2020 Poster: Online Learning for Active Cache Synchronization
  Andrey Kolobov · Sebastien Bubeck · Julian Zimmert
- 2019 Poster: Adversarial examples from computational constraints
  Sebastien Bubeck · Yin Tat Lee · Eric Price · Ilya Razenshteyn
- 2019 Oral: Adversarial examples from computational constraints
  Sebastien Bubeck · Yin Tat Lee · Eric Price · Ilya Razenshteyn
- 2018 Poster: Make the Minority Great Again: First-Order Regret Bound for Contextual Bandits
  Zeyuan Allen-Zhu · Sebastien Bubeck · Yuanzhi Li
- 2018 Oral: Make the Minority Great Again: First-Order Regret Bound for Contextual Bandits
  Zeyuan Allen-Zhu · Sebastien Bubeck · Yuanzhi Li
- 2017 Poster: Optimal Algorithms for Smooth and Strongly Convex Distributed Optimization in Networks
  Kevin Scaman · Francis Bach · Sebastien Bubeck · Yin Tat Lee · Laurent Massoulié
- 2017 Talk: Optimal Algorithms for Smooth and Strongly Convex Distributed Optimization in Networks
  Kevin Scaman · Francis Bach · Sebastien Bubeck · Yin Tat Lee · Laurent Massoulié