Poster
Benign Overfitting in Deep Neural Networks under Lazy Training
Zhenyu Zhu · Fanghui Liu · Grigorios Chrysos · Francesco Locatello · Volkan Cevher

Wed Jul 26 05:00 PM -- 06:30 PM (PDT) @ Exhibit Hall 1 #603

This paper focuses on over-parameterized deep neural networks (DNNs) with ReLU activation functions and proves that, when the data distribution is well-separated, DNNs can achieve Bayes-optimal test error for classification while obtaining (nearly) zero training error under the lazy training regime. To this end, we unify three interrelated concepts: overparameterization, benign overfitting, and the Lipschitz constant of DNNs. Our results indicate that interpolating with smoother functions leads to better generalization. Furthermore, we investigate the special case in which DNNs under the Neural Tangent Kernel (NTK) regime interpolate smooth ground-truth functions, and we characterize the resulting generalization. Our result demonstrates that the generalization error converges to a constant order that depends only on the label noise and the initialization noise, which theoretically verifies benign overfitting. Our analysis provides tight lower bounds on the normalized margin under non-smooth activation functions and on the minimum eigenvalue of the NTK in high-dimensional settings, which are of independent interest in learning theory.
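
The following is a minimal NumPy sketch (not from the paper) of the object behind the last claim: the empirical NTK Gram matrix of a finite-width two-layer ReLU network at random initialization, whose minimum eigenvalue being bounded away from zero is what makes interpolation under lazy training possible. The function name `empirical_ntk`, the width `m`, and the data scaling are illustrative assumptions.

```python
import numpy as np

def empirical_ntk(X, m=4096, seed=0):
    """Empirical NTK Gram matrix of a width-m two-layer ReLU network
    f(x) = (1/sqrt(m)) * a^T relu(W x) at random initialization,
    i.e. the kernel that governs training in the lazy regime."""
    n, d = X.shape
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((m, d))          # hidden weights ~ N(0, 1)
    a = rng.choice([-1.0, 1.0], size=m)      # output weights

    pre = X @ W.T                            # (n, m) pre-activations
    act = np.maximum(pre, 0.0)               # relu(W x)
    ind = (pre > 0).astype(X.dtype)          # relu'(W x)

    # grad wrt a:   (1/sqrt(m)) relu(w_j^T x)            -> act @ act.T / m
    # grad wrt w_j: (1/sqrt(m)) a_j 1[w_j^T x > 0] x     -> (X X^T) * (ind ind^T) / m
    K = (act @ act.T + (X @ X.T) * (ind @ ind.T)) / m
    return K

if __name__ == "__main__":
    n, d = 50, 200                           # high-dimensional setting: d much larger than n
    X = np.random.default_rng(1).standard_normal((n, d)) / np.sqrt(d)
    K = empirical_ntk(X)
    lam_min = np.linalg.eigvalsh(K).min()
    print(f"lambda_min(NTK) = {lam_min:.4f}")  # bounded away from 0 => the network can interpolate
```

Under these (assumed) scalings, the printed minimum eigenvalue stays at a constant order as d grows, illustrating the kind of lower bound the abstract refers to; the paper's actual bounds concern deep ReLU networks and are stated in the full text.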

Author Information

Zhenyu Zhu (EPFL)
Fanghui Liu (EPFL)

I am currently a postdoctoral researcher at EPFL, and my research interests include statistical machine learning, mainly kernel methods and learning theory.

Grigorios Chrysos (Swiss Federal Institute of Technology Lausanne)
Francesco Locatello (Amazon)
Volkan Cevher (EPFL)
