Poster in Workshop: Over-parameterization: Pitfalls and Opportunities

Binary Classification of Gaussian Mixtures: Abundance of Support Vectors, Benign Overfitting and Regularization

Ke Wang · Christos Thrampoulidis


Abstract:

Deep neural networks generalize well despite being heavily overparameterized and trained without explicit regularization. This curious phenomenon, often termed benign overfitting, has inspired extensive research aimed at establishing its statistical principles. In this work, we study both max-margin SVM and min-norm interpolating classifiers for binary classification of Gaussian mixtures. First, we leverage an idea introduced in [V. Muthukumar et al., arXiv:2005.08054, (2020)] to relate the SVM solution to the least-squares (LS) interpolating solution. Second, we derive non-asymptotic bounds on the classification error of the LS solution. Combining the two, we present sufficient conditions on the overparameterization ratio and on the signal-to-noise ratio (SNR) for benign overfitting to occur. Moreover, we investigate the role of regularization and identify precise conditions under which the interpolating estimator performs better than regularized estimates. We corroborate our theoretical findings with numerical simulations.
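The sketch below is a minimal numerical illustration of the phenomena the abstract describes, not the authors' code or experimental setup. It assumes an isotropic Gaussian mixture model (x = y·mu + noise) with an overparameterized regime p >> n; the mean vector, SNR, and sample sizes are illustrative choices. It compares the min-norm LS interpolator with a hard-margin SVM (approximated by a linear SVM with very large C), reports the fraction of training points that are support vectors ("abundance of support vectors"), and estimates the test error of the interpolating classifier.

```python
import numpy as np
from numpy.linalg import pinv
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Illustrative (assumed) problem sizes: n samples, p features with p >> n.
n, p, snr = 50, 2000, 2.0
mu = np.zeros(p)
mu[0] = snr  # assumed mixture mean direction, for illustration only

# Gaussian mixture data: x_i = y_i * mu + standard Gaussian noise.
y = rng.choice([-1.0, 1.0], size=n)
X = y[:, None] * mu[None, :] + rng.standard_normal((n, p))

# Min-norm least-squares interpolator of the +/-1 labels.
w_ls = pinv(X) @ y

# Hard-margin SVM approximated by a linear SVM with a very large C.
svm = SVC(kernel="linear", C=1e8).fit(X, y)
w_svm = svm.coef_.ravel()

# When the SVM and LS solutions coincide (up to scaling), the cosine
# similarity is near 1 and (nearly) all training points are support vectors.
cos = w_ls @ w_svm / (np.linalg.norm(w_ls) * np.linalg.norm(w_svm))
frac_sv = len(svm.support_) / n
print(f"cosine(w_ls, w_svm)          = {cos:.4f}")
print(f"fraction of support vectors  = {frac_sv:.2f}")

# Test error of the interpolating classifier sign(<w_ls, x>) on fresh data.
n_test = 5000
y_te = rng.choice([-1.0, 1.0], size=n_test)
X_te = y_te[:, None] * mu[None, :] + rng.standard_normal((n_test, p))
err = np.mean(np.sign(X_te @ w_ls) != y_te)
print(f"test error of LS interpolator = {err:.3f}")
```

Varying the overparameterization ratio p/n and the SNR in this sketch is one way to probe, numerically, the regimes where the interpolating solution overfits benignly versus where it does not.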