

Poster in Workshop: Over-parameterization: Pitfalls and Opportunities

Benign Overfitting in Multiclass Classification: All Roads Lead to Interpolation

Ke Wang · Vidya Muthukumar · Christos Thrampoulidis


Abstract:

Motivated by the growing literature on "benign overfitting" in overparameterized models, we study benign overfitting in multiclass linear classification. Specifically, we consider the following popular training algorithms on separable data generated from Gaussian mixtures: (i) empirical risk minimization (ERM) with cross-entropy loss, which converges to the multiclass support vector machine (SVM) solution; (ii) ERM with least-squares loss, which converges to the min-norm interpolating (MNI) solution; and (iii) the one-vs-all SVM classifier. Our first key finding is that, under a simple sufficient condition, all three algorithms lead to classifiers that interpolate the training data and have equal accuracy. Second, we derive novel error bounds on the accuracy of the MNI classifier, thereby showing that all three training algorithms lead to benign overfitting under sufficient overparameterization. Ultimately, our analysis shows that good generalization is possible for SVM solutions beyond the realm in which typical margin-based bounds apply.
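
Below is a minimal numerical sketch (not the authors' experiments) of the phenomenon the abstract describes: in a highly overparameterized Gaussian-mixture setting, the MNI least-squares solution and the multiclass SVM solution tend to coincide, so both interpolate the training labels and achieve equal test accuracy. The dimensions, class means, and the use of scikit-learn's LinearSVC (Crammer-Singer for the multiclass SVM, one-vs-rest for the one-vs-all classifier) are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n, d, k = 50, 2000, 3                        # n samples, d >> n features, k classes (assumed values)
means = 3.0 * rng.normal(size=(k, d))        # well-separated class means
y = rng.integers(0, k, size=n)
X = means[y] + rng.normal(size=(n, d))       # separable Gaussian-mixture training data

# (ii) Min-norm interpolating (MNI) solution: least-squares fit of one-hot labels.
Y = np.eye(k)[y]                             # one-hot labels, shape (n, k)
W_mni = np.linalg.pinv(X) @ Y                # min-norm solution of X W = Y
pred_mni = (X @ W_mni).argmax(axis=1)

# (i) Multiclass SVM, approximated by Crammer-Singer LinearSVC with large C.
svm = LinearSVC(multi_class="crammer_singer", C=1e6,
                fit_intercept=False, max_iter=100000).fit(X, y)

# (iii) One-vs-all SVM, approximated by the default one-vs-rest LinearSVC.
ova = LinearSVC(C=1e6, fit_intercept=False, max_iter=100000).fit(X, y)

# All three classifiers should fit (interpolate) the training labels exactly.
print("MNI train acc:", (pred_mni == y).mean())
print("SVM train acc:", (svm.predict(X) == y).mean())
print("OvA train acc:", (ova.predict(X) == y).mean())

# On fresh test data, the three classifiers typically agree and generalize well.
y_te = rng.integers(0, k, size=1000)
X_te = means[y_te] + rng.normal(size=(1000, d))
print("MNI test acc:", ((X_te @ W_mni).argmax(axis=1) == y_te).mean())
print("SVM test acc:", (svm.predict(X_te) == y_te).mean())
print("OvA test acc:", (ova.predict(X_te) == y_te).mean())
```

Running this sketch with d much larger than n typically prints training accuracy 1.0 for all three classifiers and matching test accuracies, illustrating the "all roads lead to interpolation" claim in a toy setting.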