

Poster

Universal Consistency of Wide and Deep ReLU Neural Networks and Minimax Optimal Convergence Rates for Kolmogorov-Donoho Optimal Function Classes

Hyunouk Ko · Xiaoming Huo

Hall C 4-9 #1501
Wed 24 Jul 4:30 a.m. PDT — 6 a.m. PDT

Abstract: In this paper, we prove the universal consistency of wide and deep ReLU neural network classifiers. We also give sufficient conditions on a class of probability measures under which classifiers based on neural networks achieve minimax optimal rates of convergence. The result applies to a wide range of known function classes. In particular, while most previous works impose explicit smoothness assumptions on the regression function, our framework encompasses more general settings. The proposed neural networks are minimizers of the $0$-$1$ loss that exhibit a benign overfitting behavior.
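For context, universal consistency and minimax optimality here carry their standard meanings in statistical learning theory; the sketch below uses generic notation ($\hat{f}_n$, $\mathcal{P}$) that is not taken from the paper itself. A classifier sequence $\hat{f}_n$ built from $n$ i.i.d. samples is universally consistent if its expected $0$-$1$ risk converges to the Bayes risk under every data distribution, and a rate is minimax optimal over a class of measures $\mathcal{P}$ if no estimator can uniformly improve on it:

\[
\mathbb{E}\!\left[ R(\hat{f}_n) \right] \;\longrightarrow\; R^* := \inf_{f} R(f) \quad \text{for every } P,
\qquad
\inf_{\hat{f}_n} \, \sup_{P \in \mathcal{P}} \, \mathbb{E}\!\left[ R(\hat{f}_n) - R^* \right] \;\asymp\; \text{the stated rate},
\]

where $R(f) = P(f(X) \neq Y)$ denotes the misclassification risk under $P$.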
