
Finite-Sample Analysis of Learning High-Dimensional Single ReLU Neuron
Jingfeng Wu · Difan Zou · Zixiang Chen · Vladimir Braverman · Quanquan Gu · Sham Kakade

Thu Jul 27 01:30 PM -- 03:00 PM (PDT) @ Exhibit Hall 1 #623

This paper considers the problem of learning a single ReLU neuron with squared loss (a.k.a. ReLU regression) in the overparameterized regime, where the input dimension can exceed the number of samples. We analyze a Perceptron-type algorithm called GLM-tron [Kakade et al., 2011] and provide dimension-free risk upper bounds for high-dimensional ReLU regression in both the well-specified and misspecified settings. Our risk bounds recover several existing results as special cases. Moreover, in the well-specified setting, we also provide an instance-wise matching risk lower bound for GLM-tron. Together, our upper and lower risk bounds give a sharp characterization of the high-dimensional ReLU regression problems that can be learned via GLM-tron. On the other hand, we provide some negative results for stochastic gradient descent (SGD) for ReLU regression with symmetric Bernoulli data: if the model is well-specified, the excess risk of SGD is provably no better than that of GLM-tron, up to constant factors, for every problem instance; and in the noiseless case, GLM-tron can achieve a small risk while SGD unavoidably suffers a constant risk in expectation. These results together suggest that GLM-tron may be preferable to SGD for high-dimensional ReLU regression.
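For readers unfamiliar with GLM-tron, below is a minimal sketch of a one-pass variant for ReLU regression, assuming the standard update rule of Kakade et al. [2011]; the step size, data model, and dimensions are illustrative assumptions, not the paper's exact setting.

import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def glmtron_one_pass(X, y, eta):
    # One-pass GLM-tron: w <- w + eta * (y_t - relu(<w, x_t>)) * x_t.
    # Unlike SGD on the squared loss, the update omits the ReLU
    # derivative relu'(<w, x_t>), which makes it Perceptron-like.
    n, d = X.shape
    w = np.zeros(d)
    for t in range(n):
        x_t, y_t = X[t], y[t]
        w += eta * (y_t - relu(w @ x_t)) * x_t
    return w

# Illustrative well-specified instance y = relu(<w*, x>) + noise with
# d > n, mimicking the overparameterized regime (hypothetical setup).
rng = np.random.default_rng(0)
n, d = 200, 500
w_star = rng.normal(size=d) / np.sqrt(d)
X = rng.normal(size=(n, d))
y = relu(X @ w_star) + 0.1 * rng.normal(size=n)
w_hat = glmtron_one_pass(X, y, eta=0.5 / d)

# Risk estimate on fresh data, comparing to the noiseless target.
X_test = rng.normal(size=(2000, d))
print(np.mean((relu(X_test @ w_hat) - relu(X_test @ w_star)) ** 2))

For SGD on the squared loss, the update would instead carry the extra factor relu'(<w, x_t>), zeroing out updates whenever the neuron is inactive; this difference is what the paper's comparison between the two methods hinges on.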

Author Information

Jingfeng Wu (JHU & UC Berkeley)
Difan Zou (The University of Hong Kong)
Zixiang Chen (UCLA)
Vladimir Braverman (Johns Hopkins University)
Quanquan Gu (University of California, Los Angeles)
Sham Kakade (Harvard University and Amazon Scholar)
