Oral
Last Iterate Risk Bounds of SGD with Decaying Stepsize for Overparameterized Linear Regression
Jingfeng Wu · Difan Zou · Vladimir Braverman · Quanquan Gu · Sham Kakade

Wed Jul 20 12:05 PM -- 12:25 PM (PDT) @ Hall G

Stochastic gradient descent (SGD) has been shown to generalize well in many deep learning applications. In practice, one often runs SGD with a geometrically decaying stepsize, i.e., a constant initial stepsize followed by multiple geometric stepsize decays, and uses the last iterate as the output. This kind of SGD is known to be nearly minimax optimal for classical finite-dimensional linear regression problems (Ge et al., 2019). However, a sharp analysis of the last iterate of SGD in the overparameterized setting remains open. In this paper, we provide a problem-dependent analysis of the last iterate risk bounds of SGD with decaying stepsize for (overparameterized) linear regression problems. In particular, for last iterate SGD with (tail) geometrically decaying stepsize, we prove nearly matching upper and lower bounds on the excess risk. Moreover, we provide an excess risk lower bound for last iterate SGD with polynomially decaying stepsize and demonstrate the advantage of geometrically decaying stepsize in an instance-wise manner, which complements the minimax rate comparison made in prior work.
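To make the stepsize schedule concrete, below is a minimal Python sketch of single-pass, last-iterate SGD for linear regression with a tail geometrically decaying stepsize: the stepsize is held constant over an initial prefix of the iterations and is then halved at the start of each subsequent phase. The function name, the halving factor of 1/2, the prefix length, and the phase length here are illustrative assumptions for exposition, not specifics from the paper.

import numpy as np

def last_iterate_sgd(X, y, gamma0=0.5, num_phases=5):
    """Single-pass SGD for linear regression, returning the last iterate.

    Stepsize schedule (illustrative): constant gamma0 over the first half
    of the samples, then halved at the start of each of `num_phases`
    equal-length tail phases (tail geometric decay).
    """
    n, d = X.shape
    w = np.zeros(d)
    s = n // 2                              # length of constant-stepsize prefix
    phase_len = max(1, (n - s) // num_phases)
    gamma = gamma0
    for t in range(n):
        if t > s and (t - s) % phase_len == 0:
            gamma *= 0.5                    # geometric stepsize decay
        grad = (w @ X[t] - y[t]) * X[t]     # stochastic gradient of 0.5*(w.x - y)^2
        w = w - gamma * grad
    return w                                # last iterate, not an average

# Toy overparameterized instance (d > n) with noisy labels.
rng = np.random.default_rng(0)
n, d = 200, 500
X = rng.normal(size=(n, d)) / np.sqrt(d)    # rows have norm ~1, so gamma0=0.5 is stable
w_star = rng.normal(size=d)
y = X @ w_star + 0.1 * rng.normal(size=n)
w_last = last_iterate_sgd(X, y)

Halving the stepsize once per phase mirrors the schedule analyzed for finite-dimensional regression by Ge et al. (2019); as in the paper, the output is the final iterate itself rather than an iterate average.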

Author Information

Jingfeng Wu (Johns Hopkins University)
Difan Zou (UCLA)
Vladimir Braverman (Johns Hopkins University)
Quanquan Gu (University of California, Los Angeles)
Sham Kakade (Harvard University)
