Poster
Accelerated, Optimal and Parallel: Some results on model-based stochastic optimization
Karan Chadha · Gary Cheng · John Duchi

Thu Jul 21 03:00 PM -- 05:00 PM (PDT) @ Hall E #605

The Approximate-Proximal Point (APROX) family of model-based stochastic optimization algorithms improves over standard stochastic gradient methods: its members are robust to stepsize choices, adaptive to problem difficulty, converge on a broader range of problems than stochastic gradient methods, and converge quickly on interpolation problems, all while retaining favorable minibatching properties (Asi & Duchi, 2019; Asi et al., 2020). In this paper, we propose an acceleration scheme for the APROX family and provide non-asymptotic convergence guarantees, which are order-optimal in all problem-dependent constants and yield even larger minibatching speedups. For interpolation problems where the objective satisfies additional growth conditions, we show that our algorithm achieves linear convergence rates for a wide range of stepsizes. In this setting, we also prove matching lower bounds, identifying new fundamental constants and showing the optimality of the APROX family. We corroborate our theoretical results with experiments demonstrating the gains that accurate modeling, acceleration, and minibatching provide.
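
To make the model-based updates concrete, below is a minimal sketch (not the authors' code) of one member of the APROX family: the truncated-model update of Asi & Duchi (2019). Rather than taking a plain gradient step, it minimizes a linear model of the sampled loss, truncated at a known lower bound, plus a proximal term; this caps the step length and underlies the robustness to stepsize choices described in the abstract. The NumPy setup, function names, and the least-squares example are illustrative assumptions.

    import numpy as np

    def aprox_truncated_step(x, loss, grad, sample, stepsize, lower_bound=0.0):
        # One truncated-model APROX step on a single sample s:
        #   x+ = argmin_y  max(f(x; s) + <g, y - x>, lower_bound)
        #                  + ||y - x||^2 / (2 * stepsize),
        # which admits the closed form computed below.
        f_val = loss(x, sample)
        g = grad(x, sample)
        g_norm_sq = float(np.dot(g, g))
        if g_norm_sq == 0.0:
            return x  # the sampled loss is already stationary at x
        # Truncation: never step past the point where the linear model
        # reaches the known lower bound (0 for nonnegative losses).
        step = min(stepsize, (f_val - lower_bound) / g_norm_sq)
        return x - step * g

    # Illustrative use on a nonnegative loss f(x; (a, b)) = (a @ x - b)^2 / 2
    # in an interpolation setting, where zero loss is attainable.
    loss = lambda x, s: 0.5 * (s[0] @ x - s[1]) ** 2
    grad = lambda x, s: (s[0] @ x - s[1]) * s[0]
    rng = np.random.default_rng(0)
    A, x_star = rng.normal(size=(100, 10)), rng.normal(size=10)
    b = A @ x_star
    x = np.zeros(10)
    for _ in range(1000):
        i = rng.integers(100)
        x = aprox_truncated_step(x, loss, grad, (A[i], b[i]), stepsize=10.0)

Note that the deliberately large stepsize (10.0) would make plain SGD diverge on this problem; the truncation automatically shortens steps near the optimum, illustrating the stepsize robustness the abstract claims.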

Author Information

Karan Chadha (Stanford University)
Gary Cheng (Stanford University)
John Duchi (Stanford University)
