

Accelerated, Optimal and Parallel: Some results on model-based stochastic optimization

Karan Chadha · Gary Cheng · John Duchi

Hall E #605

Keywords: [ OPT: Sampling and Optimization ] [ OPT: Large Scale, Parallel and Distributed ] [ OPT: Stochastic ] [ OPT: Optimization and Learning under Uncertainty ] [ OPT: Convex ]


The Approximate-Proximal Point (APROX) family of model-based stochastic optimization algorithms improves over standard stochastic gradient methods: it is robust to stepsize choices, adapts to problem difficulty, converges on a broader class of problems than stochastic gradient methods, and converges very quickly on interpolation problems, all while retaining favorable minibatching properties (Asi & Duchi, 2019; Asi et al., 2020). In this paper, we propose an acceleration scheme for the APROX family and provide non-asymptotic convergence guarantees, which are order-optimal in all problem-dependent constants and yield even larger minibatching speedups. For interpolation problems where the objective satisfies additional growth conditions, we show that our algorithm achieves linear convergence rates for a wide range of stepsizes. In this setting, we also prove matching lower bounds, identifying new fundamental constants and showing the optimality of the APROX family. We corroborate our theoretical results with experiments demonstrating the gains that accurate modeling, acceleration, and minibatching provide.
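To make the model-based idea concrete, the sketch below implements one simple member of the APROX family: the step obtained from a truncated linear model of a nonnegative loss, which admits a closed-form proximal update. This is a minimal illustration under assumed names and a toy least-squares interpolation problem, not the paper's accelerated or minibatched method.

```python
import numpy as np

def aprox_truncated_step(x, loss, grad, stepsize):
    """One APROX step using the truncated linear model
    f_x(y) = max(f(x) + <g, y - x>, 0), valid when the sampled
    loss is nonnegative. The proximal subproblem
    argmin_y f_x(y) + (1/(2*stepsize)) * ||y - x||^2
    then reduces to a gradient step with a truncated step length."""
    g = grad(x)
    fx = loss(x)
    gnorm2 = g @ g
    if gnorm2 == 0.0:
        return x  # model is already minimized at x
    # Truncate the step so the linear model never dips below its
    # known lower bound of zero; this is what makes the method
    # robust to large stepsize choices.
    t = min(stepsize, fx / gnorm2)
    return x - t * g

# Toy interpolation problem: consistent least squares, zero loss at x_star.
rng = np.random.default_rng(0)
A = rng.normal(size=(20, 5))
x_star = rng.normal(size=5)
b = A @ x_star

def make_sample(i):
    a, bi = A[i], b[i]
    loss = lambda x: 0.5 * (a @ x - bi) ** 2
    grad = lambda x: (a @ x - bi) * a
    return loss, grad

x = np.zeros(5)
for _ in range(2000):
    loss, grad = make_sample(rng.integers(20))
    # A deliberately large stepsize: the truncation keeps it stable.
    x = aprox_truncated_step(x, loss, grad, stepsize=10.0)
print(np.linalg.norm(x - x_star))
```

On this interpolation problem the truncated step is insensitive to the stepsize (here 10.0), whereas plain SGD with the same stepsize would diverge; this illustrates the robustness and fast interpolation convergence the abstract refers to.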
