The Approximate-Proximal Point (APROX) family of model-based stochastic optimization algorithms improves over standard stochastic gradient methods: it is robust to step size choices, adaptive to problem difficulty, converges on a broader range of problems than stochastic gradient methods, and converges very fast on interpolation problems, all while retaining nice minibatching properties~\cite{AsiDu19siopt,AsiChChDu20}. In this paper, we propose an acceleration scheme for the APROX family and provide non-asymptotic convergence guarantees, which are order-optimal in all problem-dependent constants and provide even larger minibatching speedups. For interpolation problems where the objective satisfies additional growth conditions, we show that our algorithm achieves linear convergence rates for a wide range of step sizes. In this setting, we also prove matching lower bounds, identifying new fundamental constants and showing the optimality of the APROX family. We corroborate our theoretical results with empirical testing to demonstrate the gains that accurate modeling, acceleration, and minibatching provide.
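To make the model-based viewpoint concrete, the sketch below implements the basic (non-accelerated) truncated-model update from the APROX family of Asi and Duchi (2019) for nonnegative losses; it is a minimal illustration, not the accelerated method this paper proposes, and the function name `aprox_truncated_step` and the least-squares example are introduced here for illustration only.

```python
import numpy as np

def aprox_truncated_step(x, loss_value, grad, stepsize):
    """One stochastic step of the truncated-model (APROX) update for a
    nonnegative loss f(x; s): minimize
        max(f(x_k; s) + <g, x - x_k>, 0) + ||x - x_k||^2 / (2 * stepsize),
    which has the closed form
        x_{k+1} = x_k - min(stepsize, f(x_k; s) / ||g||^2) * g.
    """
    grad_sq = float(np.dot(grad, grad))
    if grad_sq == 0.0:  # model is already flat at x; take no step
        return x
    step = min(stepsize, loss_value / grad_sq)
    return x - step * grad

# Toy usage on a consistent least-squares (interpolation) problem,
# f(x; (a, b)) = 0.5 * (a @ x - b)^2, with a deliberately large step size.
rng = np.random.default_rng(0)
n, d = 200, 10
A = rng.normal(size=(n, d))
x_star = rng.normal(size=d)
b = A @ x_star                      # consistent system: x_star interpolates every loss
x = np.zeros(d)
for k in range(2000):
    i = rng.integers(n)
    residual = A[i] @ x - b[i]
    loss = 0.5 * residual ** 2
    grad = residual * A[i]
    x = aprox_truncated_step(x, loss, grad, stepsize=10.0)
print(np.linalg.norm(x - x_star))   # distance to the interpolating solution
```

The truncation caps each step at the point where the linear model of the loss hits zero, which is what makes the iteration stable under large step sizes in this interpolation setting.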
Author Information
Karan Chadha (Stanford University)
Gary Cheng (Stanford University)
John Duchi (Stanford University)
Related Events (a corresponding poster, oral, or spotlight)
- 2022 Spotlight: Accelerated, Optimal and Parallel: Some results on model-based stochastic optimization
  Thu. Jul 21st, 02:35 -- 02:40 PM, Room 310
More from the Same Authors
- 2021: Adapting to function difficulty and growth conditions in private optimization
  Hilal Asi · Daniel A Levy · John Duchi
- 2023: Differentially Private Heavy Hitters using Federated Analytics
  Karan Chadha · Junye Chen · John Duchi · Vitaly Feldman · Hanieh Hashemi · Omid Javidbakht · Audra McMillan · Kunal Talwar
- 2022 Poster: Private optimization in the interpolation regime: faster rates and hardness results
  Hilal Asi · Karan Chadha · Gary Cheng · John Duchi
- 2022 Spotlight: Private optimization in the interpolation regime: faster rates and hardness results
  Hilal Asi · Karan Chadha · Gary Cheng · John Duchi
- 2021 Poster: Private Adaptive Gradient Methods for Convex Optimization
  Hilal Asi · John Duchi · Alireza Fallah · Omid Javidbakht · Kunal Talwar
- 2021 Spotlight: Private Adaptive Gradient Methods for Convex Optimization
  Hilal Asi · John Duchi · Alireza Fallah · Omid Javidbakht · Kunal Talwar
- 2020 Poster: Understanding and Mitigating the Tradeoff between Robustness and Accuracy
  Aditi Raghunathan · Sang Michael Xie · Fanny Yang · John Duchi · Percy Liang
- 2020 Poster: FormulaZero: Distributionally Robust Online Adaptation via Offline Population Synthesis
  Aman Sinha · Matthew O'Kelly · Hongrui Zheng · Rahul Mangharam · John Duchi · Russ Tedrake
- 2017 Poster: Adaptive Sampling Probabilities for Non-Smooth Optimization
  Hongseok Namkoong · Aman Sinha · Steven Yadlowsky · John Duchi
- 2017 Poster: “Convex Until Proven Guilty”: Dimension-Free Acceleration of Gradient Descent on Non-Convex Functions
  Yair Carmon · John Duchi · Oliver Hinder · Aaron Sidford
- 2017 Talk: “Convex Until Proven Guilty”: Dimension-Free Acceleration of Gradient Descent on Non-Convex Functions
  Yair Carmon · John Duchi · Oliver Hinder · Aaron Sidford
- 2017 Talk: Adaptive Sampling Probabilities for Non-Smooth Optimization
  Hongseok Namkoong · Aman Sinha · Steven Yadlowsky · John Duchi