Recently it has been shown that the step sizes of a family of variance reduced gradient methods, the JacSketch methods, depend on the expected smoothness constant. In particular, if this expected smoothness constant could be calculated a priori, then one could safely set much larger step sizes, resulting in a much faster convergence rate. We fill in this gap and provide simple closed-form expressions for the expected smoothness constant, together with careful numerical experiments verifying these bounds. Using these bounds, and since the SAGA algorithm is part of the JacSketch family, we suggest a new standard practice for setting the step and mini-batch sizes of SAGA that is competitive with a numerical grid search. Furthermore, we can now show that the total complexity of the SAGA algorithm decreases linearly in the mini-batch size up to a pre-defined value: the optimal mini-batch size. This is a rare result in the stochastic variance reduced literature, previously shown only for the Katyusha algorithm. Finally, we conjecture that this is the case for many other stochastic variance reduced methods, and that our bounds and analysis of the expected smoothness constant are key to extending these results.
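To make the abstract's recipe concrete, below is a minimal sketch of mini-batch SAGA on ridge regression in which the step size is tied to a smoothness estimate for the sampled mini-batch. The interpolation between L_max and L used for `L_batch`, and the 1/(4·L_batch) step size, are illustrative assumptions standing in for the paper's closed-form expected smoothness bounds and step-size rule; they are not quoted from the paper.

```python
# Illustrative sketch (not the authors' reference implementation): mini-batch SAGA
# on ridge regression, with the step size tied to a mini-batch smoothness estimate.
import numpy as np

def minibatch_saga(A, y, lam=0.1, b=8, n_iter=5000, seed=0):
    rng = np.random.default_rng(seed)
    n, d = A.shape
    # Per-sample and full-batch smoothness constants for
    # f_i(w) = 0.5*(a_i^T w - y_i)^2 + 0.5*lam*||w||^2.
    L_max = np.max(np.sum(A**2, axis=1)) + lam
    L_full = np.linalg.norm(A, 2)**2 / n + lam
    # Assumed interpolation between L_max (b=1) and L_full (b=n); a stand-in for
    # the expected smoothness bound discussed in the abstract, not the paper's formula.
    L_batch = (n - b) / (b * (n - 1)) * L_max + n * (b - 1) / (b * (n - 1)) * L_full
    gamma = 1.0 / (4.0 * L_batch)   # assumed step-size rule for this sketch

    w = np.zeros(d)
    J = np.zeros((n, d))            # table of stored per-sample gradients
    g_avg = J.mean(axis=0)          # running average of the stored gradients
    for _ in range(n_iter):
        S = rng.choice(n, size=b, replace=False)                 # b-nice sampling
        grads = (A[S] @ w - y[S])[:, None] * A[S] + lam * w      # fresh mini-batch gradients
        g_new = grads.mean(axis=0)
        g_old = J[S].mean(axis=0)
        w -= gamma * (g_new - g_old + g_avg)                     # SAGA update
        g_avg += (grads.sum(axis=0) - J[S].sum(axis=0)) / n      # refresh the running average
        J[S] = grads                                             # refresh stored gradients
    return w
```

In this sketch, increasing the mini-batch size b moves `L_batch` from L_max toward L, which permits a larger step size; this is the intuition behind the abstract's claim that, up to the optimal mini-batch size, total complexity decreases as b grows.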
Author Information
Nidham Gazagnadou (Télécom ParisTech)
Robert Gower (Télécom ParisTech)
Joseph Salmon (Université de Montpellier)
Related Events (a corresponding poster, oral, or spotlight)
- 2019 Oral: Optimal Mini-Batch and Step Sizes for SAGA
  Thu. Jun 13th 12:10 -- 12:15 AM, Room 103
More from the Same Authors
- 2022 Poster: Stochastic smoothing of the top-K calibrated hinge loss for deep imbalanced classification
  Camille Garcin · Maximilien Servajean · Alexis Joly · Joseph Salmon
- 2022 Poster: Differentially Private Coordinate Descent for Composite Empirical Risk Minimization
  Paul Mangold · Aurélien Bellet · Joseph Salmon · Marc Tommasi
- 2022 Spotlight: Differentially Private Coordinate Descent for Composite Empirical Risk Minimization
  Paul Mangold · Aurélien Bellet · Joseph Salmon · Marc Tommasi
- 2022 Spotlight: Stochastic smoothing of the top-K calibrated hinge loss for deep imbalanced classification
  Camille Garcin · Maximilien Servajean · Alexis Joly · Joseph Salmon
- 2020 Poster: Implicit differentiation of Lasso-type models for hyperparameter optimization
  Quentin Bertrand · Quentin Klopfenstein · Mathieu Blondel · Samuel Vaiter · Alexandre Gramfort · Joseph Salmon
- 2019 Poster: Screening rules for Lasso with non-convex Sparse Regularizers
  Alain Rakotomamonjy · Gilles Gasso · Joseph Salmon
- 2019 Oral: Screening rules for Lasso with non-convex Sparse Regularizers
  Alain Rakotomamonjy · Gilles Gasso · Joseph Salmon
- 2019 Poster: Safe Grid Search with Optimal Complexity
  Eugene Ndiaye · Tam Le · Olivier Fercoq · Joseph Salmon · Ichiro Takeuchi
- 2019 Oral: Safe Grid Search with Optimal Complexity
  Eugene Ndiaye · Tam Le · Olivier Fercoq · Joseph Salmon · Ichiro Takeuchi