In this paper, we propose a unified view of gradient-based algorithms for stochastic convex composite optimization by extending the concept of estimate sequence introduced by Nesterov. This point of view covers the stochastic gradient descent method and variants of the approaches SAGA and SVRG, and it has several advantages: (i) we provide a generic proof of convergence for the aforementioned methods; (ii) we show that the SVRG variant covered by our framework is adaptive to strong convexity; (iii) we naturally obtain new algorithms with the same guarantees; (iv) we derive generic strategies to make these algorithms robust to stochastic noise, which is useful when data is corrupted by small random perturbations. Finally, we show that this viewpoint is useful to obtain new accelerated algorithms in the sense of Nesterov.
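For readers unfamiliar with the setting, the sketch below illustrates a generic proximal SVRG update for a composite problem (smooth data-fitting term plus an l1 penalty). It is not the paper's exact variant; the least-squares losses, step-size heuristic, epoch length, and function names are assumptions made purely for illustration.

```python
# Minimal prox-SVRG sketch for min_x (1/n) * sum_i f_i(x) + lam * ||x||_1,
# with f_i(x) = 0.5 * (a_i^T x - b_i)^2. Illustrative only, not the paper's variant.
import numpy as np

def soft_threshold(x, t):
    # Proximal operator of t * ||.||_1
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def prox_svrg(A, b, lam=0.1, step=None, epochs=20, inner=None, seed=0):
    rng = np.random.default_rng(seed)
    n, d = A.shape
    if step is None:
        step = 1.0 / (3.0 * np.max(np.sum(A ** 2, axis=1)))  # ~1/(3L) heuristic
    if inner is None:
        inner = n
    x = np.zeros(d)
    for _ in range(epochs):
        x_ref = x.copy()
        full_grad = A.T @ (A @ x_ref - b) / n          # full gradient at the anchor
        for _ in range(inner):
            i = rng.integers(n)
            gi_x = A[i] * (A[i] @ x - b[i])            # stochastic gradient at x
            gi_ref = A[i] * (A[i] @ x_ref - b[i])      # same sample at the anchor
            g = gi_x - gi_ref + full_grad              # variance-reduced estimate
            x = soft_threshold(x - step * g, step * lam)  # proximal (composite) step
    return x

# Usage: x_hat = prox_svrg(np.random.randn(100, 20), np.random.randn(100))
```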
Author Information
Andrei Kulunchakov (Inria)
Julien Mairal (Inria)
Related Events (a corresponding poster, oral, or spotlight)
- 2019 Oral: Estimate Sequences for Variance-Reduced Stochastic Composite Optimization
  Wed. Jun 12th 11:30 -- 11:35 PM, Room 103
More from the Same Authors
- 2022 Workshop: Continuous Time Perspectives in Machine Learning
  Mihaela Rosca · Chongli Qin · Julien Mairal · Marc Deisenroth
- 2020 Poster: Convolutional Kernel Networks for Graph-Structured Data
  Dexiong Chen · Laurent Jacob · Julien Mairal
- 2019 Invited Talk: Online Dictionary Learning for Sparse Coding
  Julien Mairal · Francis Bach · Jean Ponce · Guillermo Sapiro
- 2019 Poster: A Kernel Perspective for Regularizing Deep Neural Networks
  Alberto Bietti · Gregoire Mialon · Dexiong Chen · Julien Mairal
- 2019 Oral: A Kernel Perspective for Regularizing Deep Neural Networks
  Alberto Bietti · Gregoire Mialon · Dexiong Chen · Julien Mairal