Oral
Anytime Online-to-Batch, Optimism and Acceleration
Ashok Cutkosky
A standard way to obtain convergence guarantees in stochastic convex optimization is to run an online learning algorithm and then output the average of its iterates: the individual iterates of the online learning algorithm do not come with guarantees of their own. We close this gap by introducing a black-box modification to any online learning algorithm, after which its iterates converge to the optimum in stochastic scenarios. We then consider the case of smooth losses, and show that combining our approach with optimistic online learning algorithms immediately yields a fast convergence rate of $O(L/T^{3/2}+\sigma/\sqrt{T})$ on $L$-smooth problems with $\sigma^2$ variance in the gradients. Finally, we provide a reduction that converts any adaptive online algorithm into one that obtains the optimal accelerated rate of $\tilde O(L/T^2 + \sigma/\sqrt{T})$, while still maintaining $\tilde O(1/\sqrt{T})$ convergence in the non-smooth setting. Importantly, these algorithms adapt to $L$ and $\sigma$ automatically: they do not need to know either to obtain these rates.
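To make the conversion concrete, here is a minimal Python sketch of one way it can be realized, assuming plain online gradient descent as the black-box base learner and uniform averaging weights; the names anytime_online_to_batch, stochastic_grad, and noisy_grad are placeholders for illustration, not from the paper's code. The key feedback loop is that gradients are queried at the running average $x_t$ of the learner's iterates rather than at the iterates $w_t$ themselves, so that each $x_t$ is a valid output at any time.

    import numpy as np

    def anytime_online_to_batch(stochastic_grad, w0, lr, T):
        """Minimal sketch of an anytime online-to-batch conversion.

        Uses plain online gradient descent as a stand-in for the
        black-box online learner, with uniform weights for the running
        average. Each round, the stochastic gradient is queried at the
        averaged point x_t, not at the learner's iterate w_t, and is
        fed back to the learner as its loss.
        """
        w = np.asarray(w0, dtype=float)  # the online learner's iterate w_t
        x = w.copy()                     # averaged point x_t; valid output at any round
        for t in range(1, T + 1):
            x += (w - x) / t             # x_t = ((t-1) * x_{t-1} + w_t) / t
            g = stochastic_grad(x)       # stochastic gradient evaluated at the average
            w = w - lr * g               # base learner's update (OGD here)
        return x

    # Hypothetical usage: noisy gradients of f(x) = ||x||^2 / 2.
    rng = np.random.default_rng(0)
    noisy_grad = lambda x: x + 0.1 * rng.standard_normal(x.shape)
    print(anytime_online_to_batch(noisy_grad, w0=np.ones(5), lr=0.1, T=1000))

Per the abstract, swapping the OGD step for an optimistic or adaptive online learner is what yields the faster rates on smooth problems; the sketch above only illustrates the basic conversion.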
Author Information
Ashok Cutkosky (Google)
Related Events (a corresponding poster, oral, or spotlight)
- 2019 Poster: Anytime Online-to-Batch, Optimism and Acceleration
  Fri. Jun 14th, 01:30 -- 04:00 AM, Pacific Ballroom #164
More from the Same Authors
- 2022: Optimal Parameter-free Online Learning with Switching Cost
  Zhiyu Zhang · Ashok Cutkosky · Ioannis Paschalidis
- 2023 Poster: Optimal Stochastic Non-smooth Non-convex Optimization through Online-to-Non-convex Conversion
  Ashok Cutkosky · Harsh Mehta · Francesco Orabona
- 2023 Poster: Unconstrained Online Learning with Unbounded Losses
  Andrew Jacobsen · Ashok Cutkosky
- 2023 Poster: Bandit Online Linear Optimization with Hints and Queries
  Aditya Bhaskara · Ashok Cutkosky · Ravi Kumar · Manish Purohit
- 2022 Poster: PDE-Based Optimal Strategy for Unconstrained Online Learning
  Zhiyu Zhang · Ashok Cutkosky · Ioannis Paschalidis
- 2022 Spotlight: PDE-Based Optimal Strategy for Unconstrained Online Learning
  Zhiyu Zhang · Ashok Cutkosky · Ioannis Paschalidis
- 2021 Poster: Robust Pure Exploration in Linear Bandits with Limited Budget
  Ayya Alieva · Ashok Cutkosky · Abhimanyu Das
- 2021 Poster: Dynamic Balancing for Model Selection in Bandits and RL
  Ashok Cutkosky · Christoph Dann · Abhimanyu Das · Claudio Gentile · Aldo Pacchiano · Manish Purohit
- 2021 Spotlight: Robust Pure Exploration in Linear Bandits with Limited Budget
  Ayya Alieva · Ashok Cutkosky · Abhimanyu Das
- 2021 Spotlight: Dynamic Balancing for Model Selection in Bandits and RL
  Ashok Cutkosky · Christoph Dann · Abhimanyu Das · Claudio Gentile · Aldo Pacchiano · Manish Purohit
- 2020 Poster: Parameter-free, Dynamic, and Strongly-Adaptive Online Learning
  Ashok Cutkosky
- 2020 Poster: Momentum Improves Normalized SGD
  Ashok Cutkosky · Harsh Mehta
- 2020 Poster: Online Learning with Imperfect Hints
  Aditya Bhaskara · Ashok Cutkosky · Ravi Kumar · Manish Purohit
- 2020 Tutorial: Parameter-free Online Optimization
  Francesco Orabona · Ashok Cutkosky
- 2019 Poster: Matrix-Free Preconditioning in Online Learning
  Ashok Cutkosky · Tamas Sarlos
- 2019 Oral: Matrix-Free Preconditioning in Online Learning
  Ashok Cutkosky · Tamas Sarlos
- 2019 Poster: Surrogate Losses for Online Learning of Stepsizes in Stochastic Non-Convex Optimization
  Zhenxun Zhuang · Ashok Cutkosky · Francesco Orabona
- 2019 Oral: Surrogate Losses for Online Learning of Stepsizes in Stochastic Non-Convex Optimization
  Zhenxun Zhuang · Ashok Cutkosky · Francesco Orabona