Poster
Optimal Stochastic Non-smooth Non-convex Optimization through Online-to-Non-convex Conversion
Ashok Cutkosky · Harsh Mehta · Francesco Orabona
We present new algorithms for optimizing non-smooth, non-convex stochastic objectives based on a novel analysis technique. This improves the current best-known complexity for finding a $(\delta,\epsilon)$-stationary point from $O(\epsilon^{-4}\delta^{-1})$ stochastic gradient queries to $O(\epsilon^{-3}\delta^{-1})$, which we also show to be optimal. Our primary technique is a reduction from non-smooth non-convex optimization to *online learning*, after which our results follow from standard regret bounds in online learning. For *deterministic and second-order smooth* objectives, applying more advanced optimistic online learning techniques enables a new complexity of $O(\epsilon^{-1.5}\delta^{-0.5})$. Our improved non-smooth analysis also immediately recovers all optimal or best-known results for finding $\epsilon$-stationary points of smooth or second-order smooth objectives in both stochastic and deterministic settings.
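The conversion described in the abstract admits a compact description: run an online learner whose predictions are the update directions, and feed it linear losses built from stochastic gradients evaluated at randomized points along each step. Below is a minimal sketch of this online-to-non-convex idea, assuming projected online gradient descent on a ball of radius `D` as the online learner and a generic stochastic-gradient oracle `stoch_grad`; the function name, hyperparameters, and output convention are illustrative assumptions, not the paper's exact algorithm.

```python
# A minimal sketch of an online-to-non-convex conversion. `stoch_grad`, `D`,
# and `eta` are assumed/illustrative; this is not the paper's exact method.
import numpy as np

def o2nc_sketch(stoch_grad, x0, T, D, eta, seed=0):
    """Run T rounds of the conversion and return the visited iterates.

    stoch_grad: function x -> unbiased estimate of grad f(x)
    x0:         initial point (numpy array)
    D:          radius of the online learner's domain (step-size scale)
    eta:        learning rate of the online gradient descent learner
    """
    rng = np.random.default_rng(seed)
    x = x0.copy()
    delta = np.zeros_like(x0)  # the online learner's current prediction
    iterates = []
    for _ in range(T):
        # Query the stochastic gradient at a uniformly random point on the
        # segment between x and x + delta, so that the linearized online
        # losses <g, delta> track the true function decrease along the step.
        s = rng.uniform()
        g = stoch_grad(x + s * delta)
        x = x + delta  # take the step proposed by the online learner
        iterates.append(x.copy())
        # Online gradient descent on the linear loss <g, .>, projected back
        # onto the ball of radius D.
        delta = delta - eta * g
        norm = np.linalg.norm(delta)
        if norm > D:
            delta *= D / norm
    # The stationarity guarantee is stated for a randomly selected (averaged)
    # iterate; returning all iterates keeps this sketch simple.
    return iterates
```

The wrapper is agnostic to the learner: swapping in a different online algorithm (e.g., an optimistic method, as the abstract mentions for deterministic, second-order smooth objectives) changes the achievable rate while the conversion itself stays the same.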
Author Information
Ashok Cutkosky (Boston University)
Harsh Mehta (Google Research)
Francesco Orabona (KAUST)

Francesco Orabona is an Associate Professor at KAUST. His background covers both theoretical and practical aspects of machine learning and optimization. His current research interests lie in online learning and, more generally, the problem of designing and analyzing adaptive and parameter-free learning algorithms. He received his PhD in Electrical Engineering from the University of Genoa in 2007. He is (co)author of more than 60 peer-reviewed papers.
More from the Same Authors
- 2022: Optimal Parameter-free Online Learning with Switching Cost »
  Zhiyu Zhang · Ashok Cutkosky · Ioannis Paschalidis
- 2023: Improved Time-Uniform PAC-Bayes Bounds using Coin Betting »
  Kyoungseok Jang · Kwang-Sung Jun · Ilja Kuzborskij · Francesco Orabona
- 2023: Implicit Interpretation of Importance Weight Aware Updates »
  Keyi Chen · Francesco Orabona
- 2023 Poster: Unconstrained Online Learning with Unbounded Losses »
  Andrew Jacobsen · Ashok Cutkosky
- 2023 Poster: Generalized Implicit Follow-The-Regularized-Leader »
  Keyi Chen · Francesco Orabona
- 2023 Poster: Bandit Online Linear Optimization with Hints and Queries »
  Aditya Bhaskara · Ashok Cutkosky · Ravi Kumar · Manish Purohit
- 2022 Poster: PDE-Based Optimal Strategy for Unconstrained Online Learning »
  Zhiyu Zhang · Ashok Cutkosky · Ioannis Paschalidis
- 2022 Spotlight: PDE-Based Optimal Strategy for Unconstrained Online Learning »
  Zhiyu Zhang · Ashok Cutkosky · Ioannis Paschalidis
- 2021 Poster: A Second Look at Exponential and Cosine Step Sizes: Simplicity, Adaptivity, and Performance »
  Xiaoyu Li · Zhenxun Zhuang · Francesco Orabona
- 2021 Poster: Online Learning with Optimism and Delay »
  Genevieve Flaspohler · Francesco Orabona · Judah Cohen · Soukayna Mouatadid · Miruna Oprescu · Paulo Orenstein · Lester Mackey
- 2021 Spotlight: A Second Look at Exponential and Cosine Step Sizes: Simplicity, Adaptivity, and Performance »
  Xiaoyu Li · Zhenxun Zhuang · Francesco Orabona
- 2021 Spotlight: Online Learning with Optimism and Delay »
  Genevieve Flaspohler · Francesco Orabona · Judah Cohen · Soukayna Mouatadid · Miruna Oprescu · Paulo Orenstein · Lester Mackey
- 2021 Poster: Robust Pure Exploration in Linear Bandits with Limited Budget »
  Ayya Alieva · Ashok Cutkosky · Abhimanyu Das
- 2021 Poster: Dynamic Balancing for Model Selection in Bandits and RL »
  Ashok Cutkosky · Christoph Dann · Abhimanyu Das · Claudio Gentile · Aldo Pacchiano · Manish Purohit
- 2021 Spotlight: Robust Pure Exploration in Linear Bandits with Limited Budget »
  Ayya Alieva · Ashok Cutkosky · Abhimanyu Das
- 2021 Spotlight: Dynamic Balancing for Model Selection in Bandits and RL »
  Ashok Cutkosky · Christoph Dann · Abhimanyu Das · Claudio Gentile · Aldo Pacchiano · Manish Purohit
- 2020 Poster: Parameter-free, Dynamic, and Strongly-Adaptive Online Learning »
  Ashok Cutkosky
- 2020 Poster: Momentum Improves Normalized SGD »
  Ashok Cutkosky · Harsh Mehta
- 2020 Poster: Online Learning with Imperfect Hints »
  Aditya Bhaskara · Ashok Cutkosky · Ravi Kumar · Manish Purohit
- 2020 Tutorial: Parameter-free Online Optimization »
  Francesco Orabona · Ashok Cutkosky
- 2019 Poster: Matrix-Free Preconditioning in Online Learning »
  Ashok Cutkosky · Tamas Sarlos
- 2019 Poster: Anytime Online-to-Batch, Optimism and Acceleration »
  Ashok Cutkosky
- 2019 Oral: Anytime Online-to-Batch, Optimism and Acceleration »
  Ashok Cutkosky
- 2019 Oral: Matrix-Free Preconditioning in Online Learning »
  Ashok Cutkosky · Tamas Sarlos
- 2019 Poster: Surrogate Losses for Online Learning of Stepsizes in Stochastic Non-Convex Optimization »
  Zhenxun Zhuang · Ashok Cutkosky · Francesco Orabona
- 2019 Oral: Surrogate Losses for Online Learning of Stepsizes in Stochastic Non-Convex Optimization »
  Zhenxun Zhuang · Ashok Cutkosky · Francesco Orabona
- 2017 Poster: Efficient Online Bandit Multiclass Learning with $\tilde{O}(\sqrt{T})$ Regret »
  Alina Beygelzimer · Francesco Orabona · Chicheng Zhang
- 2017 Talk: Efficient Online Bandit Multiclass Learning with $\tilde{O}(\sqrt{T})$ Regret »
  Alina Beygelzimer · Francesco Orabona · Chicheng Zhang