Poster
Unconstrained Online Learning with Unbounded Losses
Andrew Jacobsen · Ashok Cutkosky
Algorithms for online learning typically require one or more boundedness assumptions: that the domain is bounded, that the losses are Lipschitz, or both. In this paper, we develop a new setting for online learning with unbounded domains and non-Lipschitz losses. For this setting we provide an algorithm which guarantees $R_{T}(u)\le \tilde O(G\|u\|\sqrt{T}+L\|u\|^{2}\sqrt{T})$ regret on any problem where the subgradients satisfy $\|g_{t}\|\le G+L\|w_{t}\|$, and show that this bound is unimprovable without further assumptions. We leverage this algorithm to develop new saddle-point optimization algorithms that converge in duality gap in unbounded domains, even in the absence of meaningful curvature. Finally, we provide the first algorithm achieving non-trivial dynamic regret in an unbounded domain for non-Lipschitz losses, as well as a matching lower bound. The regret of our dynamic regret algorithm automatically improves to a novel $L^{*}$ bound when the losses are smooth.
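To fix notation, here is a minimal sketch of the standard unconstrained online convex optimization protocol the abstract assumes: at each round $t$ the learner plays $w_{t}\in\mathbb{R}^{d}$, receives a convex loss $\ell_{t}$, and observes a subgradient $g_{t}\in\partial\ell_{t}(w_{t})$. Regret against an arbitrary comparator $u$ is
$$R_{T}(u)=\sum_{t=1}^{T}\big(\ell_{t}(w_{t})-\ell_{t}(u)\big)\le\sum_{t=1}^{T}\langle g_{t},\,w_{t}-u\rangle,$$
where the inequality follows from convexity. The only restriction on the losses is the growth condition $\|g_{t}\|\le G+L\|w_{t}\|$: taking $L=0$ recovers the usual $G$-Lipschitz setting, while $L>0$ admits, for example, quadratic losses over an unbounded domain.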
Author Information
Andrew Jacobsen (University of Alberta)
I am made completely out of human body parts
Ashok Cutkosky (Boston University)
More from the Same Authors
- 2022: Optimal Parameter-free Online Learning with Switching Cost (Zhiyu Zhang · Ashok Cutkosky · Ioannis Paschalidis)
- 2023 Poster: Optimal Stochastic Non-smooth Non-convex Optimization through Online-to-Non-convex Conversion (Ashok Cutkosky · Harsh Mehta · Francesco Orabona)
- 2023 Poster: Bandit Online Linear Optimization with Hints and Queries (Aditya Bhaskara · Ashok Cutkosky · Ravi Kumar · Manish Purohit)
- 2022 Poster: PDE-Based Optimal Strategy for Unconstrained Online Learning (Zhiyu Zhang · Ashok Cutkosky · Ioannis Paschalidis)
- 2022 Spotlight: PDE-Based Optimal Strategy for Unconstrained Online Learning (Zhiyu Zhang · Ashok Cutkosky · Ioannis Paschalidis)
- 2021 Poster: Robust Pure Exploration in Linear Bandits with Limited Budget (Ayya Alieva · Ashok Cutkosky · Abhimanyu Das)
- 2021 Poster: Dynamic Balancing for Model Selection in Bandits and RL (Ashok Cutkosky · Christoph Dann · Abhimanyu Das · Claudio Gentile · Aldo Pacchiano · Manish Purohit)
- 2021 Spotlight: Robust Pure Exploration in Linear Bandits with Limited Budget (Ayya Alieva · Ashok Cutkosky · Abhimanyu Das)
- 2021 Spotlight: Dynamic Balancing for Model Selection in Bandits and RL (Ashok Cutkosky · Christoph Dann · Abhimanyu Das · Claudio Gentile · Aldo Pacchiano · Manish Purohit)
- 2020 Poster: Parameter-free, Dynamic, and Strongly-Adaptive Online Learning (Ashok Cutkosky)
- 2020 Poster: Momentum Improves Normalized SGD (Ashok Cutkosky · Harsh Mehta)
- 2020 Poster: Online Learning with Imperfect Hints (Aditya Bhaskara · Ashok Cutkosky · Ravi Kumar · Manish Purohit)
- 2020 Tutorial: Parameter-free Online Optimization (Francesco Orabona · Ashok Cutkosky)
- 2019 Poster: Matrix-Free Preconditioning in Online Learning (Ashok Cutkosky · Tamas Sarlos)
- 2019 Poster: Anytime Online-to-Batch, Optimism and Acceleration (Ashok Cutkosky)
- 2019 Oral: Anytime Online-to-Batch, Optimism and Acceleration (Ashok Cutkosky)
- 2019 Oral: Matrix-Free Preconditioning in Online Learning (Ashok Cutkosky · Tamas Sarlos)
- 2019 Poster: Surrogate Losses for Online Learning of Stepsizes in Stochastic Non-Convex Optimization (Zhenxun Zhuang · Ashok Cutkosky · Francesco Orabona)
- 2019 Oral: Surrogate Losses for Online Learning of Stepsizes in Stochastic Non-Convex Optimization (Zhenxun Zhuang · Ashok Cutkosky · Francesco Orabona)