Poster
PDE-Based Optimal Strategy for Unconstrained Online Learning
Zhiyu Zhang · Ashok Cutkosky · Ioannis Paschalidis
Unconstrained Online Linear Optimization (OLO) is a practical problem setting for studying the training of machine learning models. Existing works have proposed a number of potential-based algorithms, but in general the design of these potential functions relies heavily on guessing. To streamline this workflow, we present a framework that generates new potential functions by solving a Partial Differential Equation (PDE). Specifically, when losses are 1-Lipschitz, our framework produces a novel algorithm with anytime regret bound $C\sqrt{T}+\|u\|\sqrt{2T}\left[\sqrt{\log(1+\|u\|/C)}+2\right]$, where $C$ is a user-specified constant and $u$ is any comparator that is unknown and unbounded a priori. Such a bound attains an optimal loss-regret trade-off without the impractical doubling trick. Moreover, a matching lower bound shows that the leading-order term, including the constant multiplier $\sqrt{2}$, is tight. To our knowledge, the proposed algorithm is the first to achieve such optimalities.
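The "potential-based" algorithms mentioned in the abstract share one template: maintain the negative sum of past subgradients, and play the derivative of a potential function evaluated at that sum. The paper's contribution is deriving the potential by solving a PDE rather than guessing it. The sketch below illustrates only this generic template in one dimension; the potential `V`, the constant `eps` (playing the role of $C$), and the simulated $\pm 1$ gradients are illustrative assumptions, using a classical exponential potential from the parameter-free literature rather than the paper's actual PDE-derived solution.

```python
import numpy as np

# Minimal sketch of the potential-based template for unconstrained 1D OLO.
# Hypothetical illustration: V below is a classical parameter-free potential,
# NOT the paper's PDE-derived one; `eps` stands in for the user constant C.

def potential(t, S, eps=1.0):
    """Illustrative potential V(t, S) = eps * sqrt(t) * (exp(S^2 / (2t)) - 1)."""
    return eps * np.sqrt(t) * (np.exp(S ** 2 / (2.0 * t)) - 1.0)

def predict(t, S, eps=1.0):
    """Play x_t = dV/dS(t, S): the derivative of the potential,
    evaluated at the negative cumulative subgradient S."""
    return eps * (S / np.sqrt(t)) * np.exp(S ** 2 / (2.0 * t))

rng = np.random.default_rng(0)
S = 0.0           # negative sum of past subgradients
total_loss = 0.0
for t in range(1, 1001):
    x_t = predict(t, S)
    g_t = rng.choice([-1.0, 1.0])   # simulated 1-Lipschitz linear losses
    total_loss += g_t * x_t
    S -= g_t
print(f"cumulative linear loss after 1000 rounds: {total_loss:.3f}")
```

Swapping in a different potential only changes `potential`/`predict`; the update loop is unchanged, which is what makes the PDE-based derivation of new potentials a drop-in improvement.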
Author Information
Zhiyu Zhang (Boston University)
Hello! I am a PhD student at Boston University, advised by Prof. Yannis Paschalidis and Prof. Ashok Cutkosky. I am broadly interested in the theoretical aspects of machine learning, optimization, and control theory. Specifically, I work on adaptive online learning, i.e., designing online decision-making algorithms that optimally exploit problem structure.
Ashok Cutkosky (Boston University)
Ioannis Paschalidis (Boston University)
Related Events (a corresponding poster, oral, or spotlight)
- 2022 Spotlight: PDE-Based Optimal Strategy for Unconstrained Online Learning
  Wed. Jul 20th, 06:25 -- 06:30 PM, Room 327 - 329
More from the Same Authors
- 2022: Optimal Parameter-free Online Learning with Switching Cost
  Zhiyu Zhang · Ashok Cutkosky · Ioannis Paschalidis
- 2023 Poster: Optimal Stochastic Non-smooth Non-convex Optimization through Online-to-Non-convex Conversion
  Ashok Cutkosky · Harsh Mehta · Francesco Orabona
- 2023 Poster: Unconstrained Online Learning with Unbounded Losses
  Andrew Jacobsen · Ashok Cutkosky
- 2023 Poster: Bandit Online Linear Optimization with Hints and Queries
  Aditya Bhaskara · Ashok Cutkosky · Ravi Kumar · Manish Purohit
- 2021 Poster: Robust Pure Exploration in Linear Bandits with Limited Budget
  Ayya Alieva · Ashok Cutkosky · Abhimanyu Das
- 2021 Poster: Dynamic Balancing for Model Selection in Bandits and RL
  Ashok Cutkosky · Christoph Dann · Abhimanyu Das · Claudio Gentile · Aldo Pacchiano · Manish Purohit
- 2021 Spotlight: Robust Pure Exploration in Linear Bandits with Limited Budget
  Ayya Alieva · Ashok Cutkosky · Abhimanyu Das
- 2021 Spotlight: Dynamic Balancing for Model Selection in Bandits and RL
  Ashok Cutkosky · Christoph Dann · Abhimanyu Das · Claudio Gentile · Aldo Pacchiano · Manish Purohit
- 2020 Poster: Parameter-free, Dynamic, and Strongly-Adaptive Online Learning
  Ashok Cutkosky
- 2020 Poster: Momentum Improves Normalized SGD
  Ashok Cutkosky · Harsh Mehta
- 2020 Poster: Online Learning with Imperfect Hints
  Aditya Bhaskara · Ashok Cutkosky · Ravi Kumar · Manish Purohit
- 2020 Tutorial: Parameter-free Online Optimization
  Francesco Orabona · Ashok Cutkosky
- 2019 Poster: Matrix-Free Preconditioning in Online Learning
  Ashok Cutkosky · Tamas Sarlos
- 2019 Poster: Anytime Online-to-Batch, Optimism and Acceleration
  Ashok Cutkosky
- 2019 Oral: Anytime Online-to-Batch, Optimism and Acceleration
  Ashok Cutkosky
- 2019 Oral: Matrix-Free Preconditioning in Online Learning
  Ashok Cutkosky · Tamas Sarlos
- 2019 Poster: Surrogate Losses for Online Learning of Stepsizes in Stochastic Non-Convex Optimization
  Zhenxun Zhuang · Ashok Cutkosky · Francesco Orabona
- 2019 Oral: Surrogate Losses for Online Learning of Stepsizes in Stochastic Non-Convex Optimization
  Zhenxun Zhuang · Ashok Cutkosky · Francesco Orabona