## PDE-Based Optimal Strategy for Unconstrained Online Learning

### Zhiyu Zhang · Ashok Cutkosky · Ioannis Paschalidis

##### Hall E #1319

Keywords: [ T: Online Learning and Bandits ]

Wed 20 Jul 3:30 p.m. PDT — 5:30 p.m. PDT

Spotlight presentation: T: Online Learning and Bandits/Learning Theory
Wed 20 Jul 10:15 a.m. PDT — 11:45 a.m. PDT

Abstract: Unconstrained Online Linear Optimization (OLO) is a practical problem setting for studying the training of machine learning models. Existing works have proposed a number of potential-based algorithms, but the design of these potential functions generally relies on guesswork. To streamline this workflow, we present a framework that generates new potential functions by solving a Partial Differential Equation (PDE). Specifically, when losses are 1-Lipschitz, our framework produces a novel algorithm with the anytime regret bound $C\sqrt{T}+\|u\|\sqrt{2T}\left[\sqrt{\log(1+\|u\|/C)}+2\right]$, where $C$ is a user-specified constant and $u$ is any comparator that is unknown and unbounded a priori. Such a bound attains an optimal loss-regret trade-off without the impractical doubling trick. Moreover, a matching lower bound shows that the leading-order term, including the constant multiplier $\sqrt{2}$, is tight. To our knowledge, the proposed algorithm is the first to achieve both of these optimality properties.
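To make the "potential-based algorithm" workflow referenced in the abstract concrete, below is a minimal one-dimensional sketch of the generic recipe: maintain the cumulative negative gradient as a sufficient statistic and play a (discrete) derivative of a potential function evaluated there. The exponential-type potential, the function names (`potential`, `prediction`, `run_olo`), and the parameter `eps` are illustrative assumptions for this sketch; this is not the paper's PDE-derived potential or its exact algorithm.

```python
import numpy as np

def potential(t, theta, eps=1.0):
    """Illustrative potential for 1-D unconstrained OLO.

    NOTE: a standard exponential-type potential from the potential-based
    OLO literature, used only to show the mechanics of the framework; it is
    not the PDE-derived potential proposed in the paper."""
    return eps * np.sqrt(t) * (np.exp(theta ** 2 / (2.0 * t)) - 1.0)

def prediction(t, theta, eps=1.0):
    """Play a discrete derivative of the potential in theta, the negative
    cumulative gradient -- the generic recipe of potential-based OLO."""
    return 0.5 * (potential(t, theta + 1.0, eps) - potential(t, theta - 1.0, eps))

def run_olo(gradients, eps=1.0):
    """1-D unconstrained OLO loop with 1-Lipschitz losses, i.e. |g_t| <= 1.

    `gradients` is the sequence of loss gradients revealed by the adversary."""
    theta = 0.0          # negative cumulative gradient sum
    iterates = []
    for t, g in enumerate(gradients, start=1):
        x_t = prediction(t, theta, eps)   # unconstrained prediction
        iterates.append(x_t)
        theta -= g                        # update the sufficient statistic
    return iterates

# Example: constant gradients g_t = -1 push the learner toward larger iterates.
if __name__ == "__main__":
    xs = run_olo([-1.0] * 20, eps=1.0)
    print(xs[:5], xs[-1])
```

The paper's contribution, as described in the abstract, is to replace the guesswork in choosing such a potential with a potential obtained by solving a PDE, which yields the stated anytime regret bound.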
