
PDE-Based Optimal Strategy for Unconstrained Online Learning
Zhiyu Zhang · Ashok Cutkosky · Ioannis Paschalidis

Wed Jul 20 03:30 PM -- 05:30 PM (PDT) @ Hall E #1319
Unconstrained Online Linear Optimization (OLO) is a practical problem setting for studying the training of machine learning models. Existing works have proposed a number of potential-based algorithms, but in general the design of these potential functions relies heavily on guesswork. To streamline this workflow, we present a framework that generates new potential functions by solving a Partial Differential Equation (PDE). Specifically, when losses are 1-Lipschitz, our framework produces a novel algorithm with anytime regret bound $C\sqrt{T}+||u||\sqrt{2T}[\sqrt{\log(1+||u||/C)}+2]$, where $C$ is a user-specified constant and $u$ is any comparator, unknown and unbounded a priori. Such a bound attains an optimal loss-regret trade-off without the impractical doubling trick. Moreover, a matching lower bound shows that the leading order term, including the constant multiplier $\sqrt{2}$, is tight. To our knowledge, the proposed algorithm is the first to achieve such optimalities.
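To make the potential-based template concrete, below is a minimal one-dimensional sketch of the generic workflow the abstract refers to: maintain the running sum of negative gradients, and play the derivative of a potential function evaluated at that sum. The specific potential used here, $V(t,s)=C\sqrt{t}\,\exp(s^2/2t)$, is a classic illustrative choice from the comparator-adaptive OLO literature, not the PDE-derived potential of this paper; all function names are hypothetical.

```python
import math

def potential(t, s, C=1.0):
    # Illustrative comparator-adaptive potential (NOT the paper's
    # PDE-derived potential): V(t, s) = C * sqrt(t) * exp(s^2 / (2t)).
    return C * math.sqrt(t) * math.exp(s * s / (2.0 * t))

def prediction(t, s, C=1.0, eps=1e-6):
    # Potential-based prediction: x_t = dV/ds at (t, S_{t-1}),
    # approximated by a central finite difference for simplicity.
    return (potential(t, s + eps, C) - potential(t, s - eps, C)) / (2 * eps)

# Simulated run against 1-Lipschitz linear losses g_t in [-1, 1].
S = 0.0           # running sum of negative gradients
total_loss = 0.0
for t in range(1, 101):
    x = prediction(t, S)       # play the potential's derivative
    g = (-1.0) ** t            # a simple alternating-gradient adversary
    total_loss += g * x        # incur linear loss <g_t, x_t>
    S -= g                     # update the sufficient statistic
```

The point of the sketch is the interface: the whole algorithm is determined by the choice of $V$, which is exactly why a principled way of generating potentials (here, by solving a PDE) replaces guesswork in the design loop.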

Author Information

Zhiyu Zhang (Boston University)

Hello! I am a PhD student at Boston University, advised by Prof. Yannis Paschalidis and Prof. Ashok Cutkosky. I am broadly interested in the theoretical aspects of machine learning, optimization and control theory. Specifically, I work on adaptive online learning, i.e., designing online decision making algorithms that optimally exploit problem structures.

Ashok Cutkosky (Boston University)
Ioannis Paschalidis (Boston University)
