

Poster

Online Linear Regression in Dynamic Environments via Discounting

Andrew Jacobsen · Ashok Cutkosky

Hall C 4-9 #2006

Abstract: We develop algorithms for online linear regression which achieve optimal static and dynamic regret guarantees *even in the complete absence of prior knowledge*. We present a novel analysis showing that a discounted variant of the Vovk-Azoury-Warmuth forecaster achieves dynamic regret of the form $R_{T}(u)\le O\big(d\log(T)\vee\sqrt{dP_{T}^{\gamma}(u)\,T}\big)$, where $P_{T}^{\gamma}(u)$ is a measure of variability of the comparator sequence, and show that the discount factor achieving this result can be learned on-the-fly. We show that this result is optimal by providing a matching lower bound. We also extend our results to *strongly-adaptive* guarantees which hold over every sub-interval $[a,b]\subseteq[1,T]$ simultaneously.
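The sketch below illustrates what a discounted Vovk-Azoury-Warmuth forecaster might look like in practice. It is a minimal reconstruction from the standard VAW update with a geometric discount applied to past statistics, not the authors' exact algorithm: the fixed discount `gamma` and regularizer `lam` are hypothetical parameters here, and the on-the-fly learning of the discount factor described in the abstract is omitted.

```python
# Minimal sketch of a discounted VAW forecaster (illustrative only).
# Assumes the update w_t = (lam*I + sum_{s<t} gamma^{t-s} x_s x_s^T + x_t x_t^T)^{-1}
#                          * (sum_{s<t} gamma^{t-s} y_s x_s);
# the paper's actual algorithm additionally tunes gamma online.
import numpy as np


class DiscountedVAW:
    def __init__(self, dim: int, gamma: float = 0.99, lam: float = 1.0):
        self.gamma = gamma              # discount factor on past observations
        self.lam = lam                  # ridge regularization strength
        self.A = np.zeros((dim, dim))   # discounted sum of outer products x_s x_s^T
        self.b = np.zeros(dim)          # discounted sum of y_s x_s

    def predict(self, x: np.ndarray) -> float:
        # VAW includes the current feature vector x_t in the covariance
        # term before predicting, which is what distinguishes it from
        # plain (forward) ridge regression.
        A_t = self.gamma * self.A + np.outer(x, x) + self.lam * np.eye(len(x))
        w_t = np.linalg.solve(A_t, self.gamma * self.b)
        return float(w_t @ x)

    def update(self, x: np.ndarray, y: float) -> None:
        # Discount the old statistics, then fold in the new observation.
        self.A = self.gamma * self.A + np.outer(x, x)
        self.b = self.gamma * self.b + y * x


# Usage: predict, observe the label, then update.
rng = np.random.default_rng(0)
learner = DiscountedVAW(dim=3)
for t in range(100):
    x = rng.normal(size=3)
    y_hat = learner.predict(x)
    y = float(x @ np.array([1.0, -2.0, 0.5])) + 0.1 * rng.normal()
    learner.update(x, y)
```

Discounting the covariance and cross-moment statistics lets the forecaster track a drifting comparator sequence, which is the mechanism behind the dynamic regret bound stated in the abstract.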
