Poster

Non-Exponentially Weighted Aggregation: Regret Bounds for Unbounded Loss Functions

Pierre Alquier

Virtual

Keywords: Bayesian Methods

Poster session: Wed 21 Jul 9 p.m. PDT — 11 p.m. PDT
 
Spotlight presentation: Reinforcement Learning Theory 3
Wed 21 Jul 5 p.m. PDT — 6 p.m. PDT

Abstract: We tackle the problem of online optimization with a general, possibly unbounded, loss function. It is well known that when the loss is bounded, the exponentially weighted aggregation strategy (EWA) leads to a regret in $\sqrt{T}$ after $T$ steps. In this paper, we study a generalized aggregation strategy, where the weights no longer depend exponentially on the losses. Our strategy is based on Follow The Regularized Leader (FTRL): we minimize the expected losses plus a regularizer, which here is a $\phi$-divergence. When the regularizer is the Kullback-Leibler divergence, we recover EWA as a special case. Using alternative divergences enables unbounded losses, at the cost of a worse regret bound in some cases.
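The abstract states that the Kullback-Leibler choice of regularizer recovers EWA in closed form: minimizing the cumulative losses plus a KL term over the simplex yields weights proportional to the prior times the exponential of the negative scaled losses. Below is a minimal Python sketch of that special case for a finite set of experts; the learning rate eta, the uniform prior, and the function name are illustrative assumptions, not details taken from the paper.

import numpy as np

def ewa_weights(cumulative_losses, eta, prior=None):
    # Solve the KL-regularized FTRL problem over the simplex:
    #   p = argmin_p <p, L> + (1/eta) * KL(p || prior),
    # whose closed-form solution is p_i proportional to prior_i * exp(-eta * L_i).
    L = np.asarray(cumulative_losses, dtype=float)
    if prior is None:
        prior = np.full(L.shape, 1.0 / L.size)  # uniform prior (illustrative choice)
    logits = np.log(prior) - eta * L
    logits -= logits.max()  # shift logits for numerical stability
    w = np.exp(logits)
    return w / w.sum()

# Usage: aggregation weights for 3 experts after observing their cumulative losses.
print(ewa_weights([1.2, 0.4, 2.0], eta=0.5))

Swapping the KL term for another $\phi$-divergence, as the paper proposes, changes this minimization and in general removes the exponential closed form; the sketch covers only the EWA special case named in the abstract.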
