Poster
Beating Stochastic and Adversarial Semi-bandits Optimally and Simultaneously
Julian Zimmert · Haipeng Luo · Chen-Yu Wei
Pacific Ballroom #126
Keywords: [ Bandits ] [ Online Learning ]
Abstract:
We develop the first general semi-bandit algorithm that simultaneously achieves $\mathcal{O}(\log T)$ regret for stochastic environments
and $\mathcal{O}(\sqrt{T})$ regret for adversarial environments
without knowledge of the regime or the number of rounds $T$.
The leading problem-dependent constants of our bounds are not only optimal in some worst-case sense studied previously,
but also optimal for two concrete instances of semi-bandit problems.
Our algorithm and analysis extend the recent work of Zimmert & Seldin (2019) for the special case of multi-armed bandits,
but importantly require a novel hybrid regularizer designed specifically for the semi-bandit problem.
Experimental results on synthetic data show that our algorithm indeed performs well uniformly over different environments.
We finally provide a preliminary extension of our results to the full bandit feedback setting.
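
To make the idea of follow-the-regularized-leader (FTRL) with a hybrid regularizer more concrete, the sketch below is a minimal, illustrative Python example. The specific regularizer form (a negative-square-root term plus a complementary log-entropy term), the 1/sqrt(t) learning-rate schedule, the relaxed decision set, and the crude sampling and loss-estimation steps are all assumptions made for illustration only; they are not taken from the paper and should not be read as its exact algorithm.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative-only sketch of FTRL with a hybrid regularizer for a
# semi-bandit-style problem (pick m of d arms, observe only chosen losses).
# All specific forms and constants below are assumptions, not the paper's.

def hybrid_regularizer(x, gamma=1.0):
    """Assumed hybrid regularizer: -sqrt term plus complementary log-entropy."""
    x = np.clip(x, 1e-9, 1 - 1e-9)
    return np.sum(-np.sqrt(x) + gamma * (1.0 - x) * np.log(1.0 - x))

def ftrl_step(cum_loss_est, t, m, d, gamma=1.0):
    """Minimize <L_hat, x> + sqrt(t) * Psi(x) over {x in [0,1]^d : sum x = m}."""
    eta_inv = np.sqrt(t)  # ~1/sqrt(t) learning-rate schedule (an assumption)

    def objective(x):
        return cum_loss_est @ x + eta_inv * hybrid_regularizer(x, gamma)

    x0 = np.full(d, m / d)
    cons = ({'type': 'eq', 'fun': lambda x: np.sum(x) - m},)
    bounds = [(1e-6, 1 - 1e-6)] * d
    res = minimize(objective, x0, bounds=bounds, constraints=cons)
    return res.x

# Toy loop: the sampling step is a crude stand-in (a proper scheme would
# round the fractional point x while preserving its marginals), and the
# loss estimates are standard importance-weighted estimates.
rng = np.random.default_rng(0)
d, m, T = 5, 2, 200
true_means = rng.uniform(0.2, 0.8, size=d)
cum_loss_est = np.zeros(d)
for t in range(1, T + 1):
    x = ftrl_step(cum_loss_est, t, m, d)
    chosen = np.argsort(rng.uniform(size=d) - x)[:m]  # crude sampling stand-in
    losses = rng.binomial(1, true_means)              # stochastic toy losses
    for i in chosen:
        cum_loss_est[i] += losses[i] / max(x[i], 1e-6)  # importance weighting
```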