Beating Stochastic and Adversarial Semi-bandits Optimally and Simultaneously
Julian Zimmert · Haipeng Luo · Chen-Yu Wei

Wed Jun 12th 11:40 AM -- 12:00 PM @ Seaside Ballroom

We develop the first general semi-bandit algorithm that simultaneously achieves $\mathcal{O}(\log T)$ regret in stochastic environments and $\mathcal{O}(\sqrt{T})$ regret in adversarial environments, without knowledge of the regime or the number of rounds $T$. The leading problem-dependent constants of our bounds are not only optimal in some worst-case sense studied previously, but also optimal for two concrete instances of semi-bandit problems. Our algorithm and analysis extend the recent work of Zimmert and Seldin (2019) for the special case of multi-armed bandits, but importantly require a novel hybrid regularizer designed specifically for semi-bandits. Experimental results on synthetic data show that our algorithm indeed performs well uniformly across environments. We finally provide a preliminary extension of our results to the full-bandit feedback setting.
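The paper's hybrid regularizer is not reproduced here, but the follow-the-regularized-leader (FTRL) template it builds on can be illustrated with the multi-armed-bandit special case the abstract refers to: Tsallis-INF (Zimmert & Seldin, 2019) with 1/2-Tsallis entropy and learning rate $\eta_t = 1/\sqrt{t}$. The sketch below is our own minimal rendering under those assumptions (function names are ours); the per-round distribution has the closed form $w_i = 4/(\eta_t(\hat{L}_i - x))^2$, with the normalizer $x$ found by Newton's method.

```python
import numpy as np

def tsallis_inf_weights(hat_L, eta, iters=50):
    """Solve the FTRL step with 1/2-Tsallis entropy: w_i = 4/(eta*(hat_L_i - x))^2,
    where the scalar x is chosen by Newton's method so that sum_i w_i = 1."""
    K = len(hat_L)
    x = np.min(hat_L) - 2.0 * np.sqrt(K) / eta   # start strictly below the root
    for _ in range(iters):
        w = 4.0 / (eta * (hat_L - x)) ** 2
        # Newton step on f(x) = sum_i w_i(x) - 1, using f'(x) = eta * sum_i w_i^{3/2}
        x -= (w.sum() - 1.0) / (eta * (w ** 1.5).sum())
        x = min(x, np.min(hat_L) - 1e-12)        # keep x strictly below min hat_L
    return 4.0 / (eta * (hat_L - x)) ** 2

def run_tsallis_inf(loss_fn, K, T, rng):
    """Play Tsallis-INF for T rounds; loss_fn(t, arm) returns a loss in [0, 1]."""
    hat_L = np.zeros(K)                # cumulative importance-weighted loss estimates
    plays = np.zeros(K, dtype=int)
    for t in range(1, T + 1):
        w = tsallis_inf_weights(hat_L, eta=1.0 / np.sqrt(t))
        arm = rng.choice(K, p=w / w.sum())
        plays[arm] += 1
        hat_L[arm] += loss_fn(t, arm) / w[arm]   # unbiased importance-weighted estimate
    return plays
```

With equal cumulative losses the Newton solver returns the uniform distribution, e.g. `tsallis_inf_weights(np.zeros(4), eta=1.0)` gives each arm weight 0.25. The semi-bandit algorithm of the paper replaces the Tsallis-entropy regularizer with the hybrid regularizer and the simplex with the relevant combinatorial polytope.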

Author Information

Julian Zimmert (University of Copenhagen)
Haipeng Luo (University of Southern California)
Chen-Yu Wei (University of Southern California)
