Bandits with Adversarial Scaling

Thodoris Lykouris · Vahab Mirrokni · Renato Leme

Keywords: [ Game Theory and Mechanism Design ] [ Online Learning / Bandits ] [ Online Learning, Active Learning, and Bandits ]

[ Abstract ]
Tue 14 Jul 11 a.m. PDT — 11:45 a.m. PDT
Tue 14 Jul 10 p.m. PDT — 10:45 p.m. PDT


We study "adversarial scaling", a multi-armed bandit model where rewards have a stochastic and an adversarial component. Our model captures display advertising, where the "click-through rate" can be decomposed into a (fixed across time) arm-quality component and a non-stochastic user-relevance component (fixed across arms). Despite the relative stochasticity of our model, we demonstrate two settings where most bandit algorithms suffer. On the positive side, we show that two algorithms, one from the action-elimination family and one from the mirror-descent family, are adaptive enough to be robust to adversarial scaling. Our results shed light on the robustness of adaptive parameter selection in stochastic bandits, which may be of independent interest.
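The reward decomposition described above can be sketched in a few lines: the expected reward of arm i at round t is the product of a fixed arm quality and an adversarially chosen per-round scale. The function and variable names below are illustrative assumptions, not taken from the paper.

```python
import random

def adversarial_scaling_rewards(qualities, scales, seed=0):
    """Minimal sketch of the adversarial-scaling reward model.

    The expected reward of arm i at round t is qualities[i] * scales[t]:
    qualities are fixed across time (arm quality), scales are fixed
    across arms (adversarial user relevance). Names are hypothetical.
    """
    rng = random.Random(seed)
    # Bernoulli reward with mean q * c for every (round, arm) pair.
    return [[1 if rng.random() < q * c else 0 for q in qualities]
            for c in scales]

# Example: two arms, an adversary that zeroes out early rounds, so
# confidence intervals built on raw round counts can be misleading.
qualities = [0.9, 0.5]
scales = [0.0] * 5 + [1.0] * 5  # no relevant users, then fully relevant
rewards = adversarial_scaling_rewards(qualities, scales)
```

In the example, every arm earns zero reward in the first five rounds regardless of quality, which hints at why algorithms with non-adaptive parameter schedules can suffer under this model.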
