

Poster

When Are Linear Stochastic Bandits Attackable?

Huazheng Wang · Haifeng Xu · Hongning Wang

Hall E #635

Keywords: [ T: Online Learning and Bandits ] [ T: Social Aspects ] [ MISC: Online Learning, Active Learning and Bandits ]


Abstract: We study adversarial attacks on linear stochastic bandits: by manipulating the rewards, an adversary aims to control the behaviour of the bandit algorithm. Perhaps surprisingly, we first show that some attack goals can never be achieved. This is in sharp contrast to context-free stochastic bandits, and is intrinsically due to the correlation among arms in linear stochastic bandits. Motivated by this finding, this paper studies the attackability of a $k$-armed linear bandit environment. We first provide a complete necessary and sufficient characterization of attackability based on the geometry of the arms' context vectors. We then propose a two-stage attack method against LinUCB and Robust Phase Elimination. The method first determines whether the given environment is attackable; if it is, the method poisons the rewards to force the algorithm to pull a target arm a linear number of times at only a sublinear cost. Numerical experiments further validate the effectiveness and cost-efficiency of the proposed attack method.
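The two-stage idea described in the abstract (first check whether the environment is attackable, then poison rewards to force a target arm) can be illustrated with a toy simulation. The sketch below is not the paper's attack method: it is a minimal, hypothetical reward-poisoning loop against a standard LinUCB learner, in which an attacker, assumed to know the true arm means, pushes observed rewards of non-target arms below the target arm's mean by a margin. All names and parameters (`alpha`, `delta_margin`, the randomly generated environment) are illustrative assumptions.

```python
# Minimal, hypothetical sketch (NOT the paper's two-stage attack): a reward-poisoning
# loop against a standard LinUCB learner. The attacker shifts observed rewards of
# non-target arms downward so the target arm tends to look optimal to the learner.
import numpy as np

rng = np.random.default_rng(0)

d, k, T = 5, 8, 5000
alpha = 1.0            # LinUCB exploration weight (assumed)
delta_margin = 0.2     # attacker's margin below the target arm's mean (assumed)

# Random environment: unit-norm context vectors and a hidden parameter theta_star.
X = rng.normal(size=(k, d))
X /= np.linalg.norm(X, axis=1, keepdims=True)
theta_star = rng.normal(size=d)
theta_star /= np.linalg.norm(theta_star)
means = X @ theta_star
target = int(np.argmin(means))   # attack goal: force pulls of the worst arm

# LinUCB state (ridge-regularized design matrix and response vector).
A = np.eye(d)
b = np.zeros(d)

target_pulls, attack_cost = 0, 0.0
for t in range(T):
    theta_hat = np.linalg.solve(A, b)
    A_inv = np.linalg.inv(A)
    # Upper confidence bound: x^T theta_hat + alpha * sqrt(x^T A^{-1} x) per arm.
    ucb = X @ theta_hat + alpha * np.sqrt(np.einsum('ij,jk,ik->i', X, A_inv, X))
    arm = int(np.argmax(ucb))

    reward = means[arm] + 0.1 * rng.normal()          # true noisy reward
    if arm != target:
        # Poison: cap the observed reward below the target arm's mean by a margin.
        poisoned = min(reward, means[target] - delta_margin)
        attack_cost += abs(reward - poisoned)
        reward = poisoned
    else:
        target_pulls += 1

    # LinUCB update with the (possibly poisoned) reward.
    A += np.outer(X[arm], X[arm])
    b += reward * X[arm]

print(f"target pulled {target_pulls}/{T} times, total attack cost {attack_cost:.1f}")
```

Because all arms share the same underlying parameter, this naive poisoning can fail to make the target arm look optimal in the linear setting; whether any poisoning strategy can succeed, given the geometry of the context vectors, is exactly the attackability question the paper characterizes.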
