Oral
Garbage In, Reward Out: Bootstrapping Exploration in Multi-Armed Bandits
Branislav Kveton · Csaba Szepesvari · Sharan Vaswani · Zheng Wen · Tor Lattimore · Mohammad Ghavamzadeh

Wed Jun 12 11:35 AM -- 11:40 AM (PDT) @ Seaside Ballroom
We propose a bandit algorithm that explores by randomizing its history of observations. The algorithm estimates the value of each arm from a non-parametric bootstrap sample of its history, which is augmented with pseudo observations. Our novel design of pseudo observations guarantees that the bootstrap estimates are optimistic with a high probability. We call our algorithm Giro, which stands for garbage in, reward out. We analyze Giro in a Bernoulli bandit and prove a $O(K \Delta^{-1} \log n)$ bound on its $n$-round regret, where $K$ is the number of arms and $\Delta$ is the difference in the expected rewards of the optimal arm and the best suboptimal arm. The key advantage of our exploration design is that it can be easily applied to structured problems. To show this, we propose contextual Giro with an arbitrary non-linear reward generalization model. We evaluate Giro and its contextual variant on multiple synthetic and real-world problems, and observe that Giro performs well.
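The following is a minimal sketch of the Giro idea for a Bernoulli bandit, based only on the abstract above: each arm's value is estimated from a bootstrap resample of its history, and the history is padded with pseudo observations of 0 and 1 after every pull. It is not the authors' implementation; the function name giro_bernoulli, the pseudo-observation count a, and the simulation helpers are illustrative assumptions.

```python
import numpy as np

def giro_bernoulli(arm_means, n_rounds, a=1, seed=0):
    """Sketch of Giro on a simulated Bernoulli bandit.

    arm_means : true mean rewards (used only to simulate feedback)
    a         : assumed number of pseudo observations of each value
                (0 and 1) appended to an arm's history after every pull
    """
    rng = np.random.default_rng(seed)
    K = len(arm_means)
    histories = [[] for _ in range(K)]   # augmented reward histories
    rewards = np.zeros(n_rounds)

    for t in range(n_rounds):
        estimates = np.zeros(K)
        for k in range(K):
            if not histories[k]:
                estimates[k] = np.inf    # pull each arm at least once
            else:
                # non-parametric bootstrap: resample the augmented
                # history with replacement and take its mean
                sample = rng.choice(histories[k], size=len(histories[k]), replace=True)
                estimates[k] = sample.mean()

        arm = int(np.argmax(estimates))
        r = float(rng.random() < arm_means[arm])   # simulated Bernoulli reward
        rewards[t] = r

        # garbage in: store the observed reward together with
        # a pseudo observations of 0 and a pseudo observations of 1
        histories[arm].extend([r] + [0.0] * a + [1.0] * a)

    return rewards

# usage: average reward on a 3-armed Bernoulli bandit
rewards = giro_bernoulli([0.3, 0.5, 0.7], n_rounds=2000)
print("average reward:", rewards.mean())
```

The pseudo observations inflate the variance of the bootstrap estimate, which is what makes the estimate optimistic with high probability and drives exploration without any explicit confidence bound or posterior.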

Author Information

Branislav Kveton (Google Research)
Csaba Szepesvari (DeepMind/University of Alberta)
Sharan Vaswani (Mila, University of Montreal)
Zheng Wen (Adobe Research)
Tor Lattimore (DeepMind)
Mohammad Ghavamzadeh (Facebook AI Research)
