

Spotlight

MOTS: Minimax Optimal Thompson Sampling

Tianyuan Jin · Pan Xu · Jieming Shi · Xiaokui Xiao · Quanquan Gu

[ Livestream: Visit Online Learning 2 ] [ Paper ]

Abstract: Thompson sampling is one of the most widely used algorithms in many online decision problems, owing to its simplicity of implementation and superior empirical performance over other state-of-the-art methods. Despite its popularity and empirical success, it has remained an open problem whether Thompson sampling can achieve the minimax optimal regret $O(\sqrt{TK})$ for $K$-armed bandit problems, where $T$ is the total time horizon. In this paper, we fill this long-standing gap by proposing a new Thompson sampling algorithm called MOTS that adaptively truncates the sampling result of the chosen arm at each time step. We prove that this simple variant of Thompson sampling achieves the minimax optimal regret bound $O(\sqrt{TK})$ for finite time horizon $T$, as well as the asymptotically optimal regret bound as $T$ grows to infinity. This is the first time that minimax optimality for multi-armed bandit problems has been attained by a Thompson sampling type of algorithm.
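To make the truncation idea concrete, below is a minimal illustrative sketch of Thompson sampling on a K-armed bandit in which each arm's posterior sample is clipped at an upper-confidence-style cap before the arm is selected. This is only a sketch in the spirit of the abstract: the Gaussian posterior, the parameter `alpha`, and the exact form of the cap are assumptions for illustration, not the paper's precise MOTS construction.

```python
# Illustrative sketch: Thompson sampling with clipped (truncated) posterior
# samples, in the spirit of the MOTS idea. The posterior form and the
# clipping threshold below are assumptions, not the paper's exact algorithm.
import numpy as np

def clipped_thompson_sampling(reward_fn, K, T, alpha=4.0, seed=None):
    """Run T rounds on a K-armed bandit; reward_fn(arm) returns a reward in [0, 1]."""
    rng = np.random.default_rng(seed)
    counts = np.zeros(K)   # number of pulls per arm
    means = np.zeros(K)    # empirical mean reward per arm

    # Pull every arm once to initialize the statistics.
    for a in range(K):
        counts[a] = 1
        means[a] = reward_fn(a)

    for t in range(K, T):
        # Draw a Gaussian posterior sample for each arm (assumed posterior).
        samples = rng.normal(means, 1.0 / np.sqrt(counts))
        # Clip each sample at an upper-confidence-style cap; designing this
        # cap carefully is the key ingredient described in the abstract.
        log_term = np.log(np.maximum(T / (K * counts), np.e))
        cap = means + np.sqrt(alpha * log_term / counts)
        samples = np.minimum(samples, cap)
        # Play the arm with the largest (clipped) sample and update its stats.
        a = int(np.argmax(samples))
        r = reward_fn(a)
        counts[a] += 1
        means[a] += (r - means[a]) / counts[a]

    return means, counts
```

Intuitively, clipping prevents a suboptimal arm's posterior sample from occasionally landing far above its plausible mean, which is the kind of over-optimism that can inflate finite-horizon regret for vanilla Thompson sampling; how the cap is set so that optimality is preserved as $T \to \infty$ is worked out in the paper.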
