

Poster

On Thompson Sampling with Langevin Algorithms

Eric Mazumdar · Aldo Pacchiano · Yian Ma · Michael Jordan · Peter Bartlett

Keywords: [ Online Learning / Bandits ] [ Online Learning, Active Learning, and Bandits ]


Abstract:

Thompson sampling for multi-armed bandit problems is known to enjoy favorable performance in both theory and practice, though it suffers from a significant computational limitation: it requires samples from posterior distributions at every iteration. To address this issue, we propose two Markov Chain Monte Carlo (MCMC) methods tailored to Thompson sampling. We construct quickly converging Langevin algorithms to generate approximate samples with accuracy guarantees, and we leverage novel posterior concentration rates to analyze the regret of the resulting approximate Thompson sampling algorithm. Further, we specify the hyperparameters for the MCMC procedure that guarantee optimal instance-dependent frequentist regret while keeping computational complexity low. In particular, our algorithms take advantage of both posterior concentration and a sample reuse mechanism to ensure that only a constant number of iterations and a constant amount of data are needed in each round. The resulting approximate Thompson sampling algorithm has logarithmic regret, and its computational complexity does not scale with the time horizon of the algorithm.
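To make the recipe concrete, here is a minimal sketch of approximate Thompson sampling with an unadjusted Langevin algorithm (ULA), assuming a k-armed Gaussian bandit with unit-variance rewards and a standard normal prior on each arm mean. The function names (`ula_sample`, `approx_thompson`), the step-size schedule, and the fixed iteration count are illustrative choices, not the paper's exact prescription; the warm start that initializes each chain at the previous round's sample stands in for the sample reuse mechanism described in the abstract.

```python
import numpy as np


def ula_sample(theta0, grad_log_post, step, n_steps, rng):
    """Unadjusted Langevin algorithm (ULA): a constant number of noisy
    gradient steps on the log-posterior yields an approximate sample."""
    theta = theta0
    for _ in range(n_steps):
        theta = theta + step * grad_log_post(theta) + np.sqrt(2.0 * step) * rng.normal()
    return theta


def approx_thompson(true_means, horizon, n_steps=25, seed=0):
    """Approximate Thompson sampling for a Gaussian bandit (hypothetical
    setup): N(0, 1) prior on each arm mean, unit-variance rewards."""
    rng = np.random.default_rng(seed)
    k = len(true_means)
    counts = np.zeros(k)   # pulls per arm
    sums = np.zeros(k)     # cumulative reward per arm (sufficient statistic)
    # Sample reuse via warm starts: each round's Langevin chain is
    # initialized at the previous round's approximate sample.
    theta = rng.normal(size=k)
    for _ in range(horizon):
        for a in range(k):
            n, s = counts[a], sums[a]
            # The exact posterior is N(s / (n + 1), 1 / (n + 1)), so
            # grad log posterior(th) = s - (n + 1) * th.
            grad = lambda th, n=n, s=s: s - (n + 1.0) * th
            # Shrink the step size as the posterior concentrates to keep
            # the discretized Langevin dynamics stable.
            step = 0.5 / (n + 1.0)
            theta[a] = ula_sample(theta[a], grad, step, n_steps, rng)
        arm = int(np.argmax(theta))          # act greedily on posterior draws
        reward = rng.normal(true_means[arm], 1.0)
        counts[arm] += 1
        sums[arm] += reward
    return counts


if __name__ == "__main__":
    pulls = approx_thompson([0.2, 0.5, 0.9], horizon=1000)
    print("pulls per arm:", pulls)   # the best arm should dominate
```

With the step size 0.5 / (n + 1), each Langevin update contracts the chain toward the posterior mean by a factor of 0.5, so a constant number of warm-started steps per round suffices; this echoes the abstract's claim that per-round computation need not grow with the time horizon.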
