Thompson Sampling Algorithms for Mean-Variance Bandits

Qiuyu Zhu · Vincent Tan


Keywords: [ Online Learning / Bandits ] [ Reinforcement Learning Theory ] [ Online Learning, Active Learning, and Bandits ]

Presentation times: Tue 14 Jul, 3:00–3:45 p.m. PDT · Wed 15 Jul, 4:00–4:45 a.m. PDT


The multi-armed bandit (MAB) problem is a classical learning task that exemplifies the exploration-exploitation tradeoff. However, standard formulations do not take risk into account, even though risk is a primary concern in online decision-making systems. In this regard, the mean-variance risk measure is one of the most common objective functions. Existing algorithms for mean-variance optimization in the context of MAB problems rely on unrealistic assumptions about the reward distributions. We develop Thompson Sampling-style algorithms for mean-variance MABs and provide comprehensive regret analyses for Gaussian and Bernoulli bandits under weaker assumptions. Our algorithms achieve the best known regret bounds for mean-variance MABs and also attain the information-theoretic lower bounds in some parameter regimes. Empirical simulations show that our algorithms significantly outperform existing LCB-based algorithms across all risk tolerances.
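To make the setting concrete, the following is a minimal sketch of Thompson Sampling for mean-variance Gaussian bandits. It assumes the common mean-variance objective MV_i = ρ·μ_i − σ_i² with risk tolerance ρ, and maintains a conjugate Normal-Gamma posterior over each arm's (mean, precision); the priors, arm parameters, and update details are illustrative choices, not the paper's exact algorithm.

```python
import numpy as np

def mv_thompson_sampling(arm_means, arm_stds, rho=1.0, horizon=2000, seed=0):
    """Sketch of Thompson Sampling for mean-variance Gaussian bandits.

    Each arm keeps a Normal-Gamma posterior over (mean mu, precision tau).
    Each round, we sample (mu, tau) per arm and play the arm maximizing
    the sampled mean-variance rho * mu - 1 / tau (variance = 1 / tau).
    """
    rng = np.random.default_rng(seed)
    k = len(arm_means)
    # Normal-Gamma hyperparameters per arm (flat-ish illustrative priors).
    mu0 = np.zeros(k)      # posterior mean of mu
    kappa = np.ones(k)     # pseudo-count for the mean
    alpha = np.ones(k)     # Gamma shape for the precision
    beta = np.ones(k)      # Gamma rate for the precision
    pulls = np.zeros(k, dtype=int)

    for _ in range(horizon):
        # Sample precision tau ~ Gamma(alpha, rate=beta), then mu | tau.
        tau = rng.gamma(alpha, 1.0 / beta)
        mu = rng.normal(mu0, 1.0 / np.sqrt(kappa * tau))
        # Play the arm with the best sampled mean-variance.
        i = int(np.argmax(rho * mu - 1.0 / tau))
        x = rng.normal(arm_means[i], arm_stds[i])  # observe a reward
        # Conjugate Normal-Gamma update for a single observation.
        beta[i] += 0.5 * kappa[i] * (x - mu0[i]) ** 2 / (kappa[i] + 1.0)
        mu0[i] = (kappa[i] * mu0[i] + x) / (kappa[i] + 1.0)
        kappa[i] += 1.0
        alpha[i] += 0.5
        pulls[i] += 1
    return pulls
```

With ρ = 1, an arm with mean 1.0 and standard deviation 2.0 has MV = 1 − 4 = −3, while an arm with mean 0.5 and standard deviation 0.5 has MV = 0.5 − 0.25 = 0.25, so a risk-aware learner should concentrate its pulls on the second arm despite its lower mean.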
