Poster
Exploration Through Reward Biasing: Reward-Biased Maximum Likelihood Estimation for Stochastic Multi-Armed Bandits
Xi Liu · Ping-Chun Hsieh · Yu Heng Hung · Anirban Bhattacharya · P. Kumar

Wed Jul 15 08:00 AM -- 08:45 AM & Wed Jul 15 09:00 PM -- 09:45 PM (PDT)
Inspired by the Reward-Biased Maximum Likelihood Estimate method of adaptive control, we propose RBMLE -- a novel family of learning algorithms for stochastic multi-armed bandits (SMABs). For a broad range of SMABs, including both the parametric Exponential Family and the non-parametric sub-Gaussian/Exponential family, we show that RBMLE yields an index policy. To choose the bias-growth rate $\alpha(t)$ in RBMLE, we reveal a nontrivial interplay between $\alpha(t)$ and the regret bound that applies to both Exponential Family and sub-Gaussian/Exponential family bandits. To quantify the finite-time performance, we prove that RBMLE attains order-optimality by adaptively estimating the unknown constants in the expression of $\alpha(t)$ for Gaussian and sub-Gaussian bandits. Extensive experiments demonstrate that RBMLE achieves empirical regret competitive with state-of-the-art methods, while being more computationally efficient and scalable than the best-performing ones among them.
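
For intuition, below is a minimal Python sketch of an RBMLE-style index policy on a Gaussian bandit. The specific index form mu_hat + alpha(t) * sigma^2 / (2 * N_i) and the schedule alpha(t) = sqrt(t) are illustrative assumptions, not the paper's exact prescription; the paper derives the precise index and the adaptive choice of alpha(t).

import numpy as np

def rbmle_gaussian_sketch(means, T, sigma=1.0, seed=0):
    """Illustrative RBMLE-style index policy for a Gaussian bandit.

    Assumed (hypothetical) index: mu_hat_i + alpha(t) * sigma^2 / (2 * N_i),
    with bias-growth rate alpha(t) = sqrt(t). See the paper for the
    exact index and the adaptive estimation of constants in alpha(t).
    """
    rng = np.random.default_rng(seed)
    K = len(means)
    counts = np.zeros(K)  # N_i: number of pulls of each arm
    sums = np.zeros(K)    # cumulative reward of each arm

    # Pull each arm once to initialize the empirical means.
    for i in range(K):
        reward = rng.normal(means[i], sigma)
        counts[i] += 1
        sums[i] += reward

    for t in range(K, T):
        mu_hat = sums / counts
        alpha = np.sqrt(t + 1)  # assumed bias-growth schedule
        # Reward-biased index: empirical mean plus a bias favoring
        # under-explored arms (small N_i gets a larger bonus).
        index = mu_hat + alpha * sigma**2 / (2 * counts)
        arm = int(np.argmax(index))
        reward = rng.normal(means[arm], sigma)
        counts[arm] += 1
        sums[arm] += reward

    return counts, sums / counts

Running, e.g., rbmle_gaussian_sketch([0.1, 0.5, 0.9], T=10000) concentrates pulls on the best arm: the bias term shrinks as 1/N_i, so exploration tapers off once an arm has been sampled enough relative to alpha(t).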

Author Information

Xi Liu (Texas A&M University)
Ping-Chun Hsieh (National Chiao Tung University)
Yu Heng Hung (National Chiao Tung University)
Anirban Bhattacharya (Texas A&M University)
P. Kumar (Texas A&M University)
