Oral
Optimal Algorithms for Lipschitz Bandits with Heavy-tailed Rewards
Shiyin Lu · Guanghui Wang · Yao Hu · Lijun Zhang

Wed Jun 12th 11:00 -- 11:20 AM @ Seaside Ballroom

We study Lipschitz bandits, where a learner repeatedly plays one arm from an infinite arm set and then receives a stochastic reward whose expectation is a Lipschitz function of the chosen arm. Most existing work assumes the reward distributions are bounded or at least sub-Gaussian, and thus does not apply to heavy-tailed rewards arising in many real-world scenarios such as web advertising and financial markets. To address this limitation, in this paper we relax the assumption on rewards to allow arbitrary distributions that have finite $(1+\epsilon)$-th moments for some $\epsilon \in (0, 1]$, and propose algorithms that enjoy a sublinear regret of $\tilde{O}(T^{(d \epsilon + 1)/(d \epsilon + \epsilon + 1)})$, where $T$ is the time horizon and $d$ is the zooming dimension. The key idea is to exploit the Lipschitz property of the expected reward function by adaptively discretizing the arm set, and to employ upper confidence bound policies with robust mean estimators designed for heavy-tailed distributions. Furthermore, we present a lower bound for Lipschitz bandits with heavy-tailed rewards, and show that our algorithms are optimal in terms of $T$. Finally, we conduct numerical experiments to demonstrate the effectiveness of our algorithms.
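As an illustration of the robust-estimation ingredient, the sketch below implements one standard robust mean estimator for heavy-tailed data, the truncated empirical mean, together with a confidence width of the order used in UCB analyses under a finite $(1+\epsilon)$-th moment. The function names, the constant in the bonus, and the exact truncation threshold are illustrative assumptions, not necessarily the estimator or constants used in this paper:

```python
import math

def truncated_mean(rewards, epsilon, u, delta):
    # Truncated empirical mean: the i-th sample X_i is kept only if
    # |X_i| <= (u * i / log(1/delta))^{1/(1+epsilon)}, where u is an
    # (assumed known) bound on the raw moment E[|X|^{1+epsilon}] <= u.
    # Truncation caps the influence of rare extreme samples, giving
    # sub-Gaussian-like concentration despite heavy tails.
    n = len(rewards)
    total = 0.0
    for i, x in enumerate(rewards, start=1):
        threshold = (u * i / math.log(1.0 / delta)) ** (1.0 / (1.0 + epsilon))
        if abs(x) <= threshold:
            total += x
    return total / n

def confidence_width(n, epsilon, u, delta):
    # Exploration bonus added to the truncated mean in a UCB rule; it
    # scales as u^{1/(1+eps)} * (log(1/delta) / n)^{eps/(1+eps)}, which
    # recovers the usual sqrt(log(1/delta)/n) rate when epsilon = 1.
    # The leading constant 4 is a placeholder, not taken from the paper.
    return (4.0 * u ** (1.0 / (1.0 + epsilon))
            * (math.log(1.0 / delta) / n) ** (epsilon / (1.0 + epsilon)))
```

For well-behaved samples the truncation is inactive and the estimator reduces to the ordinary sample mean, while the confidence width shrinks as more samples are collected, so the UCB index concentrates on the true expected reward.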

Author Information

Shiyin Lu (Nanjing University)
Guanghui Wang (Nanjing University)
Yao Hu (Alibaba Youku Cognitive and Intelligent Lab)
Lijun Zhang (Nanjing University)
