
 
Spotlight
Almost Optimal Anytime Algorithm for Batched Multi-Armed Bandits
Tianyuan Jin · Jing Tang · Pan Xu · Keke Huang · Xiaokui Xiao · Quanquan Gu

Wed Jul 21 06:40 PM -- 06:45 PM (PDT)

In batched multi-armed bandit problems, the learner can adaptively pull arms and adjust strategy in batches. In many real applications, not only the regret but also the batch complexity needs to be optimized. Existing batched bandit algorithms usually assume that the time horizon $T$ is known in advance. However, many applications involve an unpredictable stopping time. In this paper, we study the anytime batched multi-armed bandit problem. We propose an anytime algorithm that achieves the asymptotically optimal regret for exponential families of reward distributions with $O(\log \log T \cdot \ilog^{\alpha}(T))$ \footnote{Notation $\ilog^{\alpha}(T)$ denotes the result of iteratively applying the logarithm function on $T$ for $\alpha$ times, e.g., $\ilog^{3}(T)=\log\log\log T$.} batches, where $\alpha \in O_{T}(1)$. Moreover, we prove that for any constant $c>0$, no algorithm can achieve the asymptotically optimal regret within $c\log\log T$ batches.
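As a quick illustration of the $\ilog^{\alpha}(T)$ notation used in the batch bound, here is a minimal Python sketch (not from the paper; the function name and base of the logarithm are assumptions for illustration only):

    import math

    def ilog(T: float, alpha: int) -> float:
        """Apply the logarithm to T iteratively, alpha times.
        Example: ilog(T, 3) = log(log(log(T))).
        Assumes T is large enough that every intermediate value stays positive.
        """
        value = T
        for _ in range(alpha):
            value = math.log(value)
        return value

    # For example, with T = 1e9:
    # ilog(1e9, 3) = log(log(log(1e9))) ≈ 1.11
    # illustrating how slowly the batch bound O(log log T * ilog^alpha(T)) grows in T.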

Author Information

Tianyuan Jin (National University of Singapore)
Jing Tang (The Hong Kong University of Science and Technology)
Pan Xu (California Institute of Technology)
Keke Huang (National University of Singapore)
Xiaokui Xiao (National University of Singapore)
Quanquan Gu (University of California, Los Angeles)
