

Stochastic bandits with arm-dependent delays

Anne Gael Manegueu · Claire Vernade · Alexandra Carpentier · Michal Valko

Keywords: [ Active Learning ] [ Online Learning / Bandits ] [ Online Learning, Active Learning, and Bandits ]


Significant work has recently been dedicated to the stochastic delayed bandit setting because of its relevance in applications. However, the applicability of existing algorithms is restricted by the strong assumptions often made on the delay distributions, such as full observability, restrictive shape constraints, or uniformity over arms. In this work, we weaken these assumptions significantly and only assume a bound on the tail of the delay distribution. In particular, we cover the important case where the delay distributions vary across arms, as well as the case where the delays are heavy-tailed. To address these difficulties, we propose a simple but efficient UCB-based algorithm called PatientBandits. We provide both problem-dependent and problem-independent bounds on the regret, as well as performance lower bounds.
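To make the setting concrete, the following is a minimal sketch (not the authors' PatientBandits algorithm) of a UCB-style learner under arm-dependent delays: feedback for a pull arrives only after a random, arm-dependent delay, and the learner computes its confidence bounds from the feedback that has arrived so far. The class name `DelayedUCB` and all its details are illustrative assumptions.

```python
import math


class DelayedUCB:
    """Illustrative UCB variant under delayed feedback (not PatientBandits).

    Rewards are queued with an arrival time and only incorporated into the
    empirical means once their (arm-dependent) delay has elapsed.
    """

    def __init__(self, n_arms):
        self.n_arms = n_arms
        self.counts = [0] * n_arms    # number of rewards that have arrived, per arm
        self.sums = [0.0] * n_arms    # sum of arrived rewards, per arm
        self.pending = []             # (arrival_time, arm, reward) not yet observed
        self.t = 0                    # current round

    def select(self):
        # Pull each arm once before using confidence bounds.
        for a in range(self.n_arms):
            if self.counts[a] == 0:
                return a
        # Standard UCB1-style index, computed only from arrived feedback.
        return max(
            range(self.n_arms),
            key=lambda a: self.sums[a] / self.counts[a]
            + math.sqrt(2 * math.log(self.t + 1) / self.counts[a]),
        )

    def play(self, arm, reward, delay):
        """Record a pull whose reward will only be observed `delay` rounds later."""
        self.t += 1
        self.pending.append((self.t + delay, arm, reward))
        # Absorb every queued reward whose delay has now elapsed.
        arrived = [p for p in self.pending if p[0] <= self.t]
        self.pending = [p for p in self.pending if p[0] > self.t]
        for _, a, r in arrived:
            self.counts[a] += 1
            self.sums[a] += r
```

In a simulation with two Bernoulli arms and random per-pull delays, this learner concentrates its pulls on the better arm once enough delayed feedback has arrived; the paper's contribution is analyzing such censored-feedback settings when delays are arm-dependent and possibly heavy-tailed, with only a tail bound assumed.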
