

Poster

Linear Bandits with Stochastic Delayed Feedback

Claire Vernade · Alexandra Carpentier · Tor Lattimore · Giovanni Zappella · Beyza Ermis · Michael Brueckner

Keywords: [ Online Learning, Active Learning, and Bandits ] [ Online Learning / Bandits ]


Abstract:

Stochastic linear bandits are a natural and well-studied model for structured exploration/exploitation problems and are widely used in applications such as online marketing and recommendation. One of the main challenges faced by practitioners hoping to apply existing algorithms is that the feedback is usually randomly delayed and the delays are only partially observable. For example, while a purchase is typically observable some time after the display, the decision not to buy is never explicitly sent to the system. In other words, the learner only observes delayed positive events. We formalize this problem as a novel stochastic delayed linear bandit and propose OTFLinUCB and OTFLinTS, two computationally efficient algorithms able to integrate new information as it becomes available and to deal with permanently censored feedback. We prove optimal $O(d\sqrt{T})$ bounds on the regret of the first algorithm and study the dependence on delay-related parameters. Our model, assumptions and results are validated by experiments on simulated and real data.
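To fix ideas, here is a minimal, illustrative sketch of a LinUCB-style learner that folds in delayed, censored positive feedback as it arrives; it is not the paper's OTFLinUCB algorithm, and the waiting window m, exploration width alpha, and the simulated delay/conversion model are all assumptions made for the example.

```python
import numpy as np

# Minimal sketch (not OTFLinUCB): a LinUCB-style learner that updates its
# design matrix at action time and adds a reward only when the (possibly
# delayed) positive signal arrives. Negative outcomes are never observed;
# a conversion whose delay exceeds the window m is permanently censored.
# All constants and the environment model below are illustrative assumptions.

rng = np.random.default_rng(0)
d, T, m = 5, 2000, 50            # dimension, horizon, censoring window (assumed)
lam, alpha = 1.0, 1.0            # ridge regularizer and exploration width (assumed)

theta_star = rng.normal(size=d)
theta_star /= np.linalg.norm(theta_star)
arms = rng.normal(size=(20, d))
arms /= np.linalg.norm(arms, axis=1, keepdims=True)

V = lam * np.eye(d)              # regularized design matrix, updated at action time
b = np.zeros(d)                  # sum of x_s * observed reward, updated on arrival
pending = []                     # (arrival_round, feature_vector) for delayed positives

for t in range(T):
    # Fold in positive feedback whose delay has elapsed.
    arrived = [x for (arr, x) in pending if arr <= t]
    pending = [(arr, x) for (arr, x) in pending if arr > t]
    for x in arrived:
        b += x                   # reward 1 observed with delay

    # Optimistic arm choice: estimate plus confidence width.
    V_inv = np.linalg.inv(V)
    theta_hat = V_inv @ b
    widths = np.sqrt(np.einsum("ij,jk,ik->i", arms, V_inv, arms))
    x_t = arms[np.argmax(arms @ theta_hat + alpha * widths)]

    # Simulated environment: Bernoulli conversion; a conversion becomes
    # visible after a random delay, but only if the delay is at most m.
    if rng.random() < (1 + x_t @ theta_star) / 2:
        delay = rng.geometric(0.1)
        if delay <= m:
            pending.append((t + delay, x_t.copy()))

    # The design matrix is updated immediately; the reward possibly much later.
    V += np.outer(x_t, x_t)
```

The key design point the sketch illustrates is the asymmetry between the two updates: the covariance matrix grows at the moment an action is taken, while the reward vector is only updated when a positive observation arrives, and observations that never arrive within the window contribute nothing.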
