

Poster

Best Arm Identification for Cascading Bandits in the Fixed Confidence Setting

Zixin Zhong · Wang Chi Cheung · Vincent Tan

Keywords: [ Online Learning, Active Learning, and Bandits ] [ Reinforcement Learning Theory ] [ Online Learning / Bandits ] [ Statistical Learning Theory ]


Abstract:

We design and analyze CascadeBAI, an algorithm for finding the best set of K items, also called an arm, within the framework of cascading bandits. An upper bound on the time complexity of CascadeBAI is derived by overcoming a crucial analytical challenge, namely, that of probabilistically estimating the amount of available feedback at each step. To do so, we define a new class of random variables (r.v.'s) which we term left-sided sub-Gaussian r.v.'s; these are r.v.'s whose cumulant generating functions (CGFs) can be bounded by a quadratic only for non-positive arguments of the CGFs. This enables the application of a sufficiently tight Bernstein-type concentration inequality. We show, through the derivation of a lower bound on the time complexity, that the performance of CascadeBAI is optimal in some practical regimes. Finally, extensive numerical simulations corroborate the efficacy of CascadeBAI as well as the tightness of our upper bound on its time complexity.
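For concreteness, the defining property of a left-sided sub-Gaussian r.v. described above can be sketched as the one-sided CGF bound below (the symbol \(\sigma^2\) for the variance proxy is our choice of notation, not taken from the paper):

\[
\log \mathbb{E}\!\left[ e^{\lambda (X - \mathbb{E}[X])} \right] \;\le\; \frac{\lambda^{2} \sigma^{2}}{2} \qquad \text{for all } \lambda \le 0,
\]

in contrast to an ordinary sub-Gaussian r.v., for which the same quadratic bound holds for all real \(\lambda\).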
