Poster

Robust Pure Exploration in Linear Bandits with Limited Budget

Ayya Alieva · Ashok Cutkosky · Abhimanyu Das

Keywords: [ Bandits ] [ Reinforcement Learning and Planning ] [ Unsupervised Learning ] [ Algorithms ] [ Adversarial Learning ]


Abstract:

We consider the pure exploration problem in the fixed-budget linear bandit setting. We provide a new algorithm that identifies the best arm with high probability while being robust both to unknown levels of observation noise and to moderate misspecification of the linear model. Our technique combines prior approaches to pure exploration in the multi-armed bandit problem with optimal experimental design algorithms to obtain both problem-dependent and problem-independent bounds. The algorithm's success probability is never worse than that of an algorithm that ignores the linear structure, yet it seamlessly exploits that structure when possible. Furthermore, the number of samples need only scale with the dimension of the problem rather than with the number of arms. We complement our theoretical results with empirical validation.
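The abstract does not spell out the experimental-design subroutine, but the standard building block in this setting is a G-optimal design computed by Frank-Wolfe iteration: a sampling distribution over arms that minimizes the worst-case variance of the least-squares estimate in any arm direction. The sketch below is illustrative only; the function name `g_optimal_design`, the step-size schedule, and the multinomial sampling rule are assumptions for exposition, not the authors' algorithm.

```python
import numpy as np

def g_optimal_design(X, iters=1000):
    """Approximate the G-optimal design over arm features X (K x d)
    via Frank-Wolfe. Returns a distribution lam over arms such that
    max_i x_i^T A(lam)^{-1} x_i approaches d (Kiefer-Wolfowitz)."""
    K, d = X.shape
    lam = np.full(K, 1.0 / K)            # start from the uniform design
    for t in range(iters):
        A = X.T @ (lam[:, None] * X)     # information matrix A(lam) = sum_i lam_i x_i x_i^T
        A_inv = np.linalg.pinv(A)
        # variance of the least-squares estimate in each arm's direction
        g = np.einsum('ij,jk,ik->i', X, A_inv, X)
        j = np.argmax(g)                 # arm whose direction is worst covered
        gamma = 2.0 / (t + 2)            # standard Frank-Wolfe step size
        lam = (1 - gamma) * lam
        lam[j] += gamma
    return lam

# Hypothetical usage: spend a fixed budget of pulls according to the design.
# X = np.random.randn(50, 5)                       # 50 arms in dimension 5
# counts = np.random.multinomial(200, g_optimal_design(X))
```

A fixed-budget algorithm would typically alternate such allocation rounds with elimination of apparently suboptimal arms; the point of the design step is that the required number of samples is controlled by the dimension d rather than by the number of arms K.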
