Spotlight
Dynamic Balancing for Model Selection in Bandits and RL
Ashok Cutkosky · Christoph Dann · Abhimanyu Das · Claudio Gentile · Aldo Pacchiano · Manish Purohit

Wed Jul 21 05:45 PM -- 05:50 PM (PDT)

We propose a framework for model selection by combining base algorithms in stochastic bandits and reinforcement learning. We require a candidate regret bound for each base algorithm that may or may not hold. We select base algorithms to play in each round using a "balancing condition" on the candidate regret bounds. Our approach simultaneously recovers previous worst-case regret bounds, while also obtaining much smaller regret in natural scenarios when some base learners significantly outperform their candidate bounds. Our framework is relevant in many settings, including linear bandits and MDPs with nested function classes, linear bandits with unknown misspecification, and tuning confidence parameters of algorithms such as LinUCB. Moreover, unlike recent efforts in model selection for linear stochastic bandits, our approach can be extended to handle adversarial rather than stochastic contexts.
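To make the balancing idea concrete, here is a minimal Python sketch of regret-bound balancing for model selection. It is an illustration of the general principle only, not the paper's actual algorithm or guarantees; the class and method names (BalancedModelSelector, act, bound_coeffs) and the simple elimination test are all hypothetical choices for this example.

```python
import math
import random


class BalancedModelSelector:
    """Illustrative sketch of regret-bound balancing (not the paper's exact algorithm).

    Each base learner i comes with a candidate regret bound R_i(n) = c_i * sqrt(n),
    which may or may not actually hold. At each round we play the active learner
    whose candidate bound, evaluated at its own play count, is smallest -- a
    "balancing condition" that keeps the putative regrets of all learners roughly
    equal. Learners whose observed reward falls detectably short of what their
    candidate bound promises are deactivated (a simple misspecification test).
    """

    def __init__(self, learners, bound_coeffs, delta=0.05):
        self.learners = learners          # assumed interface: .act() -> reward in [0, 1]
        self.coeffs = bound_coeffs        # c_i in the candidate bound c_i * sqrt(n)
        self.active = set(range(len(learners)))
        self.plays = [0] * len(learners)
        self.reward = [0.0] * len(learners)
        self.delta = delta

    def candidate_bound(self, i):
        # Candidate cumulative regret bound of learner i at its current play count.
        return self.coeffs[i] * math.sqrt(max(self.plays[i], 1))

    def step(self):
        # Balancing condition: play the active learner with the smallest
        # candidate regret bound at its current number of plays.
        i = min(self.active, key=self.candidate_bound)
        r = self.learners[i].act()
        self.plays[i] += 1
        self.reward[i] += r
        self._eliminate_misspecified()
        return i, r

    def _conf(self, i):
        # Hoeffding-style confidence radius for learner i's empirical mean reward.
        n = max(self.plays[i], 1)
        return math.sqrt(math.log(2 / self.delta) / (2 * n))

    def _eliminate_misspecified(self):
        means = {i: self.reward[i] / max(self.plays[i], 1) for i in self.active}
        best = max(means.values())
        for i in list(self.active):
            if len(self.active) <= 1:
                break
            n = max(self.plays[i], 1)
            # If learner i's mean reward, even after crediting its candidate
            # regret bound (per round) and confidence slack, trails the best
            # learner, its candidate bound is likely violated: deactivate it.
            if means[i] + self.candidate_bound(i) / n + 2 * self._conf(i) < best:
                self.active.discard(i)


if __name__ == "__main__":
    class BernoulliLearner:
        """Toy base learner: a fixed arm with Bernoulli rewards (hypothetical)."""
        def __init__(self, p):
            self.p = p

        def act(self):
            return 1.0 if random.random() < self.p else 0.0

    selector = BalancedModelSelector(
        learners=[BernoulliLearner(0.3), BernoulliLearner(0.7)],
        bound_coeffs=[1.0, 4.0],  # hypothetical candidate bounds c_i * sqrt(n)
    )
    for _ in range(2000):
        selector.step()
    print("plays:", selector.plays, "active:", sorted(selector.active))
```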

Author Information

Ashok Cutkosky (Boston University)
Christoph Dann (Google)
Abhimanyu Das (Google)
Claudio Gentile (Google Research)
Aldo Pacchiano (UC Berkeley)
Manish Purohit (Google Research)
