## Congested Bandits: Optimal Routing via Short-term Resets

### Pranjal Awasthi · Kush Bhatia · Sreenivas Gollapudi · Kostas Kollias

##### Hall E #1314

Keywords: [ T: Learning Theory ] [ T: Reinforcement Learning and Planning ] [ T: Online Learning and Bandits ]

Wed 20 Jul 3:30 p.m. PDT — 5:30 p.m. PDT

Spotlight presentation: T: Online Learning and Bandits/Learning Theory
Wed 20 Jul 10:15 a.m. PDT — 11:45 a.m. PDT

Abstract: For traffic routing platforms, the choice of which route to recommend to a user depends on the congestion on those routes -- indeed, an individual's utility depends on the number of people using the recommended route at that instant. Motivated by this, we introduce the problem of Congested Bandits, where each arm's reward is allowed to depend on the number of times it was played in the past $\Delta$ timesteps. This dependence on the past history of actions leads to a dynamical system in which an algorithm's present choices also affect its future payoffs, requiring the algorithm to plan ahead. We study the congestion-aware formulation in the multi-armed bandit (MAB) setup and in the contextual bandit setup with linear rewards. For the multi-armed setup, we propose a UCB-style algorithm and show that its policy regret scales as $\tilde{O}(\sqrt{K \Delta T})$. For the linear contextual bandit setup, our algorithm, based on an iterative least squares planner, achieves policy regret $\tilde{O}(\sqrt{dT} + \Delta)$. From an experimental standpoint, we corroborate the no-regret properties of our algorithms via a simulation study.
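The congested reward model can be illustrated with a minimal simulation. The sketch below is a toy instance under illustrative assumptions: a linear per-play congestion penalty, hypothetical base means, and a plain UCB index as the learner. It is not the paper's congestion-aware algorithm (which plans for the effect of its own plays); it only shows how rewards depend on the arm's play count over the last $\Delta$ steps.

```python
import numpy as np
from collections import deque

rng = np.random.default_rng(0)

K, DELTA, T = 3, 5, 2000
base_means = np.array([0.9, 0.7, 0.5])  # hypothetical base reward means
PENALTY = 0.1                           # hypothetical congestion penalty per recent play

history = deque(maxlen=DELTA)           # arms played in the last DELTA timesteps
counts = np.zeros(K)                    # number of pulls per arm
sums = np.zeros(K)                      # cumulative reward per arm

total_reward = 0.0
for t in range(1, T + 1):
    if t <= K:
        arm = t - 1                     # initialization: play each arm once
    else:
        # Standard (congestion-oblivious) UCB index
        ucb = sums / counts + np.sqrt(2 * np.log(t) / counts)
        arm = int(np.argmax(ucb))

    # Reward mean degrades with the arm's play count in the last DELTA steps
    congestion = sum(1 for a in history if a == arm)
    mean = max(base_means[arm] - PENALTY * congestion, 0.0)
    reward = rng.binomial(1, mean)

    history.append(arm)
    counts[arm] += 1
    sums[arm] += reward
    total_reward += reward
```

Because plain UCB ignores congestion, it tends to over-play the highest-base-mean arm and suffer the penalty; the paper's algorithms are designed to account for exactly this feedback loop.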
