Poster
Coordinated Exploration in Concurrent Reinforcement Learning
Maria Dimakopoulou · Benjamin Van Roy
We consider a team of reinforcement learning agents that concurrently learn to operate in a common environment. We identify three properties (adaptivity, commitment, and diversity) that are necessary for efficient coordinated exploration, and demonstrate that straightforward extensions of single-agent optimistic and posterior sampling approaches fail to satisfy them. As an alternative, we propose seed sampling, which extends posterior sampling in a manner that meets these requirements. Simulation results show how per-agent regret decreases as the number of agents grows, establishing substantial advantages of seed sampling over alternative exploration schemes.
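The abstract's three properties can be illustrated with a minimal sketch. Below is a hypothetical seed-sampling loop for a Gaussian bandit (a simplification of the paper's MDP setting): each agent draws a fixed random seed once (commitment), seeds differ across agents (diversity), and every agent maps the continually updated shared posterior through its own seed to pick actions (adaptivity). All names, the environment, and the Gaussian model are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

rng = np.random.default_rng(0)
K_ARMS, N_AGENTS, T = 5, 4, 200

# Hypothetical environment: unknown arm means, unit-variance Gaussian rewards.
true_means = rng.normal(0.0, 1.0, K_ARMS)

# Shared sufficient statistics under a N(0, 1) prior on each arm mean.
counts = np.zeros(K_ARMS)
sums = np.zeros(K_ARMS)

# Each agent samples an intrinsic seed ONCE: fixed Gaussian noise per arm.
# Diversity: seeds differ across agents. Commitment: each seed never changes.
seeds = rng.normal(0.0, 1.0, (N_AGENTS, K_ARMS))

for t in range(T):
    for a in range(N_AGENTS):
        # Adaptivity: the posterior is recomputed from ALL shared data,
        # then mapped deterministically through the agent's own seed.
        post_var = 1.0 / (1.0 + counts)
        post_mean = post_var * sums
        sample = post_mean + np.sqrt(post_var) * seeds[a]
        arm = int(np.argmax(sample))  # act greedily w.r.t. the seeded sample
        reward = true_means[arm] + rng.normal()
        counts[arm] += 1              # observations are pooled across agents
        sums[arm] += reward
```

As the shared posterior concentrates, every agent's seeded sample converges to the posterior mean, so the agents agree on the best arm in the limit while exploring differently early on.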
Author Information
Maria Dimakopoulou (Stanford)
Benjamin Van Roy (Stanford University)
Related Events (a corresponding poster, oral, or spotlight)
- 2018 Oral: Coordinated Exploration in Concurrent Reinforcement Learning
  Wed. Jul 11th 11:30 -- 11:50 AM, Room A1
More from the Same Authors
- 2022: Deciding What to Model: Value-Equivalent Sampling for Reinforcement Learning
  Dilip Arumugam · Benjamin Van Roy
- 2021 Poster: Deciding What to Learn: A Rate-Distortion Approach
  Dilip Arumugam · Benjamin Van Roy
- 2021 Spotlight: Deciding What to Learn: A Rate-Distortion Approach
  Dilip Arumugam · Benjamin Van Roy
- 2019 Poster: On the Design of Estimators for Bandit Off-Policy Evaluation
  Nikos Vlassis · Aurelien Bibaut · Maria Dimakopoulou · Tony Jebara
- 2019 Oral: On the Design of Estimators for Bandit Off-Policy Evaluation
  Nikos Vlassis · Aurelien Bibaut · Maria Dimakopoulou · Tony Jebara
- 2017 Poster: Why is Posterior Sampling Better than Optimism for Reinforcement Learning?
  Ian Osband · Benjamin Van Roy
- 2017 Talk: Why is Posterior Sampling Better than Optimism for Reinforcement Learning?
  Ian Osband · Benjamin Van Roy