Fast Population-Based Reinforcement Learning on a Single Machine

Arthur Flajolet · Claire Bizon Monroc · Karim Beguir · Thomas Pierrot

Hall E #414

Keywords: [ RL: Deep RL ] [ RL: Batch/Offline ] [ Deep Learning ]


Training populations of agents has demonstrated great promise in Reinforcement Learning for stabilizing training, improving exploration and asymptotic performance, and generating a diverse set of solutions. However, population-based training is often not considered by practitioners as it is perceived to be either prohibitively slow (when implemented sequentially), or computationally expensive (if agents are trained in parallel on independent accelerators). In this work, we compare implementations and revisit previous studies to show that the judicious use of compilation and vectorization allows population-based training to be performed on a single machine with one accelerator with minimal overhead compared to training a single agent. We also show that, when provided with a few accelerators, our protocols extend to large population sizes for applications such as hyperparameter tuning. We hope that this work and the public release of our code will encourage practitioners to use population-based learning techniques more frequently for their research and applications.
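The core idea of vectorizing a population of agents on one accelerator can be illustrated with a minimal sketch. This is not the paper's released code; it is a toy example, assuming JAX, where each "agent" is a parameter vector trained by gradient descent on its own quadratic loss, and the population dimension is handled by `jax.vmap` and compiled once with `jax.jit` so that all agents update in a single device call:

```python
import jax
import jax.numpy as jnp

# Hypothetical toy setup: each "agent" is a vector of parameters minimizing
# its own quadratic loss, with a per-agent learning rate (a simple stand-in
# for per-agent hyperparameters, as in hyperparameter tuning).

def loss(params, target):
    return jnp.sum((params - target) ** 2)

def update(params, target, lr):
    # One gradient-descent step for a single agent.
    grads = jax.grad(loss)(params, target)
    return params - lr * grads

# Vectorize the per-agent update over the leading population axis,
# then compile the whole batched step into one accelerator kernel.
population_update = jax.jit(jax.vmap(update, in_axes=(0, 0, 0)))

key = jax.random.PRNGKey(0)
pop_size, dim = 8, 4
params = jax.random.normal(key, (pop_size, dim))
targets = jnp.zeros((pop_size, dim))
lrs = jnp.linspace(0.01, 0.1, pop_size)  # one hyperparameter per agent

for _ in range(100):
    params = population_update(params, targets, lrs)
```

Because the population axis is just another array dimension, growing the population adds negligible Python-side overhead: the compiled step processes all agents in one call rather than looping over them sequentially.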
