In complex tasks, such as those with large combinatorial action spaces, random exploration may be too inefficient to achieve meaningful learning progress. In this work, we use a curriculum of progressively growing action spaces to accelerate learning. We assume the environment is out of our control, but that the agent may set an internal curriculum by initially restricting its action space. Our approach uses off-policy reinforcement learning to estimate optimal value functions for multiple action spaces simultaneously and efficiently transfers data, value estimates, and state representations from restricted action spaces to the full task. We show the efficacy of our approach in proof-of-concept control tasks and on challenging large-scale StarCraft micromanagement tasks with large, multi-agent action spaces.
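To make the idea concrete, below is a minimal sketch (not the authors' code) of off-policy learning with a curriculum of nested action spaces: a behaviour policy acts in the currently allowed action set, every level's value estimate is updated from the same transitions, and value estimates are transferred when the action space grows. The toy dynamics, the tabular Q-learning setup, and the fixed growth schedule are illustrative assumptions, not the paper's StarCraft setup or architecture.

```python
# Minimal illustrative sketch: Q-learning with a curriculum of nested action spaces.
# Environment, sizes, and schedule are assumptions made for the example only.
import numpy as np

rng = np.random.default_rng(0)

n_states = 10
action_spaces = [2, 4, 8]        # nested action sets: level k uses the first action_spaces[k] actions
n_levels = len(action_spaces)
n_actions_full = action_spaces[-1]

# One value table per curriculum level; all are updated from the same off-policy data.
Q = np.zeros((n_levels, n_states, n_actions_full))

def step(state, action):
    """Toy dynamics standing in for the real environment (an assumption of this sketch)."""
    next_state = (state + action + 1) % n_states
    reward = 1.0 if next_state == 0 else 0.0
    return next_state, reward

alpha, gamma, eps = 0.1, 0.95, 0.1
level = 0                        # start in the most restricted action space
state = 0

for t in range(20_000):
    n_a = action_spaces[level]
    # Behaviour policy: epsilon-greedy over the CURRENT restricted action space.
    if rng.random() < eps:
        action = int(rng.integers(n_a))
    else:
        action = int(np.argmax(Q[level, state, :n_a]))
    next_state, reward = step(state, action)

    # Off-policy update of every level whose action set contains the taken action;
    # each level bootstraps only over the actions available to it.
    for l, n_a_l in enumerate(action_spaces):
        if action < n_a_l:
            target = reward + gamma * Q[l, next_state, :n_a_l].max()
            Q[l, state, action] += alpha * (target - Q[l, state, action])

    state = next_state

    # Grow the action space on a fixed schedule (a stand-in for whatever criterion is used)
    # and warm-start newly unlocked actions from the previous level's value estimates.
    if t in (5_000, 10_000) and level < n_levels - 1:
        prev_n = action_spaces[level]
        level += 1
        Q[level, :, prev_n:action_spaces[level]] = Q[level, :, :prev_n].max(axis=1, keepdims=True)
```

Because the action sets are nested, every transition gathered in a restricted space is valid experience for all larger spaces, which is what lets the larger-space value functions be trained off-policy before the agent ever acts in them.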
Author Information
Gregory Farquhar (DeepMind)
Laura Gustafson (Facebook AI Research)
Zeming Lin (New York University)
Shimon Whiteson (University of Oxford)
Nicolas Usunier (Facebook AI Research)
Gabriel Synnaeve (Facebook AI Research)
More from the Same Authors
- 2023 Poster: Why Target Networks Stabilise Temporal Difference Methods
  Mattie Fellows · Matthew Smith · Shimon Whiteson
- 2023 Poster: Universal Morphology Control via Contextual Modulation
  Zheng Xiong · Jacob Beck · Shimon Whiteson
- 2022 Workshop: Responsible Decision Making in Dynamic Environments
  Virginie Do · Thorsten Joachims · Alessandro Lazaric · Joelle Pineau · Matteo Pirotta · Harsh Satija · Nicolas Usunier
- 2022 Poster: Flashlight: Enabling Innovation in Tools for Machine Learning
  Jacob Kahn · Vineel Pratap · Tatiana Likhomanenko · Qiantong Xu · Awni Hannun · Jeff Cai · Paden Tomasello · Ann Lee · Edouard Grave · Gilad Avidov · Benoit Steiner · Vitaliy Liptchinsky · Gabriel Synnaeve · Ronan Collobert
- 2022 Spotlight: Flashlight: Enabling Innovation in Tools for Machine Learning
  Jacob Kahn · Vineel Pratap · Tatiana Likhomanenko · Qiantong Xu · Awni Hannun · Jeff Cai · Paden Tomasello · Ann Lee · Edouard Grave · Gilad Avidov · Benoit Steiner · Vitaliy Liptchinsky · Gabriel Synnaeve · Ronan Collobert
- 2022 Poster: Learning inverse folding from millions of predicted structures
  Chloe Hsu · Robert Verkuil · Jason Liu · Zeming Lin · Brian Hie · Tom Sercu · Adam Lerer · Alexander Rives
- 2022 Oral: Learning inverse folding from millions of predicted structures
  Chloe Hsu · Robert Verkuil · Jason Liu · Zeming Lin · Brian Hie · Tom Sercu · Adam Lerer · Alexander Rives
- 2020 Poster: Word-Level Speech Recognition With a Letter to Word Encoder
  Ronan Collobert · Awni Hannun · Gabriel Synnaeve
- 2020 Poster: Fully Parallel Hyperparameter Search: Reshaped Space-Filling
  Marie-Liesse Cauwet · Camille Couprie · Julien Dehos · Pauline Luc · Jeremy Rapin · Morgane Riviere · Fabien Teytaud · Olivier Teytaud · Nicolas Usunier
- 2019 Poster: A fully differentiable beam search decoder
  Ronan Collobert · Awni Hannun · Gabriel Synnaeve
- 2019 Oral: A fully differentiable beam search decoder
  Ronan Collobert · Awni Hannun · Gabriel Synnaeve
- 2018 Poster: Canonical Tensor Decomposition for Knowledge Base Completion
  Timothee Lacroix · Nicolas Usunier · Guillaume Obozinski
- 2018 Oral: Canonical Tensor Decomposition for Knowledge Base Completion
  Timothee Lacroix · Nicolas Usunier · Guillaume Obozinski
- 2017 Workshop: Video Games and Machine Learning
  Gabriel Synnaeve · Julian Togelius · Tom Schaul · Oriol Vinyals · Nicolas Usunier
- 2017 Poster: Parseval Networks: Improving Robustness to Adversarial Examples
  Moustapha Cisse · Piotr Bojanowski · Edouard Grave · Yann Dauphin · Nicolas Usunier
- 2017 Talk: Parseval Networks: Improving Robustness to Adversarial Examples
  Moustapha Cisse · Piotr Bojanowski · Edouard Grave · Yann Dauphin · Nicolas Usunier