Standard RL algorithms assume fixed environment dynamics and require a significant amount of interaction to adapt to new environments. We introduce Policy-Dynamics Value Functions (PD-VF), a novel approach for rapidly adapting to dynamics different from those previously seen in training. PD-VF explicitly estimates the cumulative reward in a space of policies and environments. An ensemble of conventional RL policies is used to gather experience on training environments, from which embeddings of both policies and environments can be learned. Then, a value function conditioned on both embeddings is trained. At test time, a few actions are sufficient to infer the environment embedding, enabling a policy to be selected by maximizing the learned value function (which requires no additional environment interaction). We show that our method can rapidly adapt to new dynamics on a set of MuJoCo domains.
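To make the test-time step concrete, below is a minimal sketch of PD-VF's policy selection in PyTorch. The names (`env_encoder`, `value_fn`, `policy_embeddings`) are hypothetical stand-ins for the trained components described in the abstract (the environment encoder, the value function over policy-environment embedding pairs, and the embeddings of the training-time policy ensemble), not the authors' actual code.

```python
import torch

def select_policy(env_encoder, value_fn, policy_embeddings, transitions):
    """Pick the policy whose embedding maximizes the learned value function
    under the environment embedding inferred from a few transitions.

    All components here are illustrative stand-ins for trained PD-VF models.
    """
    with torch.no_grad():
        # Infer the dynamics embedding from a handful of observed
        # (state, action, next_state) transitions in the new environment.
        z_env = env_encoder(transitions)
        # Score each candidate policy embedding under the inferred dynamics;
        # this maximization needs no further environment interaction.
        values = torch.stack([value_fn(z_pi, z_env) for z_pi in policy_embeddings])
        best = int(torch.argmax(values))
    return policy_embeddings[best]
```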
Author Information
Roberta Raileanu (NYU)
Max Goldstein (NYU)
Arthur Szlam (Facebook)
Rob Fergus (Facebook AI Research, NYU)
More from the Same Authors
- 2021 Poster: CURI: A Benchmark for Productive Concept Learning Under Uncertainty
  Shanmukha Ramakrishna Vedantam · Arthur Szlam · Maximilian Nickel · Ari Morcos · Brenden Lake
- 2021 Spotlight: CURI: A Benchmark for Productive Concept Learning Under Uncertainty
  Shanmukha Ramakrishna Vedantam · Arthur Szlam · Maximilian Nickel · Ari Morcos · Brenden Lake
- 2021 Oral: Decoupling Value and Policy for Generalization in Reinforcement Learning
  Roberta Raileanu · Rob Fergus
- 2021 Poster: Decoupling Value and Policy for Generalization in Reinforcement Learning
  Roberta Raileanu · Rob Fergus
- 2021 Poster: Not All Memories are Created Equal: Learning to Forget by Expiring
  Sainbayar Sukhbaatar · Da Ju · Spencer Poff · Stephen Roller · Arthur Szlam · Jason Weston · Angela Fan
- 2021 Oral: Not All Memories are Created Equal: Learning to Forget by Expiring
  Sainbayar Sukhbaatar · Da Ju · Spencer Poff · Stephen Roller · Arthur Szlam · Jason Weston · Angela Fan
- 2020: Collaboration in Situated Instruction Following Q&A
  Yoav Artzi · Arthur Szlam
- 2020: Collaborative Construction and Communication in Minecraft Q&A
  Julia Hockenmaier · Arthur Szlam
- 2020 Workshop: Workshop on Learning in Artificial Open Worlds
  Arthur Szlam · Katja Hofmann · Ruslan Salakhutdinov · Noboru Kuno · William Guss · Kavya Srinet · Brandon Houghton
- 2020: Automatic Data Augmentation for Generalization in Reinforcement Learning
  Roberta Raileanu
- 2019 Workshop: Workshop on Multi-Task and Lifelong Reinforcement Learning
  Sarath Chandar · Shagun Sodhani · Khimya Khetarpal · Tom Zahavy · Daniel J. Mankowitz · Shie Mannor · Balaraman Ravindran · Doina Precup · Chelsea Finn · Abhishek Gupta · Amy Zhang · Kyunghyun Cho · Andrei A Rusu · Rob Fergus
- 2018 Poster: Optimizing the Latent Space of Generative Networks
  Piotr Bojanowski · Armand Joulin · David Lopez-Paz · Arthur Szlam
- 2018 Poster: Modeling Others using Oneself in Multi-Agent Reinforcement Learning
  Roberta Raileanu · Emily Denton · Arthur Szlam · Rob Fergus
- 2018 Poster: Composable Planning with Attributes
  Amy Zhang · Sainbayar Sukhbaatar · Adam Lerer · Arthur Szlam · Rob Fergus
- 2018 Oral: Composable Planning with Attributes
  Amy Zhang · Sainbayar Sukhbaatar · Adam Lerer · Arthur Szlam · Rob Fergus
- 2018 Oral: Modeling Others using Oneself in Multi-Agent Reinforcement Learning
  Roberta Raileanu · Emily Denton · Arthur Szlam · Rob Fergus
- 2018 Oral: Optimizing the Latent Space of Generative Networks
  Piotr Bojanowski · Armand Joulin · David Lopez-Paz · Arthur Szlam