Reinforcement learning algorithms usually assume that all actions are always available to an agent. However, both people and animals understand the general link between the features of their environment and the actions that are feasible. Gibson (1977) coined the term "affordances" to describe, in the context of embodied agents, the fact that certain states enable an agent to perform certain actions. In this paper, we develop a theory of affordances for agents that learn and plan in Markov Decision Processes. Affordances play a dual role here: on one hand, they allow faster planning by reducing the number of actions available in any given situation; on the other hand, they enable more efficient and precise learning of transition models from data, especially when such models require function approximation. We establish these properties through theoretical results as well as illustrative examples. We also propose an approach to learn affordances and use it to estimate transition models that are simpler and generalize better.
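The planning benefit described above can be illustrated with a minimal sketch (not the paper's algorithm): value iteration on a toy MDP where a state-dependent action mask plays the role of an affordance, ruling out infeasible actions before the maximization step. The toy MDP, the mask, and all names below are hypothetical.

```python
import numpy as np

n_states, n_actions, gamma = 4, 3, 0.9
rng = np.random.default_rng(0)

# Random toy model: P[s, a] is a distribution over next states, R[s, a] a reward.
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))
R = rng.random((n_states, n_actions))

# Affordance: a boolean mask of feasible (state, action) pairs.
afford = np.ones((n_states, n_actions), dtype=bool)
afford[0, 2] = False  # e.g. action 2 is not afforded in state 0

def value_iteration(mask, tol=1e-8):
    """Value iteration that maximizes only over afforded actions."""
    V = np.zeros(n_states)
    while True:
        Q = R + gamma * P @ V              # Q[s, a] under the current V
        Q = np.where(mask, Q, -np.inf)     # rule out non-afforded actions
        V_new = Q.max(axis=1)
        if np.abs(V_new - V).max() < tol:
            return V_new
        V = V_new

V_full = value_iteration(np.ones_like(afford))
V_aff = value_iteration(afford)
# Restricting the action set can only lower (or preserve) each state's value.
assert np.all(V_aff <= V_full + 1e-9)
```

The mask shrinks the set of Q-values compared at each state, which is the source of the planning speedup the abstract refers to: with large action spaces, fewer candidate actions means less work per Bellman backup.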
Author Information
Khimya Khetarpal (McGill University, Mila Montreal)
Ph.D. Student
Zafarali Ahmed (DeepMind)
Gheorghe Comanici (DeepMind)
David Abel (Brown University)
Doina Precup (DeepMind)
More from the Same Authors
- 2021: Gradient Starvation: A Learning Proclivity in Neural Networks
  Mohammad Pezeshki · Sékou-Oumar Kaba · Yoshua Bengio · Aaron Courville · Doina Precup · Guillaume Lajoie
- 2023 Poster: Discovering Object-Centric Generalized Value Functions From Pixels
  Somjit Nath · Gopeshh Subbaraj · Khimya Khetarpal · Samira Ebrahimi Kahou
- 2022 Poster: Proving Theorems using Incremental Learning and Hindsight Experience Replay
  Eser Aygün · Ankit Anand · Laurent Orseau · Xavier Glorot · Stephen McAleer · Vlad Firoiu · Lei Zhang · Doina Precup · Shibl Mourad
- 2022 Spotlight: Proving Theorems using Incremental Learning and Hindsight Experience Replay
  Eser Aygün · Ankit Anand · Laurent Orseau · Xavier Glorot · Stephen McAleer · Vlad Firoiu · Lei Zhang · Doina Precup · Shibl Mourad
- 2019 Workshop: Workshop on Multi-Task and Lifelong Reinforcement Learning
  Sarath Chandar · Shagun Sodhani · Khimya Khetarpal · Tom Zahavy · Daniel J. Mankowitz · Shie Mannor · Balaraman Ravindran · Doina Precup · Chelsea Finn · Abhishek Gupta · Amy Zhang · Kyunghyun Cho · Andrei A Rusu · Rob Fergus
- 2019 Poster: Finding Options that Minimize Planning Time
  Yuu Jinnai · David Abel · David Hershkowitz · Michael L. Littman · George Konidaris
- 2019 Oral: Finding Options that Minimize Planning Time
  Yuu Jinnai · David Abel · David Hershkowitz · Michael L. Littman · George Konidaris
- 2019 Poster: Understanding the Impact of Entropy on Policy Optimization
  Zafarali Ahmed · Nicolas Le Roux · Mohammad Norouzi · Dale Schuurmans
- 2019 Oral: Understanding the Impact of Entropy on Policy Optimization
  Zafarali Ahmed · Nicolas Le Roux · Mohammad Norouzi · Dale Schuurmans
- 2019 Poster: Per-Decision Option Discounting
  Anna Harutyunyan · Peter Vrancx · Philippe Hamel · Ann Nowe · Doina Precup
- 2019 Poster: Discovering Options for Exploration by Minimizing Cover Time
  Yuu Jinnai · Jee Won Park · David Abel · George Konidaris
- 2019 Oral: Discovering Options for Exploration by Minimizing Cover Time
  Yuu Jinnai · Jee Won Park · David Abel · George Konidaris
- 2019 Oral: Per-Decision Option Discounting
  Anna Harutyunyan · Peter Vrancx · Philippe Hamel · Ann Nowe · Doina Precup
- 2018 Poster: State Abstractions for Lifelong Reinforcement Learning
  David Abel · Dilip S. Arumugam · Lucas Lehnert · Michael L. Littman
- 2018 Oral: State Abstractions for Lifelong Reinforcement Learning
  David Abel · Dilip S. Arumugam · Lucas Lehnert · Michael L. Littman
- 2018 Poster: Policy and Value Transfer in Lifelong Reinforcement Learning
  David Abel · Yuu Jinnai · Sophie Guo · George Konidaris · Michael L. Littman
- 2018 Oral: Policy and Value Transfer in Lifelong Reinforcement Learning
  David Abel · Yuu Jinnai · Sophie Guo · George Konidaris · Michael L. Littman