Maximum State Entropy Exploration using Predecessor and Successor Representations
Arnav Kumar Jain · Lucas Lehnert · Irina Rish · Glen Berseth
Event URL: https://openreview.net/forum?id=inE5hW4tQ0
Animals have a developed ability to explore that aids them in important tasks such as locating food, finding shelter, and retrieving misplaced items. These exploration skills necessarily keep track of where they have been, so that they can plan to find items efficiently. Contemporary exploration algorithms often learn a less efficient exploration strategy because they either condition only on the current state or simply rely on making random open-loop exploratory moves. In this work, we propose $\eta\psi$-Learning, a method that learns efficient exploration policies by conditioning on past episodic experience to make the next exploratory move. Specifically, $\eta\psi$-Learning learns an exploration policy that maximizes the entropy of the state visitation distribution of a single trajectory. Furthermore, we demonstrate how variants of the predecessor representation and successor representation can be combined to predict the state visitation entropy. Our experiments demonstrate the efficacy of the proposed algorithm in strategically exploring the environment and maximizing state coverage with limited samples.
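A minimal sketch of the idea described in the abstract, assuming a tabular setting: states already visited in the current episode (a predecessor-style count over the partial trajectory) are combined with the expected future visitations given by the successor representation of the current state, and the entropy of the combined distribution is the quantity the exploration policy tries to maximize. The names `trajectory_visitation_entropy` and `successor_rep`, and the tabular setup, are illustrative assumptions and not the authors' implementation.

```python
import numpy as np

def trajectory_visitation_entropy(past_states, successor_rep, current_state, n_states):
    """Estimate the entropy of a single trajectory's state-visitation
    distribution by combining past visitations with the expected
    (discounted) future visitations of the current state.

    Illustrative sketch only; names and shapes are assumptions.
    """
    # Predecessor part: empirical counts of states visited so far in this episode.
    past_counts = np.zeros(n_states)
    for s in past_states:
        past_counts[s] += 1.0

    # Successor part: expected discounted future visitations from the current
    # state under the current policy, read from a tabular successor
    # representation of shape (n_states, n_states).
    future_counts = successor_rep[current_state]

    # Combine past and future visitations into one distribution over states.
    counts = past_counts + future_counts
    p = counts / counts.sum()

    # Shannon entropy of the trajectory's visitation distribution.
    return -np.sum(p * np.log(p + 1e-12))
```

In practice, the change in this entropy induced by each candidate action could serve as an intrinsic reward for the exploration policy, though the exact objective and estimators used in the paper may differ from this sketch.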
Author Information
Arnav Kumar Jain (Mila - Quebec AI Institute, University of Montreal)
Lucas Lehnert (Meta FAIR)
Irina Rish (Mila / Université de Montréal)
Glen Berseth (Mila, Université de Montréal)
More from the Same Authors
- 2021 : Intrinsic Control of Variational Beliefs in Dynamic Partially-Observed Visual Environments
  Nicholas Rhinehart · Jenny Wang · Glen Berseth · John Co-Reyes · Danijar Hafner · Chelsea Finn · Sergey Levine
- 2021 : Explore and Control with Adversarial Surprise
  Arnaud Fickinger · Natasha Jaques · Samyak Parajuli · Michael Chang · Nicholas Rhinehart · Glen Berseth · Stuart Russell · Sergey Levine
- 2021 : Continual Meta Policy Search for Sequential Multi-Task Learning
  Glen Berseth · Zhiwei Zhang
- 2021 : ReLMM: Practical RL for Learning Mobile Manipulation Skills Using Only Onboard Sensors
  Charles Sun · Jedrzej Orbik · Coline Devin · Abhishek Gupta · Glen Berseth · Sergey Levine
- 2022 : Towards Out-of-Distribution Adversarial Robustness
  Adam Ibrahim · Charles Guille-Escuret · Ioannis Mitliagkas · Irina Rish · David Krueger · Pouya Bashivan
- 2023 : Towards Out-of-Distribution Adversarial Robustness
  Adam Ibrahim · Charles Guille-Escuret · Ioannis Mitliagkas · Irina Rish · David Krueger · Pouya Bashivan
- 2023 : IQL-TD-MPC: Implicit Q-Learning for Hierarchical Model Predictive Control
  Yingchen Xu · Rohan Chitnis · Bobak Hashemi · Lucas Lehnert · Urun Dogan · Zheqing Zhu · Olivier Delalleau
- 2023 : Continual Pre-Training of Large Language Models: How to re-warm your model?
  Kshitij Gupta · Benjamin Thérien · Adam Ibrahim · Mats Richter · Quentin Anthony · Eugene Belilovsky · Timothée Lesort · Irina Rish
- 2023 : Cognitive Models as Simulators: Using Cognitive Models to Tap into Implicit Human Feedback
  Ardavan S. Nobandegani · Thomas Shultz · Irina Rish
- 2023 Poster: Towards Learning to Imitate from a Single Video Demonstration
  Glen Berseth · Florian Golemo · Christopher Pal
- 2022 Poster: AnyMorph: Learning Transferable Policies By Inferring Agent Morphology
  Brandon Trabucco · Mariano Phielipp · Glen Berseth
- 2022 Poster: Towards Scaling Difference Target Propagation by Learning Backprop Targets
  Maxence Ernoult · Fabrice Normandin · Abhinav Moudgil · Sean Spinney · Eugene Belilovsky · Irina Rish · Blake Richards · Yoshua Bengio
- 2022 Spotlight: Towards Scaling Difference Target Propagation by Learning Backprop Targets
  Maxence Ernoult · Fabrice Normandin · Abhinav Moudgil · Sean Spinney · Eugene Belilovsky · Irina Rish · Blake Richards · Yoshua Bengio
- 2022 Spotlight: AnyMorph: Learning Transferable Policies By Inferring Agent Morphology
  Brandon Trabucco · Mariano Phielipp · Glen Berseth
- 2021 : Panel Discussion 1
  Razvan Pascanu · Irina Rish
- 2020 : Panel Discussion
  Eric Eaton · Martha White · Doina Precup · Irina Rish · Harm van Seijen
- 2020 : Q&A with Irina Rish
  Irina Rish · Shagun Sodhani · Sarath Chandar
- 2020 : Invited Talk: Lifelong Learning: Towards Broad and Robust AI by Irina Rish
  Irina Rish