Poster
Planning to Explore via Self-Supervised World Models
Ramanan Sekar · Oleh Rybkin · Kostas Daniilidis · Pieter Abbeel · Danijar Hafner · Deepak Pathak

Wed Jul 15 08:00 AM -- 08:45 AM & Wed Jul 15 07:00 PM -- 07:45 PM (PDT) @ Virtual

Reinforcement learning allows solving complex tasks; however, the learning tends to be task-specific and sample efficiency remains a challenge. We present Plan2Explore, a self-supervised reinforcement learning agent that tackles both these challenges through a new approach to self-supervised exploration and fast adaptation to new tasks, which need not be known during exploration. During exploration, unlike prior methods which retrospectively compute the novelty of observations after the agent has already reached them, our agent acts efficiently by leveraging planning to seek out expected future novelty. After exploration, the agent quickly adapts to multiple downstream tasks in a zero- or few-shot manner. We evaluate on challenging control tasks from high-dimensional image inputs. Without any training supervision or task-specific interaction, Plan2Explore outperforms prior self-supervised exploration methods and, in fact, almost matches the performance of an oracle agent that has access to rewards. Videos and code: https://ramanans1.github.io/plan2explore/
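The abstract's notion of "expected future novelty" can be illustrated with a minimal sketch: an ensemble of one-step latent dynamics models is trained on the same data, and the disagreement (variance) among their predictions serves as an intrinsic reward that a planner maximizes. All names, shapes, and the linear models below are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of exploration via ensemble disagreement.
# The ensemble members, shapes, and linear dynamics are placeholder
# assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(0)

def make_model(latent_dim, action_dim):
    # Each ensemble member is a random linear map from (latent, action)
    # to a predicted next latent state.
    W = rng.normal(size=(latent_dim + action_dim, latent_dim))
    return lambda z, a: np.concatenate([z, a]) @ W

def disagreement_reward(models, z, a):
    # Intrinsic reward: mean variance of the ensemble's next-state
    # predictions -- large where the learned world model is uncertain.
    preds = np.stack([m(z, a) for m in models])
    return preds.var(axis=0).mean()

latent_dim, action_dim = 8, 2
ensemble = [make_model(latent_dim, action_dim) for _ in range(5)]

z = rng.normal(size=latent_dim)
candidate_actions = rng.normal(size=(10, action_dim))

# A real planner would roll out imagined trajectories in the world model;
# here we simply score candidate actions and pick the most "novel" one.
scores = [disagreement_reward(ensemble, z, a) for a in candidate_actions]
best = int(np.argmax(scores))
```

In the full method, this reward is computed on imagined futures inside the learned world model, so the agent seeks out novelty before ever visiting the corresponding states.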

Author Information

Ramanan Sekar (University of Pennsylvania)
Oleg Rybkin (University of Pennsylvania)

Oleg is a Ph.D. student in the GRASP laboratory at the University of Pennsylvania, advised by Kostas Daniilidis. He received his Bachelor's degree from the Czech Technical University in Prague. He is interested in deep learning and computer vision, specifically in using deep predictive models to discover semantic structure in video, as well as applications of these models to planning. Prior to his Ph.D. studies, he worked on camera geometry as an undergraduate researcher advised by Tomas Pajdla. He was a visiting student researcher at INRIA advised by Josef Sivic, at the Tokyo Institute of Technology advised by Akihiko Torii, and at UC Berkeley advised by Sergey Levine.

Kostas Daniilidis (University of Pennsylvania)
Pieter Abbeel (UC Berkeley & Covariant)
Danijar Hafner (Google Brain & University of Toronto)
Deepak Pathak (CMU, FAIR)
