Self-Supervised Exploration via Disagreement
Deepak Pathak · Dhiraj Gandhi · Abhinav Gupta

Wed Jun 12th 12:10 -- 12:15 PM @ Hall B

Exploration has been a long-standing problem in both model-based and model-free approaches to sensorimotor control. Recent years have seen major advances, demonstrated in noise-free, non-stochastic domains such as video games and simulation. However, most current formulations get stuck when the dynamics are stochastic. In this paper, we propose an exploration formulation inspired by work in the active learning literature. Specifically, we train an ensemble of dynamics models and incentivize the agent to maximize the disagreement, or variance, of the ensemble's predictions. We show that this formulation performs as well as alternatives in non-stochastic scenarios and explores better in scenarios with stochastic dynamics. Further, we show that this objective can be leveraged to perform differentiable policy optimization, yielding a sample-efficient exploration policy. We demonstrate the efficacy of this approach through experiments on a large number of standard environments. Furthermore, we deploy our exploration algorithm on a real robot, which learns to interact with objects completely from scratch. Project videos are in the supplementary material.

Author Information

Deepak Pathak (UC Berkeley)
Dhiraj Gandhi (Carnegie Mellon University Robotics Institute)
Abhinav Gupta (Carnegie Mellon University)
