

Spotlight

Guided Exploration with Proximal Policy Optimization using a Single Demonstration

Gabriele Libardi · Gianni De Fabritiis · Sebastian Dittert


Abstract:

Solving sparse-reward tasks through exploration is one of the major challenges in deep reinforcement learning, especially in three-dimensional, partially observable environments. Critically, the algorithm proposed in this article is capable of using a single human demonstration to solve hard-exploration problems. We train an agent on a combination of demonstrations and its own experience to solve problems with variable initial conditions, and we integrate this training with proximal policy optimization (PPO). The agent is also able to improve its performance and tackle harder problems by replaying its own past trajectories, prioritizing them based on the obtained reward and the maximum value of the trajectory. Finally, we compare variations of this algorithm to different imitation learning algorithms on a set of hard-exploration tasks in the Animal-AI Olympics environment. To the best of our knowledge, learning a task of comparable difficulty in a three-dimensional environment has never before been achieved using only one human demonstration.
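The abstract describes replaying the agent's own past trajectories alongside the single demonstration, prioritized by the obtained reward and the maximum value of the trajectory. Below is a minimal sketch, not the authors' implementation, of how such a prioritized trajectory buffer could look: the class name `TrajectoryBuffer`, the priority formula, and the mixing weight are illustrative assumptions; in the actual method the replayed trajectories would feed back into the PPO updates (e.g. via an imitation-style auxiliary loss).

```python
# Illustrative sketch of a prioritized trajectory replay buffer.
# All names and the priority formula are assumptions, not the paper's code.
import heapq
import random
from dataclasses import dataclass, field
from typing import List, Tuple


@dataclass(order=True)
class Trajectory:
    priority: float
    transitions: List[Tuple] = field(compare=False)  # (obs, action, reward, done)


class TrajectoryBuffer:
    def __init__(self, capacity: int = 100):
        self.capacity = capacity
        self.heap: List[Trajectory] = []  # min-heap keyed on priority

    def add(self, transitions, episode_return: float, max_value: float):
        # Assumed priority: a weighted mix of the episode return and the
        # highest critic value estimate encountered along the trajectory.
        priority = episode_return + 0.5 * max_value
        traj = Trajectory(priority, transitions)
        if len(self.heap) < self.capacity:
            heapq.heappush(self.heap, traj)
        elif priority > self.heap[0].priority:
            # Evict the lowest-priority trajectory to make room.
            heapq.heapreplace(self.heap, traj)

    def sample(self) -> Trajectory:
        # Priority-proportional sampling over stored trajectories.
        weights = [max(t.priority, 1e-6) for t in self.heap]
        return random.choices(self.heap, weights=weights, k=1)[0]


# Usage: seed the buffer with the single human demonstration, then mix
# sampled trajectories into the agent's training batches.
buffer = TrajectoryBuffer()
demo = [(None, 0, 0.0, False), (None, 1, 1.0, True)]  # placeholder demonstration
buffer.add(demo, episode_return=1.0, max_value=1.0)
replayed = buffer.sample()
```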
