Oral
Learning by Playing - Solving Sparse Reward Tasks from Scratch
Martin Riedmiller · Roland Hafner · Thomas Lampe · Michael Neunert · Jonas Degrave · Tom Van de Wiele · Vlad Mnih · Nicolas Heess · Jost Springenberg

Wed Jul 11 07:20 AM -- 07:40 AM (PDT) @ A1

We propose Scheduled Auxiliary Control (SAC-X), a new learning paradigm in the context of Reinforcement Learning (RL). SAC-X enables learning of complex behaviors - from scratch - in the presence of multiple sparse reward signals. To this end, the agent is equipped with a set of general auxiliary tasks that it attempts to learn simultaneously via off-policy RL. The key idea behind our method is that active (learned) scheduling and execution of auxiliary policies allows the agent to efficiently explore its environment, enabling it to excel at sparse reward RL. Our experiments in several challenging robotic manipulation settings demonstrate the power of our approach.
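
The scheduling idea can be illustrated with a minimal sketch: a bandit-style scheduler that samples the next auxiliary intention via Boltzmann exploration over the average main-task return observed after executing each intention, so intentions that tend to lead the agent toward the sparse main reward get scheduled more often. This is an illustration only, under assumed names (BoltzmannScheduler, the intention labels, the synthetic success rates are all hypothetical); the paper's learned scheduler and per-intention policies are more involved than this fixed rule.

import math
import random
from collections import defaultdict

# Hypothetical sketch of SAC-X style intention scheduling, not the
# authors' implementation: rank auxiliary intentions by the average
# main-task return observed after executing them, and sample the next
# intention via Boltzmann exploration over those averages.

class BoltzmannScheduler:
    def __init__(self, intentions, temperature=1.0):
        self.intentions = intentions
        self.temperature = temperature
        self.returns = defaultdict(list)  # intention -> main-task returns seen

    def choose(self):
        # Average main-task return per intention (0 if never executed).
        avg = [sum(self.returns[i]) / len(self.returns[i]) if self.returns[i] else 0.0
               for i in self.intentions]
        weights = [math.exp(a / self.temperature) for a in avg]
        return random.choices(self.intentions, weights=weights, k=1)[0]

    def update(self, intention, main_task_return):
        # Feed back the sparse main-task return collected while the
        # chosen intention's policy controlled the agent.
        self.returns[intention].append(main_task_return)

# Toy demo with synthetic success rates: "stack" triggers the sparse
# main-task reward most often, so the scheduler learns to prefer it.
if __name__ == "__main__":
    sched = BoltzmannScheduler(["reach", "grasp", "stack"], temperature=0.1)
    success_rate = {"reach": 0.0, "grasp": 0.1, "stack": 0.6}
    for _ in range(200):
        intention = sched.choose()
        ret = 1.0 if random.random() < success_rate[intention] else 0.0
        sched.update(intention, ret)
    print({i: len(sched.returns[i]) for i in sched.intentions})

In the full method, experience gathered while any intention is executing is shared and used to train all task policies off-policy, which is what lets the auxiliary behaviors drive exploration for the sparse main task.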

Author Information

Martin Riedmiller (DeepMind)
Roland Hafner (DeepMind)
Thomas Lampe (DeepMind)
Michael Neunert (Google DeepMind)
Jonas Degrave (Google)
Tom Van de Wiele (DeepMind)
Vlad Mnih (Google DeepMind)
Nicolas Heess (DeepMind)
Jost Springenberg (DeepMind)
