Poster

Offline Actor-Critic Reinforcement Learning Scales to Large Models

Jost Springenberg · Abbas Abdolmaleki · Jingwei Zhang · Oliver M Groth · Michael Bloesch · Thomas Lampe · Philemon Brakel · Sarah Bechtle · Steven Kapturowski · Roland Hafner · Nicolas Heess · Martin Riedmiller


Abstract:

We show that offline actor-critic reinforcement learning can scale to large models, such as transformers, and follows scaling laws similar to those of supervised learning. We find that offline actor-critic algorithms can outperform strong supervised behavioral-cloning baselines for multi-task training on a large dataset containing both sub-optimal and expert behavior across 132 continuous-control tasks. We introduce a novel Perceiver-based actor-critic model and elucidate the key model features needed to make offline RL work with transformer-style self- and cross-attention. Overall, we find that: i) simple offline actor-critic algorithms are a natural choice for gradually moving away from the currently predominant paradigm of behavioral cloning, and ii) via offline RL it is possible to extract multi-task policies that master many domains simultaneously, including real robotics tasks, from sub-optimal demonstrations or self-generated data.
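To illustrate the general shape of offline actor-critic training on a fixed dataset, here is a minimal toy sketch in NumPy. It is not the paper's Perceiver-based method: it uses linear function approximation, a TD-regression critic, and an advantage-weighted actor update (an assumption standing in for the paper's actual objective), with all sizes and names hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy offline batch of (state, action, reward, next_state) transitions.
# All dimensions and data here are hypothetical placeholders.
S, A = 4, 2            # state and action dimensionality
N = 256                # number of offline transitions
states = rng.normal(size=(N, S))
actions = rng.normal(size=(N, A))
rewards = rng.normal(size=(N,))
next_states = rng.normal(size=(N, S))
gamma = 0.99

# Linear critic Q(s, a) = [s; a] @ w and linear deterministic policy s @ W_pi.
w = np.zeros(S + A)
W_pi = np.zeros((S, A))

def q(s, a, w):
    """Critic value for a batch of state-action pairs."""
    return np.concatenate([s, a], axis=1) @ w

for step in range(200):
    # Critic step: regress Q(s, a) toward the one-step TD target built from
    # the current policy's action at the next state (target held fixed).
    next_a = next_states @ W_pi
    target = rewards + gamma * q(next_states, next_a, w)
    x = np.concatenate([states, actions], axis=1)
    td_err = q(states, actions, w) - target
    w -= 1e-2 * (x.T @ td_err) / N   # gradient of 0.5 * mean(td_err**2)

    # Actor step: regress toward dataset actions, weighted by exponentiated
    # advantage. Staying close to dataset actions is the key offline
    # ingredient that avoids querying the critic far outside the data.
    adv = q(states, actions, w) - q(states, states @ W_pi, w)
    wts = np.exp(np.clip(adv, -5.0, 5.0))
    wts /= wts.sum()
    pred = states @ W_pi
    W_pi -= 1e-1 * (states.T @ (wts[:, None] * (pred - actions)))
```

Because both updates consume only the fixed batch, the same loop scales to any dataset of mixed-quality behavior; the paper's contribution is showing that this recipe, with a large attention-based model in place of the linear functions, follows supervised-learning-like scaling laws.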