Talk

DARLA: Improving Zero-Shot Transfer in Reinforcement Learning

Irina Higgins · Arka Pal · Andrei A Rusu · Loic Matthey · Christopher Burgess · Alexander Pritzel · Matthew Botvinick · Charles Blundell · Alexander Lerchner

C4.5

Abstract:

Domain adaptation is an important open problem in deep reinforcement learning (RL). In many scenarios of interest, data is hard to obtain, so agents may learn a source policy in a setting where data is readily available, with the hope that it generalises well to the target domain. We propose a new multi-stage RL agent, DARLA (DisentAngled Representation Learning Agent), which learns to see before learning to act. DARLA's vision is based on learning a disentangled representation of the observed environment. Once DARLA can see, it is able to acquire source policies that are robust to many domain shifts, even with no access to the target domain. DARLA significantly outperforms conventional baselines in zero-shot domain adaptation scenarios, an effect that holds across a variety of RL environments (Jaco arm, DeepMind Lab) and base RL algorithms (DQN, A3C and EC).
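The abstract gives no code, so the following is a minimal PyTorch sketch of the two-stage "learn to see, then learn to act" idea, not the authors' implementation. It assumes 64x64 RGB observations, a 32-dimensional latent space, and beta = 4, and it uses a plain pixel-reconstruction beta-VAE for stage one (the paper itself uses a perceptual-similarity variant); the action count and all hyperparameters here are illustrative only.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Assumed shapes and hyperparameters, not taken from this abstract.
OBS_SHAPE = (3, 64, 64)
LATENT_DIM = 32
BETA = 4.0  # beta > 1 pressures the latent code toward disentanglement


class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 4, 2, 1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(32, 32, 4, 2, 1), nn.ReLU(),  # 32 -> 16
            nn.Conv2d(32, 64, 4, 2, 1), nn.ReLU(),  # 16 -> 8
            nn.Conv2d(64, 64, 4, 2, 1), nn.ReLU(),  # 8 -> 4
            nn.Flatten(),
        )
        self.mu = nn.Linear(64 * 4 * 4, LATENT_DIM)
        self.logvar = nn.Linear(64 * 4 * 4, LATENT_DIM)

    def forward(self, x):
        h = self.conv(x)
        return self.mu(h), self.logvar(h)


class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(LATENT_DIM, 64 * 4 * 4)
        self.deconv = nn.Sequential(
            nn.ConvTranspose2d(64, 64, 4, 2, 1), nn.ReLU(),  # 4 -> 8
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(),  # 8 -> 16
            nn.ConvTranspose2d(32, 32, 4, 2, 1), nn.ReLU(),  # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, 2, 1),              # 32 -> 64
        )

    def forward(self, z):
        h = self.fc(z).view(-1, 64, 4, 4)
        return self.deconv(h)


def beta_vae_loss(x, recon_logits, mu, logvar, beta=BETA):
    # Reconstruction term plus a beta-weighted KL divergence to an
    # isotropic Gaussian prior over the latents.
    recon = F.mse_loss(torch.sigmoid(recon_logits), x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + beta * kl


# Stage 1 ("learn to see"): fit the beta-VAE on observations gathered
# in the source domain, then freeze the encoder.
encoder, decoder = Encoder(), Decoder()
opt = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder.parameters()), lr=1e-4
)
obs = torch.rand(16, *OBS_SHAPE)  # stand-in batch; real frames come from the env
mu, logvar = encoder(obs)
z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterisation trick
loss = beta_vae_loss(obs, decoder(z), mu, logvar)
opt.zero_grad()
loss.backward()
opt.step()

for p in encoder.parameters():
    p.requires_grad = False

# Stage 2 ("learn to act"): any base RL algorithm (DQN, A3C, EC) is then
# trained on the frozen latent means instead of raw pixels.
policy_head = nn.Linear(LATENT_DIM, 4)  # 4 = hypothetical action count
with torch.no_grad():
    state, _ = encoder(obs)
action_logits = policy_head(state)
```

Because the policy only ever consumes the frozen disentangled latents, domain shifts that leave the underlying factors of variation intact tend not to disturb the policy's input distribution, which is the mechanism behind the zero-shot transfer claim above.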
