Poster

Stabilizing Off-Policy Deep Reinforcement Learning from Pixels

Edoardo Cetin · Philip Ball · Stephen Roberts · Oya Celiktutan

Hall E #832

Keywords: [ RL: Deep RL ] [ MISC: Representation Learning ] [ RL: Continuous Action ] [ Reinforcement Learning ]


Abstract:

Off-policy reinforcement learning (RL) from pixel observations is notoriously unstable. As a result, many successful algorithms must combine different domain-specific practices and auxiliary losses to learn meaningful behaviors in complex environments. In this work, we provide a novel analysis demonstrating that these instabilities arise from performing temporal-difference learning with a convolutional encoder and low-magnitude rewards. We show that this new visual deadly triad causes unstable training and premature convergence to degenerate solutions, a phenomenon we name catastrophic self-overfitting. Based on our analysis, we propose A-LIX, a method providing adaptive regularization to the encoder's gradients that explicitly prevents the occurrence of catastrophic self-overfitting using a dual objective. By applying A-LIX, we significantly outperform the prior state-of-the-art on the DeepMind Control and Atari benchmarks without any data augmentation or auxiliary losses.
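The abstract describes A-LIX only at a high level. As a rough illustration of the kind of gradient regularization it refers to, the sketch below applies small random local shifts with bilinear interpolation to an encoder's spatial feature maps; because the resampling is differentiable, the gradients flowing back into the convolutional encoder are spatially mixed in the same way. This is a minimal PyTorch sketch of the general idea, not the authors' implementation: the function name `local_signal_mixing`, the `shift_scale` parameter, and the fixed (non-adaptive) shift magnitude are assumptions introduced here for illustration, and the dual-objective adaptation mentioned in the abstract is not implemented.

```python
import torch
import torch.nn.functional as F


def local_signal_mixing(features: torch.Tensor, shift_scale: float) -> torch.Tensor:
    """Resample each spatial feature map at positions perturbed by small random
    offsets (up to `shift_scale` feature-map cells), using bilinear interpolation.
    Since grid_sample is differentiable, gradients reaching the encoder are
    spatially mixed across neighboring cells.

    features:    (B, C, H, W) encoder feature maps
    shift_scale: maximum local shift, in feature-map cells (fixed here; the
                 paper's method instead adapts its regularization via a dual objective)
    """
    B, C, H, W = features.shape
    device, dtype = features.device, features.dtype

    # Base sampling grid in normalized [-1, 1] coordinates, (x, y) order as grid_sample expects.
    ys = torch.linspace(-1.0, 1.0, H, device=device, dtype=dtype)
    xs = torch.linspace(-1.0, 1.0, W, device=device, dtype=dtype)
    grid_y, grid_x = torch.meshgrid(ys, xs, indexing="ij")
    base_grid = torch.stack((grid_x, grid_y), dim=-1).unsqueeze(0).expand(B, H, W, 2)

    # Random per-cell offsets in [-shift_scale, shift_scale] cells, converted to
    # normalized coordinates (one cell = 2 / (size - 1) with align_corners=True).
    cell = torch.tensor([2.0 / max(W - 1, 1), 2.0 / max(H - 1, 1)], device=device, dtype=dtype)
    offsets = (torch.rand(B, H, W, 2, device=device, dtype=dtype) * 2.0 - 1.0) * shift_scale * cell

    return F.grid_sample(features, base_grid + offsets, mode="bilinear",
                         padding_mode="border", align_corners=True)


# Hypothetical usage inside a critic update (encoder/critic names are placeholders):
#   feats = encoder(pixels)                      # (B, C, H, W)
#   feats = local_signal_mixing(feats, 1.0)      # smooth features and their gradients
#   q_values = critic(feats.flatten(1), actions)
```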
