Poster
Policy Consolidation for Continual Reinforcement Learning
Christos Kaplanis · Murray Shanahan · Claudia Clopath

Tue Jun 11th 06:30 -- 09:00 PM @ Pacific Ballroom #37

We propose a method for tackling catastrophic forgetting in deep reinforcement learning that is agnostic to the timescale of changes in the distribution of experiences, does not require knowledge of task boundaries, and can adapt in continuously changing environments. In our policy consolidation model, the policy network interacts with a cascade of hidden networks that simultaneously remember the agent's policy at a range of timescales and regularise the current policy by its own history, thereby improving its ability to learn without forgetting. We find that the model improves continual learning relative to baselines on a number of continuous control tasks in single-task, alternating two-task, and multi-agent competitive self-play settings.
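The cascade described above can be illustrated with a minimal sketch. Note the assumptions: the paper regularises policy distributions (e.g. via KL penalties between adjacent cascade levels), whereas for brevity this sketch couples raw parameter vectors with squared-distance pulls; the function name `consolidation_step`, the geometric coupling schedule, and all constants are illustrative, not taken from the paper.

```python
import numpy as np

def consolidation_step(policies, grad, lr=0.1, omegas=None):
    """One update of a policy cascade (simplified sketch).

    policies: list of K parameter vectors; policies[0] is the visible
    policy, the rest are hidden networks that store its history at
    progressively slower timescales.
    grad: task gradient, applied to the visible policy only.
    omegas: coupling strengths between adjacent cascade levels; here
    they decrease geometrically, so deeper levels change more slowly.
    """
    K = len(policies)
    if omegas is None:
        omegas = [2.0 ** -k for k in range(K - 1)]
    new = [p.copy() for p in policies]
    # Visible policy: follow the task gradient, but stay close to the
    # first hidden network (regularisation by the policy's own history).
    new[0] += lr * (-grad + omegas[0] * (policies[1] - policies[0]))
    # Hidden policies: each level relaxes toward its neighbours, so the
    # cascade remembers the policy at a range of timescales.
    for k in range(1, K):
        pull = omegas[k - 1] * (policies[k - 1] - policies[k])
        if k < K - 1:
            pull += omegas[k] * (policies[k + 1] - policies[k])
        new[k] += lr * pull
    return new
```

Running this repeatedly, the visible policy tracks the task gradient while deeper cascade levels drift after it ever more slowly, which is the mechanism that lets old behaviour constrain (and survive) new learning.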

Author Information

Christos Kaplanis (Imperial College London)

PhD student investigating continual learning in artificial neural networks.

Murray Shanahan (DeepMind / Imperial College London)
Claudia Clopath (Imperial College London)
