Poster
Self-Composing Policies for Scalable Continual Reinforcement Learning
Mikel Malagón · Josu Ceberio · Jose A Lozano
Hall C 4-9 #1105
Thu 25 Jul 1:30 a.m. PDT — 2:30 a.m. PDT
This work introduces a growable and modular neural network architecture that naturally avoids catastrophic forgetting and interference in continual reinforcement learning. The structure of each module enables it to selectively combine previously learned policies with its own internal policy, accelerating learning on the current task. Unlike previous growing neural network approaches, we show that the number of parameters of the proposed architecture grows linearly with the number of tasks and that it does not sacrifice plasticity to scale. Experiments conducted on benchmark continuous control and visual problems reveal that the proposed approach achieves greater knowledge transfer and performance than alternative methods.
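To make the described architecture concrete, below is a minimal sketch of what a growable, self-composing policy might look like, assuming a PyTorch implementation with one module added per task and an attention-style, input-conditioned combination of the frozen previous modules' outputs. The class names (SelfComposingModule, GrowingPolicy), layer sizes, and the exact combination rule are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SelfComposingModule(nn.Module):
    """One per-task module: an internal policy head plus a learned,
    input-conditioned combination of the (frozen) previous policies."""

    def __init__(self, obs_dim: int, act_dim: int, hidden: int = 256):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.internal_head = nn.Linear(hidden, act_dim)  # this task's own policy
        self.query = nn.Linear(hidden, hidden)           # attends over previous policies
        self.key = nn.Linear(act_dim, hidden)            # shared projections, so parameter
        self.value = nn.Linear(act_dim, act_dim)         # count is constant per module

    def forward(self, obs: torch.Tensor, prev_outputs: list[torch.Tensor]) -> torch.Tensor:
        h = self.encoder(obs)
        internal = self.internal_head(h)
        if not prev_outputs:                             # first task: internal policy only
            return internal
        prev = torch.stack(prev_outputs, dim=1)          # (batch, n_prev, act_dim)
        q = self.query(h).unsqueeze(1)                   # (batch, 1, hidden)
        k = self.key(prev)                               # (batch, n_prev, hidden)
        attn = F.softmax(q @ k.transpose(1, 2) / k.shape[-1] ** 0.5, dim=-1)
        composed = (attn @ self.value(prev)).squeeze(1)  # selective reuse of previous policies
        return internal + composed                       # combine with the internal policy


class GrowingPolicy(nn.Module):
    """Adds one module per task; earlier modules are frozen, so previously
    learned policies cannot be overwritten (no catastrophic forgetting)."""

    def __init__(self, obs_dim: int, act_dim: int):
        super().__init__()
        self.obs_dim, self.act_dim = obs_dim, act_dim
        self.task_modules = nn.ModuleList()

    def add_task(self) -> None:
        for m in self.task_modules:                      # freeze previously learned modules
            m.requires_grad_(False)
        self.task_modules.append(SelfComposingModule(self.obs_dim, self.act_dim))

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        outputs: list[torch.Tensor] = []
        for m in self.task_modules:                      # each module may reuse all earlier outputs
            outputs.append(m(obs, list(outputs)))
        return outputs[-1]                               # act with the current task's module
```

In this sketch the key/value projections act on fixed-size policy outputs rather than on previous modules' parameters, so each new module has a constant number of parameters and the total grows linearly with the number of tasks, matching the scaling behavior claimed in the abstract.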