Understanding Plasticity in Neural Networks
Clare Lyle · Zeyu Zheng · Evgenii Nikishin · Bernardo Avila Pires · Razvan Pascanu · Will Dabney

Wed Jul 26 05:00 PM -- 06:30 PM (PDT) @ Exhibit Hall 1 #711

Plasticity, the ability of a neural network to quickly change its predictions in response to new information, is essential for the adaptability and robustness of deep reinforcement learning systems. Deep neural networks are known to lose plasticity over the course of training even in relatively simple learning problems, but the mechanisms driving this phenomenon are still poorly understood. This paper conducts a systematic empirical analysis of plasticity loss, with the goal of understanding the phenomenon mechanistically in order to guide the future development of targeted solutions. We find that loss of plasticity is deeply connected to changes in the curvature of the loss landscape, but that it often occurs in the absence of saturated units. Based on this insight, we identify a number of parameterization and optimization design choices which enable networks to better preserve plasticity over the course of training. We validate the utility of these findings on larger-scale RL benchmarks in the Arcade Learning Environment.
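The kind of experiment the abstract describes can be illustrated with a minimal sketch: train a small network on a sequence of tasks (here, fresh random regression targets), track how well it fits each new task, and check the fraction of saturated (never-firing) ReLU units along the way. This is our own simplified probe for exposition, not the paper's actual protocol; all function names, architecture sizes, and hyperparameters below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny one-hidden-layer ReLU MLP, trained by full-batch gradient descent.
# (Illustrative setup only; not the architecture or tasks used in the paper.)
def init_params(d_in=8, d_h=64):
    return {
        "W1": rng.normal(0, 1 / np.sqrt(d_in), (d_h, d_in)),
        "b1": np.zeros(d_h),
        "W2": rng.normal(0, 1 / np.sqrt(d_h), (1, d_h)),
        "b2": np.zeros(1),
    }

def forward(p, X):
    pre = X @ p["W1"].T + p["b1"]   # (n, d_h) pre-activations
    h = np.maximum(pre, 0.0)        # ReLU
    out = h @ p["W2"].T + p["b2"]   # (n, 1) predictions
    return pre, h, out

def train_on_task(p, X, y, steps=500, lr=0.05):
    """Fit the current parameters to targets y; return the final MSE."""
    n = len(X)
    for _ in range(steps):
        pre, h, out = forward(p, X)
        err = out - y                           # (n, 1)
        # Manual backprop through the MSE loss.
        gW2 = err.T @ h / n
        gb2 = err.mean(0)
        dh = (err @ p["W2"]) * (pre > 0)        # gradient through ReLU
        gW1 = dh.T @ X / n
        gb1 = dh.mean(0)
        for k, g in (("W1", gW1), ("b1", gb1), ("W2", gW2), ("b2", gb2)):
            p[k] -= lr * g
    return float(np.mean((forward(p, X)[2] - y) ** 2))

def dead_fraction(p, X):
    """Fraction of hidden units that never activate on X (one proxy for saturation)."""
    pre, _, _ = forward(p, X)
    return float((pre <= 0).all(axis=0).mean())

p = init_params()
X = rng.normal(size=(256, 8))
losses = []
for task in range(5):
    y = rng.normal(size=(256, 1))   # fresh random targets each task
    losses.append(train_on_task(p, X, y))
    print(f"task {task}: final MSE {losses[-1]:.3f}, "
          f"dead units {dead_fraction(p, X):.2%}")
```

A rising final-MSE curve across tasks would indicate the network is losing its ability to fit new targets, and comparing it against the dead-unit fraction shows whether (as the paper argues) plasticity can degrade even without saturated units.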

Author Information

Clare Lyle (University of Oxford)
Zeyu Zheng (Google DeepMind)
Evgenii Nikishin (Mila, DeepMind)
Bernardo Avila Pires (Google DeepMind)
Razvan Pascanu (DeepMind)
Will Dabney (Google DeepMind)
