Oral
The Dormant Neuron Phenomenon in Deep Reinforcement Learning
Ghada Sokar · Rishabh Agarwal · Pablo Samuel Castro · Utku Evci

Tue Jul 25 08:38 PM -- 08:46 PM (PDT) @ Ballroom C

In this work we identify the dormant neuron phenomenon in deep reinforcement learning, where an agent's network suffers from an increasing number of inactive neurons, thereby affecting network expressivity. We demonstrate the presence of this phenomenon across a variety of algorithms and environments, and highlight its effect on learning. To address this issue, we propose a simple and effective method (ReDo) that Recycles Dormant neurons throughout training. Our experiments demonstrate that ReDo maintains the expressive power of networks by reducing the number of dormant neurons and results in improved performance.
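To make the idea concrete, here is a minimal NumPy sketch of the two steps the abstract describes: flagging dormant neurons (those whose normalized activation score falls at or below a threshold τ) and recycling them by reinitializing their incoming weights and zeroing their outgoing weights. The function names, the scaled-normal initialization, and the exact scoring formula are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def dormant_mask(activations, tau=0.0):
    # activations: (batch, num_neurons) post-ReLU activations of one layer.
    # Score each neuron by its mean |activation|, normalized by the layer
    # average; neurons with score <= tau are flagged dormant.
    # tau=0 catches neurons that are exactly inactive on this batch.
    score = np.abs(activations).mean(axis=0)
    score = score / (score.mean() + 1e-8)
    return score <= tau

def recycle(w_in, b_in, w_out, mask, rng):
    # ReDo-style recycling sketch: give each dormant neuron fresh incoming
    # weights (scaled-normal init here, an assumption) and zero its outgoing
    # weights so the rest of the network's function is initially unchanged.
    n_in = w_in.shape[0]
    w_in[:, mask] = rng.normal(0.0, 1.0 / np.sqrt(n_in), size=(n_in, mask.sum()))
    b_in[mask] = 0.0
    w_out[mask, :] = 0.0
    return w_in, b_in, w_out
```

In a training loop, one would periodically run a batch through the network, compute the mask per layer, and recycle flagged neurons in place; zeroing the outgoing weights is what keeps the recycled neuron from perturbing downstream computation until it learns useful features.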

Author Information

Ghada Sokar (Eindhoven University of Technology)
Rishabh Agarwal (Google DeepMind)
Pablo Samuel Castro (Google DeepMind)

Pablo was born and raised in Quito, Ecuador, and moved to Montreal after high school to study at McGill. He stayed in Montreal for the next 10 years: he finished his bachelor's degree, worked at a flight simulator company, and eventually obtained his master's and PhD at McGill, focusing on reinforcement learning. After his PhD, Pablo did a 10-month postdoc in Paris before moving to Pittsburgh to join Google. He has worked at Google for almost 6 years and is currently a research software engineer at Google Brain in Montreal, focusing on fundamental reinforcement learning research, as well as machine learning and music. Aside from his interest in coding/AI/math, Pablo is an active musician (https://www.psctrio.com), loves running (5 marathons so far, including Boston!), and enjoys discussing politics and activism.

Utku Evci (Google)
