Oral
Continual Reinforcement Learning with Complex Synapses
Christos Kaplanis · Murray Shanahan · Claudia Clopath

Wed Jul 11 08:50 AM -- 09:00 AM (PDT) @ A3

Unlike humans, who are capable of continual learning over their lifetimes, artificial neural networks have long been known to suffer from a phenomenon known as catastrophic forgetting, whereby new learning can lead to abrupt erasure of previously acquired knowledge. Whereas in a neural network the parameters are typically modelled as scalar values, an individual synapse in the brain comprises a complex network of interacting biochemical components that evolve at different timescales. In this paper, we show that by equipping tabular and deep reinforcement learning agents with a synaptic model that incorporates this biological complexity (Benna & Fusi, 2016), catastrophic forgetting can be mitigated at multiple timescales. In particular, we find that as well as enabling continual learning across sequential training of two simple tasks, it can also be used to overcome within-task forgetting by reducing the need for an experience replay database.
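The Benna & Fusi (2016) model referenced in the abstract represents each synapse not as one scalar but as a chain of coupled variables whose couplings shrink geometrically with depth, so deeper variables change ever more slowly and preserve a long-timescale trace of past learning. The following is a minimal illustrative sketch of that chain dynamic, not the authors' implementation; the variable count, coupling constants, and Euler step size here are assumptions chosen for readability.

```python
import numpy as np

class BennaFusiSynapse:
    """Illustrative sketch of a Benna & Fusi-style multi-timescale synapse.

    A chain of N interacting variables u_1..u_N; u_1 is the visible
    weight that receives learning updates. Neighbouring variables are
    coupled with geometrically decreasing strengths, and deeper
    variables respond more slowly, acting as a memory of the weight's
    history that resists rapid overwriting. Constants are illustrative,
    not the paper's exact parameterisation.
    """

    def __init__(self, n_vars=4, g=0.5, dt=1.0):
        self.u = np.zeros(n_vars)
        # coupling g_{k,k+1} between variables k and k+1, halving with depth
        self.g = g * 2.0 ** -np.arange(n_vars - 1)
        # effective "capacitance" C_k doubles with depth, slowing deep variables
        self.C = 2.0 ** np.arange(n_vars)
        self.dt = dt

    @property
    def weight(self):
        # u_1 is the synaptic weight seen by the network
        return self.u[0]

    def update(self, grad=0.0):
        """One Euler step; `grad` is the external learning signal on u_1."""
        u = self.u
        flow = self.g * (u[:-1] - u[1:])   # flow between adjacent variables
        du = np.zeros_like(u)
        du[0] = grad - flow[0]             # input drives the shallow end
        du[1:-1] = flow[:-1] - flow[1:]
        du[-1] = flow[-1]                  # deepest variable only receives
        self.u += self.dt * du / self.C
```

Driving `update` with a learning signal moves the visible weight quickly while the deeper variables integrate a slow copy of it; when later updates push the weight elsewhere, the deep variables pull it back toward its historical value, which is the mechanism the paper exploits to mitigate forgetting.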

Author Information

Christos Kaplanis (Imperial College London)

PhD student investigating the topic of continual learning in artificial neural networks.

Murray Shanahan (Imperial College London)
Claudia Clopath (Imperial College London)
