

Oral

Continual Reinforcement Learning with Complex Synapses

Christos Kaplanis · Murray Shanahan · Claudia Clopath

Abstract:

Unlike humans, who are capable of continual learning over their lifetimes, artificial neural networks have long been known to suffer from a phenomenon known as catastrophic forgetting, whereby new learning can lead to abrupt erasure of previously acquired knowledge. Whereas in a neural network the parameters are typically modelled as scalar values, an individual synapse in the brain comprises a complex network of interacting biochemical components that evolve at different timescales. In this paper, we show that by equipping tabular and deep reinforcement learning agents with a synaptic model that incorporates this biological complexity (Benna & Fusi, 2016), catastrophic forgetting can be mitigated at multiple timescales. In particular, we find that as well as enabling continual learning across sequential training of two simple tasks, it can also be used to overcome within-task forgetting by reducing the need for an experience replay database.
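To make the idea concrete, the Benna & Fusi (2016) model can be pictured as a chain of coupled variables ("beakers"), where the first variable is the visible synaptic weight and deeper variables integrate its history at exponentially slower timescales. The sketch below is a minimal, hypothetical Euler-step implementation of such a chain, not the authors' code; the function name, the specific coupling/capacity schedule (geometric factors of 2), and the step size are illustrative assumptions.

```python
import numpy as np

def benna_fusi_step(u, g, C, dt=0.1, grad=0.0):
    """One Euler step of a chain of interacting synaptic variables.

    u[0] is the visible synaptic weight; deeper variables u[k] change
    ever more slowly because the couplings g shrink and the capacities
    C grow geometrically along the chain, so old values are consolidated
    into slow variables instead of being overwritten.
    (Illustrative sketch of the Benna & Fusi 2016 model, not the paper's code.)
    """
    flow = g * (u[:-1] - u[1:])   # flux from beaker k into beaker k+1
    du = np.zeros_like(u)
    du[:-1] -= flow               # shallow side loses what flows deeper
    du[1:] += flow                # deep side gains it
    du[0] += grad                 # the learning signal hits u[0] only
    return u + dt * du / C        # larger capacity C => slower dynamics

# Hypothetical usage: a 5-variable chain driven by a constant learning signal.
u = np.zeros(5)
g = 0.5 / 2.0 ** np.arange(4)     # couplings halve along the chain
C = 2.0 ** np.arange(5)           # capacities double along the chain
for _ in range(100):
    u = benna_fusi_step(u, g, C, dt=0.1, grad=1.0)
```

After the drive stops, the slow variables pull the visible weight back toward its consolidated value, which is the mechanism the paper exploits to mitigate forgetting both across tasks and within a task.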
