Theory and Foundation of Continual Learning

Thang Doan · Bogdan Mazoure · Amal Rannen Triki · Rahaf Aljundi · Vincenzo Lomonaco · Xu He · Arslan Chaudhry


Machine learning systems are commonly applied to isolated tasks (such as image recognition or playing chess) or narrow domains (such as control over similar robotic bodies). It is further assumed that the learning system has simultaneous access to all annotated data points of the tasks at hand. In contrast, Continual Learning (CL), also referred to as Lifelong or Incremental Learning, studies the problem of learning from a stream of data drawn from changing domains, each connected to a different learning task. The objective of CL is to adapt quickly to new situations or tasks by exploiting previously acquired knowledge, while protecting that knowledge from being erased.

Significant advances have been made in CL over the past few years, mostly through empirical investigations and benchmarking; theoretical understanding, however, still lags behind. For instance, while Catastrophic Forgetting (CF) is a recurring failure mode that most works try to mitigate, the literature offers little theoretical account of it. Many real-life applications share assumptions and settings with CL, yet basic questions remain open: what convergence guarantees hold when a given method is deployed? If memory capacity is a binding constraint for replay methods, how should we select a minimal set of examples so that CF is minimized? Answers to these questions are key ingredients for designing better heuristics, but very little theoretical guidance is available in the literature.
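To make the memory-selection question concrete, a common baseline for filling a bounded replay memory from a stream is reservoir sampling, which keeps each example seen so far with equal probability. The sketch below is purely illustrative (the `ReplayBuffer` class and its interface are our own, not a specific method from the literature) and omits the learner itself:

```python
import random

class ReplayBuffer:
    """Bounded episodic memory filled via reservoir sampling, so every
    example in the stream has equal probability of being retained."""

    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.memory = []
        self.num_seen = 0
        self.rng = random.Random(seed)

    def add(self, example):
        self.num_seen += 1
        if len(self.memory) < self.capacity:
            self.memory.append(example)
        else:
            # Overwrite a stored slot with probability capacity / num_seen.
            j = self.rng.randrange(self.num_seen)
            if j < self.capacity:
                self.memory[j] = example

    def sample(self, batch_size):
        # Draw a replay batch to interleave with the current task's data.
        k = min(batch_size, len(self.memory))
        return self.rng.sample(self.memory, k)

# Stream 1000 examples through a 50-slot memory.
buffer = ReplayBuffer(capacity=50)
for x in range(1000):
    buffer.add(x)
replay_batch = buffer.sample(8)
```

Reservoir sampling is task-agnostic; whether smarter, loss-aware selection rules provably reduce CF under a fixed budget is exactly the kind of open question the workshop targets.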

The aim of this workshop is to achieve an understanding of different components of continual learning to bridge the gap with empirical results. Furthermore, we are also interested in submissions that draw connections between Continual Learning and other areas, such as Neuroscience and Meta-learning.

For more information, visit our workshop website.
