Continual learning—the ability to learn many tasks in sequence—is critical for artificial learning systems. Yet standard training methods for deep networks often suffer from catastrophic forgetting, where learning new tasks erases knowledge of earlier ones. While the term catastrophic forgetting labels the problem, the theoretical reasons why tasks interfere remain unclear. Here, we attempt to narrow this gap between theory and practice by studying continual learning in the teacher-student setup. We extend previous analytical work on two-layer networks in the teacher-student setup to multiple teachers. Using each teacher to represent a different task, we investigate how the relationship between teachers affects the amount of forgetting and transfer exhibited by the student when the task switches. In line with recent work, we find that when tasks depend on similar features, intermediate task similarity leads to the greatest forgetting. However, feature similarity is only one way in which tasks may be related. The teacher-student approach allows us to disentangle task similarity at the level of readouts (hidden-to-output weights) from similarity at the level of features (input-to-hidden weights). We find a complex interplay between both types of similarity, the initial rates of transfer and forgetting, the maximum transfer and forgetting, and the long-time (post-switch) amounts of transfer and forgetting. Together, these results help illuminate the diverse factors contributing to catastrophic forgetting.
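The abstract describes the setup only in words; below is a minimal, illustrative Python sketch (not the authors' code) of a two-layer teacher-student continual-learning experiment. The `feature_overlap` and `readout_overlap` knobs, the erf activation, the learning-rate scalings, and all constants are assumptions chosen for illustration, not the paper's exact parameterisation.

```python
import numpy as np
from scipy.special import erf

rng = np.random.default_rng(0)

D = 500          # input dimension
M = 2            # hidden units in each teacher
K = 2            # hidden units in the student
lr = 0.5         # SGD learning rate (illustrative)
steps = 20_000   # online SGD steps per task


def g(h):
    # erf activation, common in analytical teacher-student studies
    return erf(h / np.sqrt(2))


def g_prime(h):
    return np.sqrt(2.0 / np.pi) * np.exp(-h ** 2 / 2)


def forward(X, W, v):
    """Two-layer network y = v . g(W x / sqrt(D)); X has shape (n, D)."""
    H = X @ W.T / np.sqrt(D)
    return g(H) @ v, H


def make_teachers(feature_overlap, readout_overlap):
    """Two teachers with partially correlated weights; the overlap knobs in [0, 1]
    are illustrative stand-ins for feature/readout task similarity."""
    W1, W_ind = rng.standard_normal((2, M, D))
    v1, v_ind = rng.standard_normal((2, M))
    W2 = feature_overlap * W1 + np.sqrt(1 - feature_overlap ** 2) * W_ind
    v2 = readout_overlap * v1 + np.sqrt(1 - readout_overlap ** 2) * v_ind
    return (W1, v1), (W2, v2)


def gen_error(student, teacher, n=2000):
    # Monte-Carlo estimate of the generalisation error on a given teacher
    X = rng.standard_normal((n, D))
    ys, _ = forward(X, *student)
    yt, _ = forward(X, *teacher)
    return 0.5 * np.mean((ys - yt) ** 2)


def train_online(student, teacher, steps):
    # Online (one fresh sample per step) SGD on the squared error
    W, v = student
    for _ in range(steps):
        x = rng.standard_normal(D)
        y_t, _ = forward(x[None, :], *teacher)
        y_s, H = forward(x[None, :], W, v)
        delta = (y_s - y_t).item()
        h = H[0]
        # exact gradients of 0.5 * delta^2; readout rate scaled by 1/D (a common
        # choice in online analyses, used here purely for illustration)
        W -= (lr / np.sqrt(D)) * delta * np.outer(v * g_prime(h), x)
        v -= (lr / D) * delta * g(h)
    return W, v


# Sequential training: task 1, then task 2; track forgetting on task 1.
teacher1, teacher2 = make_teachers(feature_overlap=0.5, readout_overlap=1.0)
student = (0.01 * rng.standard_normal((K, D)), 0.01 * rng.standard_normal(K))

student = train_online(student, teacher1, steps)
err1_before_switch = gen_error(student, teacher1)
student = train_online(student, teacher2, steps)
print("task-1 error before switch:", err1_before_switch)
print("task-1 error after training on task 2 (forgetting):", gen_error(student, teacher1))
print("task-2 error after switch:", gen_error(student, teacher2))
```

Sweeping `feature_overlap` with `readout_overlap` fixed (or vice versa) is one way to probe, in this kind of toy setup, how the two notions of task similarity separately affect forgetting and transfer around the task switch.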
Author Information
Sebastian Lee (Microsoft Research)
ML PhD Student
Sebastian Goldt (International School of Advanced Studies (SISSA))
I'm an assistant professor working on theories of learning in neural networks.
Andrew Saxe (University of Oxford)
Related Events (a corresponding poster, oral, or spotlight)
- 2021 Poster: Continual Learning in the Teacher-Student Setup: Impact of Task Similarity (Tue. Jul 20th, 04:00 -- 06:00 PM)
More from the Same Authors
- 2023 Poster: Neural networks trained with SGD learn distributions of increasing complexity (Maria Refinetti · Alessandro Ingrosso · Sebastian Goldt)
- 2022 Poster: The dynamics of representation learning in shallow, non-linear autoencoders (Maria Refinetti · Sebastian Goldt)
- 2022 Poster: Maslow's Hammer in Catastrophic Forgetting: Node Re-Use vs. Node Activation (Sebastian Lee · Stefano Sarao Mannelli · Claudia Clopath · Sebastian Goldt · Andrew Saxe)
- 2022 Spotlight: Maslow's Hammer in Catastrophic Forgetting: Node Re-Use vs. Node Activation (Sebastian Lee · Stefano Sarao Mannelli · Claudia Clopath · Sebastian Goldt · Andrew Saxe)
- 2022 Spotlight: The dynamics of representation learning in shallow, non-linear autoencoders (Maria Refinetti · Sebastian Goldt)
- 2021 Poster: Classifying high-dimensional Gaussian mixtures: Where kernel methods fail and neural networks succeed (Maria Refinetti · Sebastian Goldt · Florent Krzakala · Lenka Zdeborova)
- 2021 Poster: Align, then memorise: the dynamics of learning with feedback alignment (Maria Refinetti · Stéphane d'Ascoli · Ruben Ohana · Sebastian Goldt)
- 2021 Spotlight: Align, then memorise: the dynamics of learning with feedback alignment (Maria Refinetti · Stéphane d'Ascoli · Ruben Ohana · Sebastian Goldt)
- 2021 Spotlight: Classifying high-dimensional Gaussian mixtures: Where kernel methods fail and neural networks succeed (Maria Refinetti · Sebastian Goldt · Florent Krzakala · Lenka Zdeborova)
- 2019: Poster discussion (Roman Novak · Maxime Gabella · Frederic Dreyer · Siavash Golkar · Anh Tong · Irina Higgins · Mirco Milletari · Joe Antognini · Sebastian Goldt · Adín Ramírez Rivera · Roberto Bondesan · Ryo Karakida · Remi Tachet des Combes · Michael Mahoney · Nicholas Walker · Stanislav Fort · Samuel Smith · Rohan Ghosh · Aristide Baratin · Diego Granziol · Stephen Roberts · Dmitry Vetrov · Andrew Wilson · César Laurent · Valentin Thomas · Simon Lacoste-Julien · Dar Gilboa · Daniel Soudry · Anupam Gupta · Anirudh Goyal · Yoshua Bengio · Erich Elsen · Soham De · Stanislaw Jastrzebski · Charles H Martin · Samira Shabanian · Aaron Courville · Shorato Akaho · Lenka Zdeborova · Ethan Dyer · Maurice Weiler · Pim de Haan · Taco Cohen · Max Welling · Ping Luo · zhanglin peng · Nasim Rahaman · Loic Matthey · Danilo J. Rezende · Jaesik Choi · Kyle Cranmer · Lechao Xiao · Jaehoon Lee · Yasaman Bahri · Jeffrey Pennington · Greg Yang · Jiri Hron · Jascha Sohl-Dickstein · Guy Gur-Ari)
- 2019: Analyzing the dynamics of online learning in over-parameterized two-layer neural networks (Sebastian Goldt)