We introduce a conceptually simple and scalable framework for continual learning in domains where tasks are learned sequentially. Our method uses a constant number of parameters and is designed to preserve performance on previously encountered tasks while accelerating learning on subsequent problems. This is achieved by training a network with two components: a knowledge base, capable of solving previously encountered problems, which is connected to an active column that is employed to efficiently learn the current task. After a new task has been learned, the active column is distilled into the knowledge base, taking care to protect any previously acquired skills. This cycle of active learning (progression) followed by consolidation (compression) requires no architecture growth, no access to or storing of previous data or tasks, and no task-specific parameters. We demonstrate the progress & compress approach on sequential classification of handwritten alphabets as well as two reinforcement learning domains: Atari games and 3D maze navigation.
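The progress/compress cycle described above can be sketched in miniature. Everything in this toy numpy example is an illustrative simplification, not the paper's actual method: linear models stand in for deep networks, MSE matching stands in for the paper's distillation loss, and a simple input-based diagonal importance estimate stands in for (online) EWC. The function names `progress` and `compress` and the hyperparameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def grad_mse(w, X, y):
    # Gradient of mean squared error for a linear model y_hat = X @ w.
    return 2.0 * X.T @ (X @ w - y) / len(y)

def progress(X, y, dim, lr=0.1, steps=200):
    # Progress phase: train a fresh "active column" on the current task only.
    w = np.zeros(dim)
    for _ in range(steps):
        w -= lr * grad_mse(w, X, y)
    return w

def compress(kb_w, fisher, active_w, X, lam=1.0, lr=0.05, steps=300):
    # Compress phase: distill the active column into the knowledge base,
    # with an EWC-style quadratic penalty anchoring parameters that were
    # important for earlier tasks (so old skills are protected).
    anchor = kb_w.copy()
    targets = X @ active_w            # teacher outputs on current-task inputs
    w = kb_w.copy()
    for _ in range(steps):
        g = 2.0 * X.T @ (X @ w - targets) / len(targets)
        g += lam * fisher * (w - anchor)
        w -= lr * g
    # Crude diagonal importance update (proxy for an online Fisher estimate).
    fisher = fisher + np.mean(X**2, axis=0)
    return w, fisher

dim = 5
kb_w = np.zeros(dim)       # knowledge base parameters (fixed size throughout)
fisher = np.zeros(dim)     # per-parameter importance weights
for task in range(3):      # tasks arrive strictly sequentially
    true_w = rng.normal(size=dim)
    X = rng.normal(size=(100, dim))
    y = X @ true_w
    active_w = progress(X, y, dim)                       # learn the new task
    kb_w, fisher = compress(kb_w, fisher, active_w, X)   # consolidate it
```

Note the two properties the abstract emphasises: the parameter count (`kb_w`, `fisher`, one active column) never grows with the number of tasks, and consolidation uses only current-task inputs, never stored data from earlier tasks.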
Author Information
Jonathan Richard Schwarz (DeepMind)
Wojciech Czarnecki (DeepMind)
Jelena Luketina (The University of Oxford)
Agnieszka Grabska-Barwinska (DeepMind)
Yee Teh (DeepMind)
Razvan Pascanu (DeepMind)
Raia Hadsell (DeepMind)
Raia Hadsell, a senior research scientist at DeepMind, has worked on deep learning and robotics problems for over 10 years. Her early research developed the notion of manifold learning using Siamese networks, which has been used extensively for invariant feature learning. After completing a PhD with Yann LeCun, which featured a self-supervised deep learning vision system for a mobile robot, her research continued at Carnegie Mellon’s Robotics Institute and SRI International, and in early 2014 she joined DeepMind in London to study artificial general intelligence. Her current research focuses on the challenge of continual learning for AI agents and robotic systems. While deep RL algorithms are capable of attaining superhuman performance on single tasks, they cannot transfer that performance to additional tasks, especially if experienced sequentially. She has proposed neural approaches such as policy distillation, progressive nets, and elastic weight consolidation to solve the problem of catastrophic forgetting and improve transfer learning.
Related Events (a corresponding poster, oral, or spotlight)
- 2018 Oral: Progress & Compress: A scalable framework for continual learning »
  Fri. Jul 13th 02:00 -- 02:20 PM, Room Victoria
More from the Same Authors
- 2022 : Pushing the limits of self-supervised ResNets: Can we outperform supervised learning without labels on ImageNet? »
  Nenad Tomasev · Ioana Bica · Brian McWilliams · Lars Buesing · Razvan Pascanu · Charles Blundell · Jovana Mitrovic
- 2023 : On the Universality of Linear Recurrences Followed by Nonlinear Projections »
  Antonio Orvieto · Soham De · Razvan Pascanu · Caglar Gulcehre · Samuel Smith
- 2023 : Latent Space Representations of Neural Algorithmic Reasoners »
  Vladimir V. Mirjanić · Razvan Pascanu · Petar Veličković
- 2023 : Asynchronous Algorithmic Alignment with Cocycles »
  Andrew Dudzik · Tamara von Glehn · Razvan Pascanu · Petar Veličković
- 2023 Oral: Resurrecting Recurrent Neural Networks for Long Sequences »
  Antonio Orvieto · Samuel Smith · Albert Gu · Anushan Fernando · Caglar Gulcehre · Razvan Pascanu · Soham De
- 2023 Poster: Modality-Agnostic Variational Compression of Implicit Neural Representations »
  Jonathan Richard Schwarz · Jihoon Tack · Yee-Whye Teh · Jaeho Lee · Jinwoo Shin
- 2023 Oral: Understanding Plasticity in Neural Networks »
  Clare Lyle · Zeyu Zheng · Evgenii Nikishin · Bernardo Avila Pires · Razvan Pascanu · Will Dabney
- 2023 Poster: Understanding Plasticity in Neural Networks »
  Clare Lyle · Zeyu Zheng · Evgenii Nikishin · Bernardo Avila Pires · Razvan Pascanu · Will Dabney
- 2023 Poster: Resurrecting Recurrent Neural Networks for Long Sequences »
  Antonio Orvieto · Samuel Smith · Albert Gu · Anushan Fernando · Caglar Gulcehre · Razvan Pascanu · Soham De
- 2022 Poster: Wide Neural Networks Forget Less Catastrophically »
  Seyed Iman Mirzadeh · Arslan Chaudhry · Dong Yin · Huiyi Hu · Razvan Pascanu · Dilan Gorur · Mehrdad Farajtabar
- 2022 Spotlight: Wide Neural Networks Forget Less Catastrophically »
  Seyed Iman Mirzadeh · Arslan Chaudhry · Dong Yin · Huiyi Hu · Razvan Pascanu · Dilan Gorur · Mehrdad Farajtabar
- 2022 Poster: The CLRS Algorithmic Reasoning Benchmark »
  Petar Veličković · Adrià Puigdomenech Badia · David Budden · Razvan Pascanu · Andrea Banino · Misha Dashevskiy · Raia Hadsell · Charles Blundell
- 2022 Spotlight: The CLRS Algorithmic Reasoning Benchmark »
  Petar Veličković · Adrià Puigdomenech Badia · David Budden · Razvan Pascanu · Andrea Banino · Misha Dashevskiy · Raia Hadsell · Charles Blundell
- 2022 Poster: Hindering Adversarial Attacks with Implicit Neural Representations »
  Andrei A Rusu · Dan Andrei Calian · Sven Gowal · Raia Hadsell
- 2022 Spotlight: Hindering Adversarial Attacks with Implicit Neural Representations »
  Andrei A Rusu · Dan Andrei Calian · Sven Gowal · Raia Hadsell
- 2021 : Invited Talk #4 »
  Razvan Pascanu
- 2021 : Panel Discussion 1 »
  Razvan Pascanu · Irina Rish
- 2021 Test Of Time: Bayesian Learning via Stochastic Gradient Langevin Dynamics »
  Yee Teh · Max Welling
- 2021 Poster: Spectral Normalisation for Deep Reinforcement Learning: An Optimisation Perspective »
  Florin Gogianu · Tudor Berariu · Mihaela Rosca · Claudia Clopath · Lucian Busoniu · Razvan Pascanu
- 2021 Spotlight: Spectral Normalisation for Deep Reinforcement Learning: An Optimisation Perspective »
  Florin Gogianu · Tudor Berariu · Mihaela Rosca · Claudia Clopath · Lucian Busoniu · Razvan Pascanu
- 2020 Workshop: 1st Workshop on Language in Reinforcement Learning (LaReL) »
  Nantas Nardelli · Jelena Luketina · Jakob Foerster · Victor Zhong · Jacob Andreas · Tim Rocktäschel · Edward Grefenstette
- 2020 : Invited Talk: Razvan Pascanu "Continual Learning from an Optimization/Learning-dynamics perspective" »
  Razvan Pascanu
- 2020 Workshop: Workshop on Continual Learning »
  Haytham Fayek · Arslan Chaudhry · David Lopez-Paz · Eugene Belilovsky · Jonathan Richard Schwarz · Marc Pickett · Rahaf Aljundi · Sayna Ebrahimi · Razvan Pascanu · Puneet Dokania
- 2020 Poster: CoMic: Complementary Task Learning & Mimicry for Reusable Skills »
  Leonard Hasenclever · Fabio Pardo · Raia Hadsell · Nicolas Heess · Josh Merel
- 2020 Poster: Stabilizing Transformers for Reinforcement Learning »
  Emilio Parisotto · Francis Song · Jack Rae · Razvan Pascanu · Caglar Gulcehre · Siddhant Jayakumar · Max Jaderberg · Raphael Lopez Kaufman · Aidan Clark · Seb Noury · Matthew Botvinick · Nicolas Heess · Raia Hadsell
- 2020 Poster: A distributional view on multi-objective policy optimization »
  Abbas Abdolmaleki · Sandy Huang · Leonard Hasenclever · Michael Neunert · Francis Song · Martina Zambelli · Murilo Martins · Nicolas Heess · Raia Hadsell · Martin Riedmiller
- 2020 Poster: Improving the Gating Mechanism of Recurrent Neural Networks »
  Albert Gu · Caglar Gulcehre · Thomas Paine · Matthew Hoffman · Razvan Pascanu
- 2019 : Panel Discussion »
  Yoshua Bengio · Andrew Ng · Raia Hadsell · John Platt · Claire Monteleoni · Jennifer Chayes
- 2019 Poster: Open-ended learning in symmetric zero-sum games »
  David Balduzzi · Marta Garnelo · Yoram Bachrach · Wojciech Czarnecki · Julien Perolat · Max Jaderberg · Thore Graepel
- 2019 Oral: Open-ended learning in symmetric zero-sum games »
  David Balduzzi · Marta Garnelo · Yoram Bachrach · Wojciech Czarnecki · Julien Perolat · Max Jaderberg · Thore Graepel
- 2018 Poster: Mix & Match - Agent Curricula for Reinforcement Learning »
  Wojciech Czarnecki · Siddhant Jayakumar · Max Jaderberg · Leonard Hasenclever · Yee Teh · Nicolas Heess · Simon Osindero · Razvan Pascanu
- 2018 Oral: Mix & Match - Agent Curricula for Reinforcement Learning »
  Wojciech Czarnecki · Siddhant Jayakumar · Max Jaderberg · Leonard Hasenclever · Yee Teh · Nicolas Heess · Simon Osindero · Razvan Pascanu
- 2018 Poster: Graph Networks as Learnable Physics Engines for Inference and Control »
  Alvaro Sanchez-Gonzalez · Nicolas Heess · Jost Springenberg · Josh Merel · Martin Riedmiller · Raia Hadsell · Peter Battaglia
- 2018 Poster: Been There, Done That: Meta-Learning with Episodic Recall »
  Samuel Ritter · Jane Wang · Zeb Kurth-Nelson · Siddhant Jayakumar · Charles Blundell · Razvan Pascanu · Matthew Botvinick
- 2018 Poster: Conditional Neural Processes »
  Marta Garnelo · Dan Rosenbaum · Chris Maddison · Tiago Ramalho · David Saxton · Murray Shanahan · Yee Teh · Danilo J. Rezende · S. M. Ali Eslami
- 2018 Oral: Been There, Done That: Meta-Learning with Episodic Recall »
  Samuel Ritter · Jane Wang · Zeb Kurth-Nelson · Siddhant Jayakumar · Charles Blundell · Razvan Pascanu · Matthew Botvinick
- 2018 Oral: Conditional Neural Processes »
  Marta Garnelo · Dan Rosenbaum · Chris Maddison · Tiago Ramalho · David Saxton · Murray Shanahan · Yee Teh · Danilo J. Rezende · S. M. Ali Eslami
- 2018 Oral: Graph Networks as Learnable Physics Engines for Inference and Control »
  Alvaro Sanchez-Gonzalez · Nicolas Heess · Jost Springenberg · Josh Merel · Martin Riedmiller · Raia Hadsell · Peter Battaglia
- 2017 Invited Talk: Towards Reinforcement Learning in the Real World »
  Raia Hadsell
- 2017 Poster: Sharp Minima Can Generalize For Deep Nets »
  Laurent Dinh · Razvan Pascanu · Samy Bengio · Yoshua Bengio
- 2017 Poster: Decoupled Neural Interfaces using Synthetic Gradients »
  Max Jaderberg · Wojciech Czarnecki · Simon Osindero · Oriol Vinyals · Alex Graves · David Silver · Koray Kavukcuoglu
- 2017 Poster: Understanding Synthetic Gradients and Decoupled Neural Interfaces »
  Wojciech Czarnecki · Grzegorz Świrszcz · Max Jaderberg · Simon Osindero · Oriol Vinyals · Koray Kavukcuoglu
- 2017 Talk: Sharp Minima Can Generalize For Deep Nets »
  Laurent Dinh · Razvan Pascanu · Samy Bengio · Yoshua Bengio
- 2017 Talk: Understanding Synthetic Gradients and Decoupled Neural Interfaces »
  Wojciech Czarnecki · Grzegorz Świrszcz · Max Jaderberg · Simon Osindero · Oriol Vinyals · Koray Kavukcuoglu
- 2017 Talk: Decoupled Neural Interfaces using Synthetic Gradients »
  Max Jaderberg · Wojciech Czarnecki · Simon Osindero · Oriol Vinyals · Alex Graves · David Silver · Koray Kavukcuoglu