Keywords: continual learning, catastrophic forgetting, knowledge transfer
Machine learning systems are commonly applied to isolated tasks or narrow domains (e.g., control of similar robotic bodies), and it is typically assumed that the learning system has simultaneous access to all the data points of the tasks at hand. In contrast, Continual Learning (CL) studies the problem of learning from a stream of data drawn from changing domains, each connected to a different learning task. The objective of CL is to quickly adapt to new situations or tasks by exploiting previously acquired knowledge, while protecting previous learning from being erased. Meeting these objectives would allow systems to quickly learn new skills from knowledge accumulated in the past and to continually extend their capabilities in changing environments, a hallmark of natural intelligence.
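To make the problem statement concrete, here is a minimal, purely illustrative sketch of the sequential-task loop that CL studies; the toy tasks, network, and hyperparameters are our own assumptions rather than anything from the workshop. Evaluating on earlier tasks after each stage is what exposes catastrophic forgetting:

```python
# Illustrative sequential-task training loop: tasks arrive one at a time
# and earlier data is never revisited. Evaluating on all tasks seen so
# far after each stage exposes catastrophic forgetting.
import torch
import torch.nn as nn

def make_task(seed, n=512, dim=20):
    """Toy binary classification task; each seed gives a different domain."""
    g = torch.Generator().manual_seed(seed)
    x = torch.randn(n, dim, generator=g)
    w = torch.randn(dim, generator=g)
    return x, (x @ w > 0).float()

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.BCEWithLogitsLoss()

tasks = [make_task(seed) for seed in range(3)]
for t, (x, y) in enumerate(tasks):
    for _ in range(200):                       # train on current task only
        opt.zero_grad()
        loss_fn(model(x).squeeze(1), y).backward()
        opt.step()
    with torch.no_grad():                      # evaluate on all seen tasks
        for s, (xs, ys) in enumerate(tasks[: t + 1]):
            acc = ((model(xs).squeeze(1) > 0).float() == ys).float().mean()
            print(f"after task {t}: accuracy on task {s} = {acc:.2f}")
```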
Fri 6:00 a.m. - 6:05 a.m. | Introduction (Talk)
Fri 6:05 a.m. - 6:35 a.m. | Invited Talk: Christoph H. Lampert "Learning Theory for Continual and Meta-Learning" (Talk)
In recent years we have seen an explosion of approaches that aim at transferring information between different learning tasks, in particular meta-learning and continual or lifelong learning. In my talk, I discuss ways to study these formally, using tools from learning theory that abstract away the specific details of implementation. In particular, I will discuss which assumptions one has to make on the tasks to be learned in order to guarantee a successful transfer of information.
Christoph H. Lampert
Fri 6:35 a.m. - 6:40 a.m. | Live Q&A: Christoph H. Lampert (Q&A)
Ask your questions here: https://app.sli.do/event/izl9dbaz/live/questions
Fri 6:40 a.m. - 6:55 a.m. | Spotlight Talk: Wandering Within a World: Online Contextualized Few-Shot Learning (Talk)
Fri 6:55 a.m. - 7:25 a.m. | Invited Talk: Razvan Pascanu "Continual Learning from an Optimization/Learning-dynamics perspective" (Talk)
Continual learning is usually described through a list of desiderata; however, some of the "wants" on this list contradict each other, so a solution to continual learning implies finding suitable trade-offs between the different objectives. Such trade-offs can be obtained by grounding ourselves in a particular domain or set of tasks. Alternatively, I believe, one can gain this grounding by framing continual learning through different perspectives. In this talk I look at optimization and learning dynamics. From this perspective, continual learning can be seen as the search for a more suitable credit assignment mechanism for learning, one that does not rely on the tug-of-war dynamics that result from gradient-based optimization techniques. I exemplify in what sense this grounds us, and present a few recent projects I have been involved in that can be thought of as looking at continual learning from this perspective.
Razvan Pascanu
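As a rough illustration of the "tug-of-war" the abstract refers to, one common diagnostic (a generic one, not necessarily the speaker's) is the cosine similarity between per-task gradients on shared parameters; when it is negative, a descent step for one task undoes progress on the other:

```python
# Illustrative measurement of gradient conflict ("tug-of-war") between
# two tasks sharing one set of parameters: negative cosine similarity
# means a descent step for one task is an ascent step for the other.
import torch
import torch.nn as nn

model = nn.Linear(10, 1)
loss_fn = nn.MSELoss()

def flat_grad(x, y):
    """Gradient of the loss on batch (x, y), flattened into one vector."""
    model.zero_grad()
    loss_fn(model(x), y).backward()
    return torch.cat([p.grad.flatten() for p in model.parameters()])

torch.manual_seed(0)
x_a, y_a = torch.randn(32, 10), torch.randn(32, 1)   # "task A" batch
x_b, y_b = torch.randn(32, 10), torch.randn(32, 1)   # "task B" batch

g_a, g_b = flat_grad(x_a, y_a), flat_grad(x_b, y_b)
cos = torch.dot(g_a, g_b) / (g_a.norm() * g_b.norm())
print(f"cosine(grad_A, grad_B) = {cos:.3f}")  # < 0 => conflicting updates
```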
Fri 7:25 a.m. - 7:30 a.m. | Live Q&A: Razvan Pascanu (Q&A)
Ask your questions here: https://app.sli.do/event/cye40uex/live/questions
Fri 7:30 a.m. - 7:45 a.m. | Spotlight Talk: SOLA: Continual Learning with Second-Order Loss Approximation (Talk)
Fri 7:45 a.m. - 8:15 a.m. | Invited Talk: Bing Liu "Learning on the Job in the Open World" (Talk)
In existing machine learning (ML) applications, once a model is built it is deployed to perform its intended task. During the application, the model is fixed due to the closed-world assumption of the classic ML paradigm: everything seen in testing/application must have been seen in training. However, many real-life environments, such as those for chatbots and self-driving cars, are full of unknowns; these are called open environments or open worlds. We humans deal with such environments comfortably, detecting unknowns and learning about them continuously through interaction with other humans and the environment, adapting and becoming more and more knowledgeable. In fact, we humans never stop learning: after formal education, we continue to learn on the job. AI systems should have the same on-the-job learning capability; they cannot rely solely on manually labeled data and offline training to deal with the dynamic open world. This talk discusses this problem and presents some initial work in the context of natural language processing.
Bing Liu
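As a concrete (and deliberately simple) illustration of the open-world problem, here is a generic rejection scheme, not the speaker's method, in which a closed-world classifier flags low-confidence inputs as unknown rather than forcing them into a known class:

```python
# Generic open-world rejection sketch: a closed-world classifier is
# wrapped so that low-confidence inputs are flagged as "unknown" instead
# of being forced into one of the known classes.
import torch
import torch.nn.functional as F

def predict_open_world(logits: torch.Tensor, threshold: float = 0.9):
    """Return the predicted class per row, or -1 for rejected unknowns."""
    probs = F.softmax(logits, dim=-1)
    conf, pred = probs.max(dim=-1)
    pred[conf < threshold] = -1          # -1 marks a detected unknown
    return pred

logits = torch.tensor([[4.0, 0.1, 0.2],   # confident -> class 0
                       [1.0, 0.9, 1.1]])  # ambiguous -> rejected
print(predict_open_world(logits))         # tensor([ 0, -1])
```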
Fri 8:15 a.m. - 8:20 a.m. | Live Q&A: Bing Liu (Q&A)
Ask your questions here: https://app.sli.do/event/5g97klgd/live/questions
Fri 8:20 a.m. - 8:35 a.m. | Spotlight Talk: Continual Learning from the Perspective of Compression (Talk)
Fri 9:00 a.m. - 10:30 a.m. | Panel Discussion
Ask your questions here: https://app.sli.do/event/3dsxoqjl/live/questions
Fri 10:30 a.m. - 11:30 a.m. | Poster Session 1 (Poster session)
Fri 11:30 a.m. - 12:00 p.m. | Invited Talk: Claudia Clopath "Continual learning through consolidation – a neuroscience angle" (Talk)
I will review the different mechanisms the brain might use to mitigate catastrophic forgetting and present a couple of brain-inspired agents in a reinforcement learning setup. [Update] Claudia kindly asked us to keep this talk accessible for a limited time only. Therefore, this talk will no longer be available for you to watch.
Claudia Clopath
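A well-known machine-learning counterpart of the synaptic-consolidation mechanisms discussed in this line of work is elastic weight consolidation (Kirkpatrick et al., 2017). The sketch below is a simplified rendition, not the talk's method; in particular, the importance weights are random placeholders where EWC would use the diagonal Fisher information estimated on the previous task's data:

```python
# EWC-style synaptic consolidation sketch: after task A, each parameter
# gets an importance weight, and training on task B pays a quadratic
# penalty for moving important parameters away from their task-A values.
import torch
import torch.nn as nn

model = nn.Linear(10, 2)

# Snapshot after task A. Importance is random here for brevity; EWC
# estimates it from the Fisher information on task-A data.
anchor = {n: p.detach().clone() for n, p in model.named_parameters()}
importance = {n: torch.rand_like(p) for n, p in model.named_parameters()}

def consolidation_penalty(lam=100.0):
    """Quadratic penalty tying parameters to their post-task-A values."""
    return lam / 2 * sum(
        (importance[n] * (p - anchor[n]) ** 2).sum()
        for n, p in model.named_parameters()
    )

x, y = torch.randn(16, 10), torch.randint(0, 2, (16,))
loss = nn.functional.cross_entropy(model(x), y) + consolidation_penalty()
loss.backward()  # task-B update trades off new loss vs. consolidation
```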
Fri 12:00 p.m. - 12:05 p.m. | Live Q&A: Claudia Clopath (Q&A)
Ask your questions here: https://app.sli.do/event/eluqy8a2/live/questions
Fri 12:05 p.m. - 12:20 p.m. | Spotlight Talk: Deep Reinforcement Learning amidst Lifelong Non-Stationarity (Talk)
Fri 12:20 p.m. - 12:50 p.m. | Invited Talk: Jeff Clune (Talk)
A dominant trend in machine learning is that hand-designed pipelines are replaced by higher-performing learned pipelines once sufficient compute and data are available. I argue that this trend will apply to machine learning itself, and thus that the fastest path to truly powerful AI is to create AI-generating algorithms (AI-GAs) that on their own learn to solve the hardest AI problems. This paradigm is an all-in bet on meta-learning. After introducing these ideas, the talk focuses on one example of this paradigm: Learning to Continually Learn. I describe a Neuromodulated Meta-Learning algorithm (ANML), which uses meta-learning to try to solve catastrophic forgetting, producing state-of-the-art results.
Jeff Clune
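The neuromodulation idea in ANML can be pictured as a small side network whose output multiplicatively gates the prediction network's activations. The sketch below is a simplification under assumed details (the layer sizes, dense layers, and gating point are illustrative; the actual ANML uses convolutional networks and meta-learns the gate):

```python
# Simplified sketch of ANML-style neuromodulation: a side network
# produces a per-element gate in [0, 1] that multiplicatively masks the
# prediction network's hidden activations, so updates can be routed to
# a sparse subset of features and interfere less across tasks.
import torch
import torch.nn as nn

class NeuromodulatedNet(nn.Module):
    def __init__(self, in_dim=784, hidden=256, n_classes=10):
        super().__init__()
        self.neuromodulator = nn.Sequential(        # gating network
            nn.Linear(in_dim, hidden), nn.Sigmoid()
        )
        self.feature = nn.Linear(in_dim, hidden)    # prediction pathway
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):
        gate = self.neuromodulator(x)               # element-wise gate in [0, 1]
        h = torch.relu(self.feature(x)) * gate      # gated activations
        return self.head(h)

net = NeuromodulatedNet()
print(net(torch.randn(4, 784)).shape)               # torch.Size([4, 10])
```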
Fri 12:50 p.m. - 12:55 p.m. | Live Q&A: Jeff Clune (Q&A)
Ask your questions here: https://app.sli.do/event/oivbvz6e/live/questions
Fri 12:55 p.m. - 1:10 p.m. | Spotlight Talk: Supermasks in Superposition (Talk)
Fri 1:10 p.m. - 1:40 p.m. | Live Invited Talk: Alexei Efros "Imagining a Post-Dataset Era" (Talk)
Large-scale datasets have been key to progress in fields like computer vision during the 21st century. Yet the over-reliance on datasets has brought new challenges, such as various dataset biases, fixation on a few standardized tasks, and failure to generalize beyond the narrow training domain. It might be time to move away from the standard training set / test set paradigm and consider data as it presents itself to an agent in the real world: via a continuous, non-repeating stream. In this talk, I will discuss some of the potential benefits, as well as the challenges, of learning in a post-dataset world, including some of our recent work on test-time training.
Alexei Efros
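The test-time training work mentioned at the end (Sun et al., 2020) adapts the model to each test input with a self-supervised loss before predicting. Below is a heavily simplified sketch under assumed details (rotation prediction as the auxiliary task, a toy encoder, and encoder-only adaptation are our choices for illustration):

```python
# Minimal test-time training sketch: before predicting on a test batch,
# take a few gradient steps on a self-supervised rotation-prediction
# loss computed on that very batch, then run the main classifier.
import copy
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32 * 3, 128), nn.ReLU())
cls_head = nn.Linear(128, 10)   # main task: 10-way classification
rot_head = nn.Linear(128, 4)    # auxiliary task: rotation in {0, 90, 180, 270}

def predict_with_ttt(x, steps=5, lr=1e-3):
    """Adapt a copy of the encoder on one test batch, then classify."""
    enc = copy.deepcopy(encoder)                 # keep the original intact
    opt = torch.optim.SGD(enc.parameters(), lr=lr)
    rotated = torch.cat([torch.rot90(x, k, dims=(2, 3)) for k in range(4)])
    labels = torch.arange(4).repeat_interleave(x.size(0))
    for _ in range(steps):                       # self-supervised adaptation
        opt.zero_grad()
        nn.functional.cross_entropy(rot_head(enc(rotated)), labels).backward()
        opt.step()
    return cls_head(enc(x)).argmax(dim=-1)

print(predict_with_ttt(torch.randn(2, 3, 32, 32)))
```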
Fri 1:40 p.m. - 1:45 p.m. | Live Q&A: Alexei Efros (Q&A)
Ask your questions here: https://app.sli.do/event/pxks1d8c/live/questions
Fri 1:45 p.m. - 2:00 p.m. | Best Paper: Anatomy of Catastrophic Forgetting: Hidden Representations and Task Semantics (Talk)
Fri 2:00 p.m. - 2:05 p.m. | Closing remarks (Q&A)
Fri 2:05 p.m. - 3:00 p.m. | Poster Session 2 (Poster session)
Continual Reinforcement Learning with Multi-Timescale Replay (Zoom Poster Session)
https://us02web.zoom.us/j/85486169869?pwd=ZnVnQlRObjNsWE1sYmd4WnVDbTRHdz09

Understanding Regularisation Methods for Continual Learning (Zoom Poster Session)
https://us02web.zoom.us/j/86970194338?pwd=amhZRWcyNXVscDBWN25WRkZaNjBzdz09

SOLA: Continual Learning with Second-Order Loss Approximation (Zoom Poster Session)
https://us02web.zoom.us/j/85871137797?pwd=aTRqZG5sWGZDWnpWVkxzbGdJSDh4dz09

Variational Auto-Regressive Gaussian Processes for Continual Learning (Zoom Poster Session)
https://us02web.zoom.us/j/86777903933?pwd=NU81QlFmdkE5Q3poRzZraVlnRXhmQT09

Beyond Catastrophic Forgetting in Continual Learning: An Attempt with SVM (Zoom Poster Session)
https://us02web.zoom.us/j/85785774573?pwd=cXhmVnZxR2g1THg0cVAzcWNIWTgwUT09

Disentanglement of Color and Shape Representations for Continual Learning (Zoom Poster Session)
https://us02web.zoom.us/j/81202188095?pwd=RzR2TDZvNmhXSEoyUzU0cjRib1VuQT09

On Class Orderings for Incremental Learning (Zoom Poster Session)
https://us02web.zoom.us/j/89543454610?pwd=czQreW9IRlc3bTlVUEF6YkxlenZjdz09

Continual Learning from the Perspective of Compression (Zoom Poster Session)
https://us02web.zoom.us/j/83517402371?pwd=TlRxM3EzcSsrbHBYTWlxa09SdTNFZz09

Deep Reinforcement Learning amidst Lifelong Non-Stationarity (Zoom Poster Session)
https://us02web.zoom.us/j/85067574363?pwd=NDBaclZIR2hUUTErL2xuVUNKblN3UT09

Wandering Within a World: Online Contextualized Few-Shot Learning (Zoom Poster Session)
https://us02web.zoom.us/j/82622553026?pwd=VEFBRlM5QVl6dXBQMWVNVEJQa2F2UT09

Understanding the Role of Training Regime in Continual Learning (Zoom Poster Session)
https://us02web.zoom.us/j/87280586676?pwd=bFFGaEJudk1yaE9zNzhLV2xjVXk2UT09

Visually Grounded Continual Learning of Compositional Phrases (Zoom Poster Session)
https://us02web.zoom.us/j/88656995060?pwd=UDlTN1FlOWJxMGc1ZFAvV0Z3U0k5QT09

Attention Option-Critic (Zoom Poster Session)
https://us02web.zoom.us/j/82840404969?pwd=OUN5Wmx1cGNnUW1nL2loL1JFNzUzdz09

A General Framework for Continual Learning of Compositional Structures (Zoom Poster Session)
https://us02web.zoom.us/j/86173688177?pwd=MEVDZ1daZHprN2ROYVBUeExCaElEQT09

Continual Learning in Human Activity Recognition: An empirical analysis of Regularization (Zoom Poster Session)
https://us02web.zoom.us/j/82763897108?pwd=WTlTd2hvWmNxVDR4dUIyNUtUQys5dz09

Continual Deep Learning by Functional Regularisation of Memorable Past (Zoom Poster Session)
https://us02web.zoom.us/j/84040119219?pwd=M1lqY091c2g1dlBNaThVL1dlTzRCZz09

Online Inducing Points Selection for Gaussian Processes (Zoom Poster Session)
https://us02web.zoom.us/j/85369361995?pwd=YjY5RTFaN1FsM0FiM3BmRW9uRkJOZz09

Task-Agnostic Continual Learning via Stochastic Synapses (Zoom Poster Session)
https://us02web.zoom.us/j/85112531500?pwd=TTdtK0k4UGhUS3N3WnQxUTcraWpPQT09

Routing Networks with Co-training for Continual Learning (Zoom Poster Session)
https://us02web.zoom.us/j/83415438372?pwd=R3FDajNtRjR1U0FSTFBKYjRTTWRFdz09

UNCLEAR: A Straightforward Method for Continual Reinforcement Learning (Zoom Poster Session)
https://us02web.zoom.us/j/88604978097?pwd=dlJpVzV3VWtTSmpVcDczY2hncUg3Zz09

Active Online Domain Adaptation (Zoom Poster Session)
https://us02web.zoom.us/j/83891140486?pwd=d3NvRmxiYjhnVmhGRmRiVmIwdW0rUT09

Supermasks in Superposition (Zoom Poster Session)
https://us02web.zoom.us/j/86564942704?pwd=ODcvTHZRVXVoMjBoZGlSb1VpaTRMQT09

Combining Variational Continual Learning with FiLM Layers (Zoom Poster Session)
https://us02web.zoom.us/j/83405630312?pwd=cWM0ODkrRlhWUzFFU3BOakJOZXhJUT09

Task Agnostic Continual Learning via Meta Learning (Zoom Poster Session)
https://us02web.zoom.us/j/85298431987?pwd=alhnU0JOcGtyUFoxcWFEUm03YTVJZz09

Variational Beam Search for Continual Learning (Zoom Poster Session)
https://us02web.zoom.us/j/88677693560?pwd=NURaRFhXNWJySFVaT3hyWFBLODc3dz09

Online Hyperparameter Tuning for Multi-Task Learning (Zoom Poster Session)
https://us02web.zoom.us/j/86560430147?pwd=dXUybm9aM2JkejBXTFYvSGNaZHFadz09

Evaluating Agents Without Rewards (Zoom Poster Session)
https://us02web.zoom.us/j/84217423656?pwd=bk1CaFpsaXQ0UERVaTEvSVZxM215Zz09

Anatomy of Catastrophic Forgetting: Hidden Representations and Task Semantics (Zoom Poster Session)
https://us02web.zoom.us/j/84436400626?pwd=WTdhWDdvakxYY2lSMWlhNWtTcDRzQT09