Workshop on Continual Learning
Haytham Fayek · Arslan Chaudhry · David Lopez-Paz · Eugene Belilovsky · Jonathan Schwarz · Marc Pickett · Rahaf Aljundi · Sayna Ebrahimi · Razvan Pascanu · Puneet Dokania

Fri Jul 17 06:00 AM -- 02:00 PM (PDT)
Event URL: https://sites.google.com/view/cl-icml/

Machine learning systems are commonly applied to isolated tasks or narrow domains (e.g. control over similar robotic bodies). It is further assumed that the learning system has simultaneous access to all the data points of the tasks at hand. In contrast, Continual Learning (CL) studies the problem of learning from a stream of data from changing domains, each connected to a different learning task. The objective of CL is to quickly adapt to new situations or tasks by exploiting previously acquired knowledge, while protecting previous learning from being erased. Meeting the objectives of CL will provide an opportunity for systems to quickly learn new skills given knowledge accumulated in the past and continually extend their capabilities to changing environments, a hallmark of natural intelligence.
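
As a concrete illustration of this setting, the following minimal Python sketch trains a single linear classifier on two synthetic tasks in sequence, with no access to earlier data, and shows the drop in first-task accuracy that CL methods aim to prevent. The Gaussian tasks and hyperparameters are illustrative assumptions, not part of the workshop material.

    # A minimal, self-contained sketch of the sequential-task setting: a single
    # linear model is trained on task 1, then on task 2 with no access to
    # task 1's data, and is evaluated on both tasks after each stage.
    import numpy as np

    rng = np.random.default_rng(0)

    def make_task(center):
        # binary classification: class 0 around -center, class 1 around +center
        X = np.vstack([rng.normal(-center, 1.0, (200, 2)),
                       rng.normal(+center, 1.0, (200, 2))])
        y = np.array([0] * 200 + [1] * 200)
        return X, y

    def train(w, X, y, lr=0.1, epochs=50):
        for _ in range(epochs):
            p = 1.0 / (1.0 + np.exp(-X @ w))      # sigmoid predictions
            w -= lr * X.T @ (p - y) / len(y)      # logistic-regression gradient
        return w

    def accuracy(w, X, y):
        return np.mean(((X @ w) > 0) == y)

    task1 = make_task(np.array([2.0, 2.0]))
    task2 = make_task(np.array([2.0, -2.0]))      # conflicting decision boundary

    w = np.zeros(2)
    w = train(w, *task1)
    print("after task 1:", accuracy(w, *task1))
    w = train(w, *task2)                          # task 1 data no longer available
    print("after task 2:", accuracy(w, *task1), accuracy(w, *task2))
    # accuracy on task 1 typically drops sharply: catastrophic forgetting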

Fri 6:00 a.m. - 6:05 a.m.
Introduction (Talk)
Fri 6:05 a.m. - 6:35 a.m.
Invited Talk (Talk)

In recent years we have seen an explosion of approaches that aim at transferring information between different learning tasks, in particular meta-learning and continual or lifelong learning. In my talk, I discuss ways to study these formally, using tools from learning theory that abstract away the specific details of implementation. In particular, I will discuss which assumptions one has to make on the tasks to be learned in order to guarantee a successful transfer of information.

Christoph H. Lampert
Fri 6:35 a.m. - 6:40 a.m.
Invited Talk Q&A (Q&A)

Ask your questions here: https://app.sli.do/event/izl9dbaz/live/questions

Fri 6:40 a.m. - 6:55 a.m.
Spotlight Talk: Wandering Within a World: Online Contextualized Few-Shot Learning (Talk)
Fri 6:55 a.m. - 7:25 a.m.
Invited Talk (Talk)

Continual learning is usually described through a list of desiderata; however, some of the "wants" on this list contradict each other, so a solution to continual learning implies finding suitable trade-offs between the different objectives. Such trade-offs can be obtained by grounding ourselves in a particular domain or set of tasks. Alternatively, I believe, one can gain this grounding by framing continual learning through different perspectives. In this talk I look at optimization and learning dynamics. From this perspective, continual learning can be seen as the search for a more suitable credit assignment mechanism for learning, one that does not rely on the tug-of-war dynamics that result from gradient-based optimization techniques. I exemplify in what sense this grounds us, and present a few recent projects I have been involved in that can be thought of as looking at continual learning from this perspective.

Razvan Pascanu
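
The tug-of-war dynamics mentioned above can be illustrated numerically: when two task losses produce conflicting gradients, the shared update trades one task off against the other. The quadratic losses below are hypothetical, chosen only to make the conflict explicit.

    # Two task losses whose gradients conflict, so a shared gradient step
    # helps one task at the expense of the other.
    import numpy as np

    w = np.array([0.0, 0.0])

    def grad_task1(w):  # loss_1 = (w[0] - 1)^2  -> pulls w[0] toward +1
        return np.array([2 * (w[0] - 1), 0.0])

    def grad_task2(w):  # loss_2 = (w[0] + 1)^2  -> pulls w[0] toward -1
        return np.array([2 * (w[0] + 1), 0.0])

    g1, g2 = grad_task1(w), grad_task2(w)
    cos = g1 @ g2 / (np.linalg.norm(g1) * np.linalg.norm(g2))
    print("cosine similarity:", cos)   # -1.0: maximally conflicting gradients
    # Averaging the gradients cancels them: the shared update makes no progress
    # on either task, which is exactly the credit-assignment failure described.
    print("combined gradient:", (g1 + g2) / 2)
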
Fri 7:25 a.m. - 7:30 a.m.
Invited Talk Q&A (Q&A)

Ask your questions here: https://app.sli.do/event/cye40uex/live/questions

Fri 7:30 a.m. - 7:45 a.m.
Spotlight Talk: SOLA: Continual Learning with Second-Order Loss Approximation (Talk)
Fri 7:45 a.m. - 8:15 a.m.
Invited Talk (Talk)

In existing machine learning (ML) applications, once a model is built, it is deployed to perform its intended task. During the application, the model is fixed due to the closed-world assumption of the classic ML paradigm: everything seen in testing/application must have been seen in training. However, many real-life environments - such as those for chatbots and self-driving cars - are full of unknowns; these are called open environments or open worlds. We humans deal with such environments comfortably: we detect unknowns and learn them continuously through interaction with other humans and the environment, adapting to new situations and becoming more and more knowledgeable. In fact, we humans never stop learning. After formal education, we continue to learn on the job. AI systems should have the same on-the-job learning capability; they cannot rely solely on manually labeled data and offline training to deal with the dynamic open world. This talk discusses this problem and presents some initial work in the context of natural language processing.

Bing Liu
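
One simple building block for detecting unknowns in an open world is to reject inputs on which the classifier is not confident, rather than forcing a closed-set prediction. The sketch below uses a softmax confidence threshold; this is an illustrative baseline with made-up intent classes, not the method presented in the talk.

    # Minimal open-world classification sketch: reject an input as "unknown"
    # when the model's top softmax probability falls below a threshold.
    import numpy as np

    def softmax(z):
        z = z - z.max()
        e = np.exp(z)
        return e / e.sum()

    def classify_open_world(logits, known_classes, threshold=0.7):
        probs = softmax(logits)
        if probs.max() < threshold:
            return "unknown"        # flag for later labeling / on-the-job learning
        return known_classes[int(probs.argmax())]

    classes = ["weather", "booking", "chitchat"]   # hypothetical chatbot intents
    print(classify_open_world(np.array([4.0, 0.5, 0.2]), classes))  # confident -> "weather"
    print(classify_open_world(np.array([1.1, 1.0, 0.9]), classes))  # flat -> "unknown"
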
Fri 8:15 a.m. - 8:20 a.m.
Invited Talk Q&A (Q&A)

Ask your questions here: https://app.sli.do/event/5g97klgd/live/questions

Fri 8:20 a.m. - 8:35 a.m.
Spotlight Talk: Continual Learning from the Perspective of Compression (Talk)
Fri 9:00 a.m. - 10:30 a.m.

Ask your questions here: https://app.sli.do/event/3dsxoqjl/live/questions

Fri 10:30 a.m. - 11:30 a.m.
Poster Session 1 (Poster session)
Fri 11:30 a.m. - 12:00 p.m.
Invited Talk (Talk)

I will review the different mechanisms the brain might use to mitigate catastrophic forgetting, and present a couple of brain-inspired agents in a reinforcement learning setup.

[Update] Claudia kindly asked us to keep this talk accessible for a limited time only; the recording is therefore no longer available to watch.

Claudia Clopath
Fri 12:00 p.m. - 12:05 p.m.
Invited Talk Q&A (Q&A)

Ask your questions here: https://app.sli.do/event/eluqy8a2/live/questions

Fri 12:05 p.m. - 12:20 p.m.
Spotlight Talk: Deep Reinforcement Learning amidst Lifelong Non-Stationarity (Talk)
Fri 12:20 p.m. - 12:50 p.m.
Invited Talk (Talk)

A dominant trend in machine learning is that hand-designed pipelines are replaced by higher-performing learned pipelines once sufficient compute and data are available. I argue that this trend will apply to machine learning itself, and thus that the fastest path to truly powerful AI is to create AI-generating algorithms (AI-GAs) that on their own learn to solve the hardest AI problems. This paradigm is an all-in bet on meta-learning. After introducing these ideas, the talk focuses on one example of this paradigm: Learning to Continually Learn. I describe a Neuromodulated Meta-Learning algorithm (ANML), which uses meta-learning to try to solve catastrophic forgetting, producing state-of-the-art results.

Jeff Clune
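
The central mechanism in ANML is a neuromodulatory network whose output gates (element-wise multiplies) the activations of a prediction network, letting meta-learning control where plasticity occurs. The PyTorch sketch below shows only this gating pattern; the layer sizes are arbitrary placeholders and the meta-training outer loop is omitted.

    # Stripped-down sketch of ANML-style neuromodulatory gating: a second
    # network produces a per-feature gate in (0, 1) that multiplies the
    # prediction network's hidden activations.
    import torch
    import torch.nn as nn

    class GatedNet(nn.Module):
        def __init__(self, in_dim=784, hidden=256, n_classes=10):
            super().__init__()
            self.prediction = nn.Linear(in_dim, hidden)    # plastic pathway
            self.neuromodulator = nn.Sequential(           # learns *where* to be plastic
                nn.Linear(in_dim, hidden), nn.Sigmoid())
            self.head = nn.Linear(hidden, n_classes)

        def forward(self, x):
            h = torch.relu(self.prediction(x))
            gate = self.neuromodulator(x)    # per-feature gate in (0, 1)
            return self.head(h * gate)       # gated activations: selective plasticity

    net = GatedNet()
    print(net(torch.randn(4, 784)).shape)    # torch.Size([4, 10])
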
Fri 12:50 p.m. - 12:55 p.m.
Invited Talk Q&A (Q&A)

Ask your questions here: https://app.sli.do/event/oivbvz6e/live/questions

Fri 12:55 p.m. - 1:10 p.m.
Spotlight Talk: Supermasks in Superposition (Talk)
Fri 1:10 p.m. - 1:40 p.m.
Invited Talk (Talk)

Large-scale datasets have been key to the progress of fields like computer vision in the 21st century. Yet the over-reliance on datasets has brought new challenges: various dataset biases, fixation on a few standardized tasks, failure to generalize beyond the narrow training domain, and so on. It might be time to move away from the standard training set / test set paradigm and consider data as it presents itself to an agent in the real world: via a continuous, non-repeating stream. In this talk, I will discuss some of the potential benefits, as well as the challenges, of learning in a post-dataset world, including some of our recent work on test-time training.

Alexei Efros
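
Test-time training adapts the model to each test input using a self-supervised objective before predicting. The sketch below uses rotation prediction as the self-supervised task on a shared encoder; the architecture, hyperparameters, and the rotation task are illustrative simplifications, not the exact setup of the published work.

    # Sketch of the test-time-training idea: a shared encoder with a main head
    # and a self-supervised head (predicting image rotation). At test time the
    # encoder is fine-tuned on the self-supervised loss for the incoming
    # sample before the main prediction is made.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    encoder = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32 * 3, 128), nn.ReLU())
    main_head = nn.Linear(128, 10)   # e.g. 10 object classes
    ssl_head = nn.Linear(128, 4)     # predict rotation in {0, 90, 180, 270}

    def rotations(x):
        # all four rotations of a batch of NCHW images, with rotation labels
        views = [torch.rot90(x, k, dims=(2, 3)) for k in range(4)]
        labels = torch.arange(4).repeat_interleave(x.size(0))
        return torch.cat(views), labels

    def predict_with_ttt(x, steps=5, lr=1e-3):
        opt = torch.optim.SGD(encoder.parameters(), lr=lr)
        for _ in range(steps):  # adapt the encoder on the self-supervised task only
            views, labels = rotations(x)
            loss = F.cross_entropy(ssl_head(encoder(views)), labels)
            opt.zero_grad()
            loss.backward()
            opt.step()
        with torch.no_grad():   # then predict with the adapted encoder
            return main_head(encoder(x)).argmax(dim=1)

    print(predict_with_ttt(torch.randn(2, 3, 32, 32)))
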
Fri 1:40 p.m. - 1:45 p.m.
Invited Talk Q&A (Q&A)

Ask your questions here: https://app.sli.do/event/pxks1d8c/live/questions

Fri 1:45 p.m. - 2:00 p.m.
Best paper: Anatomy of Catastrophic Forgetting: Hidden Representations and Task Semantics (Talk)
Fri 2:00 p.m. - 2:05 p.m.
Closing remarks (Q&A)
Fri 2:05 p.m. - 3:00 p.m.
Poster Session 2 (Poster session)
Poster session Zoom rooms:

- https://us02web.zoom.us/j/85486169869?pwd=ZnVnQlRObjNsWE1sYmd4WnVDbTRHdz09
- https://us02web.zoom.us/j/86970194338?pwd=amhZRWcyNXVscDBWN25WRkZaNjBzdz09
- https://us02web.zoom.us/j/85871137797?pwd=aTRqZG5sWGZDWnpWVkxzbGdJSDh4dz09
- https://us02web.zoom.us/j/86777903933?pwd=NU81QlFmdkE5Q3poRzZraVlnRXhmQT09
- https://us02web.zoom.us/j/85785774573?pwd=cXhmVnZxR2g1THg0cVAzcWNIWTgwUT09
- https://us02web.zoom.us/j/81202188095?pwd=RzR2TDZvNmhXSEoyUzU0cjRib1VuQT09
- https://us02web.zoom.us/j/89543454610?pwd=czQreW9IRlc3bTlVUEF6YkxlenZjdz09
- https://us02web.zoom.us/j/83517402371?pwd=TlRxM3EzcSsrbHBYTWlxa09SdTNFZz09
- https://us02web.zoom.us/j/85067574363?pwd=NDBaclZIR2hUUTErL2xuVUNKblN3UT09
- https://us02web.zoom.us/j/82622553026?pwd=VEFBRlM5QVl6dXBQMWVNVEJQa2F2UT09
- https://us02web.zoom.us/j/87280586676?pwd=bFFGaEJudk1yaE9zNzhLV2xjVXk2UT09
- https://us02web.zoom.us/j/88656995060?pwd=UDlTN1FlOWJxMGc1ZFAvV0Z3U0k5QT09
- https://us02web.zoom.us/j/82840404969?pwd=OUN5Wmx1cGNnUW1nL2loL1JFNzUzdz09
- https://us02web.zoom.us/j/86173688177?pwd=MEVDZ1daZHprN2ROYVBUeExCaElEQT09
- https://us02web.zoom.us/j/82763897108?pwd=WTlTd2hvWmNxVDR4dUIyNUtUQys5dz09
- https://us02web.zoom.us/j/84040119219?pwd=M1lqY091c2g1dlBNaThVL1dlTzRCZz09
- https://us02web.zoom.us/j/85369361995?pwd=YjY5RTFaN1FsM0FiM3BmRW9uRkJOZz09
- https://us02web.zoom.us/j/85112531500?pwd=TTdtK0k4UGhUS3N3WnQxUTcraWpPQT09
- https://us02web.zoom.us/j/83415438372?pwd=R3FDajNtRjR1U0FSTFBKYjRTTWRFdz09
- https://us02web.zoom.us/j/88604978097?pwd=dlJpVzV3VWtTSmpVcDczY2hncUg3Zz09
- https://us02web.zoom.us/j/83891140486?pwd=d3NvRmxiYjhnVmhGRmRiVmIwdW0rUT09
- https://us02web.zoom.us/j/86564942704?pwd=ODcvTHZRVXVoMjBoZGlSb1VpaTRMQT09
- https://us02web.zoom.us/j/83405630312?pwd=cWM0ODkrRlhWUzFFU3BOakJOZXhJUT09
- https://us02web.zoom.us/j/85298431987?pwd=alhnU0JOcGtyUFoxcWFEUm03YTVJZz09
- https://us02web.zoom.us/j/88677693560?pwd=NURaRFhXNWJySFVaT3hyWFBLODc3dz09
- https://us02web.zoom.us/j/86560430147?pwd=dXUybm9aM2JkejBXTFYvSGNaZHFadz09
- https://us02web.zoom.us/j/84217423656?pwd=bk1CaFpsaXQ0UERVaTEvSVZxM215Zz09
- https://us02web.zoom.us/j/84436400626?pwd=WTdhWDdvakxYY2lSMWlhNWtTcDRzQT09

Author Information

Haytham Fayek (RMIT)
Arslan Chaudhry (University of Oxford)
David Lopez-Paz (Facebook AI Research)
Eugene Belilovsky (Mila, University of Montreal)
Jonathan Schwarz (DeepMind)
Marc Pickett (Google Research)
Rahaf Aljundi (Toyota Motor Europe)
Sayna Ebrahimi (UC Berkeley)
Razvan Pascanu (DeepMind)
Puneet Dokania (University of Oxford)
