Workshop on Multi-Task and Lifelong Reinforcement Learning
Sarath Chandar · Shagun Sodhani · Khimya Khetarpal · Tom Zahavy · Daniel J. Mankowitz · Shie Mannor · Balaraman Ravindran · Doina Precup · Chelsea Finn · Abhishek Gupta · Amy Zhang · Kyunghyun Cho · Andrei Rusu · Rob Fergus

Sat Jun 15th 08:30 AM -- 06:00 PM @ 102

Significant progress has been made in reinforcement learning, enabling agents to accomplish complex tasks such as Atari games, robotic manipulation, simulated locomotion, and Go. These successes have stemmed from the core reinforcement learning formulation of learning a single policy or value function from scratch. However, reinforcement learning has proven challenging to scale to many practical real-world problems, due to difficulties in learning efficiency and objective specification, among others. Recently, there has been growing interest in leveraging structure and information across multiple reinforcement learning tasks to learn complex behaviors more efficiently and effectively. This includes:

1. curriculum and lifelong learning, where the problem requires learning a sequence of tasks, leveraging their shared structure to enable knowledge transfer
2. goal-conditioned reinforcement learning techniques that leverage the structure of the provided goal space to learn many tasks significantly faster
3. meta-learning methods that aim to learn efficient learning algorithms that can learn new tasks quickly
4. hierarchical reinforcement learning, where the reinforcement learning problem might entail a composition of subgoals or subtasks with shared structure

Multi-task and lifelong reinforcement learning has the potential to alter the paradigm of traditional reinforcement learning by providing more practical and diverse sources of supervision, while helping overcome many challenges associated with reinforcement learning, such as exploration, sample efficiency, and credit assignment. However, the field of multi-task and lifelong reinforcement learning is still young, with many more developments needed in terms of problem formulation, algorithmic and theoretical advances, as well as better benchmarking and evaluation.

The focus of this workshop will be on both the algorithmic and theoretical foundations of multi-task and lifelong reinforcement learning and the practical challenges of building multi-task agents and lifelong learning benchmarks. Our goal is to bring together researchers who study different problem domains (such as games, robotics, and language), different optimization approaches (deep learning, evolutionary algorithms, model-based control, etc.), and different formalisms (as mentioned above) to discuss the frontiers, open problems, and meaningful next steps in multi-task and lifelong reinforcement learning.

08:45 AM Opening Remarks
09:00 AM Sergey Levine: Unsupervised Reinforcement Learning and Meta-Learning (Invited talk) Sergey Levine
09:25 AM Spotlight Presentations (Spotlights)
09:50 AM Peter Stone: Learning Curricula for Transfer Learning in RL (Invited talk) Peter Stone
10:15 AM Contributed Talks
10:30 AM Posters and Break (Poster Session and Break)
11:00 AM Jacob Andreas: Linguistic Scaffolds for Policy Learning (Invited talk) Jacob Andreas
11:25 AM Karol Hausman: Skill Representation and Supervision in Multi-Task Reinforcement Learning (Invited talk) Karol Hausman
11:50 AM Contributed Talks
12:20 PM Posters and Lunch Break (Poster Session and Lunch Break)
02:00 PM Martha White: Learning Representations for Continual Learning (Invited talk) Martha White
02:25 PM Natalia Diaz-Rodriguez: Continual Learning and Robotics: an overview (Invited talk) Natalia Diaz Rodriguez
02:50 PM Posters and Break (Poster Session and Break)
03:30 PM Jeff Clune: Towards Solving Catastrophic Forgetting with Neuromodulation & Learning Curricula by Generating Environments (Invited talk) Jeff Clune
03:55 PM Contributed Talks
04:15 PM Nicolas Heess: TBD (Invited talk) Nicolas Heess
04:40 PM Benjamin Rosman: Exploiting Structure For Accelerating Reinforcement Learning (Invited talk) Benjamin Rosman
05:05 PM Panel Discussion

Author Information

Sarath Chandar (Mila / University of Montreal)
Shagun Sodhani (University of Montreal)
Khimya Khetarpal (McGill University, Reasoning and Learning Lab), Ph.D. Student
Tom Zahavy (Technion)
Daniel J. Mankowitz (DeepMind)
Shie Mannor (Technion)
Balaraman Ravindran (Indian Institute of Technology)
Doina Precup (McGill University / DeepMind)
Chelsea Finn (Stanford, Google, UC Berkeley)

Chelsea Finn is a research scientist at Google Brain and a post-doctoral scholar at UC Berkeley. In September 2019, she will be joining Stanford's computer science department as an assistant professor. Finn's research interests lie in enabling robots and other agents to develop broadly intelligent behavior through learning and interaction. To this end, Finn has developed deep learning algorithms for concurrently learning visual perception and control in robotic manipulation skills, inverse reinforcement learning methods for scalable acquisition of nonlinear reward functions, and meta-learning algorithms that enable fast, few-shot adaptation in both visual perception and deep reinforcement learning. Finn received her Bachelor's degree in EECS at MIT and her PhD in CS at UC Berkeley. Her research has been recognized through an NSF graduate fellowship, a Facebook fellowship, the C.V. Ramamoorthy Distinguished Research Award, and the MIT Technology Review 35 Under 35 Award, and her work has been covered by various media outlets, including the New York Times, Wired, and Bloomberg.

Abhishek Gupta (UC Berkeley)
Amy Zhang (McGill University)
Kyunghyun Cho (New York University)
Andrei Rusu (DeepMind)
Rob Fergus (Facebook AI Research, NYU)
