Deep reinforcement learning has rapidly grown as a research field with far-reaching potential for artificial intelligence. A large set of Atari games has served as the main benchmark domain for many fundamental developments. As the field matures, it is important to develop more sophisticated learning systems that aim to solve more complex tasks. I will describe some recent research from DeepMind that enables end-to-end learning in challenging environments with real-world variability and complex task structure.
Author Information
Raia Hadsell (DeepMind)
Raia Hadsell, a senior research scientist at DeepMind, has worked on deep learning and robotics problems for over 10 years. Her early research developed the notion of manifold learning using Siamese networks, which has since been used extensively for invariant feature learning. After completing a PhD with Yann LeCun, which featured a self-supervised deep learning vision system for a mobile robot, her research continued at Carnegie Mellon's Robotics Institute and SRI International, and in early 2014 she joined DeepMind in London to study artificial general intelligence. Her current research focuses on the challenge of continual learning for AI agents and robotic systems. While deep RL algorithms are capable of attaining superhuman performance on single tasks, they cannot transfer that performance to additional tasks, especially when tasks are experienced sequentially. She has proposed neural approaches such as policy distillation, progressive nets, and elastic weight consolidation to solve the problem of catastrophic forgetting and improve transfer learning.
More from the Same Authors
- 2022 Poster: The CLRS Algorithmic Reasoning Benchmark
  Petar Veličković · Adrià Puigdomenech Badia · David Budden · Razvan Pascanu · Andrea Banino · Misha Dashevskiy · Raia Hadsell · Charles Blundell
- 2022 Spotlight: The CLRS Algorithmic Reasoning Benchmark
  Petar Veličković · Adrià Puigdomenech Badia · David Budden · Razvan Pascanu · Andrea Banino · Misha Dashevskiy · Raia Hadsell · Charles Blundell
- 2022 Poster: Hindering Adversarial Attacks with Implicit Neural Representations
  Andrei A Rusu · Dan Andrei Calian · Sven Gowal · Raia Hadsell
- 2022 Spotlight: Hindering Adversarial Attacks with Implicit Neural Representations
  Andrei A Rusu · Dan Andrei Calian · Sven Gowal · Raia Hadsell
- 2020 Poster: CoMic: Complementary Task Learning & Mimicry for Reusable Skills
  Leonard Hasenclever · Fabio Pardo · Raia Hadsell · Nicolas Heess · Josh Merel
- 2020 Poster: Stabilizing Transformers for Reinforcement Learning
  Emilio Parisotto · Francis Song · Jack Rae · Razvan Pascanu · Caglar Gulcehre · Siddhant Jayakumar · Max Jaderberg · Raphael Lopez Kaufman · Aidan Clark · Seb Noury · Matthew Botvinick · Nicolas Heess · Raia Hadsell
- 2020 Poster: A distributional view on multi-objective policy optimization
  Abbas Abdolmaleki · Sandy Huang · Leonard Hasenclever · Michael Neunert · Francis Song · Martina Zambelli · Murilo Martins · Nicolas Heess · Raia Hadsell · Martin Riedmiller
- 2019: Panel Discussion
  Yoshua Bengio · Andrew Ng · Raia Hadsell · John Platt · Claire Monteleoni · Jennifer Chayes
- 2018 Poster: Progress & Compress: A scalable framework for continual learning
  Jonathan Richard Schwarz · Wojciech Czarnecki · Jelena Luketina · Agnieszka Grabska-Barwinska · Yee Teh · Razvan Pascanu · Raia Hadsell
- 2018 Oral: Progress & Compress: A scalable framework for continual learning
  Jonathan Richard Schwarz · Wojciech Czarnecki · Jelena Luketina · Agnieszka Grabska-Barwinska · Yee Teh · Razvan Pascanu · Raia Hadsell
- 2018 Poster: Graph Networks as Learnable Physics Engines for Inference and Control
  Alvaro Sanchez-Gonzalez · Nicolas Heess · Jost Springenberg · Josh Merel · Martin Riedmiller · Raia Hadsell · Peter Battaglia
- 2018 Oral: Graph Networks as Learnable Physics Engines for Inference and Control
  Alvaro Sanchez-Gonzalez · Nicolas Heess · Jost Springenberg · Josh Merel · Martin Riedmiller · Raia Hadsell · Peter Battaglia