Off-policy learning allows us to learn about possible policies of behavior from experience generated by a different behavior policy. Temporal difference (TD) learning algorithms can become unstable when combined with function approximation and off-policy sampling---this is known as the ``deadly triad''. The emphatic temporal difference algorithm ETD(λ) ensures convergence in the linear case by appropriately weighting the TD(λ) updates. In this paper, we extend the use of emphatic methods to deep reinforcement learning agents. We show that naively adapting ETD(λ) to popular deep reinforcement learning algorithms, which use forward-view multi-step returns, results in poor performance. We then derive new emphatic algorithms for use in the context of such algorithms, and we demonstrate that they provide noticeable benefits in small problems designed to highlight the instability of TD methods. Finally, we observe improved performance when applying these algorithms at scale on classic Atari games from the Arcade Learning Environment.
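To make the linear-case baseline concrete, here is a minimal sketch of the classic ETD(λ) update the abstract refers to (following-on trace, emphasis, and emphatically weighted eligibility trace), not the deep multi-step variants the paper derives. All function and variable names here (`etd_lambda`, `interest`, etc.) are illustrative assumptions, and the sketch fixes a constant discount, interest, and bootstrapping parameter for simplicity.

```python
import numpy as np

def etd_lambda(features, rewards, rhos, alpha=0.05, gamma=0.9, lam=0.5, interest=1.0):
    """One pass of linear ETD(lambda) over a single trajectory.

    features: (T+1, d) array of feature vectors x_0 .. x_T
    rewards:  (T,) array of rewards R_1 .. R_T
    rhos:     (T,) importance-sampling ratios pi(A_t|S_t) / mu(A_t|S_t)
    """
    d = features.shape[1]
    w = np.zeros(d)      # value weights
    e = np.zeros(d)      # eligibility trace
    F = interest         # follow-on trace, F_0 = i(S_0)
    for t in range(len(rewards)):
        x, x_next = features[t], features[t + 1]
        # Emphasis: mixes immediate interest with the accumulated follow-on trace.
        M = lam * interest + (1.0 - lam) * F
        # Eligibility trace weighted by emphasis and the importance ratio.
        e = rhos[t] * (gamma * lam * e + M * x)
        # Standard one-step TD error for the linear value estimate.
        delta = rewards[t] + gamma * w @ x_next - w @ x
        w = w + alpha * delta * e
        # Follow-on trace for the next step: F_{t+1} = rho_t * gamma * F_t + i.
        F = rhos[t] * gamma * F + interest
    return w
```

With all ratios equal to one (on-policy data) and `interest=1`, the emphasis stays at 1 and the update reduces to ordinary linear TD(λ); the emphatic weighting only reshapes updates when behavior and target policies differ.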
Author Information
Ray Jiang (DeepMind)
Tom Zahavy (DeepMind)
Zhongwen Xu (University of Technology Sydney)
Adam White (DeepMind, University of Alberta)
Matteo Hessel (DeepMind)
Charles Blundell (DeepMind)
Hado van Hasselt (DeepMind)
Related Events (a corresponding poster, oral, or spotlight)
-
2021 Poster: Emphatic Algorithms for Deep Reinforcement Learning »
Wed. Jul 21st 04:00 -- 06:00 AM
More from the Same Authors
-
2021 : PonderNet: Learning to Ponder »
Andrea Banino · Jan Balaguer · Charles Blundell
-
2021 : Beyond Fine-Tuning: Transferring Behavior in Reinforcement Learning »
Víctor Campos · Pablo Sprechmann · Steven Hansen · Andre Barreto · Steven Kapturowski · Alex Vitvitskyi · Adrià Puigdomenech Badia · Charles Blundell
-
2021 : CoBERL: Contrastive BERT for Reinforcement Learning »
Andrea Banino · Adrià Puigdomenech Badia · Jacob C Walker · Tim Scholtes · Jovana Mitrovic · Charles Blundell
-
2021 : Discovering Diverse Nearly Optimal Policies with Successor Features »
Tom Zahavy · Brendan O'Donoghue · Andre Barreto · Sebastian Flennerhag · Vlad Mnih · Satinder Singh
-
2021 : Reward is enough for convex MDPs »
Tom Zahavy · Brendan O'Donoghue · Guillaume Desjardins · Satinder Singh
-
2022 : Pushing the limits of self-supervised ResNets: Can we outperform supervised learning without labels on ImageNet? »
Nenad Tomasev · Ioana Bica · Brian McWilliams · Lars Buesing · Razvan Pascanu · Charles Blundell · Jovana Mitrovic
-
2023 Poster: ReLOAD: Reinforcement Learning with Optimistic Ascent-Descent for Last-Iterate Convergence in Constrained MDPs »
Ted Moskovitz · Brendan O'Donoghue · Vivek Veeriah · Sebastian Flennerhag · Satinder Singh · Tom Zahavy
-
2023 Poster: Neural Algorithmic Reasoning with Causal Regularisation »
Beatrice Bevilacqua · Kyriacos Nikiforou · Borja Ibarz · Ioana Bica · Michela Paganini · Charles Blundell · Jovana Mitrovic · Petar Veličković
-
2022 Poster: Retrieval-Augmented Reinforcement Learning »
Anirudh Goyal · Abe Friesen Friesen · Andrea Banino · Theophane Weber · Nan Rosemary Ke · Adrià Puigdomenech Badia · Arthur Guez · Mehdi Mirza · Peter Humphreys · Ksenia Konyushkova · Michal Valko · Simon Osindero · Timothy Lillicrap · Nicolas Heess · Charles Blundell
-
2022 Spotlight: Retrieval-Augmented Reinforcement Learning »
Anirudh Goyal · Abe Friesen Friesen · Andrea Banino · Theophane Weber · Nan Rosemary Ke · Adrià Puigdomenech Badia · Arthur Guez · Mehdi Mirza · Peter Humphreys · Ksenia Konyushkova · Michal Valko · Simon Osindero · Timothy Lillicrap · Nicolas Heess · Charles Blundell
-
2022 Poster: The CLRS Algorithmic Reasoning Benchmark »
Petar Veličković · Adrià Puigdomenech Badia · David Budden · Razvan Pascanu · Andrea Banino · Misha Dashevskiy · Raia Hadsell · Charles Blundell
-
2022 Spotlight: The CLRS Algorithmic Reasoning Benchmark »
Petar Veličković · Adrià Puigdomenech Badia · David Budden · Razvan Pascanu · Andrea Banino · Misha Dashevskiy · Raia Hadsell · Charles Blundell
-
2021 Poster: Online Limited Memory Neural-Linear Bandits with Likelihood Matching »
Ofir Nabati · Tom Zahavy · Shie Mannor
-
2021 Spotlight: Online Limited Memory Neural-Linear Bandits with Likelihood Matching »
Ofir Nabati · Tom Zahavy · Shie Mannor
-
2021 Poster: Muesli: Combining Improvements in Policy Optimization »
Matteo Hessel · Ivo Danihelka · Fabio Viola · Arthur Guez · Simon Schmitt · Laurent Sifre · Theophane Weber · David Silver · Hado van Hasselt
-
2021 Social: The ICML Debate: Should AI Research and Development Be Controlled by a Regulatory Body or Government Oversight? »
Yunpeng Li · Olga Isupova · Nika Haghtalab · Adam White · Diego Granziol
-
2021 Spotlight: Muesli: Combining Improvements in Policy Optimization »
Matteo Hessel · Ivo Danihelka · Fabio Viola · Arthur Guez · Simon Schmitt · Laurent Sifre · Theophane Weber · David Silver · Hado van Hasselt
-
2020 Poster: Off-Policy Actor-Critic with Shared Experience Replay »
Simon Schmitt · Matteo Hessel · Karen Simonyan
-
2020 Poster: Agent57: Outperforming the Atari Human Benchmark »
Adrià Puigdomenech Badia · Bilal Piot · Steven Kapturowski · Pablo Sprechmann · Oleksandr Vitvitskyi · Zhaohan Guo · Charles Blundell
-
2020 Poster: What Can Learned Intrinsic Rewards Capture? »
Zeyu Zheng · Junhyuk Oh · Matteo Hessel · Zhongwen Xu · Manuel Kroiss · Hado van Hasselt · David Silver · Satinder Singh
-
2018 Poster: Learning to Coordinate with Coordination Graphs in Repeated Single-Stage Multi-Agent Decision Problems »
Eugenio Bargiacchi · Timothy Verstraeten · Diederik Roijers · Ann Nowé · Hado van Hasselt
-
2018 Oral: Learning to Coordinate with Coordination Graphs in Repeated Single-Stage Multi-Agent Decision Problems »
Eugenio Bargiacchi · Timothy Verstraeten · Diederik Roijers · Ann Nowé · Hado van Hasselt
-
2018 Poster: Transfer in Deep Reinforcement Learning Using Successor Features and Generalised Policy Improvement »
Andre Barreto · Diana Borsa · John Quan · Tom Schaul · David Silver · Matteo Hessel · Daniel J. Mankowitz · Augustin Zidek · Remi Munos
-
2018 Poster: Been There, Done That: Meta-Learning with Episodic Recall »
Samuel Ritter · Jane Wang · Zeb Kurth-Nelson · Siddhant Jayakumar · Charles Blundell · Razvan Pascanu · Matthew Botvinick
-
2018 Oral: Been There, Done That: Meta-Learning with Episodic Recall »
Samuel Ritter · Jane Wang · Zeb Kurth-Nelson · Siddhant Jayakumar · Charles Blundell · Razvan Pascanu · Matthew Botvinick
-
2018 Oral: Transfer in Deep Reinforcement Learning Using Successor Features and Generalised Policy Improvement »
Andre Barreto · Diana Borsa · John Quan · Tom Schaul · David Silver · Matteo Hessel · Daniel J. Mankowitz · Augustin Zidek · Remi Munos
-
2017 Poster: The Predictron: End-To-End Learning and Planning »
David Silver · Hado van Hasselt · Matteo Hessel · Tom Schaul · Arthur Guez · Tim Harley · Gabriel Dulac-Arnold · David Reichert · Neil Rabinowitz · Andre Barreto · Thomas Degris
-
2017 Poster: Neural Episodic Control »
Alexander Pritzel · Benigno Uria · Srinivasan Sriram · Adrià Puigdomenech Badia · Oriol Vinyals · Demis Hassabis · Daan Wierstra · Charles Blundell
-
2017 Talk: Neural Episodic Control »
Alexander Pritzel · Benigno Uria · Srinivasan Sriram · Adrià Puigdomenech Badia · Oriol Vinyals · Demis Hassabis · Daan Wierstra · Charles Blundell
-
2017 Talk: The Predictron: End-To-End Learning and Planning »
David Silver · Hado van Hasselt · Matteo Hessel · Tom Schaul · Arthur Guez · Tim Harley · Gabriel Dulac-Arnold · David Reichert · Neil Rabinowitz · Andre Barreto · Thomas Degris
-
2017 Poster: DARLA: Improving Zero-Shot Transfer in Reinforcement Learning »
Irina Higgins · Arka Pal · Andrei A Rusu · Loic Matthey · Christopher Burgess · Alexander Pritzel · Matthew Botvinick · Charles Blundell · Alexander Lerchner
-
2017 Talk: DARLA: Improving Zero-Shot Transfer in Reinforcement Learning »
Irina Higgins · Arka Pal · Andrei A Rusu · Loic Matthey · Christopher Burgess · Alexander Pritzel · Matthew Botvinick · Charles Blundell · Alexander Lerchner