Many reinforcement learning (RL) tasks provide the agent with high-dimensional observations that can be simplified into low-dimensional continuous states. To formalize this process, we introduce the concept of a \textit{DeepMDP}, a parameterized latent space model that is trained via the minimization of two tractable latent space losses: prediction of rewards and prediction of the distribution over next latent states. We show that the optimization of these objectives guarantees (1) the quality of the embedding function as a representation of the state space and (2) the quality of the DeepMDP as a model of the environment. Our theoretical findings are substantiated by the experimental result that a trained DeepMDP recovers the latent structure underlying high-dimensional observations in a synthetic environment. Finally, we show that learning a DeepMDP as an auxiliary task in the Atari 2600 domain leads to large performance improvements over model-free RL.
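To make the two latent-space losses concrete, the following is a minimal sketch in PyTorch. It is an illustration under simplifying assumptions, not the authors' implementation: the class name DeepMDPSketch, the network sizes, and the use of a deterministic latent transition model with a squared-error loss and a stop-gradient target are all choices made for this sketch, whereas the paper measures a distance (e.g., a Wasserstein metric) between distributions over next latent states.

```python
# Minimal sketch of the two DeepMDP losses: reward prediction and next-latent-state
# prediction. Module names, sizes, and the squared-error transition loss are
# illustrative assumptions, not the paper's exact architecture or metric.
import torch
import torch.nn as nn


class DeepMDPSketch(nn.Module):
    def __init__(self, obs_dim, action_dim, latent_dim=16, hidden=256):
        super().__init__()
        # Embedding function phi: observation -> latent state
        self.encoder = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(), nn.Linear(hidden, latent_dim))
        # Latent reward model: (latent state, action) -> predicted reward
        self.reward_model = nn.Sequential(
            nn.Linear(latent_dim + action_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1))
        # Latent transition model: (latent state, action) -> predicted next latent state
        # (deterministic here; the paper treats a distribution over next latent states)
        self.transition_model = nn.Sequential(
            nn.Linear(latent_dim + action_dim, hidden), nn.ReLU(), nn.Linear(hidden, latent_dim))

    def losses(self, obs, action, reward, next_obs):
        z = self.encoder(obs)
        with torch.no_grad():
            z_next_target = self.encoder(next_obs)  # stop-gradient target (a sketch-level design choice)
        za = torch.cat([z, action], dim=-1)
        reward_loss = (self.reward_model(za).squeeze(-1) - reward).pow(2).mean()
        transition_loss = (self.transition_model(za) - z_next_target).pow(2).sum(-1).mean()
        return reward_loss, transition_loss


# Example usage on random data; when DeepMDP training is used as an auxiliary task,
# these losses would be added to the model-free RL loss (e.g., a DQN loss).
model = DeepMDPSketch(obs_dim=64, action_dim=4)
obs = torch.randn(32, 64)
action = torch.randn(32, 4)   # e.g., one-hot or continuous actions
reward = torch.randn(32)
next_obs = torch.randn(32, 64)
r_loss, t_loss = model.losses(obs, action, reward, next_obs)
(r_loss + t_loss).backward()
```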
Author Information
Carles Gelada (Google Brain)
Saurabh Kumar (Google Brain)
Jacob Buckman (Johns Hopkins University)
Ofir Nachum (Google Brain)
Marc Bellemare (Google Brain)
Related Events (a corresponding poster, oral, or spotlight)
- 2019 Oral: DeepMDP: Learning Continuous Latent Space Models for Representation Learning »
  Tue. Jun 11th 10:05 -- 10:10 PM, Room 104
More from the Same Authors
- 2020 : One Solution is Not All You Need: Few-Shot Extrapolation via Structured MaxEnt RL »
  Saurabh Kumar · Aviral Kumar
- 2021 : SparseDice: Imitation Learning for Temporally Sparse Data via Regularization »
  Alberto Camacho · Izzeddin Gur · Marcin Moczulski · Ofir Nachum · Aleksandra Faust
- 2021 : Understanding the Generalization Gap in Visual Reinforcement Learning »
  Anurag Ajay · Ge Yang · Ofir Nachum · Pulkit Agrawal
- 2023 : Suboptimal Data Can Bottleneck Scaling »
  Jacob Buckman · Kshitij Gupta · Ethan Caballero · Rishabh Agarwal · Marc Bellemare
- 2023 Poster: Bootstrapped Representations in Reinforcement Learning »
  Charline Le Lan · Stephen Tu · Mark Rowland · Anna Harutyunyan · Rishabh Agarwal · Marc Bellemare · Will Dabney
- 2023 Poster: The Statistical Benefits of Quantile Temporal-Difference Learning for Value Estimation »
  Mark Rowland · Yunhao Tang · Clare Lyle · Remi Munos · Marc Bellemare · Will Dabney
- 2023 Poster: Bigger, Better, Faster: Human-level Atari with human-level efficiency »
  Max Schwarzer · Johan Obando Ceron · Aaron Courville · Marc Bellemare · Rishabh Agarwal · Pablo Samuel Castro
- 2022 Poster: Why Should I Trust You, Bellman? The Bellman Error is a Poor Replacement for Value Error »
  Scott Fujimoto · David Meger · Doina Precup · Ofir Nachum · Shixiang Gu
- 2022 Poster: Model Selection in Batch Policy Optimization »
  Jonathan Lee · George Tucker · Ofir Nachum · Bo Dai
- 2022 Poster: A Parametric Class of Approximate Gradient Updates for Policy Optimization »
  Ramki Gummadi · Saurabh Kumar · Junfeng Wen · Dale Schuurmans
- 2022 Spotlight: A Parametric Class of Approximate Gradient Updates for Policy Optimization »
  Ramki Gummadi · Saurabh Kumar · Junfeng Wen · Dale Schuurmans
- 2022 Spotlight: Why Should I Trust You, Bellman? The Bellman Error is a Poor Replacement for Value Error »
  Scott Fujimoto · David Meger · Doina Precup · Ofir Nachum · Shixiang Gu
- 2022 Spotlight: Model Selection in Batch Policy Optimization »
  Jonathan Lee · George Tucker · Ofir Nachum · Bo Dai
- 2022 Poster: Distributional Hamilton-Jacobi-Bellman Equations for Continuous-Time Reinforcement Learning »
  Harley Wiltzer · David Meger · Marc Bellemare
- 2022 Spotlight: Distributional Hamilton-Jacobi-Bellman Equations for Continuous-Time Reinforcement Learning »
  Harley Wiltzer · David Meger · Marc Bellemare
- 2021 Poster: Policy Information Capacity: Information-Theoretic Measure for Task Complexity in Deep Reinforcement Learning »
  Hiroki Furuta · Tatsuya Matsushima · Tadashi Kozuno · Yutaka Matsuo · Sergey Levine · Ofir Nachum · Shixiang Gu
- 2021 Poster: Offline Reinforcement Learning with Fisher Divergence Critic Regularization »
  Ilya Kostrikov · Rob Fergus · Jonathan Tompson · Ofir Nachum
- 2021 Poster: Representation Matters: Offline Pretraining for Sequential Decision Making »
  Mengjiao Yang · Ofir Nachum
- 2021 Spotlight: Representation Matters: Offline Pretraining for Sequential Decision Making »
  Mengjiao Yang · Ofir Nachum
- 2021 Spotlight: Policy Information Capacity: Information-Theoretic Measure for Task Complexity in Deep Reinforcement Learning »
  Hiroki Furuta · Tatsuya Matsushima · Tadashi Kozuno · Yutaka Matsuo · Sergey Levine · Ofir Nachum · Shixiang Gu
- 2021 Spotlight: Offline Reinforcement Learning with Fisher Divergence Critic Regularization »
  Ilya Kostrikov · Rob Fergus · Jonathan Tompson · Ofir Nachum
- 2020 Poster: Representations for Stable Off-Policy Reinforcement Learning »
  Dibya Ghosh · Marc Bellemare
- 2019 : posters »
  Zhengxing Chen · Juan Jose Garau Luis · Ignacio Albert Smet · Aditya Modi · Sabina Tomkins · Riley Simmons-Edler · Hongzi Mao · Alexander Irpan · Hao Lu · Rose Wang · Subhojyoti Mukherjee · Aniruddh Raghu · Syed Arbab Mohd Shihab · Byung Hoon Ahn · Rasool Fakoor · Pratik Chaudhari · Elena Smirnova · Min-hwan Oh · Xiaocheng Tang · Tony Qin · Qingyang Li · Marc Brittain · Ian Fox · Supratik Paul · Xiaofeng Gao · Yinlam Chow · Gabriel Dulac-Arnold · Ofir Nachum · Nikos Karampatziakis · Bharathan Balaji · Supratik Paul · Ali Davody · Djallel Bouneffouf · Himanshu Sahni · Soo Kim · Andrey Kolobov · Alexander Amini · Yao Liu · Xinshi Chen · · Craig Boutilier
- 2019 Poster: Statistics and Samples in Distributional Reinforcement Learning »
  Mark Rowland · Robert Dadashi · Saurabh Kumar · Remi Munos · Marc Bellemare · Will Dabney
- 2019 Oral: Statistics and Samples in Distributional Reinforcement Learning »
  Mark Rowland · Robert Dadashi · Saurabh Kumar · Remi Munos · Marc Bellemare · Will Dabney
- 2019 Poster: The Value Function Polytope in Reinforcement Learning »
  Robert Dadashi · Marc Bellemare · Adrien Ali Taiga · Nicolas Le Roux · Dale Schuurmans
- 2019 Oral: The Value Function Polytope in Reinforcement Learning »
  Robert Dadashi · Marc Bellemare · Adrien Ali Taiga · Nicolas Le Roux · Dale Schuurmans
- 2018 Poster: Smoothed Action Value Functions for Learning Gaussian Policies »
  Ofir Nachum · Mohammad Norouzi · George Tucker · Dale Schuurmans
- 2018 Poster: Is Generator Conditioning Causally Related to GAN Performance? »
  Augustus Odena · Jacob Buckman · Catherine Olsson · Tom B Brown · Christopher Olah · Colin Raffel · Ian Goodfellow
- 2018 Oral: Smoothed Action Value Functions for Learning Gaussian Policies »
  Ofir Nachum · Mohammad Norouzi · George Tucker · Dale Schuurmans
- 2018 Oral: Is Generator Conditioning Causally Related to GAN Performance? »
  Augustus Odena · Jacob Buckman · Catherine Olsson · Tom B Brown · Christopher Olah · Colin Raffel · Ian Goodfellow
- 2018 Poster: Path Consistency Learning in Tsallis Entropy Regularized MDPs »
  Yinlam Chow · Ofir Nachum · Mohammad Ghavamzadeh
- 2018 Oral: Path Consistency Learning in Tsallis Entropy Regularized MDPs »
  Yinlam Chow · Ofir Nachum · Mohammad Ghavamzadeh
- 2017 : Panel Discussion »
  Balaraman Ravindran · Chelsea Finn · Alessandro Lazaric · Katja Hofmann · Marc Bellemare
- 2017 : Marc G. Bellemare: The role of density models in reinforcement learning »
  Marc Bellemare
- 2017 Poster: Count-Based Exploration with Neural Density Models »
  Georg Ostrovski · Marc Bellemare · Aäron van den Oord · Remi Munos
- 2017 Talk: Count-Based Exploration with Neural Density Models »
  Georg Ostrovski · Marc Bellemare · Aäron van den Oord · Remi Munos
- 2017 Poster: A Laplacian Framework for Option Discovery in Reinforcement Learning »
  Marlos C. Machado · Marc Bellemare · Michael Bowling
- 2017 Poster: A Distributional Perspective on Reinforcement Learning »
  Marc Bellemare · Will Dabney · Remi Munos
- 2017 Poster: Automated Curriculum Learning for Neural Networks »
  Alex Graves · Marc Bellemare · Jacob Menick · Remi Munos · Koray Kavukcuoglu
- 2017 Talk: A Laplacian Framework for Option Discovery in Reinforcement Learning »
  Marlos C. Machado · Marc Bellemare · Michael Bowling
- 2017 Talk: A Distributional Perspective on Reinforcement Learning »
  Marc Bellemare · Will Dabney · Remi Munos
- 2017 Talk: Automated Curriculum Learning for Neural Networks »
  Alex Graves · Marc Bellemare · Jacob Menick · Remi Munos · Koray Kavukcuoglu