Learning effective representations in image-based environments is crucial for sample-efficient Reinforcement Learning (RL). Unfortunately, in RL, representation learning is confounded with the exploratory experience of the agent -- learning a useful representation requires diverse data, while effective exploration is only possible with coherent representations. Furthermore, we would like to learn representations that not only generalize across tasks but also accelerate downstream exploration for efficient task-specific training. To address these challenges, we propose Proto-RL, a self-supervised framework that ties representation learning to exploration through prototypical representations. These prototypes simultaneously serve as a summary of the agent's exploratory experience and as a basis for representing observations. We pre-train these task-agnostic representations and prototypes on environments without downstream task information. This enables state-of-the-art downstream policy learning on a set of difficult continuous control tasks.
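To make the idea concrete, below is a minimal, hypothetical PyTorch sketch of what "prototypes as a basis for representing observations" could look like: observation embeddings are softly assigned to a set of learnable prototype vectors, and the distance to the k-th nearest neighbor in that assignment space is used as an intrinsic exploration reward. The module names, network sizes, and the specific nearest-neighbor entropy estimate are illustrative assumptions, not the paper's actual implementation.

# Illustrative sketch only -- not the authors' released code. Assumes a
# clustering-style scheme in which observation embeddings are softly assigned
# to learnable prototype vectors, with a k-nearest-neighbor novelty reward.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ProtoModule(nn.Module):
    def __init__(self, embed_dim=128, num_protos=512, temperature=0.1):
        super().__init__()
        # Learnable prototypes: each row is one prototype vector.
        self.protos = nn.Parameter(torch.randn(num_protos, embed_dim))
        self.temperature = temperature

    def assign(self, z):
        # Soft assignment of embeddings to prototypes -- the "basis" for
        # representing observations described in the abstract.
        z = F.normalize(z, dim=-1)
        protos = F.normalize(self.protos, dim=-1)
        logits = z @ protos.t() / self.temperature
        return F.softmax(logits, dim=-1)            # (batch, num_protos)

def knn_entropy_reward(assignments, k=3):
    # Particle-based novelty estimate: larger distance to the k-th nearest
    # neighbor in assignment space => more novel observation => larger reward.
    dists = torch.cdist(assignments, assignments)   # pairwise distances
    knn_dist = dists.topk(k + 1, largest=False).values[:, -1]  # skip self (dist 0)
    return knn_dist.unsqueeze(-1)                   # intrinsic reward per observation

# Usage: embed a batch of image observations with any encoder (a dummy linear
# encoder here), assign them to prototypes, and compute the intrinsic reward.
encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 84 * 84, 128))
obs = torch.randn(32, 3, 84, 84)                    # dummy image observations
assignments = ProtoModule().assign(encoder(obs))
reward = knn_entropy_reward(assignments)
print(reward.shape)                                 # torch.Size([32, 1])

In a full agent, the encoder and prototypes would presumably be trained with a self-supervised objective on the exploratory replay data, and the intrinsic reward would drive task-agnostic pre-training before any downstream task reward is observed, consistent with the pre-training setup described in the abstract.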
Author Information
Denis Yarats (New York University)
Rob Fergus (Facebook / NYU)
Alessandro Lazaric (Facebook AI Research)
Lerrel Pinto (NYU)
Related Events (a corresponding poster, oral, or spotlight)
- 2021 Poster: Reinforcement Learning with Prototypical Representations
  Tue. Jul 20th, 04:00 -- 06:00 PM (Room: Virtual)
More from the Same Authors
- 2021: Reinforcement Learning in Linear MDPs: Constant Regret and Representation Selection
  Matteo Papini · Andrea Tirinzoni · Aldo Pacchiano · Marcello Restelli · Alessandro Lazaric · Matteo Pirotta
- 2021: A Fully Problem-Dependent Regret Lower Bound for Finite-Horizon MDPs
  Andrea Tirinzoni · Matteo Pirotta · Alessandro Lazaric
- 2021: Stochastic Shortest Path: Minimax, Parameter-Free and Towards Horizon-Free Regret
  Jean Tarbouriech · Simon Du · Matteo Pirotta · Michal Valko · Alessandro Lazaric
- 2021: A general sample complexity analysis of vanilla policy gradient
  Rui Yuan · Robert Gower · Alessandro Lazaric
- 2021: Direct then Diffuse: Incremental Unsupervised Skill Discovery for State Covering and Goal Reaching
  Pierre-Alexandre Kamienny · Jean Tarbouriech · Alessandro Lazaric · Ludovic Denoyer
- 2021: Exploration-Driven Representation Learning in Reinforcement Learning
  Akram Erraqabi · Mingde Zhao · Marlos C. Machado · Yoshua Bengio · Sainbayar Sukhbaatar · Ludovic Denoyer · Alessandro Lazaric
- 2023: Accelerating exploration and representation learning with offline pre-training
  Bogdan Mazoure · Jake Bruce · Doina Precup · Rob Fergus · Ankit Anand
- 2023 Poster: Layered State Discovery for Incremental Autonomous Exploration
  Liyu Chen · Andrea Tirinzoni · Alessandro Lazaric · Matteo Pirotta
- 2023 Poster: Distilling Internet-Scale Vision-Language Models into Embodied Agents
  Theodore R Sumers · Kenneth Marino · Arun Ahuja · Rob Fergus · Ishita Dasgupta
- 2023 Poster: Reduce, Reuse, Recycle: Compositional Generation with Energy-Based Diffusion Models and MCMC
  Yilun Du · Conor Durkan · Robin Strudel · Josh Tenenbaum · Sander Dieleman · Rob Fergus · Jascha Sohl-Dickstein · Arnaud Doucet · Will Grathwohl
- 2022 Workshop: Responsible Decision Making in Dynamic Environments
  Virginie Do · Thorsten Joachims · Alessandro Lazaric · Joelle Pineau · Matteo Pirotta · Harsh Satija · Nicolas Usunier
- 2022 Poster: Scaling Gaussian Process Optimization by Evaluating a Few Unique Candidates Multiple Times
  Daniele Calandriello · Luigi Carratino · Alessandro Lazaric · Michal Valko · Lorenzo Rosasco
- 2022 Spotlight: Scaling Gaussian Process Optimization by Evaluating a Few Unique Candidates Multiple Times
  Daniele Calandriello · Luigi Carratino · Alessandro Lazaric · Michal Valko · Lorenzo Rosasco
- 2021: Invited Talk by Alessandro Lazaric
  Alessandro Lazaric
- 2021 Workshop: ICML 2021 Workshop on Unsupervised Reinforcement Learning
  Feryal Behbahani · Joelle Pineau · Lerrel Pinto · Roberta Raileanu · Aravind Srinivas · Denis Yarats · Amy Zhang
- 2021 Poster: Leveraging Good Representations in Linear Contextual Bandits
  Matteo Papini · Andrea Tirinzoni · Marcello Restelli · Alessandro Lazaric · Matteo Pirotta
- 2021 Spotlight: Leveraging Good Representations in Linear Contextual Bandits
  Matteo Papini · Andrea Tirinzoni · Marcello Restelli · Alessandro Lazaric · Matteo Pirotta
- 2021 Oral: Decoupling Value and Policy for Generalization in Reinforcement Learning
  Roberta Raileanu · Rob Fergus
- 2021 Poster: Decoupling Value and Policy for Generalization in Reinforcement Learning
  Roberta Raileanu · Rob Fergus
- 2020 Poster: No-Regret Exploration in Goal-Oriented Reinforcement Learning
  Jean Tarbouriech · Evrard Garcelon · Michal Valko · Matteo Pirotta · Alessandro Lazaric
- 2020 Poster: Efficient Optimistic Exploration in Linear-Quadratic Regulators via Lagrangian Relaxation
  Marc Abeille · Alessandro Lazaric
- 2020 Poster: Learning Near Optimal Policies with Low Inherent Bellman Error
  Andrea Zanette · Alessandro Lazaric · Mykel Kochenderfer · Emma Brunskill
- 2020 Poster: The Differentiable Cross-Entropy Method
  Brandon Amos · Denis Yarats
- 2020 Poster: Meta-learning with Stochastic Linear Bandits
  Leonardo Cella · Alessandro Lazaric · Massimiliano Pontil
- 2020 Poster: Near-linear time Gaussian process optimization with adaptive batching and resparsification
  Daniele Calandriello · Luigi Carratino · Alessandro Lazaric · Michal Valko · Lorenzo Rosasco
- 2020 Poster: Hierarchically Decoupled Imitation For Morphological Transfer
  Donald Hejna · Lerrel Pinto · Pieter Abbeel
- 2018 Poster: Stochastic Video Generation with a Learned Prior
  Emily Denton · Rob Fergus
- 2018 Oral: Stochastic Video Generation with a Learned Prior
  Emily Denton · Rob Fergus
- 2018 Poster: Improved large-scale graph learning through ridge spectral sparsification
  Daniele Calandriello · Alessandro Lazaric · Ioannis Koutis · Michal Valko
- 2018 Oral: Improved large-scale graph learning through ridge spectral sparsification
  Daniele Calandriello · Alessandro Lazaric · Ioannis Koutis · Michal Valko
- 2018 Poster: Efficient Bias-Span-Constrained Exploration-Exploitation in Reinforcement Learning
  Ronan Fruit · Matteo Pirotta · Alessandro Lazaric · Ronald Ortner
- 2018 Poster: Improved Regret Bounds for Thompson Sampling in Linear Quadratic Control Problems
  Marc Abeille · Alessandro Lazaric
- 2018 Oral: Improved Regret Bounds for Thompson Sampling in Linear Quadratic Control Problems
  Marc Abeille · Alessandro Lazaric
- 2018 Oral: Efficient Bias-Span-Constrained Exploration-Exploitation in Reinforcement Learning
  Ronan Fruit · Matteo Pirotta · Alessandro Lazaric · Ronald Ortner