

Poster

Bootstrap Latent-Predictive Representations for Multitask Reinforcement Learning

Zhaohan Guo · Bernardo Avila Pires · Bilal Piot · Jean-Bastien Grill · Florent Altché · Remi Munos · Mohammad Gheshlaghi Azar

Keywords: [ Reinforcement Learning - Deep RL ] [ Reinforcement Learning ] [ Representation Learning ] [ Deep Reinforcement Learning ]


Abstract:

Learning a good representation is an essential component of deep reinforcement learning (RL). Representation learning is especially important in multitask and partially observable settings, where building a representation of the unknown environment is crucial to solving the tasks. Here we introduce Predictions of Bootstrapped Latents (PBL), a simple and flexible self-supervised representation learning algorithm for multitask deep RL. PBL builds on multistep predictive representations of future observations and focuses on capturing structured information about environment dynamics. Specifically, PBL trains its representation by predicting latent embeddings of future observations; these latent embeddings are themselves trained to be predictive of the aforementioned representations. These mutual predictions create a bootstrapping effect, allowing the agent to learn more about the key aspects of the environment dynamics. In addition, by defining the prediction tasks entirely in latent space, PBL provides the flexibility to use multimodal observations involving pixel images, language instructions, rewards, and more. Our experiments show that PBL delivers across-the-board performance improvements over state-of-the-art deep RL agents in the DMLab-30 multitask setting.
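To make the two coupled prediction objectives concrete, below is a minimal PyTorch sketch of the losses as described in the abstract: a forward loss where the agent state predicts the latent embedding of a future observation, and a reverse loss where that embedding is trained to predict the (stop-gradient) agent state. All module and variable names here are illustrative assumptions, not from the paper, and the sketch omits details the real agent has (convolutional encoders, action-conditioned recurrent states, k-step multistep targets, multimodal inputs).

```python
# A minimal sketch of PBL-style prediction losses, assuming hypothetical
# modules (obs_embed, agent_enc, history_rnn, fwd_head, rev_head) and
# single-step targets; the actual agent is more elaborate.
import torch
import torch.nn as nn
import torch.nn.functional as F

obs_dim, latent_dim, state_dim = 64, 32, 128

obs_embed = nn.Linear(obs_dim, latent_dim)    # f: observation -> latent embedding
agent_enc = nn.Linear(obs_dim, latent_dim)    # agent's own observation encoding
history_rnn = nn.GRU(latent_dim, state_dim)   # rolls history into agent state b_t
fwd_head = nn.Linear(state_dim, latent_dim)   # g: predict future embedding from state
rev_head = nn.Linear(latent_dim, state_dim)   # g': predict state from embedding

def pbl_loss(obs_seq):
    # obs_seq: (T, B, obs_dim) sequence of observations
    z = obs_embed(obs_seq)                  # latent embeddings z_t = f(o_t)
    b, _ = history_rnn(agent_enc(obs_seq))  # agent states b_t

    # Forward prediction: b_t predicts z_{t+1}; the embedding target is
    # detached (stop-gradient), so f is not trained by this loss.
    fwd = F.mse_loss(fwd_head(b[:-1]), z[1:].detach())

    # Reverse prediction: z_t predicts a detached b_t. This is the
    # bootstrapping step that grounds the embeddings in the representation.
    rev = F.mse_loss(rev_head(z), b.detach())
    return fwd + rev

loss = pbl_loss(torch.randn(10, 4, obs_dim))  # e.g. T=10 steps, batch of 4
loss.backward()
```

Because both prediction targets live in latent space, nothing in the losses depends on the observation modality: any encoder output (pixels, language instructions, rewards) can stand in for the embeddings being predicted.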
