Poster
Retrieval-Augmented Reinforcement Learning
Anirudh Goyal · Abe Friesen · Andrea Banino · Theophane Weber · Nan Rosemary Ke · Adrià Puigdomenech Badia · Arthur Guez · Mehdi Mirza · Peter Humphreys · Ksenia Konyushkova · Michal Valko · Simon Osindero · Timothy Lillicrap · Nicolas Heess · Charles Blundell

Thu Jul 21 03:00 PM -- 05:00 PM (PDT) @ Hall E #1010

Most deep reinforcement learning (RL) algorithms distill experience into parametric behavior policies or value functions via gradient updates. While effective, this approach has several disadvantages: (1) it is computationally expensive, (2) it can take many updates to integrate experiences into the parametric model, (3) experiences that are not fully integrated do not appropriately influence the agent's behavior, and (4) behavior is limited by the capacity of the model. In this paper, we explore an alternative paradigm in which we train a network to map a dataset of past experiences to optimal behavior. Specifically, we augment an RL agent with a retrieval process (parameterized as a neural network) that has direct access to a dataset of experiences. This dataset can come from the agent's past experiences, expert demonstrations, or any other relevant source. The retrieval process is trained to retrieve information from the dataset that may be useful in the current context, helping the agent achieve its goal faster and more efficiently. The proposed method facilitates learning agents that, at test time, can condition their behavior on the entire dataset rather than only the current state or trajectory. We integrate our method into two different RL agents: an offline DQN agent and an online R2D2 agent. In offline multi-task problems, we show that the retrieval-augmented DQN agent avoids task interference and learns faster than the baseline DQN agent. On Atari, we show that retrieval-augmented R2D2 learns significantly faster than the baseline R2D2 agent and achieves higher scores. We run extensive ablations to measure the contributions of the components of our proposed method.
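The abstract describes the retrieval process only at a high level. The sketch below is a minimal, hypothetical illustration of the data flow of a retrieval-augmented agent, not the paper's architecture: the paper trains the retrieval process end-to-end as a neural network, whereas here retrieval is approximated by cosine-similarity nearest neighbours over stored experience embeddings, and all networks are replaced by fixed random projections. Every name, dimension, and function in the snippet is an assumption made for illustration.

```python
import numpy as np

# Illustrative sketch of retrieval-augmented action selection.
# NOT the paper's method: retrieval here is a simple nearest-neighbour
# lookup and the "networks" are fixed random projections.

rng = np.random.default_rng(0)

EMBED_DIM = 16      # size of state / experience embeddings (assumed)
NUM_ACTIONS = 4     # illustrative action space
TOP_K = 5           # number of experiences retrieved per step (assumed)

# A dataset of past experiences: each row is an embedding of a stored
# transition (e.g. from the agent's replay buffer or demonstrations).
experience_dataset = rng.normal(size=(1000, EMBED_DIM))


def embed_state(state: np.ndarray) -> np.ndarray:
    """Stand-in for a learned state encoder (here: a random projection)."""
    projection = rng.normal(size=(state.shape[-1], EMBED_DIM))
    return state @ projection


def retrieve(query: np.ndarray, dataset: np.ndarray, k: int) -> np.ndarray:
    """Return the k stored experiences most similar to the query embedding."""
    norms = np.linalg.norm(dataset, axis=1) * np.linalg.norm(query)
    scores = dataset @ query / np.maximum(norms, 1e-8)
    top_idx = np.argsort(scores)[-k:]
    return dataset[top_idx]


def q_values(state_embedding: np.ndarray, retrieved: np.ndarray) -> np.ndarray:
    """Toy Q-head: condition on the state embedding plus a mean-pooled
    summary of the retrieved experiences, then map to action values."""
    context = retrieved.mean(axis=0)
    joint = np.concatenate([state_embedding, context])
    readout = rng.normal(size=(joint.shape[0], NUM_ACTIONS))
    return joint @ readout


# One illustrative decision step: the action depends on the current state
# and on what was retrieved from the experience dataset.
state = rng.normal(size=(8,))          # raw observation (assumed shape)
z = embed_state(state)
neighbours = retrieve(z, experience_dataset, TOP_K)
action = int(np.argmax(q_values(z, neighbours)))
print("selected action:", action)
```

In the paper, the retrieval process itself is learned so that what gets retrieved is useful for the agent's current goal; the nearest-neighbour lookup above only stands in for that learned component to show how retrieved context can enter the value computation.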

Author Information

Anirudh Goyal (Université de Montréal)
Abe Friesen (DeepMind)
Andrea Banino (DeepMind)
Theophane Weber (DeepMind)
Nan Rosemary Ke (DeepMind, Mila)
Adrià Puigdomenech Badia (DeepMind)
Arthur Guez (Google DeepMind)
Mehdi Mirza (DeepMind)
Peter Humphreys (DeepMind)
Ksenia Konyushkova (DeepMind)
Michal Valko (DeepMind / Inria / ENS Paris-Saclay)

Michal is a machine learning scientist at DeepMind Paris, a tenured researcher at Inria, and the lecturer of the master's course Graphs in Machine Learning at l'ENS Paris-Saclay. Michal is primarily interested in designing algorithms that require as little human supervision as possible. This means 1) reducing the “intelligence” that humans need to input into the system and 2) minimizing the data that humans need to spend inspecting, classifying, or “tuning” the algorithms. That is why he works on methods and settings that can deal with minimal feedback, such as deep reinforcement learning, bandit algorithms, and self-supervised learning. Michal is actively working on representation learning and building world models. He also works on deep (reinforcement) learning algorithms that have some theoretical underpinning, and has worked on sequential algorithms with structured decisions where exploiting the structure leads to provably faster learning. He received his Ph.D. in 2011 from the University of Pittsburgh under the supervision of Miloš Hauskrecht, and was then a postdoc with Rémi Munos before taking a permanent position at Inria in 2012.

Simon Osindero (DeepMind)
Timothy Lillicrap (Google DeepMind)
Nicolas Heess (DeepMind)
Charles Blundell (DeepMind)
