Poster
Tell me why! Explanations support learning relational and causal structure
Andrew Lampinen · Nicholas Roy · Ishita Dasgupta · Stephanie Chan · Allison Tam · James McClelland · Chen Yan · Adam Santoro · Neil Rabinowitz · Jane Wang · Felix Hill


Inferring the abstract relational and causal structure of the world is a major challenge for reinforcement-learning (RL) agents. For humans, explanations play a considerable role in overcoming this challenge. Here, we show that explanations can play a similar role for deep RL agents in complex environments. While agents typically struggle to acquire relational and causal knowledge, augmenting their training with language-like explanations can overcome these limitations. We show that explanations help agents learn challenging relational tasks, and we examine which aspects of explanations contribute to their benefits. We then show that explanations can help agents to infer not only relational but also causal structure. Explanations enable agents to generalize out-of-distribution from ambiguous, causally-confounded training, and even allow agents to learn to perform experimental interventions to identify causal relationships. These results suggest that explanations may provide a powerful tool for improving agent learning and generalization.
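To make the core idea concrete, here is a minimal sketch of one common way to augment an RL agent with explanation supervision: an auxiliary head that predicts a teacher-provided explanation from the agent's internal state, trained alongside the usual RL objective. This is an illustration under assumed simplifications (a single explanation token per step, a placeholder `rl_loss`), not the paper's actual implementation; all names here are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AgentWithExplanationHead(nn.Module):
    """Toy agent: a shared encoder feeding policy, value, and an
    auxiliary head that predicts explanation tokens (assumed setup)."""

    def __init__(self, obs_dim: int, n_actions: int, vocab_size: int, hidden: int = 256):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
        self.policy = nn.Linear(hidden, n_actions)   # standard RL head
        self.value = nn.Linear(hidden, 1)
        self.explain = nn.Linear(hidden, vocab_size) # auxiliary explanation head

    def forward(self, obs: torch.Tensor):
        h = self.encoder(obs)
        return self.policy(h), self.value(h), self.explain(h)

def total_loss(rl_loss: torch.Tensor,
               explain_logits: torch.Tensor,
               explanation_tokens: torch.Tensor,
               aux_weight: float = 0.1) -> torch.Tensor:
    # rl_loss stands in for any standard actor-critic objective computed
    # elsewhere; the auxiliary term supervises the agent's internal state
    # to predict the language-like explanation of the current situation.
    aux = F.cross_entropy(explain_logits, explanation_tokens)
    return rl_loss + aux_weight * aux
```

The design intuition is that the auxiliary prediction target shapes the agent's representations toward the relational and causal structure the explanations describe, without changing the action interface; the weighting and the form of the explanation targets are free choices not specified here.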

Author Information

Andrew Lampinen (DeepMind)

I am interested in cognitive flexibility and generalization, and how these abilities are enabled by factors like language, memory, and embodiment. I am a Senior Research Scientist at DeepMind.

Nicholas Roy (DeepMind)
Ishita Dasgupta (DeepMind)
Stephanie Chan (DeepMind)
Allison Tam (DeepMind)
James McClelland (Stanford University and DeepMind)

I have been using neural networks to model human cognition since the late 1970s, and co-led the Parallel Distributed Processing research group with David Rumelhart in the early 1980s. My main interests lie in capturing human cognitive abilities with neural networks and in using what we know about human cognition to inform the development of better AI systems.

Chen Yan
Adam Santoro (DeepMind)
Neil Rabinowitz (DeepMind)
Jane Wang (DeepMind)
Felix Hill (DeepMind)
