Poster
Visualizing and Understanding Atari Agents
Samuel Greydanus · Anurag Koul · Jonathan Dodge · Alan Fern

Thu Jul 12 09:15 AM -- 12:00 PM (PDT) @ Hall B #93

While deep reinforcement learning (deep RL) agents are effective at maximizing rewards, it is often unclear what strategies they use to do so. In this paper, we take a step toward explaining deep RL agents through a case study using Atari 2600 environments. In particular, we focus on using saliency maps to understand how an agent learns and executes a policy. We introduce a method for generating useful saliency maps and use it to show 1) what strong agents attend to, 2) whether agents are making decisions for the right or wrong reasons, and 3) how agents evolve during learning. We also test our method on non-expert human subjects and find that it improves their ability to reason about these agents. Overall, our results show that saliency information can provide significant insight into an RL agent's decisions and learning behavior.
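The abstract does not spell out how the saliency maps are computed. Below is a minimal, hedged sketch of a generic perturbation-based saliency approach for an image-observation agent, written in Python: the policy interface (policy_logits), the Gaussian-blur perturbation, and the parameter choices are illustrative assumptions for exposition, not a statement of the authors' exact method.

    # Sketch: perturbation-based saliency for an image-based RL agent.
    # `policy_logits(frame)` is a hypothetical stand-in for the agent's forward
    # pass; the blur perturbation, radius, and stride are illustrative choices.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def perturbation_saliency(frame, policy_logits, radius=5, stride=5):
        """Score each region by how much blurring it changes the policy output."""
        base = policy_logits(frame)                # logits for the unperturbed frame
        blurred = gaussian_filter(frame, sigma=3)  # globally blurred copy of the frame
        h, w = frame.shape[:2]
        saliency = np.zeros((h, w))
        for i in range(0, h, stride):
            for j in range(0, w, stride):
                # Soft circular mask centred at pixel (i, j).
                yy, xx = np.mgrid[0:h, 0:w]
                mask = np.exp(-((yy - i) ** 2 + (xx - j) ** 2) / (2.0 * radius ** 2))
                if frame.ndim == 3:                # broadcast mask over colour channels
                    mask = mask[..., None]
                perturbed = frame * (1 - mask) + blurred * mask
                # Saliency = how far the policy output moves under this local perturbation.
                saliency[i, j] = 0.5 * np.sum((policy_logits(perturbed) - base) ** 2)
        return saliency

The resulting saliency array can be upsampled and overlaid on the game frame to visualize which regions most influence the agent's decision at that timestep.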

Author Information

Samuel Greydanus (Oregon State University)
Anurag Koul (Oregon State University)

Deep Reinforcement Learning + Explainable Artificial Intelligence

Jonathan Dodge (Oregon State University)
Alan Fern (Oregon State University)
