Spotlight
Cliff Diving: Exploring Reward Surfaces in Reinforcement Learning Environments
Ryan Sullivan · Jordan Terry · Benjamin Black · John P Dickerson

Thu Jul 21 01:05 PM -- 01:10 PM (PDT) @ Room 309

Visualizing optimization landscapes has resulted in many fundamental insights in numeric optimization, specifically regarding novel improvements to optimization techniques. However, visualizations of the objective that reinforcement learning optimizes (the "reward surface") have only ever been generated for a small number of narrow contexts. This work presents reward surfaces and related visualizations of 27 of the most widely used reinforcement learning environments in Gym for the first time. We also explore reward surfaces in the policy gradient direction and show for the first time that many popular reinforcement learning environments have frequent "cliffs" (sudden large drops in expected reward). We demonstrate that A2C often "dives off" these cliffs into low reward regions of the parameter space while PPO avoids them, confirming a popular intuition for PPO's improved performance over previous methods. We additionally introduce a highly extensible library that allows researchers to easily generate these visualizations in the future. Our findings provide new intuition to explain the successes and failures of modern RL methods, and our visualizations concretely characterize several failure modes of reinforcement learning agents in novel ways.
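Below is a minimal, illustrative sketch (not the authors' library) of the core idea behind a reward surface plot: perturb a policy's parameters along two fixed directions and record the average episodic return at each grid point. It assumes the pre-0.26 Gym reset/step signatures, a hypothetical linear policy on CartPole-v1, and plain unit-norm random directions rather than the paper's exact direction choices; all names are illustrative.

```python
import numpy as np
import gym


def episodic_return(env, theta, episodes=5, max_steps=500):
    """Average undiscounted return of a simple threshold linear policy."""
    total = 0.0
    for _ in range(episodes):
        obs = env.reset()
        for _ in range(max_steps):
            action = int(obs @ theta > 0.0)          # linear policy with weights theta
            obs, reward, done, _ = env.step(action)  # classic (pre-0.26) Gym API
            total += reward
            if done:
                break
    return total / episodes


env = gym.make("CartPole-v1")
theta0 = np.zeros(env.observation_space.shape[0])    # center point of the surface

# Two random unit directions in parameter space.
rng = np.random.default_rng(0)
d1 = rng.normal(size=theta0.shape); d1 /= np.linalg.norm(d1)
d2 = rng.normal(size=theta0.shape); d2 /= np.linalg.norm(d2)

# Estimate expected reward on a grid of perturbations theta0 + a*d1 + b*d2.
coords = np.linspace(-1.0, 1.0, 11)
surface = np.array([[episodic_return(env, theta0 + a * d1 + b * d2)
                     for b in coords] for a in coords])
print(surface.shape)  # (11, 11) grid of average returns, ready to plot as a surface
```

The same grid-evaluation idea extends to the paper's gradient-direction plots by replacing one random direction with an estimated policy gradient direction.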

Author Information

Ryan Sullivan (University of Maryland)
Jordan Terry (University of Maryland, College Park)
Benjamin Black (University of Maryland)
John P Dickerson (Arthur AI & Univ. of Maryland)
