Visualizing optimization landscapes has produced many fundamental insights in numerical optimization, particularly by motivating novel improvements to optimization techniques. However, visualizations of the objective that reinforcement learning optimizes (the "reward surface") have only ever been generated for a small number of narrow contexts. This work presents reward surfaces and related visualizations of 27 of the most widely used reinforcement learning environments in Gym for the first time. We also explore reward surfaces in the policy gradient direction and show for the first time that many popular reinforcement learning environments have frequent "cliffs" (sudden large drops in expected reward). We demonstrate that A2C often "dives off" these cliffs into low-reward regions of the parameter space while PPO avoids them, confirming a popular intuition for PPO's improved performance over previous methods. We additionally introduce a highly extensible library that allows researchers to easily generate these visualizations in the future. Our findings provide new intuition to explain the successes and failures of modern RL methods, and our visualizations concretely characterize several failure modes of reinforcement learning agents in novel ways.
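The core idea behind a reward surface can be illustrated with a short sketch: perturb the policy parameters along two random directions and evaluate expected reward on a grid around the current parameters. This is a minimal, self-contained illustration of the general technique, not the authors' library; the `expected_reward` function below is a hypothetical stand-in for the Monte Carlo return estimates computed on real Gym environments.

```python
import numpy as np

# Hypothetical stand-in for expected episodic reward as a function of
# policy parameters (in practice this is estimated by rolling out the
# policy in a Gym environment and averaging episode returns).
def expected_reward(theta):
    return -np.sum((theta - 1.0) ** 2)

def reward_surface(theta, grid=5, scale=1.0, seed=0):
    """Evaluate expected reward on a 2-D slice of parameter space
    spanned by two random unit directions, centered at theta."""
    rng = np.random.default_rng(seed)
    d1 = rng.standard_normal(theta.shape)
    d2 = rng.standard_normal(theta.shape)
    # Normalize the directions so the two axes have comparable units.
    d1 /= np.linalg.norm(d1)
    d2 /= np.linalg.norm(d2)
    alphas = np.linspace(-scale, scale, grid)
    surface = np.empty((grid, grid))
    for i, a in enumerate(alphas):
        for j, b in enumerate(alphas):
            surface[i, j] = expected_reward(theta + a * d1 + b * d2)
    return surface

surf = reward_surface(np.zeros(4), grid=5, scale=1.0)
print(surf.shape)  # (5, 5); the center cell surf[2, 2] is the
# expected reward at the unperturbed parameters.
```

A "cliff" in the paper's sense would appear on such a grid as a sudden large drop in the surface along the policy gradient direction; plotting `surf` as a 3-D surface or heatmap makes these drops visible.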
Author Information
Ryan Sullivan (University of Maryland)
Jordan Terry (University of Maryland, College Park)
Benjamin Black (University of Maryland)
John P Dickerson (Arthur AI & Univ. of Maryland)
Related Events (a corresponding poster, oral, or spotlight)
- 2022 Poster: Cliff Diving: Exploring Reward Surfaces in Reinforcement Learning Environments »
  Thu. Jul 21st through Fri the 22nd, Room Hall E #1027
More from the Same Authors
- 2021 : PreferenceNet: Encoding Human Preferences in Auction Design »
  Neehar Peri · Michael Curry · Samuel Dooley · John P Dickerson
- 2022 : Centralized vs Individual Models for Decision Making in Interconnected Infrastructure »
  Stephanie Allen · John P Dickerson · Steven Gabriel
- 2022 : Planning to Fairly Allocate: Probabilistic Fairness in the Restless Bandit Setting »
  Christine Herlihy · Aviva Prins · Aravind Srinivasan · John P Dickerson
- 2023 Poster: Generalized Reductions: Making any Hierarchical Clustering Fair and Balanced with Low Cost »
  Marina Knittel · Max Springer · John P Dickerson · MohammadTaghi Hajiaghayi
- 2022 Poster: Measuring Representational Robustness of Neural Networks Through Shared Invariances »
  Vedant Nanda · Till Speicher · Camila Kolling · John P Dickerson · Krishna Gummadi · Adrian Weller
- 2022 Oral: Measuring Representational Robustness of Neural Networks Through Shared Invariances »
  Vedant Nanda · Till Speicher · Camila Kolling · John P Dickerson · Krishna Gummadi · Adrian Weller
- 2022 Poster: Certified Neural Network Watermarks with Randomized Smoothing »
  Arpit Bansal · Ping-yeh Chiang · Michael Curry · Rajiv Jain · Curtis Wigington · Varun Manjunatha · John P Dickerson · Tom Goldstein
- 2022 Spotlight: Certified Neural Network Watermarks with Randomized Smoothing »
  Arpit Bansal · Ping-yeh Chiang · Michael Curry · Rajiv Jain · Curtis Wigington · Varun Manjunatha · John P Dickerson · Tom Goldstein
- 2021 Poster: Just How Toxic is Data Poisoning? A Unified Benchmark for Backdoor and Data Poisoning Attacks »
  Avi Schwarzschild · Micah Goldblum · Arjun Gupta · John P Dickerson · Tom Goldstein
- 2021 Spotlight: Just How Toxic is Data Poisoning? A Unified Benchmark for Backdoor and Data Poisoning Attacks »
  Avi Schwarzschild · Micah Goldblum · Arjun Gupta · John P Dickerson · Tom Goldstein
- 2020 Poster: A Pairwise Fair and Community-preserving Approach to k-Center Clustering »
  Brian Brubach · Darshan Chakrabarti · John P Dickerson · Samir Khuller · Aravind Srinivasan · Leonidas Tsepenekas
- 2020 Poster: Measuring Non-Expert Comprehension of Machine Learning Fairness Metrics »
  Debjani Saha · Candice Schumann · Duncan McElfresh · John P Dickerson · Michelle Mazurek · Michael Tschantz