Poster
Evaluating the Performance of Reinforcement Learning Algorithms
Scott Jordan · Yash Chandak · Daniel Cohen · Mengxue Zhang · Philip Thomas
Virtual
Keywords: [ Reinforcement Learning ] [ Other ] [ Reinforcement Learning - General ]
Performance evaluations are critical for quantifying algorithmic advances in reinforcement learning. Recent reproducibility analyses have shown that reported performance results are often inconsistent and difficult to replicate. In this work, we argue that these inconsistencies stem from the use of flawed evaluation metrics. Taking a step towards ensuring that reported results are consistent, we propose a new comprehensive evaluation methodology for reinforcement learning algorithms that produces reliable measurements of performance both on a single environment and when aggregated across environments. We demonstrate this methodology by evaluating a broad class of reinforcement learning algorithms on standard benchmark tasks.
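As a rough illustration of what aggregating performance across environments can look like, the sketch below normalizes per-environment returns to a common scale and reports a percentile-bootstrap confidence interval on the aggregate score. This is not the paper's procedure; the environments, score bounds, trial counts, and the simple mean-of-means aggregation rule are all invented here for demonstration.

```python
# Illustrative sketch only: aggregating per-environment performance with a
# bootstrap confidence interval. The environments, normalization bounds, and
# number of trials below are hypothetical, not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-trial returns for each environment.
returns = {
    "CartPole": rng.normal(450.0, 40.0, size=30),
    "Acrobot": rng.normal(-95.0, 10.0, size=30),
    "MountainCar": rng.normal(-140.0, 15.0, size=30),
}

# Assumed (min, max) return bounds per environment, used to map scores from
# different environments onto a comparable [0, 1] scale before aggregating.
bounds = {
    "CartPole": (0.0, 500.0),
    "Acrobot": (-500.0, -60.0),
    "MountainCar": (-200.0, -90.0),
}

def normalize(env, xs):
    lo, hi = bounds[env]
    return (np.asarray(xs) - lo) / (hi - lo)

def aggregate(per_env_scores):
    """Aggregate score: mean over environments of the mean normalized score."""
    return np.mean([np.mean(s) for s in per_env_scores])

# Point estimate of aggregate performance.
scores = [normalize(env, xs) for env, xs in returns.items()]
point = aggregate(scores)

# Percentile bootstrap: resample trials within each environment.
n_boot = 10_000
boot = np.empty(n_boot)
for b in range(n_boot):
    resampled = [rng.choice(s, size=len(s), replace=True) for s in scores]
    boot[b] = aggregate(resampled)
lo, hi = np.percentile(boot, [2.5, 97.5])

print(f"aggregate score: {point:.3f}  (95% bootstrap CI: [{lo:.3f}, {hi:.3f}])")
```

Normalizing before averaging keeps environments with very different return scales from dominating the aggregate; reporting an interval rather than a single number is one way to make cross-paper comparisons less sensitive to run-to-run variation.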