Performance evaluations are critical for quantifying algorithmic advances in reinforcement learning. Recent reproducibility analyses have shown that reported performance results are often inconsistent and difficult to replicate. In this work, we argue that this inconsistency stems from the use of flawed evaluation metrics. Taking a step towards ensuring that reported results are consistent, we propose a new comprehensive evaluation methodology for reinforcement learning algorithms that produces reliable measurements of performance, both on a single environment and when aggregated across environments. We demonstrate this method by evaluating a broad class of reinforcement learning algorithms on standard benchmark tasks.
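The paper's full methodology is not reproduced here; as a rough, hypothetical sketch of the kind of aggregate measure the abstract describes, the Python below normalizes per-environment returns using assumed score bounds and bootstraps a confidence interval for the mean normalized performance across environments. All function names, environments, and numbers are illustrative assumptions, not the authors' code.

```python
import numpy as np

# Hypothetical illustration (not the authors' released method): normalize
# per-environment returns to [0, 1], then bootstrap a confidence interval
# for the mean normalized performance across environments.

def normalized_scores(returns, low, high):
    """Map raw returns from one environment onto [0, 1] using assumed
    known bounds, so scores from different environments are comparable."""
    return (np.asarray(returns, dtype=float) - low) / (high - low)

def aggregate_performance(per_env_returns, bounds, n_boot=10_000, alpha=0.05, seed=0):
    """Point estimate and (1 - alpha) percentile-bootstrap interval for the
    mean normalized score, resampling whole trials (paired across envs)."""
    rng = np.random.default_rng(seed)
    # One column per environment, one row per trial (equal trial counts assumed).
    scores = np.column_stack(
        [normalized_scores(r, *bounds[env]) for env, r in per_env_returns.items()]
    )
    point = scores.mean()
    n = len(scores)
    boots = np.array(
        [scores[rng.integers(0, n, n)].mean() for _ in range(n_boot)]
    )
    lo, hi = np.quantile(boots, [alpha / 2.0, 1.0 - alpha / 2.0])
    return point, (lo, hi)

# Example with made-up returns and score bounds for two common Gym tasks.
per_env = {
    "CartPole-v1": [200.0, 180.0, 195.0, 210.0, 160.0],
    "Acrobot-v1": [-90.0, -110.0, -100.0, -85.0, -120.0],
}
bounds = {"CartPole-v1": (0.0, 500.0), "Acrobot-v1": (-500.0, 0.0)}
print(aggregate_performance(per_env, bounds))
```

Resampling whole trials rather than individual scores keeps the per-environment results paired, which matters when the same random seeds or trial conditions are shared across environments.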
Author Information
Scott Jordan (University of Massachusetts)
Yash Chandak (University of Massachusetts Amherst)
Daniel Cohen (University of Massachusetts Amherst)
Mengxue Zhang (University of Massachusetts Amherst)
Philip Thomas (University of Massachusetts Amherst)
More from the Same Authors
- 2023: In-Context Decision-Making from Supervised Pretraining
  Jonathan Lee · Annie Xie · Aldo Pacchiano · Yash Chandak · Chelsea Finn · Ofir Nachum · Emma Brunskill
- 2023 Poster: Understanding Self-Predictive Learning for Reinforcement Learning
  Yunhao Tang · Zhaohan Guo · Pierre Richemond · Bernardo Avila Pires · Yash Chandak · Remi Munos · Mark Rowland · Mohammad Gheshlaghi Azar · Charline Le Lan · Clare Lyle · Andras Gyorgy · Shantanu Thakoor · Will Dabney · Bilal Piot · Daniele Calandriello · Michal Valko
- 2023 Poster: Representations and Exploration for Deep Reinforcement Learning using Singular Value Decomposition
  Yash Chandak · Shantanu Thakoor · Zhaohan Guo · Yunhao Tang · Remi Munos · Will Dabney · Diana Borsa
- 2021 Spotlight: Towards Practical Mean Bounds for Small Samples
  My Phan · Philip Thomas · Erik Learned-Miller
- 2021 Poster: Towards Practical Mean Bounds for Small Samples
  My Phan · Philip Thomas · Erik Learned-Miller
- 2021 Poster: Posterior Value Functions: Hindsight Baselines for Policy Gradient Methods
  Chris Nota · Philip Thomas · Bruno C. da Silva
- 2021 Spotlight: Posterior Value Functions: Hindsight Baselines for Policy Gradient Methods
  Chris Nota · Philip Thomas · Bruno C. da Silva
- 2021 Poster: High Confidence Generalization for Reinforcement Learning
  James Kostas · Yash Chandak · Scott Jordan · Georgios Theocharous · Philip Thomas
- 2021 Spotlight: High Confidence Generalization for Reinforcement Learning
  James Kostas · Yash Chandak · Scott Jordan · Georgios Theocharous · Philip Thomas
- 2020 Poster: Asynchronous Coagent Networks
  James Kostas · Chris Nota · Philip Thomas
- 2020 Poster: Optimizing for the Future in Non-Stationary MDPs
  Yash Chandak · Georgios Theocharous · Shiv Shankar · Martha White · Sridhar Mahadevan · Philip Thomas
- 2019 Poster: Concentration Inequalities for Conditional Value at Risk
  Philip Thomas · Erik Learned-Miller
- 2019 Oral: Concentration Inequalities for Conditional Value at Risk
  Philip Thomas · Erik Learned-Miller
- 2019 Poster: Learning Action Representations for Reinforcement Learning
  Yash Chandak · Georgios Theocharous · James Kostas · Scott Jordan · Philip Thomas
- 2019 Oral: Learning Action Representations for Reinforcement Learning
  Yash Chandak · Georgios Theocharous · James Kostas · Scott Jordan · Philip Thomas
- 2018 Poster: Decoupling Gradient-Like Learning Rules from Representations
  Philip Thomas · Christoph Dann · Emma Brunskill
- 2018 Oral: Decoupling Gradient-Like Learning Rules from Representations
  Philip Thomas · Christoph Dann · Emma Brunskill