Bayesian reward learning from demonstrations enables rigorous safety and uncertainty analysis when performing imitation learning. However, Bayesian reward learning methods are typically computationally intractable for complex control problems. We propose Bayesian Reward Extrapolation (Bayesian REX), a highly efficient Bayesian reward learning algorithm that scales to high-dimensional imitation learning problems by pre-training a low-dimensional feature encoding via self-supervised tasks and then leveraging preferences over demonstrations to perform fast Bayesian inference. Bayesian REX can learn to play Atari games from demonstrations without access to the game score, and can generate 100,000 samples from the posterior over reward functions in only 5 minutes on a personal laptop. Bayesian REX also results in imitation learning performance that is competitive with or better than state-of-the-art methods that only learn point estimates of the reward function. Finally, Bayesian REX enables efficient high-confidence policy evaluation without access to samples of the reward function. These high-confidence performance bounds can be used to rank the performance and risk of a variety of evaluation policies and provide a way to detect reward hacking behaviors.
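The efficiency claim follows from the structure of the method: once the self-supervised encoder is frozen, the reward is modeled as linear in the learned features, so posterior inference reduces to MCMC over a low-dimensional weight vector under a pairwise (Bradley-Terry-style) preference likelihood. Below is a minimal sketch of that inference step, not the paper's exact implementation: the random features stand in for the pretrained encoder's per-demonstration embeddings, the synthetic preference labels, the unit-norm random-walk Metropolis proposal, and names like `log_likelihood` and `mcmc` are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

d = 16                    # dimension of the pretrained feature encoding
n_demos = 20
# Placeholder for phi(tau_i): the frozen encoder's embedding of each demo.
features = rng.normal(size=(n_demos, d))
# Synthetic ranking: demo i+1 is preferred over demo i.
prefs = [(i, i + 1) for i in range(n_demos - 1)]
beta = 1.0                # inverse temperature of the preference likelihood

def log_likelihood(w):
    """Bradley-Terry-style pairwise preference likelihood over demo returns."""
    returns = features @ w        # predicted return R_i = w . phi(tau_i)
    ll = 0.0
    for i, j in prefs:            # j preferred over i
        # log P(tau_j > tau_i) = beta*R_j - log(exp(beta*R_i) + exp(beta*R_j))
        ll += beta * returns[j] - np.logaddexp(beta * returns[i], beta * returns[j])
    return ll

def mcmc(n_samples=20_000, step=0.05):
    """Random-walk Metropolis over reward weights, constrained to unit norm."""
    w = rng.normal(size=d)
    w /= np.linalg.norm(w)
    ll = log_likelihood(w)
    samples = np.empty((n_samples, d))
    for t in range(n_samples):
        prop = w + step * rng.normal(size=d)
        prop /= np.linalg.norm(prop)
        prop_ll = log_likelihood(prop)
        if np.log(rng.random()) < prop_ll - ll:   # Metropolis accept/reject
            w, ll = prop, prop_ll
        samples[t] = w
    return samples

posterior = mcmc()
# High-confidence evaluation: given a policy's feature expectations, each
# posterior sample induces a predicted return; a low quantile of that
# distribution serves as a risk-sensitive lower bound on performance.
phi_pi = features.mean(axis=0)     # placeholder feature expectations
returns_dist = posterior @ phi_pi
print("0.05-quantile return bound:", np.quantile(returns_dist, 0.05))
```

The closing lines illustrate the high-confidence policy evaluation idea from the abstract: because every posterior sample is just a weight vector, evaluating a policy under the entire posterior is a single matrix-vector product, and ranking policies by a lower quantile of their return distribution flags reward-hacking policies whose mean return looks good but whose worst-case posterior return is poor.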
Author Information
Daniel Brown (University of Texas at Austin)
Russell Coleman (University of Texas at Austin)
Ravi Srinivasan (University of Texas at Austin)
Scott Niekum (University of Texas at Austin)
More from the Same Authors
- 2022: A Study of Causal Confusion in Preference-Based Reward Learning
  Jeremy Tien · Zhiyang He · Zackory Erickson · Anca Dragan · Daniel S Brown
- 2023 Poster: Contextual Reliability: When Different Features Matter in Different Contexts
  Gaurav Ghosal · Amrith Setlur · Daniel S Brown · Anca Dragan · Aditi Raghunathan
- 2021: Scaling up Probabilistic Safe Learning
  Scott Niekum
- 2021 Poster: Policy Gradient Bayesian Robust Optimization for Imitation Learning
  Zaynah Javed · Daniel Brown · Satvik Sharma · Jerry Zhu · Ashwin Balakrishna · Marek Petrik · Anca Dragan · Ken Goldberg
- 2021 Spotlight: Policy Gradient Bayesian Robust Optimization for Imitation Learning
  Zaynah Javed · Daniel Brown · Satvik Sharma · Jerry Zhu · Ashwin Balakrishna · Marek Petrik · Anca Dragan · Ken Goldberg
- 2021 Poster: Value Alignment Verification
  Daniel Brown · Jordan Schneider · Anca Dragan · Scott Niekum
- 2021 Spotlight: Value Alignment Verification
  Daniel Brown · Jordan Schneider · Anca Dragan · Scott Niekum
- 2019 Poster: Extrapolating Beyond Suboptimal Demonstrations via Inverse Reinforcement Learning from Observations
  Daniel Brown · Wonjoon Goo · Prabhat Nagarajan · Scott Niekum
- 2019 Oral: Extrapolating Beyond Suboptimal Demonstrations via Inverse Reinforcement Learning from Observations
  Daniel Brown · Wonjoon Goo · Prabhat Nagarajan · Scott Niekum
- 2019 Poster: Importance Sampling Policy Evaluation with an Estimated Behavior Policy
  Josiah Hanna · Scott Niekum · Peter Stone
- 2019 Oral: Importance Sampling Policy Evaluation with an Estimated Behavior Policy
  Josiah Hanna · Scott Niekum · Peter Stone
- 2017 Poster: Data-Efficient Policy Evaluation Through Behavior Policy Search
  Josiah Hanna · Philip S. Thomas · Peter Stone · Scott Niekum
- 2017 Talk: Data-Efficient Policy Evaluation Through Behavior Policy Search
  Josiah Hanna · Philip S. Thomas · Peter Stone · Scott Niekum