The difficulty of specifying rewards for many real-world problems has led to an increased focus on learning rewards from human feedback, such as demonstrations. However, there are often many different reward functions that explain the human feedback, leaving agents uncertain about the true reward function. While most policy optimization approaches handle this uncertainty by optimizing for expected performance, many applications demand risk-averse behavior. We derive a novel policy gradient-style robust optimization approach, PG-BROIL, which optimizes a soft-robust objective that balances expected performance and risk. To the best of our knowledge, PG-BROIL is the first policy optimization algorithm that is robust to a distribution of reward hypotheses and can scale to continuous MDPs. Results suggest that PG-BROIL can produce a family of behaviors ranging from risk-neutral to risk-averse, and that it outperforms state-of-the-art imitation learning algorithms when learning from ambiguous demonstrations by hedging against uncertainty rather than seeking to uniquely identify the demonstrator's reward function.
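To make the soft-robust objective concrete, here is a minimal sketch under stated assumptions: a finite posterior over reward hypotheses with probabilities p_i and Monte-Carlo estimates rho_i of the policy's expected return under each hypothesis. The function name, the CVaR convention (expectation over the worst 1 - alpha posterior tail), and the omitted correction for posterior mass tied exactly at the VaR threshold are illustrative choices, not the authors' released implementation.

```python
import numpy as np

def soft_robust_weights(returns, probs, lam=0.5, alpha=0.95):
    """Hypothetical per-hypothesis weights for a soft-robust objective of
    the form lam * E[rho] + (1 - lam) * CVaR_alpha[rho], where the
    expectation and CVaR are taken over a finite posterior of reward
    hypotheses (illustrative sketch, not the paper's exact algorithm)."""
    returns = np.asarray(returns, dtype=float)  # rho_i per hypothesis
    probs = np.asarray(probs, dtype=float)      # posterior mass p_i

    # Value-at-risk sigma: smallest return such that the posterior mass
    # of hypotheses at or below it reaches the worst (1 - alpha) tail.
    order = np.argsort(returns)
    cum_mass = np.cumsum(probs[order])
    sigma = returns[order][np.searchsorted(cum_mass, 1.0 - alpha)]

    # CVaR subgradient weights: hypotheses whose return falls at or below
    # sigma are up-weighted by 1 / (1 - alpha); the rest contribute zero.
    # (A correction for mass tied exactly at sigma is omitted here.)
    cvar_w = probs * (returns <= sigma) / (1.0 - alpha)

    # Blend the risk-neutral and risk-averse weights.
    return lam * probs + (1.0 - lam) * cvar_w
```

In a REINFORCE-style update, these weights would multiply the per-hypothesis advantages, so one batch of rollouts evaluated under every reward hypothesis yields the soft-robust gradient; lam = 1 recovers the risk-neutral posterior mean and lam = 0 the pure CVaR objective, tracing out the risk-neutral-to-risk-averse family described above.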
Author Information
Zaynah Javed (UC Berkeley)
Daniel Brown (UC Berkeley)
Satvik Sharma (UC Berkeley)
Jerry Zhu (UC Berkeley)
Ashwin Balakrishna (UC Berkeley)
Marek Petrik (University of New Hampshire)
Anca Dragan (UC Berkeley)
Ken Goldberg (UC Berkeley)
Related Events (a corresponding poster, oral, or spotlight)
- 2021 Poster: Policy Gradient Bayesian Robust Optimization for Imitation Learning »
  Wed. Jul 21st 04:00 -- 06:00 AM
More from the Same Authors
- 2022: A Study of Causal Confusion in Preference-Based Reward Learning »
  Jeremy Tien · Zhiyang He · Zackory Erickson · Anca Dragan · Daniel S Brown
- 2023 Poster: Contextual Reliability: When Different Features Matter in Different Contexts »
  Gaurav Ghosal · Amrith Setlur · Daniel S Brown · Anca Dragan · Aditi Raghunathan
- 2023 Poster: Policy Gradient in Robust MDPs with Global Convergence Guarantee »
  Qiuhao Wang · Chin Pang Ho · Marek Petrik
- 2023 Poster: Automatically Auditing Large Language Models via Discrete Optimization »
  Erik Jones · Anca Dragan · Aditi Raghunathan · Jacob Steinhardt
- 2023 Workshop: Interactive Learning with Implicit Human Feedback »
  Andi Peng · Akanksha Saran · Andreea Bobu · Tengyang Xie · Pierre-Yves Oudeyer · Anca Dragan · John Langford
- 2022 Poster: Estimating and Penalizing Induced Preference Shifts in Recommender Systems »
  Micah Carroll · Anca Dragan · Stuart Russell · Dylan Hadfield-Menell
- 2022 Spotlight: Estimating and Penalizing Induced Preference Shifts in Recommender Systems »
  Micah Carroll · Anca Dragan · Stuart Russell · Dylan Hadfield-Menell
- 2022: Learning to interact: PARTIAL OBSERVABILITY + GAME Theory of mind on steroids »
  Anca Dragan
- 2022: Learning to interact: PARTIAL OBSERVABILITY The actions you take as part of the task are the queries! »
  Anca Dragan
- 2022: Q&A »
  Dorsa Sadigh · Anca Dragan
- 2022 Tutorial: Learning for Interactive Agents »
  Dorsa Sadigh · Anca Dragan
- 2022: Learning objectives and preferences: WHAT DATA? From diverse types of human data »
  Anca Dragan
- 2021 Poster: Resource Allocation in Multi-armed Bandit Exploration: Overcoming Sublinear Scaling with Adaptive Parallelism »
  Brijen Thananjeyan · Kirthevasan Kandasamy · Ion Stoica · Michael Jordan · Ken Goldberg · Joseph E Gonzalez
- 2021 Oral: Resource Allocation in Multi-armed Bandit Exploration: Overcoming Sublinear Scaling with Adaptive Parallelism »
  Brijen Thananjeyan · Kirthevasan Kandasamy · Ion Stoica · Michael Jordan · Ken Goldberg · Joseph E Gonzalez
- 2021 Poster: Value Alignment Verification »
  Daniel Brown · Jordan Schneider · Anca Dragan · Scott Niekum
- 2021 Spotlight: Value Alignment Verification »
  Daniel Brown · Jordan Schneider · Anca Dragan · Scott Niekum
- 2020: Invited Talk 7: Prof. Anca Dragan from UC Berkeley »
  Anca Dragan
- 2020: "Active Learning through Physically-embodied, Synthesized-from-“scratch” Queries" »
  Anca Dragan
- 2020 Poster: Learning Human Objectives by Evaluating Hypothetical Behavior »
  Siddharth Reddy · Anca Dragan · Sergey Levine · Shane Legg · Jan Leike
- 2020 Poster: Safe Imitation Learning via Fast Bayesian Reward Inference from Preferences »
  Daniel Brown · Russell Coleman · Ravi Srinivasan · Scott Niekum
- 2019 Poster: On the Feasibility of Learning, Rather than Assuming, Human Biases for Reward Inference »
  Rohin Shah · Noah Gundotra · Pieter Abbeel · Anca Dragan
- 2019 Oral: On the Feasibility of Learning, Rather than Assuming, Human Biases for Reward Inference »
  Rohin Shah · Noah Gundotra · Pieter Abbeel · Anca Dragan
- 2019 Poster: Extrapolating Beyond Suboptimal Demonstrations via Inverse Reinforcement Learning from Observations »
  Daniel Brown · Wonjoon Goo · Prabhat Nagarajan · Scott Niekum
- 2019 Oral: Extrapolating Beyond Suboptimal Demonstrations via Inverse Reinforcement Learning from Observations »
  Daniel Brown · Wonjoon Goo · Prabhat Nagarajan · Scott Niekum
- 2019 Poster: Learning a Prior over Intent via Meta-Inverse Reinforcement Learning »
  Kelvin Xu · Ellis Ratner · Anca Dragan · Sergey Levine · Chelsea Finn
- 2019 Oral: Learning a Prior over Intent via Meta-Inverse Reinforcement Learning »
  Kelvin Xu · Ellis Ratner · Anca Dragan · Sergey Levine · Chelsea Finn
- 2018 Poster: RLlib: Abstractions for Distributed Reinforcement Learning »
  Eric Liang · Richard Liaw · Robert Nishihara · Philipp Moritz · Roy Fox · Ken Goldberg · Joseph E Gonzalez · Michael Jordan · Ion Stoica
- 2018 Oral: RLlib: Abstractions for Distributed Reinforcement Learning »
  Eric Liang · Richard Liaw · Robert Nishihara · Philipp Moritz · Roy Fox · Ken Goldberg · Joseph E Gonzalez · Michael Jordan · Ion Stoica
- 2018 Poster: An Efficient, Generalized Bellman Update For Cooperative Inverse Reinforcement Learning »
  Dhruv Malik · Malayandi Palaniappan · Jaime Fisac · Dylan Hadfield-Menell · Stuart Russell · Anca Dragan
- 2018 Poster: Fast Bellman Updates for Robust MDPs »
  Chin Pang Ho · Marek Petrik · Wolfram Wiesemann
- 2018 Oral: Fast Bellman Updates for Robust MDPs »
  Chin Pang Ho · Marek Petrik · Wolfram Wiesemann
- 2018 Oral: An Efficient, Generalized Bellman Update For Cooperative Inverse Reinforcement Learning »
  Dhruv Malik · Malayandi Palaniappan · Jaime Fisac · Dylan Hadfield-Menell · Stuart Russell · Anca Dragan