Objective Robustness in Deep Reinforcement Learning
Lauro Langosco di Langosco · Lee Sharkey
We study objective robustness failures, a type of out-of-distribution robustness failure in reinforcement learning (RL). Objective robustness failures occur when an RL agent retains its capabilities out-of-distribution yet pursues the wrong objective. This kind of failure presents different risks than the robustness problems usually considered in the literature, since it involves agents that leverage their capabilities to pursue the wrong objective rather than simply failing to do anything useful. We provide the first explicit empirical demonstrations of objective robustness failures and present a partial characterization of their causes.
Author Information
Lauro Langosco di Langosco (ETH)
Lee Sharkey (ETHZ)
More from the Same Authors
- 2022 Poster: Goal Misgeneralization in Deep Reinforcement Learning
  Lauro Langosco di Langosco · Jack Koch · Lee Sharkey · Jacob Pfau · David Krueger
- 2022 Spotlight: Goal Misgeneralization in Deep Reinforcement Learning
  Lauro Langosco di Langosco · Jack Koch · Lee Sharkey · Jacob Pfau · David Krueger