We consider the problem of learning the best possible policy from a fixed dataset, known as offline reinforcement learning (RL). A common class of existing offline RL methods is policy regularization, which typically constrains the learned policy to the distribution or the support of the behavior policy. However, distribution and support constraints are overly conservative since they both force the policy to choose actions similar to those of the behavior policy at a given state. This limits the learned policy's performance, especially when the behavior policy is sub-optimal. In this paper, we find that regularizing the policy towards the nearest state-action pair can be more effective and thus propose Policy Regularization with Dataset Constraint (PRDC). When updating the policy at a given state, PRDC searches the entire dataset for the nearest state-action sample and then restricts the policy with the action of this sample. Unlike previous works, PRDC can guide the policy with proper behaviors from the dataset, allowing it to choose actions that do not appear together with the given state in the dataset. It is a softer constraint that still retains enough conservatism against out-of-distribution actions. Empirical evidence and theoretical analysis show that PRDC can alleviate offline RL's fundamentally challenging value-overestimation issue with a bounded performance gap. Moreover, on a set of locomotion and navigation tasks, PRDC achieves state-of-the-art performance compared with existing methods. Code is available at https://github.com/LAMDA-RL/PRDC
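The abstract describes the core operation of PRDC: when updating the policy at a state, search the dataset for the state-action sample nearest to the current state paired with the policy's action, then penalize the distance to that sample. Below is a minimal sketch of such a regularizer, not the authors' exact implementation (see the linked repository): the dataset arrays, their dimensions, the KD-tree search, and the state-weighting coefficient `beta` are illustrative assumptions.

```python
import numpy as np
import torch
from scipy.spatial import cKDTree

# Placeholder dataset of (state, action) pairs.
states = np.random.randn(10_000, 17).astype(np.float32)
actions = np.random.randn(10_000, 6).astype(np.float32)
beta = 2.0  # assumed weight balancing state vs. action distance in the search

# Build a nearest-neighbor index over concatenated (beta * state, action) keys once.
keys = np.concatenate([beta * states, actions], axis=1)
index = cKDTree(keys)

def dataset_constraint_loss(policy: torch.nn.Module,
                            batch_states: torch.Tensor) -> torch.Tensor:
    """Penalize the distance from (beta*s, pi(s)) to its nearest dataset sample."""
    pi_actions = policy(batch_states)  # differentiable actions from the actor
    query = torch.cat([beta * batch_states, pi_actions], dim=1)
    # The nearest-neighbor lookup itself is non-differentiable, so detach for the search...
    _, nn_idx = index.query(query.detach().cpu().numpy(), k=1)
    nearest = torch.as_tensor(keys[nn_idx], dtype=query.dtype, device=query.device)
    # ...then penalize the differentiable squared distance to the retrieved sample;
    # gradients flow only through pi(s), pulling it toward the nearest dataset action.
    return ((query - nearest) ** 2).sum(dim=1).mean()
```

In the setting described in the abstract, a term like this would be added, with a trade-off coefficient, to an off-policy actor objective (e.g., a TD3-style policy loss), so the policy maximizes value while staying close to the nearest behavior in the dataset.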
Author Information
Yuhang Ran (Nanjing University)
Yi-Chen Li (Nanjing University)
Fuxiang Zhang (Nanjing University)
Zongzhang Zhang (Nanjing University)
Yang Yu (Nanjing University)
More from the Same Authors
- 2023: How to Improve Imitation Learning Performance with Sub-optimal Supplementary Data?
  Ziniu Li · Tian Xu · Zeyu Qin · Yang Yu · Zhiquan Luo
- 2023 Poster: Retrosynthetic Planning with Dual Value Networks
  Guoqing Liu · Di Xue · Shufang Xie · Yingce Xia · Austin Tripp · Krzysztof Maziarz · Marwin Segler · Tao Qin · Zongzhang Zhang · Tie-Yan Liu
- 2023 Poster: Model-Bellman Inconsistency for Model-based Offline Reinforcement Learning
  Yihao Sun · Jiaji Zhang · Chengxing Jia · Haoxin Lin · Junyin Ye · Yang Yu
- 2022 Poster: The Teaching Dimension of Regularized Kernel Learners
  Hong Qian · Xu-Hui Liu · Chen-Xi Su · Aimin Zhou · Yang Yu
- 2022 Spotlight: The Teaching Dimension of Regularized Kernel Learners
  Hong Qian · Xu-Hui Liu · Chen-Xi Su · Aimin Zhou · Yang Yu
- 2021: RL Research-to-RealLife Gap Panel
  Craig Buhr · Jeff Mendenhall · Yang Yu · Matthew Taylor