Supported Trust Region Optimization for Offline Reinforcement Learning
Yixiu Mao · Hongchang Zhang · Chen Chen · Yi Xu · Xiangyang Ji

Wed Jul 26 05:00 PM -- 06:30 PM (PDT) @ Exhibit Hall 1 #305

Offline reinforcement learning suffers from the out-of-distribution issue and extrapolation error. Most policy constraint methods regularize the density of the trained policy towards the behavior policy, which is overly restrictive in most cases. We propose Supported Trust Region optimization (STR), which performs trust region policy optimization with the policy constrained within the support of the behavior policy, thereby enjoying the less restrictive support constraint. We show that, in the absence of approximation and sampling errors, STR guarantees strict policy improvement until convergence to the optimal support-constrained policy in the dataset. When both errors are incorporated, STR still guarantees safe policy improvement at each step. Empirical results validate the theory of STR and demonstrate its state-of-the-art performance on MuJoCo locomotion domains and the much more challenging AntMaze domains.
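The core idea can be illustrated with a toy sketch: a KL-regularized (trust-region-style) policy improvement step in which probability mass is reweighted by exponentiated advantages, but only over actions inside the support of the behavior policy. This is a hypothetical, minimal illustration of the support-constraint concept, not the authors' exact STR algorithm; the function name, threshold `eps`, and the multiplicative update form are assumptions for exposition.

```python
import numpy as np

def supported_improvement_step(pi, q, behavior, temperature=1.0, eps=0.0):
    """One trust-region-style update restricted to supported actions.

    Illustrative sketch only (not the paper's exact update rule).
    pi:        current policy, shape (S, A), rows sum to 1
    q:         Q-values under pi, shape (S, A)
    behavior:  behavior policy densities, shape (S, A)
    eps:       support threshold; actions with behavior density <= eps
               are treated as out-of-support and get zero probability
    """
    v = (pi * q).sum(axis=1, keepdims=True)   # state values V(s)
    adv = q - v                               # advantages A(s, a)
    support = behavior > eps                  # in-support action mask
    # Multiplicative (KL-regularized) update, masked to the support:
    # out-of-support actions receive exactly zero probability.
    new_pi = pi * np.exp(adv / temperature) * support
    # Renormalize within the support of each state.
    new_pi /= new_pi.sum(axis=1, keepdims=True)
    return new_pi

# Toy check: the unsupported action never gains probability, while
# mass shifts toward the supported action with higher advantage.
pi = np.array([[0.5, 0.3, 0.2]])
q = np.array([[1.0, 2.0, 5.0]])
behavior = np.array([[0.6, 0.4, 0.0]])  # third action out of support
new_pi = supported_improvement_step(pi, q, behavior)
print(new_pi)  # third entry is exactly 0
```

Note the contrast with density constraints: a density-regularized update would keep the new policy close to `behavior` everywhere, whereas this support-constrained update is free to concentrate mass anywhere inside the support.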

Author Information

Yixiu Mao (Tsinghua University)
Hongchang Zhang (Tsinghua University)
Chen Chen (Qiyuan Lab)
Yi Xu (Alibaba Group (U.S.) Inc.)
Xiangyang Ji (Tsinghua University)