Poster
Constrained Decision Transformer for Offline Safe Reinforcement Learning
Zuxin Liu · Zijian Guo · Yihang Yao · Zhepeng Cen · Wenhao Yu · Tingnan Zhang · Ding Zhao

Thu Jul 27 04:30 PM -- 06:00 PM (PDT) @ Exhibit Hall 1 #232
Safe reinforcement learning (RL) trains a constraint-satisfying policy by interacting with the environment. We aim to tackle a more challenging problem: learning a safe policy from an offline dataset. We study the offline safe RL problem from a novel multi-objective optimization perspective and propose the $\epsilon$-reducible concept to characterize problem difficulty. The inherent trade-off between safety and task performance inspires us to propose the constrained decision transformer (CDT), which can dynamically adjust this trade-off during deployment. Extensive experiments show the advantages of the proposed method in learning an adaptive, safe, robust, and high-reward policy. CDT outperforms its variants and strong offline safe RL baselines by a large margin with the same hyperparameters across all tasks, while retaining zero-shot adaptation to different constraint thresholds, making our approach more suitable for real-world RL under constraints.
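To make the conditioning idea concrete, below is a minimal sketch of a CDT-style sequence model. It is not the authors' implementation: the class, token layout, and hyperparameters are hypothetical, assuming a decision-transformer architecture where each timestep contributes a reward return-to-go token, a cost return-to-go token, a state token, and an action token. Conditioning on the cost-to-go token is what would allow a deployed policy to trade off reward and safety without retraining.

```python
import torch
import torch.nn as nn

class ConstrainedDecisionTransformerSketch(nn.Module):
    """Hypothetical sketch of a CDT-style model, not the paper's code.

    Each timestep contributes four tokens: reward return-to-go,
    cost return-to-go (the safety budget), state, and action.
    """

    def __init__(self, state_dim, act_dim, embed_dim=128,
                 n_layer=3, n_head=1, max_len=20):
        super().__init__()
        self.embed_rtg = nn.Linear(1, embed_dim)        # reward return-to-go
        self.embed_ctg = nn.Linear(1, embed_dim)        # cost return-to-go
        self.embed_state = nn.Linear(state_dim, embed_dim)
        self.embed_action = nn.Linear(act_dim, embed_dim)
        self.embed_time = nn.Embedding(max_len, embed_dim)
        layer = nn.TransformerEncoderLayer(
            d_model=embed_dim, nhead=n_head, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=n_layer)
        self.predict_action = nn.Linear(embed_dim, act_dim)

    def forward(self, rtg, ctg, states, actions, timesteps):
        # rtg, ctg: (B, T, 1); states: (B, T, state_dim);
        # actions: (B, T, act_dim); timesteps: (B, T) long tensor.
        B, T = states.shape[0], states.shape[1]
        t_emb = self.embed_time(timesteps)              # (B, T, embed_dim)
        # Interleave the four per-timestep tokens: rtg, ctg, state, action.
        tokens = torch.stack([
            self.embed_rtg(rtg) + t_emb,
            self.embed_ctg(ctg) + t_emb,
            self.embed_state(states) + t_emb,
            self.embed_action(actions) + t_emb,
        ], dim=2).reshape(B, 4 * T, -1)
        # Causal mask so each token attends only to the past.
        mask = nn.Transformer.generate_square_subsequent_mask(4 * T)
        h = self.transformer(tokens, mask=mask)
        # Predict the action from the hidden state at each state token
        # (index 4t + 2 in the interleaved sequence).
        return self.predict_action(h[:, 2::4])          # (B, T, act_dim)
```

Under this reading, lowering the target cost-to-go fed to the model at deployment tightens the safety budget without any retraining, which is the zero-shot adaptation to different constraint thresholds that the abstract describes.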

Author Information

Zuxin Liu (Carnegie Mellon University)
Zijian Guo (Carnegie Mellon University)
Yihang Yao (Carnegie Mellon University)
Zhepeng Cen (Carnegie Mellon University)
Wenhao Yu (Google)
Tingnan Zhang (Google)
Ding Zhao (Carnegie Mellon University)