Learning Pseudometric-based Action Representations for Offline Reinforcement Learning
Pengjie Gu · Mengchen Zhao · Chen Chen · Dong Li · Jianye Hao · Bo An

Wed Jul 20 03:30 PM -- 05:30 PM (PDT) @ Hall E #623

Offline reinforcement learning is a promising approach for practical applications since it does not require interactions with real-world environments. However, existing offline RL methods only work well in environments with continuous or small discrete action spaces. In environments with large discrete action spaces, such as recommender systems and dialogue systems, the performance of existing methods degrades drastically because they suffer from inaccurate value estimation for a large proportion of out-of-distribution (o.o.d.) actions. While recent works have demonstrated that online RL benefits from incorporating semantic information into action representations, they nevertheless fail to learn reasonable relative distances between action representations, which is key for offline RL to reduce the influence of o.o.d. actions. This paper proposes an action representation learning framework for offline RL based on a pseudometric, which measures both the behavioral relation and the data-distributional relation between actions. We provide theoretical analysis of the continuity of the expected Q-values and of offline policy improvement using the learned action representations. Experimental results show that our methods significantly improve the performance of two typical offline RL methods in environments with large and discrete action spaces.
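To make the central idea concrete, the sketch below shows one possible form of a pseudometric over discrete actions that combines a behavioral term (differences in rewards and next-state distributions) with a data-distributional term (differences in how often each action appears in the offline dataset). This is an illustrative assumption, not the paper's actual definition; the function name `action_pseudometric` and the mixing weight `alpha` are hypothetical.

```python
import numpy as np

def action_pseudometric(rewards, next_state_probs, data_freqs, a1, a2, alpha=0.5):
    """Illustrative pseudometric between two discrete actions a1, a2.

    rewards:          (S, A) array of per-state rewards r(s, a)
    next_state_probs: (S, A, S') array of transition distributions p(s' | s, a)
    data_freqs:       (A,) empirical frequency of each action in the dataset

    Behavioral term: mean absolute reward gap plus mean total-variation
    distance between next-state distributions, averaged over states.
    Data-distributional term: gap in empirical dataset frequencies.
    Both terms (and hence their convex combination) satisfy symmetry,
    d(a, a) = 0, and the triangle inequality, so the result is a
    pseudometric. The specific weighting is a hypothetical choice.
    """
    reward_gap = np.mean(np.abs(rewards[:, a1] - rewards[:, a2]))
    tv_gap = np.mean(
        0.5 * np.abs(next_state_probs[:, a1, :] - next_state_probs[:, a2, :]).sum(-1)
    )
    behavioral = reward_gap + tv_gap
    distributional = abs(data_freqs[a1] - data_freqs[a2])
    return alpha * behavioral + (1 - alpha) * distributional
```

Actions that behave alike and appear with similar frequency in the data end up close under this distance; embedding actions so that representation distances respect it is what lets a Q-function generalize smoothly across a large action space.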

Author Information

Pengjie Gu (Nanyang Technological University)
Mengchen Zhao (Huawei Noah's Ark Lab)
Chen Chen (Huawei Noah's Ark Lab)
Dong Li (Huawei Noah's Ark Lab)
Jianye Hao (Huawei Noah's Ark Lab)
Bo An (Nanyang Technological University)
