
Offline Reinforcement Learning with Imbalanced Datasets
Li Jiang · Sijie Cheng · Jielin Qiu · Victor Chan · Ding Zhao

The prevalent use of benchmarks in current offline reinforcement learning (RL) research has led to a neglect of the imbalance of real-world dataset distributions in the development of models. Real-world offline RL datasets are often imbalanced over the state space owing to the challenge of exploration or to safety constraints. In this paper, we specify properties of imbalanced datasets in offline RL, where the state coverage follows a power-law distribution characterized by skewed policies. Theoretically and empirically, we show that typical offline RL methods based on distributional constraints, such as conservative Q-learning (CQL), are ineffective at extracting policies from imbalanced datasets. Inspired by natural intelligence, we propose a novel offline RL method that augments CQL with a retrieval process to recall past related experiences, effectively alleviating the challenges posed by imbalanced datasets. We evaluate our method on several tasks with varying levels of dataset imbalance, using a variant of D4RL. Empirical results demonstrate the superiority of our method over other baselines.
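The retrieval idea in the abstract can be sketched as a nearest-neighbor lookup over the dataset's states: for a query state, recall the most similar stored transitions and add them to the training batch. This is a hypothetical illustration only; the function name `retrieve_related`, the Euclidean metric, the Pareto-based power-law buffer, and all parameters are assumptions for exposition, not the paper's actual mechanism.

```python
import numpy as np

def retrieve_related(buffer_states, query_state, k=5):
    """Return indices of the k transitions whose states are closest
    (Euclidean distance) to the query state -- a hypothetical
    retrieval step for augmenting a CQL training batch."""
    dists = np.linalg.norm(buffer_states - query_state, axis=1)
    return np.argsort(dists)[:k]

# Build an imbalanced buffer: state visitation follows a power law,
# so a few regions of the state space dominate (assumed setup).
rng = np.random.default_rng(0)
n = 1000
radii = rng.pareto(a=3.0, size=n)          # heavy-tailed magnitudes
angles = rng.uniform(0.0, 2.0 * np.pi, size=n)
buffer_states = np.stack(
    [radii * np.cos(angles), radii * np.sin(angles)], axis=1
)

query = np.array([2.0, 0.0])               # a rarely visited state
idx = retrieve_related(buffer_states, query, k=5)
batch = buffer_states[idx]                 # recalled experiences
```

The retrieved `batch` would then be mixed into the usual sampled minibatch before the CQL update, so that rarely visited states still contribute related experience during training.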

Author Information

Li Jiang (Tsinghua University)
Sijie Cheng (Tsinghua University)
Jielin Qiu (Carnegie Mellon University)
Victor Chan (TBSI)
Ding Zhao (Carnegie Mellon University)
