

Poster

Robust Task Representations for Offline Meta-Reinforcement Learning via Contrastive Learning

Haoqi Yuan · Zongqing Lu

Hall E #604

Keywords: [ RL: Deep RL ] [ DL: Other Representation Learning ] [ MISC: Representation Learning ] [ DL: Self-Supervised Learning ] [ RL: Batch/Offline ] [ MISC: Transfer, Multitask and Meta-learning ]


Abstract:

We study offline meta-reinforcement learning, a practical reinforcement learning paradigm that learns from offline data to adapt to new tasks. The distribution of offline data is determined jointly by the behavior policy and the task. Existing offline meta-reinforcement learning algorithms cannot distinguish these factors, making task representations unstable under changes in behavior policies. To address this problem, we propose a contrastive learning framework for task representations that are robust to the distribution mismatch of behavior policies between training and testing. We design a bi-level encoder structure, use mutual information maximization to formalize task representation learning, derive a contrastive learning objective, and introduce several approaches to approximate the true distribution of negative pairs. Experiments on a variety of offline meta-reinforcement learning benchmarks demonstrate the advantages of our method over prior methods, especially in generalization to out-of-distribution behavior policies.
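To make the core idea concrete, below is a minimal sketch of an InfoNCE-style contrastive objective for a task encoder, the standard way mutual information maximization is turned into a trainable loss. This is an illustrative assumption, not the paper's released code: it omits the bi-level encoder structure and the proposed approximations of the negative-pair distribution, and all names (`TaskEncoder`, `info_nce_loss`, the context shapes) are hypothetical.

```python
# Illustrative sketch (not the authors' implementation): an InfoNCE-style
# contrastive loss for task encoders in offline meta-RL. Positive pairs are
# two context batches from the same task (ideally gathered by different
# behavior policies); negatives are the other tasks in the minibatch.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TaskEncoder(nn.Module):
    """Maps a context batch of transitions (s, a, r, s') to a task embedding."""
    def __init__(self, transition_dim: int, hidden_dim: int = 128, z_dim: int = 16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(transition_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, z_dim),
        )

    def forward(self, context: torch.Tensor) -> torch.Tensor:
        # context: (num_tasks, num_transitions, transition_dim)
        # Mean-pool per-transition features into one embedding per task.
        return self.net(context).mean(dim=1)

def info_nce_loss(z_query: torch.Tensor, z_key: torch.Tensor, temperature: float = 0.1):
    """Each task's query embedding should match its own key embedding
    (positive) and be pushed away from the other tasks' keys (negatives)."""
    z_query = F.normalize(z_query, dim=-1)
    z_key = F.normalize(z_key, dim=-1)
    logits = z_query @ z_key.t() / temperature           # (T, T) similarity matrix
    labels = torch.arange(z_query.size(0), device=z_query.device)
    return F.cross_entropy(logits, labels)

# Usage: two context batches per task, drawn from different behavior policies,
# so the encoder is encouraged to capture task identity rather than
# policy-specific statistics.
encoder = TaskEncoder(transition_dim=20)
ctx_a = torch.randn(8, 64, 20)   # 8 tasks, 64 transitions each (behavior policy A)
ctx_b = torch.randn(8, 64, 20)   # same 8 tasks, transitions from behavior policy B
loss = info_nce_loss(encoder(ctx_a), encoder(ctx_b))
loss.backward()
```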
