Poster in Workshop: Automated Reinforcement Learning: Exploring Meta-Learning, AutoML, and LLMs

Conditional Meta-Reinforcement Learning with State Representation

Yuxuan Sun · Laura Toni · Yiannis Andreopoulos

Sat 27 Jul 1 a.m. PDT — 2 a.m. PDT

Abstract:

Reinforcement Learning (RL) has achieved remarkable success in diverse areas, yet its sample inefficiency (the need for extensive environment interactions to learn optimal policies) remains a challenge. Meta-Reinforcement Learning (Meta-RL) addresses this by leveraging previously acquired knowledge, often by integrating contextual information into learning. This study examines conditional Meta-RL, investigating how context influences learning efficiency. We introduce a novel theoretical framework for both unconditional and conditional Meta-RL scenarios, focusing on approximating the value function with state representations in environments where the transition kernel is known. This framework lays the groundwork for understanding the advantages of conditional Meta-RL over unconditional approaches. Furthermore, we present a conditional Meta-RL algorithm that is shown to achieve a more than 50 percent increase in average return over the unconditional setting in MiniGrid environments.
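The abstract contrasts unconditional and conditional value-function approximation over a shared state representation. As a rough illustration only (the paper's actual formulation is not reproduced on this page), the sketch below assumes a linear value estimate V(s) ≈ w·φ(s), with the conditional variant selecting its weights from a task context c; all names and shapes here are hypothetical.

```python
import numpy as np

def phi(state, dim=8):
    """Hypothetical fixed state representation (feature map) phi(s)."""
    rng = np.random.default_rng(abs(hash(state)) % (2**32))
    return rng.standard_normal(dim)

def value_unconditional(state, w):
    """Unconditional-style estimate: V(s) ~ w . phi(s),
    with one weight vector shared across all tasks."""
    return w @ phi(state)

def value_conditional(state, context, W):
    """Conditional-style estimate: V(s, c) ~ w(c) . phi(s),
    with weights indexed by the task context c."""
    return W[context] @ phi(state)

# Toy usage: two tasks share phi but differ in context.
dim = 8
w_shared = np.ones(dim) / dim                  # single shared head
W_context = {0: np.ones(dim) / dim,            # per-context heads
             1: -np.ones(dim) / dim}
s = "minigrid-cell-3-4"
print(value_unconditional(s, w_shared))
print(value_conditional(s, 0, W_context), value_conditional(s, 1, W_context))
```

The point of the contrast is that the conditional estimator can represent task-dependent values over the same features, which is the kind of advantage the framework above is meant to quantify.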
