

Poster in Workshop: ICML 2021 Workshop on Unsupervised Reinforcement Learning

Disentangled Predictive Representation for Meta-Reinforcement Learning

Sephora Madjiheurem · Laura Toni


Abstract:

A major challenge in reinforcement learning is the design of agents that are able to generalize across tasks that share common dynamics. A viable solution is meta-reinforcement learning, which identifies common structure among past tasks that can then be generalized to new tasks (meta-test). Prior works learn the meta-representation jointly while solving tasks, resulting in representations that do not generalize well across policies, leading to sample inefficiency during the meta-test phase. In this work, we introduce state2vec, an efficient and low-complexity unsupervised framework for learning disentangled representations that are more general.
The state embedding vectors learned with state2vec capture the geometry of the underlying state space, resulting in high-quality basis functions for linear value function approximation.
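To make the last point concrete, here is a minimal sketch of linear value function approximation on top of learned state embeddings. The embedding table `phi`, the sampled states, and the return targets are all hypothetical placeholders, not the authors' state2vec implementation; the sketch only illustrates the generic form V(s) ≈ wᵀφ(s).

```python
import numpy as np

# Hypothetical embedding table: phi[s] stands in for the state2vec vector of state s.
num_states, embed_dim = 100, 16
rng = np.random.default_rng(0)
phi = rng.normal(size=(num_states, embed_dim))   # placeholder for learned embeddings

# Placeholder training data: visited states and their sampled returns.
states = rng.integers(0, num_states, size=500)
returns = rng.normal(size=500)

# Linear value-function approximation: V(s) ~= w^T phi(s),
# fit by least squares on (embedding, return) pairs.
X = phi[states]                                  # design matrix of embeddings
w, *_ = np.linalg.lstsq(X, returns, rcond=None)  # learned weight vector
V = phi @ w                                      # approximate values for all states
```

If the embeddings capture the geometry of the state space, nearby states receive similar feature vectors, so a single weight vector suffices for a smooth value estimate over the whole state space.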
