Poster

EqR: Equivariant Representations for Data-Efficient Reinforcement Learning

Arnab Kumar Mondal · Vineet Jain · Kaleem Siddiqi · Siamak Ravanbakhsh

Hall E #1026

Keywords: [ DL: Self-Supervised Learning ] [ RL: Function Approximation ] [ MISC: Representation Learning ] [ RL: Deep RL ]


Abstract:

We study a variety of notions of equivariance as an inductive bias in Reinforcement Learning (RL). In particular, we propose new mechanisms for learning representations that are equivariant both to the agent's actions and to symmetry transformations of state-action pairs. Whereas prior work on exploiting symmetries in deep RL can only incorporate predefined linear transformations, our approach allows non-linear symmetry transformations of state-action pairs to be learned from the data. This is achieved through 1) an equivariant Lie-algebraic parameterization of state and action encodings, 2) equivariant latent transition models, and 3) the incorporation of symmetry-based losses. We demonstrate the advantages of our method, which we call Equivariant Representations for RL (EqR), on Atari games in a data-efficient setting limited to 100K steps of interaction with the environment.
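
As a rough illustration of mechanisms 1) and 2), the sketch below shows one way an action can act on a learned latent state: each discrete action selects a learned Lie-algebra generator, and the latent state is advanced by the matrix exponential of that generator. This is a minimal PyTorch sketch under our own assumptions; the class name EquivariantTransition, the per-action generator parameterization, and all dimensions are illustrative and are not taken from the authors' code.

    import torch
    import torch.nn as nn

    class EquivariantTransition(nn.Module):
        """Sketch of an equivariant latent transition model.

        Each action indexes a learned Lie-algebra element (a latent_dim x
        latent_dim matrix); the transition applies the matrix exponential
        of that element to the latent state, so actions act on the latent
        space as group elements.
        """
        def __init__(self, latent_dim: int, num_actions: int):
            super().__init__()
            # One learned generator matrix per discrete action.
            self.generators = nn.Parameter(
                0.01 * torch.randn(num_actions, latent_dim, latent_dim)
            )

        def forward(self, z: torch.Tensor, action: torch.Tensor) -> torch.Tensor:
            # z: (batch, latent_dim); action: (batch,) integer action indices.
            A = self.generators[action]        # (batch, d, d) Lie-algebra elements
            G = torch.matrix_exp(A)            # corresponding group elements
            return torch.bmm(G, z.unsqueeze(-1)).squeeze(-1)

Acting through torch.matrix_exp keeps each transition a smooth group action on the latent space, which is what lets the representation respect the agent's actions rather than treating the dynamics as an arbitrary learned map.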