

On the Importance of Feature Decorrelation for Unsupervised Representation Learning in Reinforcement Learning

Hojoon Lee · Koanho Lee · Dongyoon Hwang · Hyunho Lee · Byungkun Lee · Jaegul Choo

Exhibit Hall 1 #726


Recently, unsupervised representation learning (URL) has improved the sample efficiency of reinforcement learning (RL) by pretraining a model on a large unlabeled dataset. The underlying principle of these methods is to learn temporally predictive representations by predicting future states in the latent space. However, an important challenge of this approach is representational collapse, where the subspace of the latent representations collapses into a low-dimensional manifold. To address this issue, we propose a novel URL framework that causally predicts future states while increasing the dimension of the latent manifold by decorrelating the features in the latent space. Through extensive empirical studies, we demonstrate that our framework effectively learns predictive representations without collapse, which significantly improves the sample efficiency of state-of-the-art URL methods on the Atari 100k benchmark. The code is available at
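The decorrelation idea described above can be illustrated with a minimal sketch (not the authors' exact objective): standardize a batch of latent features, form the empirical correlation matrix, and penalize its off-diagonal entries, which pushes features toward being mutually uncorrelated and keeps the latent manifold from collapsing onto a few dimensions. The function name and shapes here are illustrative assumptions.

```python
import numpy as np

def decorrelation_loss(z):
    """Sum of squared off-diagonal entries of the feature correlation matrix.

    z: latent features of shape (batch, dim). Each feature is standardized,
    the (dim, dim) empirical correlation matrix is computed, and off-diagonal
    terms are squared and summed; minimizing this encourages decorrelated,
    higher-rank representations.
    """
    z = (z - z.mean(axis=0)) / (z.std(axis=0) + 1e-8)  # standardize per feature
    corr = (z.T @ z) / z.shape[0]                      # empirical correlation matrix
    off_diag = corr - np.diag(np.diag(corr))           # zero out the diagonal
    return float((off_diag ** 2).sum())
```

In a full URL pipeline this term would be added to the future-state prediction loss; fully redundant features (identical columns) yield a large penalty, while independent features yield a penalty near zero.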
