Poster
State Entropy Maximization with Random Encoders for Efficient Exploration
Younggyo Seo · Lili Chen · Jinwoo Shin · Honglak Lee · Pieter Abbeel · Kimin Lee

Tue Jul 20 09:00 AM -- 11:00 AM (PDT) @ Virtual

Recent exploration methods have proven to be a recipe for improving sample-efficiency in deep reinforcement learning (RL). However, efficient exploration in high-dimensional observation spaces remains a challenge. This paper presents Random Encoders for Efficient Exploration (RE3), an exploration method that utilizes state entropy as an intrinsic reward. In order to estimate state entropy in environments with high-dimensional observations, we utilize a k-nearest neighbor entropy estimator in the low-dimensional representation space of a convolutional encoder. In particular, we find that the state entropy can be estimated in a stable and compute-efficient manner by utilizing a randomly initialized encoder, which is fixed throughout training. Our experiments show that RE3 significantly improves the sample-efficiency of both model-free and model-based RL methods on locomotion and navigation tasks from DeepMind Control Suite and MiniGrid benchmarks. We also show that RE3 allows learning diverse behaviors without extrinsic rewards, effectively improving sample-efficiency in downstream tasks.
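To make the abstract's two ingredients concrete, here is a minimal PyTorch sketch: a randomly initialized convolutional encoder frozen at construction, and a k-nearest-neighbor intrinsic reward computed in its latent space, of the form log(||y_i − y_i^(k-NN)|| + 1). The architecture, the 84x84 input size, the latent dimension, and all names below are illustrative assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn as nn

class RandomEncoder(nn.Module):
    """Randomly initialized conv encoder; its weights stay FIXED for all of training."""

    def __init__(self, obs_channels=3, latent_dim=50):
        super().__init__()
        # Layer sizes below assume 84x84 image observations (an illustrative choice).
        self.net = nn.Sequential(
            nn.Conv2d(obs_channels, 32, 3, stride=2), nn.ReLU(),
            nn.Conv2d(32, 32, 3, stride=2), nn.ReLU(),
            nn.Conv2d(32, 32, 3, stride=2), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * 9 * 9, latent_dim),
        )
        # The encoder is never updated: freeze every parameter at construction.
        for p in self.parameters():
            p.requires_grad_(False)

    @torch.no_grad()
    def forward(self, obs):
        return self.net(obs)


def knn_intrinsic_reward(latents, k=3):
    """Per-sample intrinsic reward r_i = log(||y_i - y_i^(k-NN)||_2 + 1),
    a k-nearest-neighbor proxy for state entropy (up to constants)."""
    dists = torch.cdist(latents, latents)                    # pairwise L2 distances
    knn_dists, _ = dists.topk(k + 1, dim=1, largest=False)   # k+1: self-distance is 0
    return torch.log(knn_dists[:, -1] + 1.0)                 # distance to k-th neighbor
```

A hypothetical usage, where the intrinsic reward is mixed into the RL update with an assumed weighting coefficient beta:

```python
encoder = RandomEncoder()
obs = torch.rand(256, 3, 84, 84)                 # a batch of image observations
r_int = knn_intrinsic_reward(encoder(obs), k=3)  # shape (256,)
# e.g. r_total = r_ext + beta * r_int before the policy/value update
```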

Author Information

Younggyo Seo (KAIST)
Lili Chen (UC Berkeley)
Jinwoo Shin (KAIST)
Honglak Lee (Google / U. Michigan)
Pieter Abbeel (UC Berkeley & Covariant)
Kimin Lee (UC Berkeley)
