
Reinforcement Learning with Action-Free Pre-Training from Videos
Younggyo Seo · Kimin Lee · Stephen James · Pieter Abbeel

Wed Jul 20 03:30 PM -- 05:30 PM (PDT) @ Hall E #916

Recent unsupervised pre-training methods have been shown to be effective on language and vision domains by learning useful representations for multiple downstream tasks. In this paper, we investigate whether such unsupervised pre-training methods can also be effective for vision-based reinforcement learning (RL). To this end, we introduce a framework that learns representations useful for understanding dynamics via generative pre-training on videos. Our framework consists of two phases: we pre-train an action-free latent video prediction model, and then utilize the pre-trained representations for efficiently learning action-conditional world models on unseen environments. To incorporate additional action inputs during fine-tuning, we introduce a new architecture that stacks an action-conditional latent prediction model on top of the pre-trained action-free prediction model. Moreover, for better exploration, we propose a video-based intrinsic bonus that leverages pre-trained representations. We demonstrate that our framework significantly improves both the final performance and the sample efficiency of vision-based RL in a variety of manipulation and locomotion tasks. Code is available at https://github.com/younggyoseo/apv.
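The stacking idea in the abstract can be illustrated with a minimal sketch: a pre-trained action-free model predicts the next latent from the current one, and a fine-tuned action-conditional model is layered on top, combining that prediction with the action. This is a toy illustration under our own assumptions (NumPy, arbitrary dimensions and weight initializations), not the authors' actual APV implementation.

```python
# Toy sketch (assumption, not the authors' APV code) of stacking an
# action-conditional latent model on top of a pre-trained action-free model.
import numpy as np

rng = np.random.default_rng(0)

class ActionFreeLatentModel:
    """Pre-trained on videos: predicts the next latent from the current latent."""
    def __init__(self, latent_dim):
        self.W = rng.normal(scale=0.1, size=(latent_dim, latent_dim))

    def step(self, z):
        # z: (latent_dim,) -> action-free prediction of the next latent
        return np.tanh(self.W @ z)

class ActionConditionalModel:
    """Fine-tuning phase: stacked on the action-free model, it refines the
    action-free prediction using the action input."""
    def __init__(self, pretrained, latent_dim, action_dim):
        self.pretrained = pretrained  # pre-trained action-free model underneath
        self.Wz = rng.normal(scale=0.1, size=(latent_dim, latent_dim))
        self.Wa = rng.normal(scale=0.1, size=(latent_dim, action_dim))

    def step(self, z, a):
        h = self.pretrained.step(z)          # action-free next-latent prediction
        return np.tanh(self.Wz @ h + self.Wa @ a)  # refined with the action

latent_dim, action_dim = 8, 2
action_free = ActionFreeLatentModel(latent_dim)       # "pre-training" phase
model = ActionConditionalModel(action_free, latent_dim, action_dim)  # "fine-tuning"
z_next = model.step(np.zeros(latent_dim), np.ones(action_dim))
print(z_next.shape)  # (8,)
```

In the paper's actual setup the components are latent video prediction models trained with gradient descent; the point here is only the dataflow: the action enters above the pre-trained action-free pathway rather than replacing it.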

Author Information

Younggyo Seo (KAIST / UC Berkeley)
Kimin Lee (Google)
Stephen James (UC Berkeley)
Pieter Abbeel (UC Berkeley & Covariant)
