
Policy Contrastive Imitation Learning
Jialei Huang · Zhao-Heng Yin · Yingdong Hu · Yang Gao

Wed Jul 26 05:00 PM -- 06:30 PM (PDT) @ Exhibit Hall 1 #304

Adversarial imitation learning (AIL) is a popular method that has recently achieved considerable success. However, its performance remains unsatisfactory on more challenging tasks. We find that a major cause is the low quality of the AIL discriminator's representation: because the discriminator is trained via binary classification, it does not necessarily distinguish the policy from the expert in a meaningful way, and the resulting reward may therefore not be meaningful either. We propose a new method, Policy Contrastive Imitation Learning (PCIL), to resolve this issue. PCIL learns a contrastive representation space by anchoring on different policies and uses a smooth cosine-similarity-based reward to encourage imitation. The proposed representation learning objective can be viewed as a stronger version of the AIL objective and provides a more meaningful comparison between the agent and the expert. From a theoretical perspective, we establish the validity of our method using the apprenticeship learning framework. Furthermore, our empirical evaluation on the DeepMind Control suite demonstrates that PCIL achieves state-of-the-art performance. Finally, qualitative results suggest that PCIL builds a smoother and more meaningful representation space for imitation learning.
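To make the two ingredients of the abstract concrete, here is a minimal sketch of (a) an InfoNCE-style contrastive objective that pulls an anchor embedding toward expert samples and away from policy samples, and (b) a smooth cosine-similarity-based reward. This is an illustrative reconstruction in NumPy, not the paper's implementation: the function names (`info_nce_loss`, `pcil_style_reward`) and the exact reward scaling are assumptions, and the paper's actual objective and architecture may differ.

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine similarity between two embedding vectors, with a small
    # epsilon to avoid division by zero for degenerate embeddings.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def info_nce_loss(anchor, positive, negatives, temperature=0.1):
    # Generic InfoNCE-style contrastive loss: the anchor (e.g. an expert
    # state embedding) should be more similar to the positive (another
    # expert sample) than to the negatives (agent/policy samples).
    sims = [cosine_similarity(anchor, positive)]
    sims += [cosine_similarity(anchor, n) for n in negatives]
    logits = np.array(sims) / temperature
    logits -= logits.max()  # numerical stability before softmax
    probs = np.exp(logits) / np.exp(logits).sum()
    return float(-np.log(probs[0]))

def pcil_style_reward(state_embedding, expert_anchor):
    # Hypothetical smooth reward: cosine similarity between the current
    # state's embedding and an expert anchor, rescaled from [-1, 1]
    # to [0, 1] so it can be used directly as an RL reward signal.
    return 0.5 * (cosine_similarity(state_embedding, expert_anchor) + 1.0)
```

Under this sketch, a state whose embedding aligns with the expert anchor earns a reward near 1, while an orthogonal embedding earns about 0.5, giving the agent a smooth gradient toward expert-like behavior rather than a binary classifier score.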

Author Information

Jialei Huang (Tsinghua University)
Zhao-Heng Yin (University of California, Berkeley)
Yingdong Hu (Tsinghua University)
Yang Gao (Shanghai Qizhi Institute)
