
Adversarial Option-Aware Hierarchical Imitation Learning
Mingxuan Jing · Wenbing Huang · Fuchun Sun · Xiaojian Ma · Tao Kong · Chuang Gan · Lei Li

Wed Jul 21 07:40 AM -- 07:45 AM (PDT)

Learning skills for an agent from long-horizon, unannotated demonstrations remains a challenge. Existing approaches such as Hierarchical Imitation Learning (HIL) are prone to compounding errors or suboptimal solutions. In this paper, we propose Option-GAIL, a novel method for learning skills over long horizons. The key idea of Option-GAIL is to model the task hierarchy with options and to train the policy via generative adversarial optimization. In particular, we propose an Expectation-Maximization (EM)-style algorithm: an E-step that samples the options of the expert conditioned on the currently learned policy, and an M-step that updates the low- and high-level policies of the agent simultaneously to minimize the newly proposed option-occupancy measurement between the expert and the agent. We theoretically prove the convergence of the proposed algorithm. Experiments show that Option-GAIL consistently outperforms its counterparts across a variety of tasks.
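To make the option-occupancy idea concrete, here is a minimal, hypothetical sketch (not the authors' implementation) of an empirical occupancy measure over (state, option, action) triples and a divergence between expert and agent occupancies; in Option-GAIL this divergence is minimized adversarially in the M-step, whereas here we simply use total-variation distance on toy discrete trajectories. All function and variable names are illustrative assumptions.

```python
from collections import Counter

def option_occupancy(trajectories):
    """Empirical occupancy over (state, option, action) triples.
    A simplified, discrete stand-in for the paper's option-occupancy measurement."""
    counts = Counter()
    total = 0
    for traj in trajectories:
        for s, o, a in traj:
            counts[(s, o, a)] += 1
            total += 1
    return {k: v / total for k, v in counts.items()}

def occupancy_gap(expert_occ, agent_occ):
    """Total-variation distance between two occupancy measures.
    The M-step of Option-GAIL minimizes an adversarially estimated
    divergence of this kind between expert and agent."""
    keys = set(expert_occ) | set(agent_occ)
    return 0.5 * sum(abs(expert_occ.get(k, 0.0) - agent_occ.get(k, 0.0))
                     for k in keys)

# Toy trajectories: lists of (state, option, action) triples.
expert_trajs = [[(0, 0, 1), (1, 0, 1), (2, 1, 0)]]
agent_trajs  = [[(0, 0, 1), (1, 1, 0), (2, 1, 0)]]

rho_e = option_occupancy(expert_trajs)
rho_a = option_occupancy(agent_trajs)
gap = occupancy_gap(rho_e, rho_a)  # shrinks as the agent imitates the expert
```

In the full method, the options in the expert trajectories are latent; the E-step infers them under the current hierarchical policy before the occupancy matching of the M-step.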

Author Information

Mingxuan Jing (Tsinghua University)
Wenbing Huang (Tsinghua University)
Fuchun Sun (Tsinghua)
Xiaojian Ma (University of California, Los Angeles)
Tao Kong (Bytedance)
Chuang Gan (MIT-IBM Watson AI Lab)
Lei Li (ByteDance AI Lab)
