Learning to Weight Imperfect Demonstrations
Yunke Wang · Chang Xu · Bo Du · Honglak Lee

Tue Jul 20 09:00 PM -- 11:00 PM (PDT)

This paper investigates how to weight imperfect expert demonstrations for generative adversarial imitation learning (GAIL). The agent is expected to perform the behaviors demonstrated by experts, but in many applications experts can also make mistakes, and their flawed demonstrations can mislead or slow the agent's learning. Recent methods for imitation learning from imperfect demonstrations mostly rely on preference or confidence scores to distinguish imperfect demonstrations. However, this auxiliary information must be collected with the help of an oracle, which is usually hard and expensive to afford in practice. In contrast, this paper proposes a method of learning to weight imperfect demonstrations in GAIL without imposing extensive prior information. We provide a rigorous mathematical analysis showing that the weights of demonstrations can be exactly determined by combining the discriminator and the agent policy in GAIL. Theoretical analysis suggests that, with the estimated weights, the agent can learn a better policy beyond those plain expert demonstrations. Experiments in the MuJoCo and Atari environments demonstrate that the proposed algorithm outperforms baseline methods in handling imperfect expert demonstrations.
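The core idea of reading demonstration weights off the discriminator and the agent policy can be sketched as follows. This is an illustrative toy, not the paper's exact estimator: it only uses the standard GAIL fact that at discriminator optimality D*(s,a) = p_E(s,a) / (p_E(s,a) + p_pi(s,a)), so the ratio D/(1-D) recovers p_E/p_pi; the precise combination with the policy density used by the authors is an assumption here.

```python
import numpy as np

def demo_weights(d_logits, log_pi):
    """Illustrative per-sample weights for demonstration (s, a) pairs.

    d_logits : discriminator logits for each demo pair (higher = more expert-like)
    log_pi   : agent policy log-density log pi(a | s) for each pair

    NOTE: a hedged sketch, not the paper's exact weighting scheme. It
    combines the discriminator-implied density ratio D/(1-D) = p_E/p_pi
    with the agent policy density, then normalizes over the batch.
    """
    d = 1.0 / (1.0 + np.exp(-np.asarray(d_logits, dtype=float)))  # sigmoid -> D(s, a)
    ratio = d / (1.0 - d)                                         # p_E / p_pi
    w = ratio * np.exp(np.asarray(log_pi, dtype=float))           # unnormalized weight
    return w / w.sum()                                            # normalize over the batch

# Toy usage: three demo pairs; the third looks non-expert to the discriminator
# and should receive a much smaller weight.
w = demo_weights(d_logits=[2.0, 2.0, -2.0], log_pi=[-1.0, -1.0, -1.0])
```

In this sketch, demonstrations the discriminator judges as expert-like (large logit) dominate the weighted objective, which is the qualitative behavior the abstract describes: imperfect demonstrations are down-weighted rather than filtered with oracle-provided confidence scores.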

Author Information

Yunke Wang (Wuhan University)
Chang Xu (University of Sydney)
Bo Du (Wuhan University)
Honglak Lee (Google / U. Michigan)