Imitation Learning by Estimating Expertise of Demonstrators
Mark Beliaev · Andy Shih · Stefano Ermon · Dorsa Sadigh · Ramtin Pedarsani

Thu Jul 21 03:00 PM -- 05:00 PM (PDT) @ #1028

Many existing imitation learning datasets are collected from multiple demonstrators, each with different expertise at different parts of the environment. Yet, standard imitation learning algorithms typically treat all demonstrators as homogeneous, regardless of their expertise, absorbing the weaknesses of any suboptimal demonstrators. In this work, we show that unsupervised learning over demonstrator expertise can lead to a consistent boost in the performance of imitation learning algorithms. We develop and optimize a joint model over a learned policy and the expertise levels of the demonstrators. This enables our model to learn from the optimal behavior and filter out the suboptimal behavior of each demonstrator. Our model learns a single policy that can outperform even the best demonstrator, and can be used to estimate the expertise of any demonstrator at any state. We illustrate our findings on real robotic continuous control tasks from Robomimic and discrete environments such as MiniGrid and chess, outperforming competing methods in 21 out of 23 settings, with an average of 7% and up to 60% improvement in terms of the final reward.
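The core idea in the abstract (jointly fitting a policy and per-demonstrator expertise weights, so that behavior from more reliable demonstrators counts more) can be illustrated with a minimal sketch. This is not the authors' implementation: it is a hypothetical simplification that alternates between (a) fitting a tabular policy by expertise-weighted behavioral cloning and (b) re-scoring each demonstrator's expertise by how well the current policy predicts their actions. The function name `weighted_bc` and the toy environment are assumptions for illustration only.

```python
import numpy as np

def weighted_bc(demos, n_states, n_actions, iters=20):
    """Jointly estimate a tabular policy and per-demonstrator expertise
    weights (a toy, hypothetical simplification of the paper's joint model).

    demos: list of (states, actions) integer-array pairs, one per demonstrator.
    Returns (policy, rho): policy[s] is a distribution over actions;
    rho[i] is the estimated expertise weight of demonstrator i.
    """
    rho = np.ones(len(demos))  # start with uniform expertise weights
    policy = np.full((n_states, n_actions), 1.0 / n_actions)
    for _ in range(iters):
        # Policy step: expertise-weighted, Laplace-smoothed action counts.
        counts = np.ones((n_states, n_actions))
        for w, (s, a) in zip(rho, demos):
            np.add.at(counts, (s, a), w)
        policy = counts / counts.sum(axis=1, keepdims=True)
        # Expertise step: score each demonstrator by the average probability
        # the current policy assigns to the actions they actually took.
        rho = np.array([policy[s, a].mean() for s, a in demos])
        rho = rho / rho.max()  # normalize so the best demonstrator has weight 1
    return policy, rho

# Toy usage: one consistent "expert" (action == state) and one noisy
# demonstrator acting uniformly at random in a 2-state, 2-action task.
rng = np.random.default_rng(0)
states = np.tile([0, 1], 50)
expert = (states, states.copy())
noisy = (states, rng.integers(0, 2, size=states.size))
policy, rho = weighted_bc([expert, noisy], n_states=2, n_actions=2)
```

After a few alternations the expert's weight dominates and the learned policy matches the expert's action in each state, even though half the data came from the noisy demonstrator.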

Author Information

Mark Beliaev (University of California, Santa Barbara)
Andy Shih (Stanford University)
Stefano Ermon (Stanford University)
Dorsa Sadigh (Stanford University)
Ramtin Pedarsani (University of California, Santa Barbara)
