

Variational Imitation Learning with Diverse-quality Demonstrations

Voot Tangkaratt · Bo Han · Mohammad Emtiyaz Khan · Masashi Sugiyama

Keywords: [ Deep Reinforcement Learning ] [ Planning and Control ] [ Reinforcement Learning ] [ Reinforcement Learning - Deep RL ]


Learning from demonstrations can be challenging when the quality of the demonstrations is diverse, and even more so when the quality is unknown and no additional information is available to estimate it. We propose a new method for imitation learning in such scenarios. We show that simple quality-estimation approaches can fail due to compounding error, and we fix this issue by jointly estimating the quality and the reward using a variational approach. Our method is easy to implement within reinforcement-learning frameworks and achieves state-of-the-art performance on continuous-control benchmarks. Our work enables scalable and data-efficient imitation learning under more realistic settings than before.
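The abstract's core idea (estimating demonstration quality jointly with the imitation objective, rather than in a separate preprocessing step) can be illustrated with a toy example. The sketch below is not the paper's variational method; it is a crude EM-style analogue on a 1-D linear "policy", with all names and noise levels invented for illustration: a naive fit treats all demonstrators equally, while the joint loop alternates between fitting the policy and re-weighting demonstrators by how well the current policy explains their actions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setting (hypothetical): the "expert" action is a linear function of state.
true_w = 2.0
states = rng.normal(size=(3, 50))  # 3 demonstrators, 50 state samples each

# Demonstrators of diverse, unknown quality: per-demonstrator noise scales.
noise_scales = np.array([0.1, 1.0, 3.0])
actions = true_w * states + rng.normal(size=states.shape) * noise_scales[:, None]

def fit_weighted(states, actions, weights):
    """Weighted least-squares fit of a linear policy a = w * s."""
    num = (weights[:, None] * states * actions).sum()
    den = (weights[:, None] * states ** 2).sum()
    return num / den

# Naive baseline: treat all demonstrations as equally reliable.
w_naive = fit_weighted(states, actions, np.ones(3))

# Joint estimation: alternate between fitting the policy and re-estimating
# each demonstrator's quality from its residual under the current policy.
w = w_naive
for _ in range(20):
    residuals = ((actions - w * states) ** 2).mean(axis=1)
    quality = 1.0 / (residuals + 1e-8)   # lower residual -> higher quality
    quality /= quality.sum()
    w = fit_weighted(states, actions, quality)

print(f"naive estimate: {w_naive:.3f}, quality-weighted estimate: {w:.3f}")
```

With the noisy demonstrators down-weighted, the joint estimate concentrates on the low-noise demonstrator and lands near the true parameter; the naive pooled fit is pulled around by the high-noise demonstrations. The actual method additionally learns a reward, which this linear toy omits entirely.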
