Keyframe-Focused Visual Imitation Learning

Chuan Wen · Jierui Lin · Jianing Qian · Yang Gao · Dinesh Jayaraman

Session: Reinforcement Learning 7
Tue 20 Jul 6:25 p.m. — 6:30 p.m. PDT

Imitation learning trains control policies by mimicking pre-recorded expert demonstrations. In partially observable settings, imitation policies must rely on observation histories, yet many seemingly paradoxical results show better performance for policies that access only the most recent observation. Recent solutions, ranging from causal graph learning to deep information bottlenecks, have shown promising results but fail to scale to realistic settings such as visual imitation. We propose a solution that outperforms these prior approaches by upweighting demonstration keyframes corresponding to expert action changepoints. This simple approach scales easily to complex visual imitation settings. Our experimental results demonstrate consistent performance improvements over all baselines on image-based Gym MuJoCo continuous control tasks. Finally, on the CARLA photorealistic vision-based urban driving simulator, we resolve a long-standing issue in behavioral cloning for driving by demonstrating effective imitation from observation histories. Supplementary materials and code at: \url{}.
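The keyframe idea above can be sketched as a weighted behavioral-cloning loss: frames where the expert's action changes sharply get larger weights. The weighting function below (change-magnitude-based, names like `keyframe_weights` and the normalization choice are illustrative assumptions, not the paper's exact scheme) is a minimal sketch:

```python
import numpy as np

def keyframe_weights(actions, eps=1e-8):
    """Upweight frames near expert action changepoints.

    actions: (T, action_dim) array of demonstrated expert actions.
    Returns per-frame weights of shape (T,) that sum to T, so the
    weighted loss keeps the same overall scale as the unweighted one.
    NOTE: weighting by raw action-change magnitude is an illustrative
    assumption; the paper's exact keyframe scoring may differ.
    """
    actions = np.asarray(actions, dtype=float)
    # Magnitude of the action change at each step; pad the last step
    # with its predecessor's change so the output has length T.
    change = np.linalg.norm(np.diff(actions, axis=0), axis=1)
    change = np.concatenate([change, change[-1:]])
    weights = change + eps  # eps keeps static stretches from vanishing
    return weights * len(weights) / weights.sum()

def weighted_bc_loss(pred_actions, expert_actions, weights):
    """Behavioral cloning loss: mean over t of w_t * ||a_hat_t - a_t||^2."""
    err = np.sum((np.asarray(pred_actions) - np.asarray(expert_actions)) ** 2,
                 axis=1)
    return float(np.mean(weights * err))
```

For a demonstration that is mostly static with one abrupt action switch, the switch frame receives a much larger weight than the surrounding frames, which is exactly the upweighting of keyframes the abstract describes.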
