

Robust Imitation Learning against Variations in Environment Dynamics

Jongseong Chae · Seungyul Han · Whiyoung Jung · Myung-Sik Cho · Sungho Choi · Youngchul Sung


Keywords: [ RL: Deep RL ] [ RL: Inverse ] [ Reinforcement Learning ]


In this paper, we propose a robust imitation learning (IL) framework that improves the robustness of IL when environment dynamics are perturbed. An IL agent trained in a single environment can fail catastrophically under perturbed dynamics, because the standard IL framework does not account for changes in the underlying environment dynamics. Our framework handles varying dynamics by imitating multiple experts, each trained under sampled environment dynamics, to enhance robustness to general variations in environment dynamics. To robustly imitate the multiple sampled experts, we minimize a risk measure of the Jensen-Shannon divergences between the agent's policy and each sampled expert's policy. Numerical results show that our algorithm significantly improves robustness against dynamics perturbations compared to conventional IL baselines.
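The core objective described above can be illustrated with a minimal sketch. The snippet below computes the Jensen-Shannon divergence between discrete action distributions and aggregates the divergences to the sampled experts with a worst-case (max) risk measure; this is one possible instantiation for illustration, not the paper's exact algorithm, and the function names are our own.

```python
import numpy as np

def kl_divergence(p, q):
    # KL(p || q) for discrete distributions; only sums where p > 0
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def js_divergence(p, q):
    # Jensen-Shannon divergence: symmetric, bounded above by log 2
    m = 0.5 * (p + q)
    return 0.5 * kl_divergence(p, m) + 0.5 * kl_divergence(q, m)

def worst_case_js_risk(agent_dist, expert_dists):
    # Risk over the sampled experts: here the worst-case (max) JS
    # divergence between the agent's action distribution and each
    # expert's action distribution at a given state.
    return max(js_divergence(agent_dist, e) for e in expert_dists)

agent = np.array([0.5, 0.5])
experts = [np.array([0.5, 0.5]), np.array([1.0, 0.0])]
risk = worst_case_js_risk(agent, experts)
```

Minimizing such a risk pushes the agent toward the expert it currently matches worst, rather than averaging over experts, which is what yields robustness to the least favorable dynamics.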
