

Poster in Workshop: The Second Workshop on Spurious Correlations, Invariance and Stability

Regularizing Adversarial Imitation Learning Using Causal Invariance

Ivan Ovinnikov · Joachim Buhmann


Abstract:
Imitation learning methods infer a policy in a Markov decision process from a dataset of expert demonstrations by minimizing a divergence measure between the empirical state occupancy measures of the expert and the policy. The guiding signal to the policy is provided by the discriminator used in an adversarial optimization procedure. We observe that this model is prone to absorbing spurious correlations present in the expert data. To alleviate this issue, we propose to use causal invariance as a regularization principle for adversarial training of these models. The regularization objective applies in a straightforward manner to existing adversarial imitation frameworks. We demonstrate the efficacy of the regularized formulation in an illustrative two-dimensional setting as well as on a number of high-dimensional robot locomotion benchmark tasks.
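The abstract does not specify the form of the causal invariance regularizer. One common way to instantiate such a penalty is the IRMv1 gradient penalty of Arjovsky et al., applied per environment to the discriminator's classification risk; the sketch below is a minimal NumPy illustration under that assumption, with a logistic discriminator whose logits are given and the penalty computed analytically rather than via autodiff. The function names and the batch layout are hypothetical, not from the paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def irm_penalty(logits, labels):
    # IRMv1-style penalty: squared gradient of the per-environment
    # binary cross-entropy risk w.r.t. a dummy scale w = 1 on the logits.
    # d/dw mean(BCE(w * z, y)) at w = 1 equals mean((sigmoid(z) - y) * z).
    grad = np.mean((sigmoid(logits) - labels) * logits)
    return grad ** 2

def regularized_discriminator_loss(env_batches, lam=1.0):
    # env_batches: list of (logits, labels) pairs, one per environment
    # (e.g. expert demonstrations collected under different conditions).
    total_risk, total_penalty = 0.0, 0.0
    for logits, labels in env_batches:
        p = sigmoid(logits)
        bce = -np.mean(labels * np.log(p) + (1 - labels) * np.log(1 - p))
        total_risk += bce
        total_penalty += irm_penalty(logits, labels)
    n = len(env_batches)
    # Average risk across environments plus the invariance penalty;
    # lam trades off imitation signal against invariance.
    return total_risk / n + lam * total_penalty / n
```

Since the penalty is non-negative, increasing `lam` can only raise the loss; in an actual adversarial imitation pipeline this term would be added to the discriminator objective and differentiated through the discriminator network.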
