Multi-agent imitation learning with function approximation: linear Markov games and beyond
Luca Viano ⋅ Till Freihaut ⋅ Emanuele Nevali ⋅ Volkan Cevher ⋅ Matthieu Geist ⋅ Giorgia Ramponi
Abstract
In this work, we present the first theoretical analysis of multi-agent imitation learning (MAIL) in linear Markov games, where both the transition dynamics and each agent's reward function are linear in some given features. We demonstrate that, by leveraging this structure, it is possible to replace the state-action-level \emph{all-policy-deviation concentrability coefficient} \citep{freihaut2025rate} with a concentrability coefficient defined at the feature level, which can be much smaller than its state-action analog when the features are informative about \emph{state similarity}. Furthermore, to circumvent the need for any concentrability coefficient, we turn to the interactive setting. We provide the first computationally efficient interactive MAIL algorithm for linear Markov games and show that its sample complexity depends only on the dimension $d$ of the feature map. Building on these theoretical findings, we propose a deep interactive MAIL algorithm that clearly outperforms behavior cloning (BC) on games such as Tic-Tac-Toe and Connect4.
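For intuition, the two coefficients contrasted in the abstract can be written schematically as follows. These are illustrative forms under a linear Markov game with feature map $\varphi$, standard in the linear-function-approximation literature; the paper's exact definitions may differ.

```latex
% State-action-level concentrability (schematic): a worst-case ratio of
% occupancy measures d^{\pi} of a deviating policy and d^{E} of the expert,
%   C_{\mathrm{sa}} \;=\; \max_{\pi \in \Pi} \; \sup_{s,a} \;
%     \frac{d^{\pi}(s,a)}{d^{E}(s,a)} .
%
% Feature-level analog (schematic): with the feature covariance
%   \Lambda_{\pi} \;=\; \mathbb{E}_{(s,a) \sim d^{\pi}}
%     \!\left[ \varphi(s,a)\,\varphi(s,a)^{\top} \right],
% compare covariances along every direction of the d-dimensional feature space,
%   C_{\varphi} \;=\; \max_{\pi \in \Pi} \;
%     \sup_{x \in \mathbb{R}^{d} \setminus \{0\}} \;
%     \frac{x^{\top} \Lambda_{\pi} x}{x^{\top} \Lambda_{E} x} .
```

Under these illustrative definitions, when many state-action pairs map to similar features, $C_{\varphi}$ compares only $d \times d$ covariance matrices and can therefore be far smaller than $C_{\mathrm{sa}}$, which must control the ratio at every individual state-action pair.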