

Versatile Offline Imitation from Observations and Examples via Regularized State-Occupancy Matching

Jason Yecheng Ma · Andrew Shen · Dinesh Jayaraman · Osbert Bastani

Hall E #908

Keywords: [ RL: Inverse ] [ RL: Deep RL ] [ RL: Batch/Offline ]


We propose State Matching Offline DIstribution Correction Estimation (SMODICE), a novel and versatile regression-based offline imitation learning algorithm derived via state-occupancy matching. We show that the SMODICE objective admits a simple optimization procedure through an application of Fenchel duality, and an analytic solution in tabular MDPs. Without requiring access to expert actions, SMODICE can be effectively applied to three offline IL settings: (i) imitation from observations (IfO), (ii) IfO with a dynamics- or morphologically-mismatched expert, and (iii) example-based reinforcement learning, which we show can be formulated as a state-occupancy matching problem. We extensively evaluate SMODICE on both gridworld environments and high-dimensional offline benchmarks. Our results demonstrate that SMODICE is effective for all three problem settings and significantly outperforms the prior state of the art.
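To make the state-occupancy-matching objective concrete, the following is a minimal illustrative sketch (not the paper's SMODICE algorithm): it computes the discounted state-occupancy distribution of a policy in a small tabular MDP via a linear solve, then evaluates the KL divergence between an imitator's and an expert's state occupancies, i.e. the kind of quantity a state-matching objective minimizes. The 3-state, 2-action environment, the policies, and all numbers are invented for illustration.

```python
import numpy as np

def state_occupancy(P, policy, mu0, gamma=0.99):
    """Discounted state occupancy d solving d = (1-gamma)*mu0 + gamma*P_pi^T d.

    P: (S, A, S) transition tensor; policy: (S, A) action probabilities;
    mu0: (S,) initial state distribution.
    """
    S = P.shape[0]
    # State-to-state transition matrix induced by the policy.
    P_pi = np.einsum('sa,sat->st', policy, P)
    # Solve the linear fixed point (I - gamma * P_pi^T) d = (1-gamma) * mu0.
    return np.linalg.solve(np.eye(S) - gamma * P_pi.T, (1 - gamma) * mu0)

# A toy 3-state, 2-action MDP (hypothetical example).
P = np.zeros((3, 2, 3))
P[0, 0] = [0.9, 0.1, 0.0]   # action 0 mostly stays in state 0
P[0, 1] = [0.1, 0.8, 0.1]   # action 1 moves toward state 1
P[1, 0] = [0.0, 0.9, 0.1]
P[1, 1] = [0.0, 0.1, 0.9]
P[2, 0] = [0.0, 0.0, 1.0]   # state 2 is absorbing
P[2, 1] = [0.0, 0.0, 1.0]
mu0 = np.array([1.0, 0.0, 0.0])

expert = np.array([[0.0, 1.0], [0.0, 1.0], [1.0, 0.0]])    # heads to state 2
imitator = np.array([[0.5, 0.5], [0.5, 0.5], [0.5, 0.5]])  # uniform policy

d_E = state_occupancy(P, expert, mu0)
d_I = state_occupancy(P, imitator, mu0)

# KL(d_I || d_E): a state-occupancy-matching objective -- note it depends
# only on visited states, never on the expert's actions.
kl = np.sum(d_I * np.log(d_I / d_E))
print("expert occupancy:", d_E)
print("imitator occupancy:", d_I)
print("KL(d_I || d_E):", kl)
```

Because the objective is defined entirely over state occupancies, it applies even when expert actions are unobserved or the expert has different dynamics, which is the property the three settings above exploit.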
