
Versatile Offline Imitation from Observations and Examples via Regularized State-Occupancy Matching
Yecheng Jason Ma · Andrew Shen · Dinesh Jayaraman · Osbert Bastani

Thu Jul 21 11:00 AM -- 11:05 AM (PDT) @ Room 301 - 303

We propose State Matching Offline DIstribution Correction Estimation (SMODICE), a novel and versatile regression-based offline imitation learning algorithm derived via state-occupancy matching. We show that the SMODICE objective admits a simple optimization procedure through an application of Fenchel duality, as well as an analytic solution in tabular MDPs. Without requiring access to expert actions, SMODICE can be effectively applied to three offline IL settings: (i) imitation from observations (IfO), (ii) IfO with a dynamics- or morphologically-mismatched expert, and (iii) example-based reinforcement learning, which we show can be formulated as a state-occupancy matching problem. We extensively evaluate SMODICE on both gridworld environments and high-dimensional offline benchmarks. Our results demonstrate that SMODICE is effective in all three problem settings and significantly outperforms the prior state of the art.
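To make the state-occupancy matching idea concrete, the following is a minimal illustrative sketch (not the authors' SMODICE implementation) for the tabular case: it computes the discounted state-occupancy distribution of a policy by solving the linear flow equation, and evaluates a KL divergence between imitator and expert occupancies as a stand-in for the matching objective. All function names and the toy MDP are hypothetical.

```python
import numpy as np

def state_occupancy(P, pi, rho0, gamma=0.99):
    """Discounted state occupancy of policy pi in a tabular MDP (illustrative).

    P:    (S, A, S) transition tensor, P[s, a, s'] = Pr(s' | s, a)
    pi:   (S, A) stochastic policy
    rho0: (S,) initial state distribution

    The occupancy d satisfies the linear flow equation
        d = (1 - gamma) * rho0 + gamma * P_pi^T d,
    where P_pi[s, s'] = sum_a pi[s, a] * P[s, a, s'].
    """
    S = P.shape[0]
    P_pi = np.einsum("sa,sat->st", pi, P)  # policy-induced state transition matrix
    d = np.linalg.solve(np.eye(S) - gamma * P_pi.T, (1.0 - gamma) * rho0)
    return d

def occupancy_kl(d_pi, d_expert, eps=1e-8):
    """KL(d_pi || d_expert): one possible state-occupancy matching objective."""
    return float(np.sum(d_pi * np.log((d_pi + eps) / (d_expert + eps))))
```

A valid occupancy sums to one (the flow equation preserves total mass), and the KL vanishes when the imitator's occupancy equals the expert's, which is the sense in which minimizing it performs imitation from state observations alone.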

Author Information

Yecheng Jason Ma (University of Pennsylvania)
Andrew Shen (University of Melbourne)
Dinesh Jayaraman (University of Pennsylvania)
Osbert Bastani (University of Pennsylvania)
