Robust Inverse Reinforcement Learning Through Bayesian Theory of Mind
Ran Wei · Siliang Zeng · Chenliang Li · Alfredo Garcia · Anthony McDonald · Mingyi Hong

Fri Jul 28 03:15 PM -- 04:30 PM (PDT)
Event URL: https://openreview.net/forum?id=iL1rdSiffz

We consider the Bayesian theory of mind (BTOM) framework for learning from demonstrations via inverse reinforcement learning (IRL). The BTOM model jointly represents the agent's reward function and the agent's internal, possibly inaccurate, subjective model of the environment dynamics. In this paper, we make use of a class of prior distributions that parametrize how accurate the agent's model of the environment is, in order to develop efficient algorithms for estimating the agent's reward and subjective dynamics in high-dimensional settings. The BTOM framework departs from existing offline model-based IRL approaches by estimating reward and dynamics simultaneously. Our analysis reveals a novel insight: the estimated policy exhibits robust performance when the (expert) agent is believed (a priori) to have a highly accurate model of the environment. We verify this observation in MuJoCo environments and show that our algorithms outperform state-of-the-art offline IRL algorithms.
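
To make the joint estimation concrete, here is a minimal sketch of what such a MAP objective could look like. All notation is assumed for illustration rather than taken from the paper: \theta denotes reward parameters, \hat{P} the subjective dynamics model, \bar{P} a reference dynamics model, and \lambda the prior's accuracy parameter. Given demonstrations \{(s_t, a_t)\}, one could write

    \max_{\theta,\, \hat{P}} \;\; \sum_{t} \log \pi_{\theta, \hat{P}}(a_t \mid s_t) \;+\; \log p(\hat{P}),
    \qquad p(\hat{P}) \,\propto\, \exp\!\big(-\lambda\, D_{\mathrm{KL}}(\bar{P} \,\|\, \hat{P})\big),

where \pi_{\theta, \hat{P}} is the policy that is optimal for the reward r_\theta under the subjective dynamics \hat{P}. In this hypothetical formulation, larger \lambda encodes the prior belief that the expert's internal model is close to \bar{P}, which is the regime in which the abstract reports robust performance of the estimated policy.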

Author Information

Ran Wei (Texas A&M University - College Station)
Siliang Zeng (University of Minnesota, Twin Cities)
Chenliang Li (The Chinese University of Hong Kong)
Alfredo Garcia (Texas A&M University)
Anthony McDonald
Mingyi Hong (University of Minnesota)
