Poster
A Game Theoretic Framework for Model Based Reinforcement Learning
Aravind Rajeswaran · Igor Mordatch · Vikash Kumar

Wed Jul 15 10:00 AM -- 10:45 AM & Wed Jul 15 09:00 PM -- 09:45 PM (PDT)

Designing stable and efficient algorithms for model-based reinforcement learning (MBRL) with function approximation has remained challenging despite growing interest in the field. To help expose the practical challenges in MBRL and simplify algorithm design through the lens of abstraction, we develop a new framework that casts MBRL as a game between: (1) a policy player, which attempts to maximize rewards under the learned model; (2) a model player, which attempts to fit the real-world data collected by the policy player. We show that a near-optimal policy for the environment can be obtained by finding an approximate equilibrium of the aforementioned game, and we develop two families of algorithms to find the game equilibrium by drawing upon ideas from Stackelberg games. Experimental studies suggest that the proposed algorithms achieve state-of-the-art sample efficiency, match the asymptotic performance of model-free policy gradient methods, and scale gracefully to high-dimensional tasks like dexterous hand manipulation.
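The sketch below is a minimal, illustrative rendering of the two-player view described in the abstract: a model player fits dynamics to data gathered by the current policy, and a policy player maximizes reward under the learned model. It uses simple alternating best-response updates on a toy scalar control problem, not the paper's actual Stackelberg-based algorithms, and all names (toy_env_step, fit_model, improve_policy, etc.) are hypothetical.

```python
# Illustrative two-player MBRL loop: model player fits data, policy player
# optimizes against the learned model. Toy example, not the paper's method.
import numpy as np

rng = np.random.default_rng(0)

def toy_env_step(s, a):
    """True (unknown) scalar dynamics: next state drifts with the action."""
    return 0.9 * s + 0.5 * a + 0.01 * rng.normal()

def reward(s):
    """Reward for keeping the state near the target 1.0."""
    return -(s - 1.0) ** 2

def rollout_real(theta, horizon=20):
    """Policy player acts in the real environment; returns transition data."""
    s, data = 0.0, []
    for _ in range(horizon):
        a = theta * (1.0 - s)                  # simple proportional policy
        s_next = toy_env_step(s, a)
        data.append((s, a, s_next))
        s = s_next
    return data

def fit_model(data):
    """Model player: least-squares fit of linear dynamics s' ~ w1*s + w2*a."""
    X = np.array([[s, a] for s, a, _ in data])
    y = np.array([s_next for _, _, s_next in data])
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

def model_return(theta, w, horizon=20):
    """Return of the policy when simulated under the learned model."""
    s, total = 0.0, 0.0
    for _ in range(horizon):
        a = theta * (1.0 - s)
        s = w[0] * s + w[1] * a
        total += reward(s)
    return total

def improve_policy(theta, w, lr=0.05, eps=1e-2):
    """Policy player: finite-difference ascent on the model-based return."""
    grad = (model_return(theta + eps, w) - model_return(theta - eps, w)) / (2 * eps)
    return theta + lr * grad

theta, data = 0.0, []
for it in range(30):                           # alternating (best-response) updates
    data += rollout_real(theta)                # collect real data with current policy
    w = fit_model(data)                        # model player fits all observed data
    theta = improve_policy(theta, w)           # policy player optimizes under the model
    real_ret = sum(reward(s2) for _, _, s2 in rollout_real(theta))
    print(f"iter {it:2d}  theta={theta:.3f}  real return={real_ret:.3f}")
```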

Author Information

Aravind Rajeswaran (University of Washington)
Igor Mordatch (Google Brain)
Vikash Kumar (University of Washington)
