

Poster

A Game Theoretic Framework for Model Based Reinforcement Learning

Aravind Rajeswaran · Igor Mordatch · Vikash Kumar

Keywords: [ Reinforcement Learning - Deep RL ] [ Robotics ] [ Deep Reinforcement Learning ]


Abstract:

Designing stable and efficient algorithms for model-based reinforcement learning (MBRL) with function approximation has remained challenging despite growing interest in the field. To help expose the practical challenges in MBRL and simplify algorithm design through the lens of abstraction, we develop a new framework that casts MBRL as a game between: (1) a policy player, which attempts to maximize rewards under the learned model; (2) a model player, which attempts to fit the real-world data collected by the policy player. We show that a near-optimal policy for the environment can be obtained by finding an approximate equilibrium of the aforementioned game, and we develop two families of algorithms to find this equilibrium by drawing upon ideas from Stackelberg games. Experimental studies suggest that the proposed algorithms achieve state-of-the-art sample efficiency, match the asymptotic performance of model-free policy gradient methods, and scale gracefully to high-dimensional tasks such as dexterous hand manipulation.
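As a rough sketch (the notation below is illustrative and not taken verbatim from the paper), the game described above can be written as two coupled optimization problems, where $\pi$ denotes the policy, $\hat{M}$ the learned model, and $\mu^{\pi}$ the state-action distribution induced by running $\pi$ in the real environment:

\begin{align*}
\text{policy player:} \quad & \max_{\pi} \; J(\pi, \hat{M}) = \mathbb{E}\Big[\textstyle\sum_{t \geq 0} \gamma^{t} r(s_t, a_t) \;\Big|\; a_t \sim \pi(\cdot \mid s_t), \; s_{t+1} \sim \hat{M}(\cdot \mid s_t, a_t)\Big] \\
\text{model player:} \quad & \min_{\hat{M}} \; \ell(\hat{M}, \mu^{\pi}) = \mathbb{E}_{(s, a, s') \sim \mu^{\pi}}\big[-\log \hat{M}(s' \mid s, a)\big]
\end{align*}

In the Stackelberg view referenced in the abstract, one player (the leader) optimizes its objective while anticipating that the other (the follower) best-responds; designating either the policy or the model as the leader yields the two algorithm families.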
