

Oral

Environment Design for Inverse Reinforcement Learning

Thomas Kleine Buening · Victor Villin · Christos Dimitrakakis

Lehar 1-4
Oral 6F: Experimental Design and Simulation
Thu 25 Jul 8:15 a.m. — 8:30 a.m. PDT

Abstract:

Learning a reward function from demonstrations suffers from low sample efficiency. Even with abundant data, current inverse reinforcement learning methods that focus on learning from a single environment can fail to handle slight changes in the environment dynamics. We tackle these challenges through adaptive environment design. In our framework, the learner repeatedly interacts with the expert, selecting environments so as to identify the reward function as quickly as possible from the expert’s demonstrations in those environments. As we show experimentally, this yields improvements in both sample efficiency and robustness, for both exact and approximate inference.
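The abstract describes a loop in which the learner adaptively chooses environments and refines its estimate of the reward from the expert’s demonstrations in those environments. The sketch below is a simplified, hypothetical illustration of that general idea, not the paper’s algorithm: it keeps a finite set of candidate reward vectors, picks the transition dynamics under which the surviving candidates disagree most about the expert’s greedy policy, and prunes candidates inconsistent with the observed demonstration. All environment sizes, reward sets, and function names here are invented for this example.

```python
# Illustrative sketch only: NOT the paper's method, just the general shape of
# adaptive environment design for inverse reinforcement learning.
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions, n_envs, n_candidates = 5, 3, 4, 8

# Each "environment" is a transition tensor P[s, a, s'] (random here).
envs = rng.dirichlet(np.ones(n_states), size=(n_envs, n_states, n_actions))

# Finite set of candidate reward vectors over states; one of them is "true".
candidate_rewards = rng.normal(size=(n_candidates, n_states))
true_reward = candidate_rewards[0]

def q_values(P, r, gamma=0.9, iters=200):
    """Q-iteration for a state-based reward r under dynamics P."""
    Q = np.zeros((n_states, n_actions))
    for _ in range(iters):
        V = Q.max(axis=1)
        Q = r[:, None] + gamma * P @ V
    return Q

def greedy_policy(P, r):
    # Deterministic model of the expert: act greedily w.r.t. reward r.
    return q_values(P, r).argmax(axis=1)

active = np.ones(n_candidates, dtype=bool)  # candidates still consistent with data

for round_ in range(10):
    # Environment design step: choose the environment where the surviving
    # candidate rewards disagree most about the expert's greedy actions.
    scores = []
    for P in envs:
        policies = np.stack([greedy_policy(P, r) for r in candidate_rewards[active]])
        # Disagreement = average number of distinct greedy actions per state.
        scores.append(np.mean([len(set(policies[:, s])) for s in range(n_states)]))
    chosen = int(np.argmax(scores))
    P = envs[chosen]

    # "Expert demonstration": observe the expert's greedy action in each state.
    expert_actions = greedy_policy(P, true_reward)

    # Inference step: discard candidates inconsistent with the demonstration.
    for i in np.flatnonzero(active):
        if not np.array_equal(greedy_policy(P, candidate_rewards[i]), expert_actions):
            active[i] = False

    print(f"round {round_}: env {chosen}, {active.sum()} candidate rewards remain")
    if active.sum() == 1:
        break
```

In this toy version the "inference" is hard elimination over a finite reward set; the paper considers both exact and approximate inference, and the environment-selection criterion above (policy disagreement) is only one plausible stand-in for an information-gathering objective.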
