

Poster

A Unified Linear Programming Framework for Reward Learning with Offline Human Behavior and Feedback Data

Kihyun Kim · Jiawei Zhang · Pablo A. Parrilo · Asuman Ozdaglar


Abstract:

Inverse Reinforcement Learning (IRL) and Reinforcement Learning with Human Feedback (RLHF) are pivotal methodologies in reward learning, which involve inferring and shaping the underlying reward function of sequential decision-making problems based on observed human behavior and feedback. Most prior work in reward learning has relied on prior knowledge or assumptions about decision or preference models, potentially leading to robustness issues. This paper introduces a novel linear programming (LP) framework tailored for offline reward learning. This framework estimates a feasible reward set from the primal-dual optimality conditions of a suitably designed LP, utilizing pre-collected trajectories without online exploration, and offers an optimality guarantee with provable sample efficiency. Our LP framework also enables aligning the reward functions with human feedback, such as pairwise trajectory comparison data, while maintaining computational tractability and sample efficiency. We demonstrate through analytical examples and numerical experiments that our framework can outperform the conventional maximum likelihood estimation (MLE) approach.
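For intuition, the sketch below spells out one standard way primal-dual optimality conditions of an MDP linear program can define a feasible reward set, and how pairwise trajectory comparisons enter as linear constraints. The notation (discount factor $\gamma$, transition kernel $P$, initial distribution $\rho$, estimated occupancy measure $\hat{\mu}$) and the specific constraints are illustrative assumptions drawn from the textbook LP formulation of discounted MDPs; the paper's "suitably designed LP" and its offline estimation details may differ.

```latex
% Illustrative sketch only: the classical primal-dual LP pair for a tabular,
% discounted MDP. The paper's actual LP design may differ.
\begin{align}
\text{(Primal, over occupancy measures)}\quad
  & \max_{\mu \ge 0} \ \sum_{s,a} \mu(s,a)\, r(s,a)
    \ \ \text{s.t.}\ \
    \sum_{a} \mu(s,a) = \rho(s) + \gamma \sum_{s',a'} P(s \mid s',a')\, \mu(s',a')
    \ \ \forall s, \\
\text{(Dual, over value functions)}\quad
  & \min_{v} \ \sum_{s} \rho(s)\, v(s)
    \ \ \text{s.t.}\ \
    v(s) \ge r(s,a) + \gamma \sum_{s'} P(s' \mid s,a)\, v(s')
    \ \ \forall (s,a).
\end{align}
% A reward r is "feasible" for observed behavior with occupancy estimate
% \hat{\mu} if complementary slackness can hold: some dual-feasible v attains
% equality on the support of \hat{\mu}.
\begin{equation}
\mathcal{R}(\hat{\mu}) \;=\;
  \Bigl\{\, r \;:\; \exists\, v \ \text{dual-feasible with}\
    \hat{\mu}(s,a) > 0 \ \Rightarrow\
    v(s) = r(s,a) + \gamma \sum_{s'} P(s' \mid s,a)\, v(s') \,\Bigr\}.
\end{equation}
% A pairwise comparison stating that trajectory \tau^{+} is preferred to
% \tau^{-} adds a linear constraint on r, preserving the LP structure:
\begin{equation}
\sum_{t \ge 0} \gamma^{t}\, r(s^{+}_{t}, a^{+}_{t})
  \;\ge\;
\sum_{t \ge 0} \gamma^{t}\, r(s^{-}_{t}, a^{-}_{t}).
\end{equation}
```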
