

Poster in Workshop: Models of Human Feedback for AI Alignment

AMBER: An Entropy Maximizing Environment Design Algorithm for Inverse Reinforcement Learning

Paul Nitschke · Lars L. Ankile · Eura Nofshin · Siddharth Swaroop · Finale Doshi-Velez · Weiwei Pan

Fri 26 Jul 8 a.m. PDT — 8 a.m. PDT

Abstract: In Inverse Reinforcement Learning (IRL), we learn a human's underlying reward function from observations of their behavior. Recent work shows that the reward function can be learned more accurately by observing the human in multiple related environments, but efficiently finding informative environments remains an open question. We present $\texttt{AMBER}$, an information-theoretic algorithm that generates highly informative environments. Through theoretical and empirical analysis, we show that $\texttt{AMBER}$ efficiently finds informative environments and improves reward learning.
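The abstract only states that environments are scored information-theoretically, so the following is a minimal, hypothetical sketch of that general idea rather than the authors' AMBER algorithm: given posterior samples over reward weights, each candidate environment is scored by the entropy of the behavior it would elicit, and the highest-scoring one is selected for the next round of observation. All names, dimensions, and the softmax behavior model below are illustrative assumptions.

```python
# Hypothetical sketch of information-theoretic environment selection for IRL.
# NOT the authors' AMBER algorithm; it only illustrates scoring candidate
# environments by how uncertain the current reward posterior is about the
# demonstrator's behavior in them.
import numpy as np

rng = np.random.default_rng(0)

n_features = 4           # reward assumed linear in state features (assumption)
n_posterior_samples = 50
n_candidate_envs = 10
n_states = 6             # each candidate environment exposes a few states

# Posterior samples over reward weights, e.g. from a previous IRL round (assumption).
reward_samples = rng.normal(size=(n_posterior_samples, n_features))

# Candidate environments: each is a (n_states, n_features) feature matrix.
candidate_envs = rng.normal(size=(n_candidate_envs, n_states, n_features))


def behavior_entropy(env_features, reward_samples):
    """Entropy of the posterior-marginal (softmax) choice distribution over states."""
    # Utility of each state under each sampled reward: shape (samples, states).
    utilities = reward_samples @ env_features.T
    # Boltzmann-rational choice probabilities for each posterior sample.
    exp_u = np.exp(utilities - utilities.max(axis=1, keepdims=True))
    probs = exp_u / exp_u.sum(axis=1, keepdims=True)
    # Marginalize over the posterior, then compute the entropy.
    marginal = probs.mean(axis=0)
    return -(marginal * np.log(marginal + 1e-12)).sum()


scores = [behavior_entropy(env, reward_samples) for env in candidate_envs]
best_env = int(np.argmax(scores))
print(f"Most informative candidate environment: {best_env} "
      f"(entropy {scores[best_env]:.3f})")
```

Observing the human in the selected environment should be most informative in this toy setup, because it is where the sampled reward hypotheses disagree most about what the human would do.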
