Poster in Workshop: Models of Human Feedback for AI Alignment

AMBER: An Entropy Maximizing Environment Design Algorithm for Inverse Reinforcement Learning

Paul Nitschke · Lars L. Ankile · Eura Nofshin · Siddharth Swaroop · Finale Doshi-Velez · Weiwei Pan


Abstract: In Inverse Reinforcement Learning (IRL), we infer a human's underlying reward function from observations of their behavior. Recent work shows that the reward function can be learned more accurately by observing the human in multiple related environments, but efficiently finding informative environments remains an open problem. We present AMBER, an information-theoretic algorithm that generates highly informative environments. Through theoretical and empirical analysis, we show that AMBER efficiently finds informative environments and improves reward learning.
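The abstract sketches AMBER's core idea: score candidate environments by how much observing the human there would reveal about the reward function. Below is a minimal illustrative sketch of one such information-theoretic criterion, expected information gain under a discrete belief over reward hypotheses. This is an assumption-laden toy, not the authors' actual AMBER algorithm; all function names (`info_gain`, `select_environment`) and the discrete-hypothesis setup are hypothetical.

```python
import numpy as np

def entropy(p, eps=1e-12):
    """Shannon entropy of a probability vector."""
    p = np.asarray(p, dtype=float)
    return float(-np.sum(p * np.log(p + eps)))

def info_gain(traj_probs, posterior):
    """Mutual information between the reward hypothesis and the trajectory
    we would observe in one candidate environment:
    I(R; tau) = H(tau) - sum_r P(r) H(tau | r).

    traj_probs: (n_hypotheses, n_trajectories); row r is the distribution
        over trajectories a human with reward hypothesis r would produce.
    posterior:  (n_hypotheses,) current belief over reward hypotheses.
    """
    marginal = posterior @ traj_probs  # P(tau) under the current belief
    conditional = sum(p * entropy(row) for p, row in zip(posterior, traj_probs))
    return entropy(marginal) - conditional

def select_environment(candidate_traj_probs, posterior):
    """Greedily pick the environment with the highest expected information
    gain about the human's reward function."""
    scores = [info_gain(tp, posterior) for tp in candidate_traj_probs]
    return int(np.argmax(scores))

# Toy usage: two reward hypotheses, two candidate environments.
posterior = np.array([0.5, 0.5])
env_a = np.array([[0.90, 0.05, 0.05],   # hypotheses act very differently here
                  [0.05, 0.05, 0.90]])
env_b = np.array([[0.40, 0.30, 0.30],   # hypotheses act almost identically here
                  [0.35, 0.35, 0.30]])
print(select_environment([env_a, env_b], posterior))  # -> 0 (env_a)
```

In this toy example the criterion prefers the first environment: the two hypotheses produce very different behavior there, so observing the human in it disambiguates them, whereas the second environment yields nearly the same behavior under either hypothesis and teaches us little.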
