Invited talk in Workshop: Interactive Learning with Implicit Human Feedback
Daniel Brown: Pitfalls and paths forward when learning rewards from human feedback
Abstract:
Human feedback is often incomplete, suboptimal, biased, and ambiguous, leading to misidentification of the human's true reward function and, in turn, suboptimal agent behavior. I will discuss these pitfalls as well as some of our recent work that seeks to overcome these problems via techniques that calibrate to user biases, learn from multiple feedback types, use human feedback to align robot feature representations, and enable interpretable reward learning.