

Oral in Workshop: Models of Human Feedback for AI Alignment

RLHF and IIA: Perverse Incentives

Wanqiao Xu · Shi Dong · Xiuyuan Lu · Grace Lam · Zheng Wen · Benjamin Van Roy

Fri 26 Jul 1 a.m. PDT — 1:10 a.m. PDT
 
Presentation: Models of Human Feedback for AI Alignment
Fri 26 Jul midnight PDT — 8 a.m. PDT

Abstract:

Existing algorithms for reinforcement learning from human feedback (RLHF) can incentivize responses at odds with preferences because they are based on models that assume independence of irrelevant alternatives (IIA). The perverse incentives induced by IIA hinder innovation in query formats and learning algorithms.
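
For context (an illustrative note, not part of the abstract): the preference models standardly used in RLHF, such as Bradley-Terry and its multi-alternative generalization Plackett-Luce, satisfy IIA. Writing r for a reward function and S for the set of responses offered in a query, the Plackett-Luce choice probability is

P(a \mid S) = \frac{\exp(r(a))}{\sum_{b \in S} \exp(r(b))},

so the odds of preferring a over b,

\frac{P(a \mid S)}{P(b \mid S)} = \exp\bigl(r(a) - r(b)\bigr),

do not depend on which other alternatives appear in S. This independence from the rest of S is the IIA assumption the abstract refers to; the symbols r, a, b, and S are introduced here only for illustration.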
