

Spotlight Poster

Model Alignment as Prospect Theoretic Optimization

Kawin Ethayarajh · Winnie Xu · Niklas Muennighoff · Dan Jurafsky · Douwe Kiela

Hall C 4-9 #1204
[ Project Page ]
Tue 23 Jul 2:30 a.m. PDT — 4 a.m. PDT

Abstract: Kahneman & Tversky's $\textit{prospect theory}$ (1992) tells us that humans perceive random variables in a biased but well-defined manner; for example, humans are famously loss-averse. We show that objectives for aligning LLMs with human feedback implicitly incorporate many of these biases---the success of these objectives (e.g., DPO) over cross-entropy minimization can partly be ascribed to their belonging to a family of loss functions that we call $\textit{human-aware losses}$ (HALOs). However, the utility functions these methods attribute to humans still differ from those in the prospect theory literature. Using a Kahneman-Tversky model of human utility, we propose a HALO that directly maximizes the utility of generations instead of maximizing the log-likelihood of preferences, as current methods do. We call this approach KTO, and it matches or exceeds the performance of preference-based methods at scales from 1B to 30B parameters, despite only learning from a binary signal of whether an output is desirable. More broadly, our work suggests that there is no one HALO that is universally superior; the best loss depends on the inductive biases most appropriate for a given setting, an oft-overlooked consideration.
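The abstract describes KTO only at a high level. Below is a minimal sketch of what a KTO-style objective could look like in PyTorch, assuming the general shape implied here: score each output by its policy-versus-reference log-ratio, pass its distance from a reference point through a sigmoid value function (loosely mirroring the Kahneman-Tversky value function), and weight desirable and undesirable outputs asymmetrically. The function name and the hyperparameters (`beta`, `lambda_d`, `lambda_u`) and the scalar `kl_estimate` reference point are illustrative assumptions, not the authors' released implementation.

```python
# Sketch of a KTO-style loss under the assumptions stated above;
# not the official KTO implementation.
import torch

def kto_style_loss(policy_logps, ref_logps, desirable, kl_estimate,
                   beta=0.1, lambda_d=1.0, lambda_u=1.0):
    """policy_logps, ref_logps: (batch,) sequence log-probs of each output.
    desirable: (batch,) bool, True if the output was labeled desirable.
    kl_estimate: scalar reference point (e.g., a batch-level KL estimate)."""
    # Implicit reward: how much more likely the policy makes this output
    # than the frozen reference model does.
    log_ratio = policy_logps - ref_logps
    # A desirable output gains value as its reward rises above the reference
    # point; an undesirable output gains value as its reward falls below it.
    value_gain = torch.sigmoid(beta * (log_ratio - kl_estimate))
    value_loss = torch.sigmoid(beta * (kl_estimate - log_ratio))
    # Asymmetric weighting of the two cases (loss-aversion-style knobs).
    losses = torch.where(desirable,
                         lambda_d * (1 - value_gain),
                         lambda_u * (1 - value_loss))
    return losses.mean()

# Example with dummy tensors: four outputs, two labeled desirable.
policy_logps = torch.tensor([-12.3, -15.0, -9.8, -20.1])
ref_logps = torch.tensor([-13.0, -14.2, -10.5, -18.7])
desirable = torch.tensor([True, False, True, False])
print(kto_style_loss(policy_logps, ref_logps, desirable, kl_estimate=0.0))
```

Note that, unlike DPO-style objectives, this sketch never pairs outputs: each example contributes on its own, needing only the binary desirable/undesirable label the abstract mentions.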
