

Poster

When Can Proxies Improve the Sample Complexity of Preference Learning?

Yuchen Zhu · Daniel Augusto de Souza · Zhengyan Shi · Mengyue Yang · Pasquale Minervini · Matt Kusner · Alexander D'Amour

West Exhibition Hall B2-B3 #W-816
Tue 15 Jul 11 a.m. PDT — 1:30 p.m. PDT

Abstract:

We address the problem of reward hacking, where maximising a proxy reward does not necessarily increase the true reward. This is a key concern for Large Language Models (LLMs), as they are often fine-tuned on human preferences that may not accurately reflect a true objective. Existing work uses various techniques, such as regularisation, tweaks to the reward model, and reward hacking detectors, to limit the influence that such proxy preferences have on a model. Fortunately, in many contexts such as medicine, education, and law, a small amount of expert data is often available. In these cases, it is often unclear whether the addition of proxy data can improve policy learning. We outline a set of sufficient conditions on proxy feedback that, if satisfied, indicate that proxy data can provably improve the sample complexity of learning the ground truth policy. These conditions can inform the data collection process for specific tasks. The result implies a parameterisation for LLMs that achieves this improved sample complexity, and we detail how existing architectures can be adapted to realise it.
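To make the setup concrete, below is a minimal, hypothetical sketch of training a preference (reward) model on abundant proxy preference pairs together with a small set of expert preference pairs. It assumes a shared-representation parameterisation with separate proxy and expert heads, which is only an illustrative stand-in: the paper's actual sufficient conditions and the specific LLM parameterisation are given in the full text, and all names and data here are synthetic.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical sketch: a reward model whose shared encoder is fit on many
# proxy preference pairs, while separate heads score proxy vs. expert
# preferences. The shared-structure assumption is illustrative only.

class SharedRewardModel(nn.Module):
    def __init__(self, dim: int = 32, hidden: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU())
        self.proxy_head = nn.Linear(hidden, 1)   # trained on abundant proxy labels
        self.expert_head = nn.Linear(hidden, 1)  # trained on sparse expert labels

    def forward(self, x, head: str):
        z = self.encoder(x)
        out = self.proxy_head(z) if head == "proxy" else self.expert_head(z)
        return out.squeeze(-1)

def bt_loss(r_chosen, r_rejected):
    # Bradley-Terry preference loss: -log sigmoid(r_chosen - r_rejected)
    return -F.logsigmoid(r_chosen - r_rejected).mean()

# Synthetic stand-ins: many proxy pairs, few expert pairs.
dim = 32
proxy_chosen, proxy_rejected = torch.randn(1024, dim), torch.randn(1024, dim)
expert_chosen, expert_rejected = torch.randn(16, dim), torch.randn(16, dim)

model = SharedRewardModel(dim)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(200):
    opt.zero_grad()
    # Both losses update the shared encoder; only the small expert head
    # must be learned from the scarce expert data.
    loss = bt_loss(model(proxy_chosen, "proxy"), model(proxy_rejected, "proxy")) \
         + bt_loss(model(expert_chosen, "expert"), model(expert_rejected, "expert"))
    loss.backward()
    opt.step()
```

In this sketch, the hope is that the proxy data pins down the shared encoder so that only the low-dimensional expert head needs to be estimated from expert feedback; whether this actually reduces sample complexity is exactly what the paper's sufficient conditions characterise.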

Lay Summary:

We address the challenge of reward hacking, where AI models, such as large language models (LLMs), optimize for proxy rewards—like human preferences—that don’t always align with the true objective. This is especially relevant when LLMs are fine-tuned using human feedback, which may be biased or incomplete. While existing methods try to reduce this issue using techniques like regularization or reward model adjustments, we focus on a different angle. In fields like medicine, education, or law, small amounts of expert data are often available alongside less reliable proxy data. It’s not always clear whether using this additional proxy feedback helps or hurts learning. We identify a set of conditions under which proxy data can reliably improve learning efficiency—reducing the amount of expert data needed. These findings can guide how feedback is collected and used. We also describe how to adapt current LLM architectures to benefit from these insights and achieve better learning outcomes.
