Poster

Coactive Learning for Large Language Models using Implicit User Feedback

Aaron D. Tucker · Kianté Brantley · Adam Cahall · Thorsten Joachims

Hall C 4-9 #713
Wed 24 Jul 2:30 a.m. PDT — 4 a.m. PDT

Abstract: We propose coactive learning as a model and feedback mechanism for training large language models (LLMs). The key insight is that users provide implicit feedback whenever they edit the text $y$ proposed by an LLM. While the edited text $\bar y$ is typically not a gold-standard example for supervised training, coactive learning merely requires that the edited text $\bar y$ is an improvement over the proposed text $y$. Note that such weak implicit preference feedback $\bar y \succ y$ is available in many application settings on a per-user basis, thus enabling the personalization of LLMs. In this paper, we develop the theoretical basis for coactive training of non-linear models, and we derive CoRLL as the first coactive learning algorithm for LLMs. Empirical results indicate that CoRLL is effective even for weak and noisy coactive preference feedback, making it a promising algorithm for training and personalization of LLMs from feedback that is naturally collected in many use cases.
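The feedback mechanism described in the abstract can be illustrated with a minimal sketch: whenever a user edits the model's proposed text $y$ into $\bar y$, the pair is logged as a weak per-user preference $\bar y \succ y$. The class and method names below are hypothetical illustrations, not the paper's actual code or the CoRLL algorithm itself.

```python
# Illustrative sketch (hypothetical names, not the paper's implementation):
# collect implicit coactive preference pairs from user edits. An edit of the
# proposed text y into \bar{y} yields the weak preference \bar{y} > y.

from dataclasses import dataclass, field


@dataclass
class CoactiveFeedbackLog:
    """Per-user log of implicit preference pairs (edited > proposed)."""

    # user_id -> list of (preferred_text, dispreferred_text)
    pairs: dict = field(default_factory=dict)

    def record_interaction(self, user_id: str, proposed: str, edited: str) -> None:
        # Only a genuine edit carries the weak preference edited > proposed;
        # an unchanged acceptance gives no new pair in this sketch.
        if edited != proposed:
            self.pairs.setdefault(user_id, []).append((edited, proposed))


log = CoactiveFeedbackLog()
log.record_interaction("alice", "Dear Sir,", "Hi team,")  # edit -> one pair
log.record_interaction("alice", "Thanks!", "Thanks!")     # no edit -> no pair
```

Because the pairs are keyed per user, such a log supports the personalization setting the abstract mentions: each user's own edits supply that user's preference data.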
