

Poster in Workshop: Workshop on Theoretical Foundations of Foundation Models (TF2M)

RLHF from Heterogeneous Feedback via Personalization and Preference Aggregation

Chanwoo Park · Mingyang Liu · Dingwen Kong · Kaiqing Zhang · Asuman Ozdaglar


Abstract:

Reinforcement learning from human feedback (RLHF) has been an effective technique for aligning AI systems with human values, with remarkable successes recently in fine-tuning large language models. Most existing RLHF paradigms make the underlying assumption that human preferences are relatively \emph{homogeneous} and can be encoded by a single reward model. In this paper, we address the issues arising from the inherent \emph{heterogeneity} of human preferences, as well as labelers' potential \emph{strategic} behavior in providing feedback. Specifically, we propose two frameworks to handle heterogeneous human feedback in principled ways: a personalization-based one and a preference-aggregation-based one. For the former, we propose two approaches, based on representation learning and clustering respectively, for learning \emph{multiple} reward models that trade off the bias (due to preference heterogeneity) and the variance (due to using less data to learn each model under personalization). We then establish sample complexity guarantees for both approaches. For the latter, we aim to adhere to the single-model framework, as already deployed in the current RLHF paradigm, by carefully \emph{aggregating} diverse and truthful preferences from humans. We propose two approaches, based on reward aggregation and preference aggregation respectively: the former utilizes social choice theory to aggregate individual reward models, with sample complexity guarantees; the latter directly aggregates the human feedback given in the form of probabilistic opinions. Under the probabilistic-opinion-feedback model, we also develop an approach to handle strategic human labelers who may bias and manipulate the aggregated preferences with untruthful feedback. Based on ideas from mechanism design, our approach ensures truthful preference reporting, with the induced aggregation rule maximizing social welfare functions.
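
To make the aggregation-based framework concrete, the following is a minimal, hypothetical sketch (not the authors' implementation) of two ideas mentioned in the abstract: combining per-labeler reward models through a social welfare function, and pooling probabilistic opinions over candidate responses. The specific welfare functions (utilitarian, Nash) and pooling rules (linear, geometric), as well as all function names, are illustrative assumptions rather than details taken from the paper.

```python
# Illustrative sketch of reward aggregation and probabilistic-opinion pooling.
# All rule choices and names are assumptions for illustration only.
import numpy as np


def aggregate_rewards(per_labeler_rewards: np.ndarray, rule: str = "utilitarian") -> np.ndarray:
    """Aggregate rewards of shape (num_labelers, num_responses) into one reward per response."""
    if rule == "utilitarian":
        # Utilitarian welfare: mean (equivalently, sum) of individual rewards.
        return per_labeler_rewards.mean(axis=0)
    if rule == "nash":
        # Nash-style welfare: geometric mean of shifted, positive rewards,
        # computed via a mean of logs; a monotone transform of the Nash product.
        shifted = per_labeler_rewards - per_labeler_rewards.min() + 1e-6
        return np.exp(np.log(shifted).mean(axis=0))
    raise ValueError(f"unknown rule: {rule}")


def pool_opinions(opinions: np.ndarray, rule: str = "linear") -> np.ndarray:
    """Pool probabilistic opinions of shape (num_labelers, num_outcomes) into one distribution."""
    if rule == "linear":
        # Linear pooling: average the reported distributions.
        pooled = opinions.mean(axis=0)
    elif rule == "geometric":
        # Geometric pooling: renormalized geometric mean of the distributions.
        pooled = np.exp(np.log(np.clip(opinions, 1e-12, None)).mean(axis=0))
    else:
        raise ValueError(f"unknown rule: {rule}")
    return pooled / pooled.sum()


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    rewards = rng.normal(size=(5, 3))             # 5 labelers scoring 3 candidate responses
    opinions = rng.dirichlet(np.ones(2), size=5)  # 5 labelers' opinions over 2 responses
    print(aggregate_rewards(rewards, "utilitarian"))
    print(aggregate_rewards(rewards, "nash"))
    print(pool_opinions(opinions, "geometric"))
```

In this sketch, the choice of welfare or pooling rule determines how much weight minority preferences receive; the paper's guarantees concern principled versions of such aggregation, including incentives for truthful reporting, which the toy code above does not model.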
