Unbiased Reward Modeling from Implicit Preference
Abstract
Despite the success of reinforcement learning from human feedback (RLHF) in aligning language models, current reward modeling relies heavily on explicit preference data, which is costly to collect. In this work, we study implicit reward modeling, i.e., learning reward models from implicit human feedback, which offers a cost-effective alternative. We identify two fundamental challenges in implicit reward modeling: (1) implicit preference data lacks definitive negative samples, making standard positive-negative classification methods inapplicable; (2) implicit preference data suffers from user preference bias, where different responses have different propensities to elicit user feedback actions, further complicating the identification of true negatives. To address these challenges, we propose ImplicitRM, which learns unbiased reward models from implicit preference data. ImplicitRM stratifies training samples into four latent groups via a stratification model and, building on this stratification, derives a learning objective through likelihood maximization that we prove is unbiased, resolving both challenges. Experiments demonstrate that ImplicitRM learns accurate reward models across implicit preference datasets. Code is available at https://anonymous.4open.science/r/ImplicitRM-5FB3.
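To make the setup concrete, here is a minimal sketch of one plausible formulation; the symbols and group definitions below are illustrative assumptions on our part, not the paper's exact notation. Suppose each prompt-response pair $(x, a)$ carries a latent preference label $y \in \{0, 1\}$ (preferred or not) and an observed feedback action $o \in \{0, 1\}$ (feedback action taken or not), so that samples fall into four latent groups indexed by $(y, o)$. Writing $r_\theta(x, a)$ for the reward model, which induces $p_\theta(y \mid x, a)$, and letting a stratification model supply the response-dependent propensity $p_\phi(o \mid y, x, a)$, the observed-data log-likelihood marginalizes over the latent label:
\[
\mathcal{L}(\theta, \phi) \;=\; \sum_{i} \log \sum_{y \in \{0, 1\}} p_\phi(o_i \mid y, x_i, a_i)\, p_\theta(y \mid x_i, a_i).
\]
Under this reading, maximizing the marginal likelihood, rather than treating responses that elicited no feedback as negatives, is what corrects for the bias induced by unequal feedback propensities.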