Breaking the Self-Confirming Loop: Diagnosing and Mitigating Systemic Reward Bias in Self-Rewarding RL
Chuyi Tan ⋅ Peiwen Yuan ⋅ Xinglin Wang ⋅ Yiwei Li ⋅ Shaoxiong Feng ⋅ Yueqi Zhang ⋅ Jiayi Shi ⋅ Ji Zhang ⋅ Boyuan Pan ⋅ Yao Hu ⋅ Kan Li
Abstract
Reinforcement learning with verifiable rewards (RLVR) efficiently scales the reasoning ability of large language models but is bottlenecked by scarce labeled data. Reinforcement learning with intrinsic rewards (RLIR) offers a scalable alternative via self-rewarding, yet often suffers from instability and inferior performance. We trace this gap to a systemic bias in confidence-coupled self-rewarding: the model tends to over-reward high-confidence mistakes, forming a self-confirming loop. We quantify this feedback-loop bias with three metrics: reward noise magnitude ($\rho_{\text{noise}}$), policy–reward coupling ($\rho_{\text{selfbias}}$), and over-/under-reward skew ($\rho_{\text{symbias}}$). Our analyses show a compounding effect where strong coupling amplifies confidence-conditioned errors and drives a drift toward over-reward, leading to instability and a lower performance ceiling. To mitigate this, we propose reinforcement learning with ensembled rewards (RLER), which aggregates diverse models with adaptive reward interpolation and disagreement-aware rollout selection to reduce coupling and suppress over-reward drift. Extensive experiments show that RLER improves by 13.6% over the best RLIR baseline and is within 3.6% of RLVR, while exhibiting stable scaling on unlabeled samples.
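To make the two RLER components concrete, below is a minimal sketch of how reward interpolation and disagreement-aware rollout selection could be combined. It is not the paper's implementation: the function name `rler_rewards`, the fixed interpolation weight `alpha` (the abstract describes it as adaptive), and the disagreement threshold `tau` are all assumptions introduced for illustration.

```python
import numpy as np

def rler_rewards(self_reward, ensemble_rewards, alpha=0.5, tau=0.25):
    """Hypothetical sketch of RLER's two components (not the authors' code).

    self_reward:      (n_rollouts,) intrinsic reward from the policy itself
    ensemble_rewards: (n_models, n_rollouts) rewards from diverse external models
    alpha:            interpolation weight (adaptive in the paper; fixed here)
    tau:              assumed disagreement threshold for rollout selection
    """
    ensemble_mean = ensemble_rewards.mean(axis=0)  # consensus reward
    disagreement = ensemble_rewards.std(axis=0)    # proxy for reward uncertainty

    # Interpolate the self-reward toward the ensemble consensus, weakening
    # the policy-reward coupling that the paper measures as rho_selfbias.
    reward = alpha * self_reward + (1.0 - alpha) * ensemble_mean

    # Disagreement-aware selection: keep rollouts where the ensemble agrees;
    # high-disagreement rollouts are more likely over-rewarded mistakes.
    keep = disagreement <= tau
    return reward, keep

# Example: 3 diverse reward models scoring 4 rollouts.
self_r = np.array([0.9, 0.8, 0.2, 0.7])
ens_r = np.array([[0.8, 0.1, 0.3, 0.6],
                  [0.9, 0.2, 0.2, 0.7],
                  [0.7, 0.9, 0.3, 0.6]])
reward, keep = rler_rewards(self_r, ens_r)
print(reward, keep)  # rollout 2 (high ensemble disagreement) is dropped
```

Interpolating toward an ensemble consensus is one natural reading of "aggregates diverse models with adaptive reward interpolation"; the point of the sketch is only that decoupling the reward from the policy's own confidence breaks the self-confirming loop described above.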