Spurious Rewards: Rethinking Training Signals in RLVR
Abstract
We show that reinforcement learning with verifiable rewards (RLVR) can elicit strong mathematical reasoning in certain language models even with spurious rewards that have little, no, or outright negative correlation with the correct answer. For example, RLVR training with GRPO improves MATH-500 performance for Qwen2.5-Math-7B by 21.4 absolute percentage points using randomly assigned rewards, nearly matching the 29.1 points gained with ground-truth rewards. To explain this counterintuitive observation, we show that GRPO exhibits a bias arising from its clipping term, which can amplify high-prior behaviors learned during pre-training even in the absence of informative rewards. As a case study, we identify one such high-prior behavior in Qwen2.5-Math models, which we term code reasoning: reasoning in code without actually executing any code. The frequency of code reasoning rises from 65% to over 90% under training with spurious rewards. The presence of such amplifiable behaviors, however, is highly model-dependent: spurious rewards that are effective for Qwen models often fail to produce gains for other model families, such as Llama3 or OLMo2. Our results highlight the importance of validating RL methods across diverse models rather than relying on a single de facto choice, since large performance gains can appear on Qwen models even with random rewards and need not reflect genuine capability improvements.
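For concreteness, the GRPO objective referenced above takes the standard clipped-surrogate form introduced by DeepSeekMath; the notation below is ours, and the KL regularization term is omitted for brevity:
\[
\mathcal{J}(\theta) = \mathbb{E}_{q,\;\{o_i\}_{i=1}^{G}\sim\pi_{\theta_{\mathrm{old}}}(\cdot\mid q)}\left[\frac{1}{G}\sum_{i=1}^{G}\frac{1}{|o_i|}\sum_{t=1}^{|o_i|}\min\Big(r_{i,t}(\theta)\,\hat{A}_i,\;\operatorname{clip}\big(r_{i,t}(\theta),\,1-\epsilon,\,1+\epsilon\big)\,\hat{A}_i\Big)\right],
\]
where \(r_{i,t}(\theta)=\pi_\theta(o_{i,t}\mid q,o_{i,<t})/\pi_{\theta_{\mathrm{old}}}(o_{i,t}\mid q,o_{i,<t})\) is the token-level probability ratio and \(\hat{A}_i=\big(R_i-\operatorname{mean}(\{R_j\}_{j=1}^{G})\big)/\operatorname{std}(\{R_j\}_{j=1}^{G})\) is the group-normalized advantage of rollout \(o_i\) with reward \(R_i\). Once \(\pi_\theta\) drifts from \(\pi_{\theta_{\mathrm{old}}}\) within an update, the \(\operatorname{clip}\) operator treats upweighting and downweighting asymmetrically, which is one route by which uninformative rewards can still push probability mass toward behaviors the model already favors.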
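To make "spurious" concrete, the sketch below contrasts the kinds of reward signals the abstract discusses. This is hypothetical Python assuming a binary reward per rollout; the function names and the answer-extraction heuristic are illustrative, not the paper's code:
\begin{verbatim}
import random

def extract_answer(response: str) -> str:
    # Illustrative parser: treat the last whitespace-separated token as
    # the final answer (real evaluation pipelines parse more carefully).
    tokens = response.strip().split()
    return tokens[-1] if tokens else ""

def ground_truth_reward(response: str, label: str) -> float:
    # Standard RLVR signal: 1 if the parsed answer matches the label.
    return 1.0 if extract_answer(response) == label else 0.0

def random_reward(response: str, label: str) -> float:
    # Spurious signal with ~zero correlation to correctness: a coin flip
    # that ignores the response entirely.
    return 1.0 if random.random() < 0.5 else 0.0

def inverted_reward(response: str, label: str) -> float:
    # Spurious signal negatively correlated with correctness:
    # rewards only incorrect answers.
    return 1.0 - ground_truth_reward(response, label)
\end{verbatim}
With random_reward, the group-normalized advantage is statistically independent of answer quality, so in expectation the reward itself carries no learning signal; the gains reported above are instead attributed to the clipping bias amplifying behaviors, such as code reasoning, that the pre-trained model already favors.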