BroRL: Scaling Reinforcement Learning via Broadened Exploration
Jian Hu ⋅ Mingjie Liu ⋅ Ximing Lu ⋅ Fang Wu ⋅ Zaid Harchaoui ⋅ Shizhe Diao ⋅ Yejin Choi ⋅ Pavlo Molchanov ⋅ Jun Yang ⋅ Jan Kautz ⋅ Yi Dong
Abstract
Reinforcement Learning with Verifiable Rewards (RLVR) has emerged as a key ingredient for unlocking complex reasoning capabilities in large language models. Recent work, ProRL \citep{liu2025prorl}, has shown promise in scaling RL by increasing the number of training steps. However, performance plateaus after thousands of steps, with clear diminishing returns from allocating more computation to additional training. In this work, we investigate a complementary paradigm for scaling RL, \textbf{BroRL}: increasing the number of rollouts per example to hundreds to exhaustively \textbf{Bro}aden exploration, which yields continuous performance gains beyond the saturation point observed when scaling the number of training steps in ProRL. Our approach is motivated by a mass-balance equation analysis that characterizes the rate of change of the probability mass on correct and incorrect tokens during reinforcement. We show that, under a one-step RL assumption, tokens sampled in rollouts contribute to correct-mass expansion, while tokens outside the sampled rollouts can contribute either gains or losses depending on their distribution and the net reward balance. Importantly, as the number of rollouts per example $N$ increases, the effect of these unsampled terms diminishes, making overall correct-mass expansion more likely. To validate this analysis, we conduct simulations under more relaxed conditions and find that a sufficiently large rollout size $N$, corresponding to ample exploration, reliably expands correct-token mass: in our simulator it increases the probability of every correct token and eliminates knowledge shrinkage. Empirically, BroRL revives models that have saturated after 3K ProRL training steps and delivers robust, continuous improvement, achieving strong results for the 1.5B model across diverse benchmarks. Notably, for the same training time, BroRL is both more data- and compute-efficient: algorithmically, large-$N$ rollouts reduce the number of samples filtered out during dynamic sampling, and on our hardware they nearly double generation throughput relative to ProRL, consistent with shifting generation from a memory-bound regime toward a compute-bound one.
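To make the rollout-scaling argument concrete, here is a minimal toy simulation, not the paper's simulator: it assumes a single-step softmax "bandit" policy over a vocabulary with a fixed set of correct tokens, a $\pm 1$ verifiable reward, and a plain REINFORCE update. All sizes, names, and hyperparameters below are illustrative assumptions. It measures how often one update step expands the total probability mass on correct tokens as the rollout count $N$ grows:

```python
import numpy as np

rng = np.random.default_rng(0)

V, n_correct = 1000, 50                 # vocabulary size; number of "correct" tokens (illustrative)
correct = np.zeros(V, dtype=bool)
correct[:n_correct] = True

def expansion_rate(N, lr=0.5, trials=500):
    """Fraction of trials in which one REINFORCE step with N rollouts
    increases the total probability mass on correct tokens."""
    wins = 0
    for _ in range(trials):
        z = rng.normal(size=V)                  # random initial logits
        p = np.exp(z - z.max()); p /= p.sum()   # softmax policy
        ys = rng.choice(V, size=N, p=p)         # N rollouts sampled from the policy
        r = np.where(correct[ys], 1.0, -1.0)    # verifiable reward: +1 correct, -1 incorrect
        # REINFORCE logit gradient: (1/N) * sum_i r_i * (onehot(y_i) - p)
        g = -p * r.mean()                       # the -p baseline term touches unsampled tokens too
        np.add.at(g, ys, r / N)                 # sampled-token contributions (handles repeats)
        z2 = z + lr * g
        p2 = np.exp(z2 - z2.max()); p2 /= p2.sum()
        wins += p2[correct].sum() > p[correct].sum()
    return wins / trials

for N in (4, 16, 64, 256, 1024):
    print(f"N={N:5d}  P(correct-mass expansion) ~ {expansion_rate(N):.2f}")
```

In this toy setting the expected gradient always expands correct mass, but with small $N$ the sampling noise from the unsampled-token term frequently flips the sign of the update; as $N$ grows, the empirical expansion rate approaches 1, mirroring the claim that the effect of unsampled terms diminishes with larger rollout budgets.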