BOOSTAPR: Boosting Automated Program Repair via Execution-Grounded Reinforcement Learning with Dual Reward Models
Abstract
Reinforcement learning for program repair is hindered by sparse execution feedback and coarse sequence-level rewards that obscure which edits actually fix bugs. We present BoostAPR, a three-stage framework: (1) supervised fine-tuning on execution-verified demonstrations with reasoning traces; (2) training dual reward models, a sequence-level assessor and a line-level credit allocator, from execution outcomes; and (3) PPO optimization in which the line-level model redistributes reward to the critical edit regions. This line-level credit assignment operates at a granularity intermediate between token-level and sequence-level rewards, a scale naturally suited to code changes. Trained on SWE-Gym and evaluated on four benchmarks, BoostAPR achieves 40.7% on SWE-bench Verified (+22.9pp over the base model), 24.8% on Defects4J (Python→Java transfer), 84.5% on HumanEval-Java, and 95.0% on QuixBugs, demonstrating performance competitive with open-source baselines and strong cross-language generalization.
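The core idea of line-level credit assignment can be illustrated with a minimal sketch. The function name, the proportional-allocation rule, and the uniform fallback below are illustrative assumptions, not the paper's actual reward-model implementation:

```python
def redistribute_reward(seq_reward, line_scores):
    """Split a sequence-level reward across edited lines.

    seq_reward: scalar reward for the whole patch (e.g. from the
        sequence-level assessor; assumed here for illustration).
    line_scores: non-negative per-line credit scores (e.g. from a
        line-level credit allocator; also assumed).
    Returns a per-line reward list summing to seq_reward.
    """
    total = sum(line_scores)
    if total == 0:
        # No line stands out: fall back to uniform allocation.
        n = len(line_scores)
        return [seq_reward / n] * n
    # Allocate reward proportionally to each line's credit score,
    # concentrating the learning signal on critical edit regions.
    return [seq_reward * s / total for s in line_scores]

# A patch scoring 1.0 overall, where the first line carries
# half the credit:
per_line = redistribute_reward(1.0, [2.0, 1.0, 1.0])
# → [0.5, 0.25, 0.25]
```

In a PPO setup, such per-line rewards would replace a single terminal reward when computing advantages, so gradient updates emphasize the tokens in the lines most responsible for the fix.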