Process Reward Agents for Steering Knowledge-Intensive Reasoning
Abstract
Reasoning in knowledge-intensive domains remains challenging because intermediate steps are often not locally verifiable: unlike in math or code, evaluating the correctness of a step may require synthesizing clues across large external knowledge sources. As a result, subtle errors can propagate undetected through reasoning traces. Prior work has proposed process reward models (PRMs), including retrieval-augmented variants. However, their reliance on retrieval forces these models to operate post hoc, scoring only completed trajectories, which prevents their integration into dynamic inference procedures. Here, we introduce Process Reward Agents~(PRA), a test-time method that provides domain-grounded, online, step-wise rewards to a frozen reasoner. In contrast to prior retrieval-augmented PRMs, PRA enables search-based decoding that ranks and prunes candidate trajectories at every generation step. Experiments on multiple medical reasoning benchmarks demonstrate that PRA consistently outperforms strong baselines, achieving 80.9\% accuracy on MedQA with Qwen3-4B, a new state of the art at the 4B scale. Crucially, PRA generalizes to unseen frozen policy models ranging from 0.5B to 8B parameters, improving their accuracy by up to +25.7\% without any policy-model updates. Ultimately, PRA suggests a paradigm in which general-purpose frozen reasoners are decoupled from domain-specific, tool-augmented reward modules, enabling new backbones to be deployed in complex domains without retraining. To support reproducibility, we release all code and data in an anonymous repository.
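To make the decoding procedure concrete, below is a minimal sketch of the reward-guided step-wise search the abstract describes: a frozen policy proposes candidate next steps, a reward agent scores each candidate online, and only the top-scoring partial trajectories survive to the next step. All names here (`StepPolicy.generate_step`, `RewardAgent.score_step`, the `ANSWER:` stopping convention) are hypothetical placeholders for illustration, not the paper's actual API.

```python
# Sketch of step-wise beam search guided by an online process reward agent.
# Interfaces are hypothetical placeholders, not the paper's implementation.
from dataclasses import dataclass, field
from typing import List, Protocol


class StepPolicy(Protocol):
    def generate_step(self, question: str, trace: List[str], n: int) -> List[str]:
        """Sample n candidate next reasoning steps given a partial trace."""


class RewardAgent(Protocol):
    def score_step(self, question: str, trace: List[str], step: str) -> float:
        """Return a domain-grounded reward for one candidate step."""


@dataclass
class Beam:
    trace: List[str] = field(default_factory=list)
    score: float = 0.0


def guided_decode(question: str, policy: StepPolicy, pra: RewardAgent,
                  beam_width: int = 4, branch: int = 4, max_steps: int = 8) -> List[str]:
    """Rank and prune candidate trajectories at every generation step."""
    beams = [Beam()]
    for _ in range(max_steps):
        candidates = []
        for beam in beams:
            for step in policy.generate_step(question, beam.trace, branch):
                reward = pra.score_step(question, beam.trace, step)  # online step-wise reward
                candidates.append(Beam(beam.trace + [step], beam.score + reward))
        # Keep only the top-scoring partial trajectories; prune the rest.
        beams = sorted(candidates, key=lambda b: b.score, reverse=True)[:beam_width]
        # Illustrative stopping rule: all surviving beams have emitted an answer.
        if all(b.trace and b.trace[-1].startswith("ANSWER:") for b in beams):
            break
    return max(beams, key=lambda b: b.score).trace
```

Because the reward agent scores each step as it is generated, this loop can prune a flawed trajectory immediately rather than only after a full trace is complete, which is the contrast with post hoc retrieval-augmented PRMs drawn above.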