On Information Self-Locking in Reinforcement Learning for Active Reasoning
Abstract
Reinforcement learning (RL) with outcome-based rewards has achieved significant success in training large language model (LLM) agents for complex reasoning tasks. However, in active reasoning, where agents must strategically ask questions to acquire task-relevant information, we find that LLM agents trained with RL often suffer from information self-locking: the agent stops asking informative questions and instead commits to uninformative decisions. To understand this phenomenon, we decompose active reasoning into two core capabilities: Action Selection (AS), which determines the observation stream through queries, and Belief Tracking (BT), which updates the agent’s belief based on collected evidence. We show that weak AS and BT capabilities limit information exploration during RL training. In turn, insufficient exploration hinders the improvement of AS and BT, creating a self-reinforcing feedback loop that locks the agent in a low-information regime. To resolve this issue, we propose a simple yet effective approach that directly promotes AS capability using proxy AS signals, helping the agent escape the low-information regime. Extensive experiments on six benchmarks show that our approach mitigates information self-locking and yields improvements of up to 10%.
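To make the abstract's idea of a proxy AS signal concrete, the following is a minimal illustrative sketch, not the paper's actual method: it assumes the proxy signal is an entropy-reduction bonus added to the sparse outcome reward, and all function names (`belief_entropy`, `proxy_as_reward`, `shaped_return`) and the weighting parameter `lam` are hypothetical.

```python
import math

# Illustrative sketch of proxy-AS reward shaping (an assumption, not the
# paper's exact formulation): augment the sparse outcome reward with a
# per-step bonus that scores how informative each question was.

def belief_entropy(belief):
    """Shannon entropy (nats) of the agent's belief over candidate answers."""
    return -sum(p * math.log(p) for p in belief.values() if p > 0)

def proxy_as_reward(belief_before, belief_after):
    """Proxy Action-Selection signal: entropy reduction achieved by a query.

    A question that sharpens the belief (larger entropy drop) earns a larger
    bonus; an uninformative question earns roughly zero.
    """
    return max(0.0, belief_entropy(belief_before) - belief_entropy(belief_after))

def shaped_return(outcome_reward, belief_trajectory, lam=0.1):
    """Combine the outcome reward with summed per-step proxy AS bonuses."""
    bonus = sum(
        proxy_as_reward(b_prev, b_next)
        for b_prev, b_next in zip(belief_trajectory, belief_trajectory[1:])
    )
    return outcome_reward + lam * bonus

if __name__ == "__main__":
    # Toy episode: two queries collapse a uniform belief over 4 candidates.
    beliefs = [
        {"A": 0.25, "B": 0.25, "C": 0.25, "D": 0.25},  # before any question
        {"A": 0.5, "B": 0.5, "C": 0.0, "D": 0.0},      # after question 1
        {"A": 1.0, "B": 0.0, "C": 0.0, "D": 0.0},      # after question 2
    ]
    print(shaped_return(outcome_reward=1.0, belief_trajectory=beliefs, lam=0.1))
```

Under this sketch, an agent that only repeats uninformative questions collects no bonus, so the shaped return gives gradient signal toward informative queries even before the outcome reward is reachable, which is the escape mechanism the abstract describes.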