Active Exploring like a Pigeon: Reinforcing Spatial Reasoning via Agentic Vision-Language Models
Abstract
Enabling Vision-Language Models (VLMs) to perform spatial reasoning remains challenging. Existing approaches treat VLMs as passive observers, which limits their applicability to real-world settings. Moreover, reinforcement learning methods rely on sparse rewards, limiting their effectiveness on complex reasoning tasks. Inspired by how pigeons build and exploit cognitive maps for navigation, we propose a novel agentic pipeline for spatial reasoning. First, we introduce a dynamic cognitive map that parameterizes scene layout as object positions and orientations, serving as persistent memory across new observations. Second, we propose Spatial Assertion Codes (SAC), Python expressions that programmatically describe spatial relationships. In combination with the dynamic cognitive map, SAC enables verification of intermediate reasoning steps, providing dense reward signals. We optimize the model via supervised and reinforcement fine-tuning. Experiments on the MindCube benchmark demonstrate state-of-the-art performance with 80.5% overall accuracy, surpassing the best prior method by 53.2% on the challenging ROTATION subset. We will release the code and data soon.
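To make the SAC-plus-cognitive-map mechanism concrete, the following is a minimal hypothetical sketch, not the authors' released implementation: the names `cognitive_map` and `sac_reward`, the map schema, and the example assertion strings are all assumptions. It illustrates how Python boolean expressions over a parameterized scene layout could be checked to yield a dense reward.

```python
# Hypothetical sketch of Spatial Assertion Codes (SAC).
# The map schema and function names below are illustrative assumptions,
# not the paper's actual implementation.

# A dynamic cognitive map parameterizing scene layout as object
# positions (x, y) and orientations (degrees), as described in the abstract.
cognitive_map = {
    "chair": {"pos": (1.0, 0.0), "ori": 90.0},
    "table": {"pos": (3.0, 0.0), "ori": 0.0},
}

def sac_reward(assertions, cmap):
    """Evaluate each SAC (a Python boolean expression over the map)
    and return the fraction that holds -- a dense reward signal."""
    env = {
        "pos": {k: v["pos"] for k, v in cmap.items()},
        "ori": {k: v["ori"] for k, v in cmap.items()},
        "abs": abs,  # expose only the helpers the expressions need
    }
    passed = sum(bool(eval(a, {"__builtins__": {}}, env)) for a in assertions)
    return passed / len(assertions)

# SACs emitted by the model as intermediate reasoning steps.
assertions = [
    "pos['chair'][0] < pos['table'][0]",  # chair is left of table
    "abs(ori['chair'] - 90.0) < 15.0",    # chair orientation near 90 degrees
]
reward = sac_reward(assertions, cognitive_map)  # 1.0 when both assertions hold
```

Because each assertion is checked independently against the map, partially correct reasoning chains still earn partial credit, which is what makes the reward signal dense rather than sparse.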