VideoSEG-O3: A Multi-turn Reinforcement Learning Framework for Reasoning Video Object Segmentation
Ming Dai ⋅ Sen Yang ⋅ Boqiang Duan ⋅ Boyuan Tong ⋅ Jiedong Zhuang ⋅ Wankou Yang ⋅ Jingdong Wang
Abstract
Reasoning Video Object Segmentation (RVOS) demands a sophisticated integration of temporal dynamics, spatial details, and linguistic reasoning to achieve precise pixel-level localization. Existing methods are limited to reasoning over fixed initial inputs and lack the capacity to actively acquire further visual evidence, which is often essential for resolving complex references in long or intricate videos. To address this, we propose $\textbf{VideoSEG-O3}$, the first multi-turn reinforcement learning framework for RVOS that emulates the human $\textit{``coarse-to-fine''}$ cognitive process. It employs a $\textit{multi-turn temporal-spatial chain-of-thought}$ to capture fine-grained details by iteratively pinpointing critical temporal intervals and keyframes. Additionally, to enable the policy to perceive segmentation quality beyond the mere text probability of the $\texttt{[SEG]}$ token during the RL stage, we introduce $\textit{SEG-aware logit calibration}$, which integrates pixel-wise segmentation feedback directly into the token-level logits. Furthermore, we design a $\textit{decoupled thinking trace}$ that hierarchically decomposes the reasoning process into temporal, spatial, and linguistic dimensions, and construct $\textbf{VTS-CoT}$, a specialized cold-start dataset featuring comprehensive reasoning trajectories. Extensive experiments demonstrate that VideoSEG-O3 achieves strong performance across 8 mainstream RVOS benchmarks, excelling particularly on long-horizon and complex reasoning tasks.
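The abstract does not spell out the calibration rule. As a rough, hypothetical sketch of the idea only (the function name, the blending weight `alpha`, and the IoU-to-logit mapping are assumptions, not the paper's method), pixel-wise segmentation feedback could be folded into the $\texttt{[SEG]}$ token logit as follows:

```python
import torch

def seg_aware_calibration(seg_logit: torch.Tensor,
                          mask_iou: torch.Tensor,
                          alpha: float = 0.5) -> torch.Tensor:
    """Fold pixel-wise mask quality into the [SEG] token logit (illustrative sketch).

    seg_logit: raw logits of the [SEG] token from the policy's LM head.
    mask_iou:  IoU between predicted and reference masks, values in [0, 1].
    alpha:     hypothetical blending weight (not specified in the abstract).
    """
    # Map mask IoU into logit space so segmentation quality can shift
    # the token-level probability seen by the RL objective.
    quality_logit = torch.logit(mask_iou.clamp(1e-6, 1.0 - 1e-6))
    # Convex combination of the text logit and the quality signal.
    return (1.0 - alpha) * seg_logit + alpha * quality_logit
```

Under this reading, a high-IoU mask raises the effective $\texttt{[SEG]}$ logit, so the policy's probability of emitting $\texttt{[SEG]}$ reflects mask quality rather than text likelihood alone.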