Adaptive Time Series Reasoning via Segment Selection
Abstract
Time series reasoning tasks increasingly start from a natural language question and require targeted analysis of the underlying time series. Relevant evidence may be global or confined to a few short segments, so the model must decide what to inspect. Most existing methods compress the full series into a fixed representation before inference, preventing question-adaptive analysis. We introduce ARTIST, an approach that formulates time series reasoning as a sequential decision problem and trains models to interleave reasoning with adaptive temporal segment selection. ARTIST uses a controller-reasoner architecture and reinforcement learning to optimize segment selection based on answer correctness, allowing the model to actively acquire task-relevant information during inference. We evaluate ARTIST on six time series reasoning benchmarks against large language models, vision-language models, and prior time series reasoning systems. ARTIST improves average accuracy by 6.46 percentage points over the strongest baseline, with the largest gains on rare-event localization and multi-segment evidence accumulation. Supervised fine-tuning improves performance, and reinforcement learning yields further gains by optimizing question-adaptive segment selection. Across datasets, ARTIST achieves higher accuracy while using a smaller fraction of the input time series, highlighting the importance of learned, selective data utilization for time series reasoning.
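To make the interleaved select-then-reason loop concrete, the sketch below shows a minimal greedy version of question-adaptive segment selection. The scoring function `question_scorer`, the fixed window size, and the step budget are illustrative assumptions for exposition; ARTIST itself learns the selection policy with reinforcement learning rather than using a hand-written heuristic.

```python
import numpy as np

def select_segments(series, question_scorer, max_steps=4, window=16):
    """Greedy sketch of interleaved reasoning and segment selection.

    At each step, candidate windows are scored by a question-conditioned
    relevance function (a stand-in here), the best unread segment is
    selected, and its values are accumulated as evidence. Returns the
    selected start indices, the evidence segments, and the fraction of
    the series actually inspected.
    """
    n = len(series)
    starts = list(range(0, n - window + 1, window))
    selected, evidence = [], []
    for _ in range(max_steps):
        # Only consider segments that have not been inspected yet.
        candidates = [s for s in starts if s not in selected]
        if not candidates:
            break
        scores = [question_scorer(series[s:s + window]) for s in candidates]
        best = candidates[int(np.argmax(scores))]
        selected.append(best)
        evidence.append(series[best:best + window])
    fraction_used = len(selected) * window / n
    return selected, evidence, fraction_used

# Hypothetical usage: for a spike-localization question, relevance could
# be approximated by the largest absolute value inside a window.
series = np.zeros(64)
series[40] = 10.0  # rare event
spike_scorer = lambda seg: float(np.max(np.abs(seg)))
sel, ev, frac = select_segments(series, spike_scorer, max_steps=1)
```

With a single selection step, the controller inspects only the window containing the spike, illustrating how answer-relevant evidence can be found while reading a small fraction of the input.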