CombinationTS: A Modular Framework for Understanding Time-Series Forecasting Models
Xiaorui Wang ⋅ Fanda Fan ⋅ Chenxi Wang ⋅ Yuxuan Yang ⋅ Rui Tang ⋅ Kuoyu Gao ⋅ Simiao Pang ⋅ Yuanfeng Shang ⋅ Liu ⋅ Gao ⋅ Lei Wang ⋅ Jianfeng Zhan
Abstract
Recent progress in time-series forecasting has led to rapidly increasing architectural complexity, yet many reported state-of-the-art gains are statistically fragile or misattributed. We argue that progress requires a shift from model selection to modular attribution: identifying which components truly drive performance. We propose CombinationTS, a self-contained probabilistic evaluation framework that decomposes forecasting models into orthogonal modules (Input Transformation, Embedding, Encoder, and Decoder) and evaluates them under a shared space of evaluation conditions. By quantifying each component via marginalized effectiveness ($\mu$) and stability ($\sigma^2$), CombinationTS enables robust attribution beyond fragile point estimates. Through large-scale paired evaluation, we uncover the Identity Paradox: once the data view is well designed, a parameter-free Identity encoder often matches or outperforms complex backbones. We further show that explicit structural priors introduced via input transformations yield a more favorable effectiveness–stability trade-off than increasing encoder complexity, establishing a principled baseline for architectural necessity. The code is available at https://anonymous.4open.science/r/CombinationTS.
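The attribution idea in the abstract can be sketched in a few lines: each module variant is scored under the same shared grid of evaluation conditions (paired evaluation), then summarized by its marginalized effectiveness ($\mu$, the mean score) and stability ($\sigma^2$, the score variance). The variant names and scores below are illustrative assumptions, not results or APIs from the paper.

```python
# Hypothetical sketch of paired modular attribution: every encoder variant is
# evaluated under an identical set of conditions (datasets, horizons, seeds),
# and each is summarized by mean (effectiveness) and variance (stability).
# All names and numbers here are made up for illustration.
from statistics import mean, pvariance

# Scores (higher is better) for two encoder variants under the SAME conditions.
scores = {
    "Identity": [0.71, 0.69, 0.70, 0.72],
    "Transformer": [0.74, 0.55, 0.80, 0.60],
}

def attribute(scores):
    """Return {variant: (mu, sigma2)} marginalized over the condition grid."""
    return {name: (mean(vals), pvariance(vals)) for name, vals in scores.items()}

for name, (mu, sigma2) in attribute(scores).items():
    print(f"{name}: mu={mu:.3f}, sigma2={sigma2:.4f}")
```

In this toy example the complex variant has a slightly higher mean but a far larger variance, which is exactly the effectiveness–stability trade-off the framework is designed to expose; a point estimate alone would hide it.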