Time-PEFT: Temporal and Multichannel Complexity-Based Fine-Tuning for Time-Series Foundation Models
Jihye Na ⋅ Patara Trirat ⋅ Chanyoung Park ⋅ Jae-Gil Lee
Abstract
Recent studies have attempted to fine-tune time-series foundation models to enhance forecasting performance on a target dataset. However, these approaches proceed without a clear criterion for identifying complex datasets that require fine-tuning because zero-shot forecasting degrades on them. To distinguish such challenging datasets from standard benchmarks, we introduce data-driven temporal complexity and multichannel complexity. *Temporal complexity* captures the difficulty of identifying distinct patterns by quantifying spectral entropy in the frequency domain, while *multichannel complexity* captures inter-channel dependencies by measuring the channel information flow that affects predictive uncertainty. These metrics serve as *effective proxies for the performance gains* achievable through fine-tuning. Building on the two metrics, we develop *Time-PEFT*, a parameter-efficient fine-tuning framework that incorporates a frequency adapter for top-$k$ filtering and a channel adapter for multichannel modeling. *Time-PEFT* improves forecasting performance by up to 2.51 times over existing fine-tuning techniques on complex datasets.
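As an illustration of the temporal-complexity idea, a minimal sketch of spectral entropy in NumPy follows; the function name and normalization details are our own assumptions and may differ from the metric used in *Time-PEFT*. Values near 1 indicate a flat, noise-like spectrum (hard to forecast), while values near 0 indicate a few dominant frequencies (easy, regular patterns).

```python
import numpy as np

def spectral_entropy(x):
    """Normalized Shannon entropy of the power spectrum of series x.
    Illustrative sketch only; not the exact Time-PEFT formulation."""
    psd = np.abs(np.fft.rfft(x - np.mean(x))) ** 2
    psd = psd[1:]                        # drop the DC component
    p = psd / psd.sum()                  # normalize to a distribution
    p = p[p > 0]                         # avoid log(0)
    return float(-(p * np.log(p)).sum() / np.log(len(psd)))

# A pure sinusoid concentrates power in one bin (low entropy);
# white noise spreads power across all bins (high entropy).
t = np.linspace(0, 1, 512, endpoint=False)
low = spectral_entropy(np.sin(2 * np.pi * 8 * t))
rng = np.random.default_rng(0)
high = spectral_entropy(rng.standard_normal(512))
```

Under the paper's framing, a dataset whose series score high on such a metric would be a candidate for fine-tuning rather than zero-shot use.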