HELIX: Hybrid Encoding with Learnable Identity and Cross-dimensional Synthesis for Time Series Imputation
Abstract
Time series imputation benefits from leveraging cross-feature correlations, yet existing attention-based methods re-discover feature relationships at each layer, lacking persistent anchors that maintain consistent representations. To address this, we propose HELIX, which assigns each feature a learnable feature identity: a persistent embedding that captures intrinsic semantic properties throughout the network. Unlike graph-based methods that rely on predefined topology and assume homogeneous spatial relationships, HELIX learns arbitrary feature dependencies end-to-end from temporal co-variation, naturally handling datasets where features mix spatial locations with semantic variables. Integrated with hybrid temporal-feature attention, HELIX achieves state-of-the-art performance, ranking first among 17 methods across 21 experimental settings. Furthermore, our mechanistic analysis reveals that feature attention progressively aligns with the underlying physical structure across layers, demonstrating that the model exploits cross-feature dependencies for imputation more effectively.
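The core idea of a feature identity, as described above, is a per-feature embedding that is learned once and reused at every layer rather than re-derived by attention. A minimal PyTorch sketch of this idea follows; the class and parameter names (`FeatureIdentity`, `d_model`) are illustrative assumptions, not the authors' actual implementation.

```python
import torch
import torch.nn as nn


class FeatureIdentity(nn.Module):
    """One learnable embedding per feature, shared across all network layers.

    Hypothetical sketch of the 'persistent anchor' idea: each feature's
    identity vector is added to its hidden state so that every layer sees
    the same feature-specific signal.
    """

    def __init__(self, num_features: int, d_model: int):
        super().__init__()
        # (num_features, d_model): one persistent vector per feature.
        self.identity = nn.Parameter(torch.randn(num_features, d_model) * 0.02)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, num_features, d_model) per-feature hidden states.
        # Broadcasting adds the same identity at every batch index and timestep.
        return x + self.identity
```

Because the identity is a module-level parameter rather than an attention output, it provides a stable representation that downstream temporal and feature attention can condition on at every layer.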