EpiTwin: Spatiotemporal Graph Transformers for Epileptic sEEG Signal Reconstruction
Abstract
Stereotactic electroencephalography (sEEG) provides temporally precise intracranial recordings but is inherently constrained by sparse and irregular spatial sampling, owing to clinical limitations on electrode implantation. Signal reconstruction in this setting aims to infer neural activity at unmonitored locations, potentially expanding the coverage of neural recordings without increasing the number of implanted electrodes. However, most existing sEEG reconstruction methods underutilize the spatial information of electrode contacts in both encoding and modeling, and they rely on deterministic objectives that favor average patterns, leading to over-smoothed reconstructions. We propose EpiTwin, a conditional spatial graph transformer for sEEG signal reconstruction, comprising three key components. Hybrid Spatial Positional Encoding (HSPE) constructs explicit spatial identities from electrode coordinates, graph topology, and anatomical priors. Geometry–Functional Biased Attention (GFBA) incorporates geometric-distance and data-driven functional-similarity biases into the attention computation. Adversarial Refinement Training employs a multi-scale discriminator to counter reconstruction over-smoothing. Experiments on real-world clinical sEEG data demonstrate that EpiTwin consistently achieves lower reconstruction error under electrode series-level masking, outperforming recent foundation models such as LaBraM with a 16.8% relative reduction in RMSE. Furthermore, EpiTwin effectively mitigates spectral over-smoothing and improves reconstruction fidelity.
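To make the idea of distance-biased attention concrete, the following is a minimal NumPy sketch of scaled dot-product attention with an additive geometric bias. It is an illustration only: the actual GFBA formulation (the parameterization of the geometric bias, the learned functional-similarity term, and how the two are combined) is defined in the paper itself, and the function name, the bias form `-alpha * dist`, and the scalar `alpha` here are assumptions made for exposition.

```python
import numpy as np

def geometry_biased_attention(Q, K, V, coords, alpha=1.0):
    """Scaled dot-product attention with an additive geometric-distance bias.

    Illustrative sketch only; not the paper's GFBA implementation.
    Q, K, V : (n, d) query/key/value matrices for n electrode contacts.
    coords  : (n, 3) electrode contact coordinates (e.g., in mm).
    alpha   : strength of the distance penalty (hypothetical scalar here;
              GFBA also adds a data-driven functional-similarity bias).
    """
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)  # (n, n) attention logits
    # Pairwise Euclidean distances between contacts, used as a negative bias
    # so that spatially distant contacts receive less attention.
    dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    scores = scores - alpha * dist
    # Numerically stable row-wise softmax.
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w = w / w.sum(axis=-1, keepdims=True)
    return w @ V
```

Under this sketch, reconstructing an unmonitored contact amounts to querying with its positional encoding so that attention aggregates signals from nearby, functionally similar contacts.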