Towards Generalizable EEG-to-fMRI Synthesis via a Unified, Context-Aware Prompting Framework
Abstract
Functional magnetic resonance imaging (fMRI) provides dynamic measurements of human brain activity at high spatial resolution and depth, but its use is constrained by high cost, limited accessibility, and strict acquisition requirements. Synthesizing fMRI data from more accessible, non-invasive modalities such as electroencephalography (EEG) offers a promising alternative, enabling inference of deep brain dynamics from low-cost scalp recordings in naturalistic settings. Despite recent progress, existing EEG-to-fMRI translation methods typically rely on region-specific models and offer limited support for subject-level and dataset-level heterogeneity, restricting their generalizability. We propose UniEFS, a unified EEG-to-fMRI synthesis model that enables full-brain fMRI reconstruction while accommodating varying demographic and physiological contexts within a single model. Our approach leverages a pretrained fMRI decoder to embed rich spatial priors and introduces condition-aware prompt tokens that encode subject-level and experimental metadata, enabling effective handling of heterogeneous datasets. We extensively evaluate our model's performance on eyes-closed resting-state data and demonstrate that it can reliably reconstruct temporally resolved whole-brain fMRI activity, with strong potential to generalize to task-based fMRI and clinical populations in a zero-shot manner.