Harnessing Spectrum Video for Subject-Level Few-Shot and Cross-Montage EEG Generalization
Abstract
Existing EEG models are limited by electrode heterogeneity and rigid "channel-first" architectures that treat sensors as independent features. We propose Brain Signal Rendering (BSR), which reinterprets EEG as a physical projection of neural activity and transforms raw signals into geometry-aware Spectrum Videos. Using VideoMAE for self-supervised pre-training, BSR learns robust, layout-agnostic spatiotemporal representations that preserve neural topology. We further introduce subject-level few-shot learning and cross-montage fine-tuning to rigorously evaluate generalization across subjects and electrode configurations. Experiments show that a VideoMAE model integrated with the BSR framework significantly outperforms state-of-the-art spectrum-based methods, providing a scalable and data-efficient foundation for generalizable EEG modeling.