ViEEG: Hierarchical Visual Neural Representation for EEG Brain Decoding
Abstract
Understanding and decoding brain activity into visual representations is a fundamental challenge at the intersection of neuroscience and artificial intelligence. While electroencephalogram (EEG) visual decoding has shown promise due to its non-invasive and low-cost nature, existing methods suffer from Hierarchical Neural Encoding Neglect (HNEN), a critical limitation in which flat neural representations fail to model the brain’s hierarchical visual processing. Inspired by the hierarchical organization of the visual cortex, we propose ViEEG, a neuro-inspired framework that addresses HNEN. ViEEG decomposes each visual stimulus into three biologically aligned components, namely contour, foreground object, and contextual scene, which serve as anchors for a three-stream EEG encoder. These EEG features are progressively integrated via cross-attention routing, simulating cortical information flow from low-level to high-level vision. We further adopt hierarchical contrastive learning to align EEG and CLIP representations, enabling zero-shot object recognition. Extensive experiments on the THINGS-EEG dataset demonstrate that ViEEG outperforms previous methods by a large margin in both subject-dependent and subject-independent settings. Results on the THINGS-MEG dataset further confirm ViEEG's generalization to different neural modalities. Our framework not only advances the performance frontier but also sets a new paradigm for EEG brain decoding. Code and pretrained models will be made available.
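To make the hierarchical contrastive alignment concrete, the sketch below shows a symmetric InfoNCE objective applied independently at each of the three levels (contour, object, scene) and summed. This is a minimal numpy illustration under our own assumptions, not the paper's implementation: the function names (`info_nce`, `hierarchical_loss`), the temperature value, and the equal level weights are hypothetical, and the real model would use learned EEG encoders and frozen CLIP image embeddings in place of the raw arrays here.

```python
import numpy as np

def _cross_entropy_diag(logits):
    # Mean cross-entropy with targets on the diagonal (row-wise softmax),
    # i.e. each EEG sample should match its own image embedding.
    shifted = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

def info_nce(eeg, img, temperature=0.07):
    """Symmetric InfoNCE loss between EEG and image embeddings.

    eeg, img: (batch, dim) arrays; matching rows are positive pairs.
    """
    eeg = eeg / np.linalg.norm(eeg, axis=1, keepdims=True)  # L2-normalize
    img = img / np.linalg.norm(img, axis=1, keepdims=True)
    logits = eeg @ img.T / temperature  # (batch, batch) similarity matrix
    # Average the EEG->image and image->EEG directions.
    return 0.5 * (_cross_entropy_diag(logits) + _cross_entropy_diag(logits.T))

def hierarchical_loss(eeg_levels, img_levels, weights=(1.0, 1.0, 1.0)):
    """Sum the per-level InfoNCE losses over (contour, object, scene)."""
    return sum(w * info_nce(e, i)
               for w, e, i in zip(weights, eeg_levels, img_levels))
```

At inference, zero-shot recognition would score a test EEG embedding against the CLIP embeddings of candidate categories and pick the nearest one, so no classifier head is retrained for unseen classes.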