Quantifying the Generalization Gap in Seizure Detection: A Large-Scale Empirical Benchmark via the SzCORE Challenge
Abstract
Reliable automatic seizure detection from long-term electroencephalogram (EEG) recordings remains an unsolved challenge, as current models often fail to generalize across patients or clinical settings. Manual EEG review is still the standard of care, highlighting the need for robust models and standardized evaluation. The current literature often reports high efficacy, yet published models frequently fail when deployed on unseen patient populations. To rigorously assess this generalization gap, we conducted a large-scale empirical study evaluating 28 state-of-the-art algorithmic architectures, ranging from classical feature engineering to modern deep learning. These algorithms were collected through an open competition, the SzCORE Challenge. Algorithm performance was evaluated on a strictly held-out private dataset of continuous EEG recordings from 65 subjects, totaling 4'360 hours of data. Expert neurophysiologists annotated these recordings, establishing the ground truth for seizure events. Algorithms were evaluated using event-based metrics from the SzCORE framework, including sensitivity, precision, F1-score, and false-positive rate per day. Results revealed significant performance variability among state-of-the-art approaches, with a top F1-score of 32% (sensitivity 37%, precision 29%), highlighting the persistent difficulty of this task for current machine learning methodologies. Our analysis uncovered a discordance between peak performance and population-level stability: the algorithms with the highest aggregate F1-scores did not rank most consistently across subjects, indicating high performance variance and susceptibility to failure on outlier patients. This independent evaluation also exposed a notable gap between self-reported and held-out performance, underscoring the critical need for standardized, rigorous benchmarking in developing clinically viable ML models. A comparison with previous challenges and commercial systems indicates that the best algorithm in this study surpassed prior methods. Critically, the evaluation infrastructure now operates as a continuously open benchmarking platform, fostering reproducible research and accelerating the development of robust seizure detection algorithms by allowing ongoing submissions and the integration of additional private datasets. Clinical centers can also adopt this platform to evaluate seizure detection algorithms on their own EEG data using a standardized, reproducible framework.
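To illustrate how the headline numbers above relate, the following is a minimal sketch, assuming only event counts (true positives, false positives, false negatives) and total recording hours as inputs. The function name and counts are illustrative assumptions; this is not the official SzCORE scoring code, which additionally defines how predicted and reference seizure events are matched before counting.

```python
def event_metrics(tp: int, fp: int, fn: int, hours: float) -> dict:
    """Sketch of event-based sensitivity, precision, F1-score, and FP/day.

    Not the official SzCORE implementation: SzCORE also specifies the
    event-matching rules that produce tp/fp/fn; here they are given.
    """
    sensitivity = tp / (tp + fn) if tp + fn else 0.0  # fraction of true seizures detected
    precision = tp / (tp + fp) if tp + fp else 0.0    # fraction of detections that are true
    f1 = (2 * sensitivity * precision / (sensitivity + precision)
          if sensitivity + precision else 0.0)        # harmonic mean of the two
    fp_per_day = fp / (hours / 24.0)                  # false alarms per 24 h of EEG
    return {"sensitivity": sensitivity, "precision": precision,
            "f1": f1, "fp_per_day": fp_per_day}

# Hypothetical counts (not taken from the study) chosen to roughly
# reproduce the reported rates: sensitivity 0.37, precision ~0.29.
print(event_metrics(tp=37, fp=91, fn=63, hours=4360.0))
```

As a consistency check, sensitivity 0.37 and precision 0.29 give F1 = 2 x 0.37 x 0.29 / (0.37 + 0.29) ≈ 0.325, matching the ~32% reported once the rounded values are accounted for.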