

Oral in Workshop: Machine Learning for Multimodal Healthcare Data

SIM-CNN: Self-Supervised Individualized Multimodal Learning for Stress Prediction on Nurses Using Biosignals

Sunmin Eom · Sunwoo Eom · Peter Washington

Keywords: [ Data sparsity, incompleteness and complexity ] [ Multimodal fusion ] [ Multimodal biomarkers ] [ Benchmarking, domain shifts, and generalization ]


Abstract:

Precise stress recognition from biosignals is inherently challenging due to the heterogeneous nature of stress, individual physiological differences, and the scarcity of labeled data. To address these issues, we developed SIM-CNN, a self-supervised learning (SSL) method for personalized stress-recognition models using multimodal biosignals. SIM-CNN involves training a multimodal 1D convolutional neural network (CNN) that leverages SSL to utilize massive unlabeled data, optimizing parameters and hyperparameters for each individual in pursuit of precision health. SIM-CNN is evaluated on a real-world multimodal dataset collected from nurses that consists of 1,250 hours of biosignals, 83 hours of which are explicitly labeled with stress levels. SIM-CNN is pre-trained on the unlabeled biosignal data with next-step time series forecasting and fine-tuned on the labeled data for stress classification. Compared to SVMs and baseline CNNs with an identical architecture but without self-supervised pre-training, SIM-CNN shows clear improvements in average AUC and accuracy, but further examination of the data also suggests intrinsic limitations of patient-specific stress recognition using biosignals recorded in the wild.
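For readers who want a concrete picture of the two-stage recipe the abstract describes, below is a minimal PyTorch sketch: a shared 1D-CNN encoder is first pre-trained on unlabeled signal windows with next-step forecasting, then fine-tuned with a stress-classification head on the labeled data. The layer sizes, channel count, window length, loss choices, and hyperparameters are all illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only -- the actual SIM-CNN architecture and training
# details are not specified on this page; all sizes below are assumptions.
import torch
import torch.nn as nn

class Encoder1D(nn.Module):
    """Shared 1D-CNN encoder over stacked multimodal biosignal channels."""
    def __init__(self, n_channels: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_channels, hidden, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=5, padding=2),
            nn.ReLU(),
        )

    def forward(self, x):           # x: (batch, n_channels, time)
        return self.net(x)          # (batch, hidden, time)

class ForecastHead(nn.Module):
    """Self-supervised head: predict the next time step for every channel."""
    def __init__(self, hidden: int, n_channels: int):
        super().__init__()
        self.proj = nn.Conv1d(hidden, n_channels, kernel_size=1)

    def forward(self, h):
        return self.proj(h)         # (batch, n_channels, time)

class StressHead(nn.Module):
    """Supervised head: binary stress logit from time-pooled features."""
    def __init__(self, hidden: int):
        super().__init__()
        self.fc = nn.Linear(hidden, 1)

    def forward(self, h):
        return self.fc(h.mean(dim=-1))  # pool over time -> (batch, 1)

encoder = Encoder1D(n_channels=3)

# Stage 1: pre-train encoder + forecasting head on an unlabeled window.
forecaster = ForecastHead(hidden=64, n_channels=3)
opt = torch.optim.Adam([*encoder.parameters(), *forecaster.parameters()], lr=1e-3)
x = torch.randn(8, 3, 256)                # stand-in for unlabeled biosignals
pred = forecaster(encoder(x[:, :, :-1]))  # predict each next time step
loss = nn.functional.mse_loss(pred, x[:, :, 1:])
loss.backward(); opt.step()

# Stage 2: fine-tune the pre-trained encoder with a stress head on labels.
classifier = StressHead(hidden=64)
opt = torch.optim.Adam([*encoder.parameters(), *classifier.parameters()], lr=1e-4)
xb = torch.randn(8, 3, 256)               # stand-in for labeled biosignals
yb = torch.randint(0, 2, (8, 1)).float()  # stand-in for stress labels
logits = classifier(encoder(xb))
loss = nn.functional.binary_cross_entropy_with_logits(logits, yb)
loss.backward(); opt.step()
```

In the personalized setting the abstract describes, this pre-train/fine-tune loop would presumably be run per nurse, with hyperparameters tuned individually.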
