

Oral (prerecorded)
in
Workshop: Machine Learning for Multimodal Healthcare Data

Latent Masking for Multimodal Self-supervised Learning in Health Timeseries

Shohreh Deldari · Dimitrios Spathis · Mohammad Malekzadeh · Fahim Kawsar · Flora Salim · Akhil Mathur

Keywords: [ Multimodal fusion ] [ Multimodal biomarkers ]


Abstract:

The limited availability of labeled data for machine learning on biomedical time series hampers progress in the field. Self-supervised learning (SSL) is a promising approach to learning data representations without labels. However, current SSL methods require expensive computations over negative pairs and are designed for single modalities, limiting their versatility. To overcome these limitations, we introduce CroSSL (Cross-modal SSL). CroSSL introduces two novel concepts: masking intermediate embeddings produced by modality-specific encoders, and aggregating them into a global embedding with a cross-modal aggregator. This enables the handling of missing modalities and end-to-end learning of cross-modal patterns without prior data preprocessing or time-consuming negative-pair sampling. We evaluate CroSSL on various multimodal time-series benchmarks, including both medical-grade and consumer biosignals. Our results demonstrate superior performance compared to previous SSL techniques and to supervised baselines trained with minimal labeled data. We additionally analyze the impact of different masking ratios and strategies, and assess the robustness of the learned representations to missing modalities. Overall, our work achieves state-of-the-art performance while highlighting the benefits of masking latent embeddings for cross-modal learning in temporal health data.
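The two ideas above (masking intermediate embeddings from modality-specific encoders, then fusing them with a cross-modal aggregator) can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the single-layer `encode`, the random zeroing in `mask_latents`, and the mean-pooling `aggregate` are placeholder assumptions standing in for learned encoders, a learned masking strategy, and a learned aggregator.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, w):
    """Placeholder modality-specific encoder: one linear layer + tanh."""
    return np.tanh(x @ w)

def mask_latents(latents, mask_ratio, rng):
    """Randomly zero out a fraction of each modality's latent embedding.

    Masking happens in latent space (after encoding), not on raw inputs.
    """
    masked = []
    for z in latents:
        keep = rng.random(z.shape) >= mask_ratio  # True = keep this latent unit
        masked.append(z * keep)
    return masked

def aggregate(latents):
    """Placeholder cross-modal aggregator: mean over modality embeddings.

    Because it pools over whatever latents are present, a missing modality
    is handled by simply omitting it from the list.
    """
    return np.mean(np.stack(latents, axis=0), axis=0)

# Two hypothetical modalities with different input dims but a shared
# latent dim of 16 (an assumption of this sketch).
x_acc = rng.normal(size=(4, 30))            # e.g. 4 accelerometer windows
x_ppg = rng.normal(size=(4, 50))            # e.g. 4 PPG windows
w_acc = rng.normal(size=(30, 16)) * 0.1     # untrained encoder weights
w_ppg = rng.normal(size=(50, 16)) * 0.1

latents = [encode(x_acc, w_acc), encode(x_ppg, w_ppg)]
masked = mask_latents(latents, mask_ratio=0.5, rng=rng)
global_emb = aggregate(masked)              # shape (4, 16)

# Robustness to a missing modality: aggregate only the available latents.
global_emb_partial = aggregate(masked[:1])  # shape (4, 16)
```

In training, the global embedding would feed an SSL objective so that cross-modal structure must be reconstructed from the unmasked latent units; no negative-pair sampling appears anywhere in this pipeline.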
