

Oral in Workshop: Machine Learning for Multimodal Healthcare Data

MaxCorrMGNN: A Multi-Graph Neural Framework for Generalized Multimodal Fusion of Medical Data for Outcome Prediction

Niharika D'Souza · Hongzhi Wang · Andrea Giovannini · Antonio Foncubierta-Rodríguez · Kristen Beck · Orest Boyko · Tanveer Syeda-Mahmood

Keywords: [ Multimodal biomarkers ] [ Multimodal fusion ]


Abstract:

With the emergence of multimodal electronic health records, the evidence for an outcome may be captured across multiple modalities, ranging from clinical to imaging and genomic data. Predicting outcomes effectively requires fusion frameworks capable of modeling fine-grained, multi-faceted interactions between modality features within and across patients. We develop an innovative fusion approach called MaxCorrMGNN that models non-linear modality correlations within and across patients through Hirschfeld-Gebelein-Rényi maximal correlation (MaxCorr) embeddings, resulting in a multi-layered graph that preserves the identities of the modalities and patients. We then design, for the first time, a generalized multi-layered graph neural network (MGNN) for task-informed reasoning in multi-layered graphs, which learns the parameters defining patient-modality graph connectivity and message passing in an end-to-end fashion. We evaluate our model on an outcome prediction task using a Tuberculosis (TB) dataset, consistently outperforming several state-of-the-art neural, graph-based, and traditional fusion techniques.
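The sketch below is not the authors' released implementation; it only illustrates, under stated assumptions, the two ingredients named in the abstract: (i) learning non-linear embeddings whose correlation approximates the Hirschfeld-Gebelein-Rényi (HGR) maximal correlation between two modalities, here via a soft-HGR-style surrogate objective, and (ii) a single message-passing step over a patient-modality multi-layered graph. The class name MaxCorrEmbedding, the helper multilayer_message_pass, and all dimensions are hypothetical; PyTorch is assumed.

```python
import torch
import torch.nn as nn


class MaxCorrEmbedding(nn.Module):
    """Learns non-linear embeddings f(x), g(y) whose correlation approximates the
    HGR maximal correlation between two modalities (soft-HGR-style surrogate)."""

    def __init__(self, dim_x, dim_y, dim_emb=16):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(dim_x, 64), nn.ReLU(), nn.Linear(64, dim_emb))
        self.g = nn.Sequential(nn.Linear(dim_y, 64), nn.ReLU(), nn.Linear(64, dim_emb))

    def soft_hgr_loss(self, x, y):
        # Maximize the trace correlation of centred embeddings while penalizing
        # their covariances; a common relaxation of the HGR whitening constraints.
        fx, gy = self.f(x), self.g(y)
        fx, gy = fx - fx.mean(0), gy - gy.mean(0)
        n = x.shape[0]
        corr = (fx * gy).sum() / n
        cov_f = fx.T @ fx / (n - 1)
        cov_g = gy.T @ gy / (n - 1)
        return -(corr - 0.5 * (cov_f * cov_g).sum())


def multilayer_message_pass(node_feats, within_adj, cross_adj, weight):
    """One message-passing step on a multi-layered graph: aggregate neighbours
    within a modality layer and across layers, then apply a shared projection."""
    msg = within_adj @ node_feats + cross_adj @ node_feats
    return torch.relu(msg @ weight)


if __name__ == "__main__":
    torch.manual_seed(0)
    x = torch.randn(32, 10)   # e.g. clinical features for 32 patients (toy data)
    y = torch.randn(32, 20)   # e.g. imaging features for the same patients
    model = MaxCorrEmbedding(10, 20)
    loss = model.soft_hgr_loss(x, y)
    loss.backward()           # embeddings remain trainable end-to-end with a task loss
    adj = (torch.rand(32, 32) > 0.8).float()
    h = multilayer_message_pass(x, adj, adj, torch.randn(10, 8))
    print("soft-HGR loss:", loss.item(), "| node features:", h.shape)
```

In the paper's framework the learned pairwise correlations define the connectivity of the patient-modality multi-layered graph, and the MGNN is trained jointly with the outcome objective; the snippet above only separates the two pieces for readability.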
