Interpretable brain network models for disease prediction are of great value for the advancement of neuroscience. Graph neural networks (GNNs) are promising for modeling complex network data, but they are prone to overfitting and suffer from poor interpretability, which prevents their use in decision-critical scenarios such as healthcare. To bridge this gap, we propose BrainNNExplainer, an interpretable GNN framework for brain network analysis. It consists of two jointly learned modules: a backbone prediction model specifically designed for brain networks and an explanation generator that highlights disease-specific prominent brain network connections. Extensive experimental results with visualizations on two challenging disease prediction datasets demonstrate the unique interpretability and strong performance of BrainNNExplainer.
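The two-module design can be illustrated with a minimal numpy sketch. This is not the paper's implementation: the scalar "backbone," the mean-pooled feature, the hand-derived gradients, and the sparsity weight are all simplifying assumptions chosen only to show how a prediction model and an edge-mask explainer can be optimized jointly on one brain connectivity matrix.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def joint_train(A, y, steps=200, lr=0.5, sparsity=1e-2):
    """Jointly fit a toy predictor and a soft edge mask on one network.

    A : (n, n) symmetric connectivity matrix, y : binary label.
    Returns the final prediction and the learned edge mask M in (0, 1).
    """
    n = A.shape[0]
    mask_logits = np.zeros((n, n))  # explanation generator: one logit per edge
    w = 1.0                         # stand-in for the backbone's parameters
    for _ in range(steps):
        M = sigmoid(mask_logits)    # soft mask highlighting prominent edges
        x = (A * M).mean()          # summary feature of the masked graph
        pred = sigmoid(w * x)       # backbone prediction on the masked graph
        # Joint objective: BCE(pred, y) + sparsity * M.mean().
        # Gradients derived by hand for this tiny model:
        dz = pred - y                               # d BCE / d (w * x)
        dM = dz * w * A / n**2 + sparsity / n**2    # d loss / d M
        w -= lr * dz * x
        mask_logits -= lr * dM * M * (1.0 - M)      # chain rule through sigmoid
    M = sigmoid(mask_logits)
    return sigmoid(w * (A * M).mean()), M

rng = np.random.default_rng(0)
A = rng.random((8, 8))
A = (A + A.T) / 2               # symmetric toy connectivity matrix
pred, M = joint_train(A, y=1.0)
```

Because the mask enters the prediction loss, edges that help classification keep high mask values while the sparsity penalty pushes the rest down; in the real framework the surviving edges are the disease-specific connections visualized in the paper.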
Author Information
Hejie Cui (Emory University)
Hi there! This is Hejie Cui (pronounced "He-jay Tsuee", 崔鹤洁 in Chinese). I also go by the name Kelly. I am a second-year Ph.D. student in Computer Science at Emory University, under the supervision of Dr. Carl Yang in the Emory Graph Mining Lab. I have also been working with Dr. Eugene Agichtein in the Emory Intelligent Information Access Lab. Before joining Emory, I received my bachelor's degree in Software Engineering from Tongji University, where I worked with Dr. Lin Zhang. My current research interests lie in machine learning, with an emphasis on graph representation learning and its applications to multi-modality data and brain network analysis.
Wei Dai (Emory University)
Yanqiao Zhu (Institution of Automation, Chinese Academy of Sciences)
Xiaoxiao Li (The University of British Columbia)
Lifang He (Lehigh University)
Carl Yang (Emory University)
More from the Same Authors
- 2020 : (#12 / Sess. 1) Deep Graph Contrastive Representation Learning »
  Yanqiao Zhu
- 2021 : Effective and Interpretable fMRI Analysis with Functional Brain Network Generation »
  Xuan Kan · Hejie Cui · Ying Guo · Carl Yang
- 2021 : One Map Does Not Fit All: Evaluating Saliency Map Explanation on Multi-Modal Medical Images »
  Weina Jin · Xiaoxiao Li · Ghassan Hamarneh
- 2023 : Open Visual Knowledge Extraction via Relation-Oriented Multimodality Model Prompting »
  Hejie Cui · Xinyu Fang · Zihan Zhang · Ran Xu · Xuan Kan · Xin Liu · Manling Li · Yangqiu Song · Carl Yang
- 2023 : A Survey on Knowledge Graphs for Healthcare: Resources, Application Progress, and Promise »
  Hejie Cui · Jiaying Lu · Shiyu Wang · Ran Xu · Wenjing Ma · Shaojun Yu · Yue Yu · Xuan Kan · Tianfan Fu · Chen Ling · Joyce Ho · Fei Wang · Carl Yang
- 2023 Poster: Federated Adversarial Learning: A Framework with Convergence Analysis »
  Xiaoxiao Li · Zhao Song · Jiaming Yang
- 2021 : Closing remarks »
  Xiaoxiao Li
- 2021 Workshop: Interpretable Machine Learning in Healthcare »
  Yuyin Zhou · Xiaoxiao Li · Vicky Yao · Pengtao Xie · DOU QI · Nicha Dvornek · Julia Schnabel · Judy Wawira · Yifan Peng · Ronald Summers · Alan Karthikesalingam · Lei Xing · Eric Xing
- 2021 Poster: FL-NTK: A Neural Tangent Kernel-based Framework for Federated Learning Analysis »
  Baihe Huang · Xiaoxiao Li · Zhao Song · Xin Yang
- 2021 Spotlight: FL-NTK: A Neural Tangent Kernel-based Framework for Federated Learning Analysis »
  Baihe Huang · Xiaoxiao Li · Zhao Song · Xin Yang
- 2020 Poster: Adaptive Checkpoint Adjoint Method for Gradient Estimation in Neural ODE »
  Juntang Zhuang · Nicha Dvornek · Xiaoxiao Li · Sekhar Tatikonda · Xenophon Papademetris · James Duncan