Being able to explain predictions to clinical end-users is a necessity for leveraging the power of AI models in clinical decision support. For medical images, saliency maps are the most common form of explanation: they highlight the features that are important for the AI model's prediction. Although many saliency map methods have been proposed, it is unknown how well they perform at explaining decisions on multi-modal medical images, where each modality/channel carries distinct clinical meanings of the same underlying biomedical phenomenon. Understanding such modality-dependent features is essential for clinical users' interpretation of AI decisions. To tackle this clinically important but technically overlooked problem, we propose the MSFI (Modality-Specific Feature Importance) metric to examine whether saliency maps can highlight modality-specific important features. MSFI encodes two clinical requirements: modality prioritization and modality-specific feature localization. Our evaluation of 16 commonly used saliency map methods, including a clinician user study, shows that although most methods capture modality importance information in general, most of them fail to highlight modality-specific important features consistently and precisely. The evaluation results guide the choice of saliency map methods and provide insights for proposing new ones targeted at clinical applications.
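The abstract does not spell out the MSFI formula, but the two requirements it names suggest a natural construction: weight each modality by a clinical importance weight, and within each modality measure the fraction of saliency mass that falls inside a ground-truth feature mask. The sketch below is a minimal, hypothetical reading of that description, not the paper's exact definition; the function name `msfi`, the input names (`saliency_maps`, `feature_masks`, `modality_weights`), and the weighted-average aggregation are all illustrative assumptions.

```python
import numpy as np

def msfi(saliency_maps, feature_masks, modality_weights):
    """Illustrative MSFI-style score (an assumed reading of the abstract,
    not the paper's exact metric).

    saliency_maps    -- dict: modality name -> non-negative saliency array
    feature_masks    -- dict: modality name -> binary ground-truth mask
    modality_weights -- dict: modality name -> importance weight w_m >= 0
    """
    num, den = 0.0, 0.0
    for m, w in modality_weights.items():
        s = np.asarray(saliency_maps[m], dtype=float)
        mask = np.asarray(feature_masks[m], dtype=bool)
        total = s.sum()
        # Feature localization: fraction of this modality's saliency
        # mass that falls inside the ground-truth feature region.
        inside = s[mask].sum() / total if total > 0 else 0.0
        # Modality prioritization: weight by clinical importance w_m.
        num += w * inside
        den += w
    return num / den if den > 0 else 0.0

# Toy usage: a map that concentrates saliency on the lesion in the
# important modality scores higher than a diffuse one.
sal = {"T1": np.array([[0.1, 0.9], [0.0, 0.0]]),
       "FLAIR": np.array([[0.2, 0.2], [0.2, 0.2]])}
masks = {"T1": np.array([[0, 1], [0, 0]]),
         "FLAIR": np.array([[1, 0], [0, 0]])}
weights = {"T1": 1.0, "FLAIR": 0.5}
print(msfi(sal, masks, weights))  # ~0.68; closer to 1 is better
```

Under this reading, a score near 1 means the saliency map concentrates on the clinically annotated features within the modalities that matter most, which matches the consistency and precision criteria described above.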
Author Information
Weina Jin (Simon Fraser University)
Xiaoxiao Li (The University of British Columbia)
Ghassan Hamarneh (Simon Fraser University)
More from the Same Authors
- 2021: BrainNNExplainer: An Interpretable Graph Neural Network Framework for Brain Network based Disease Analysis »
  Hejie Cui · Wei Dai · Yanqiao Zhu · Xiaoxiao Li · Lifang He · Carl Yang
- 2021: One Map Does Not Fit All: Evaluating Saliency Map Explanation on Multi-Modal Medical Images »
  Weina Jin · Xiaoxiao Li · Ghassan Hamarneh
- 2023 Workshop: 3rd Workshop on Interpretable Machine Learning in Healthcare (IMLH) »
  Weina Jin · Ramin Zabih · S. Kevin Zhou · Yuyin Zhou · Xiaoxiao Li · Yifan Peng · Zongwei Zhou · Yucheng Tang · Yuzhe Yang · Agni Kumar
- 2023 Poster: Federated Adversarial Learning: A Framework with Convergence Analysis »
  Xiaoxiao Li · Zhao Song · Jiaming Yang
- 2022 Workshop: 2nd Workshop on Interpretable Machine Learning in Healthcare (IMLH) »
  Ramin Zabih · S. Kevin Zhou · Weina Jin · Yuyin Zhou · Ipek Oguz · Xiaoxiao Li · Yifan Peng · Zongwei Zhou · Yucheng Tang
- 2021: Closing remarks »
  Xiaoxiao Li
- 2021 Workshop: Interpretable Machine Learning in Healthcare »
  Yuyin Zhou · Xiaoxiao Li · Vicky Yao · Pengtao Xie · DOU QI · Nicha Dvornek · Julia Schnabel · Judy Wawira · Yifan Peng · Ronald Summers · Alan Karthikesalingam · Lei Xing · Eric Xing
- 2021 Poster: FL-NTK: A Neural Tangent Kernel-based Framework for Federated Learning Analysis »
  Baihe Huang · Xiaoxiao Li · Zhao Song · Xin Yang
- 2021 Spotlight: FL-NTK: A Neural Tangent Kernel-based Framework for Federated Learning Analysis »
  Baihe Huang · Xiaoxiao Li · Zhao Song · Xin Yang
- 2020 Poster: Adaptive Checkpoint Adjoint Method for Gradient Estimation in Neural ODE »
  Juntang Zhuang · Nicha Dvornek · Xiaoxiao Li · Sekhar Tatikonda · Xenophon Papademetris · James Duncan