Workshop
Interpretable Machine Learning in Healthcare
Yuyin Zhou · Xiaoxiao Li · Vicky Yao · Pengtao Xie · Qi Dou · Nicha Dvornek · Julia Schnabel · Judy Wawira · Yifan Peng · Ronald Summers · Alan Karthikesalingam · Lei Xing · Eric Xing
Fri 23 Jul, 6:15 a.m. PDT
The application of machine learning (ML) in healthcare is rapidly gaining momentum. However, the black-box nature of many existing ML approaches limits the interpretability and verifiability of clinical predictions. As these systems are increasingly introduced into a domain that demands a high level of safety and security, developing methodologies to explain their predictions becomes critical. Such methodologies would make medical decisions more trustworthy and reliable for physicians, ultimately facilitating deployment. At the same time, it is essential to develop ML systems that are themselves more interpretable and transparent. For instance, by exploiting structured knowledge or prior clinical information, models can be designed to learn representations that are more coherent with clinical reasoning. Such designs may also help mitigate biases in the learning process and identify variables that are more relevant to medical decisions.
In this workshop, we aim to bring together researchers in machine learning, computer vision, natural language processing, healthcare, medicine, and clinical practice to discuss the challenges, definitions, formalisms, and evaluation protocols of interpretable medical machine intelligence. We will also explore possible solutions such as logic and symbolic reasoning over medical knowledge graphs, uncertainty quantification, and compositional models. We hope the workshop offers a step toward building autonomous clinical decision systems with a deeper understanding of interpretability.
Schedule
Fri 6:15 a.m. - 6:30 a.m. | Welcoming remarks and introduction (Welcome Session) | Yuyin Zhou
Fri 6:30 a.m. - 7:00 a.m. | Quantitative epistemology: conceiving a new human-machine partnership (Invited Talk) | Mihaela van der Schaar
Fri 7:00 a.m. - 7:30 a.m. | Integrating Convolutional Neural Networks and Probabilistic Graphical Models for Epileptic Seizure Detection and Localization (Invited Talk) | Archana Venkataraman
Fri 7:30 a.m. - 7:40 a.m. | Poster spotlight #1 (Spotlight)
Fri 7:40 a.m. - 8:30 a.m. | Posters I and coffee break (Poster)
Fri 8:30 a.m. - 9:00 a.m. | Handling the long tail in medical imaging (Invited Talk) | Jim Winkens · Abhijit Guha Roy
Fri 9:00 a.m. - 9:30 a.m. | In Search of Effective and Reproducible Clinical Imaging Biomarkers for Pancreatic Oncology Applications of Screening, Diagnosis and Prognosis (Invited Talk) | Le Lu
Fri 9:30 a.m. - 9:40 a.m. | Poster spotlight #2 (Spotlight)
Fri 9:40 a.m. - 10:30 a.m. | Lunch Break
Fri 10:30 a.m. - 11:00 a.m. | Towards Robust and Reliable Model Explanations for Healthcare (Keynote) | Hima Lakkaraju
Fri 11:00 a.m. - 11:30 a.m. | Automating deep learning to interpret human genomic variations (Invited Talk) | Olga Troyanskaya
Fri 11:30 a.m. - 11:40 a.m. | Poster spotlight #3 (Spotlight)
Fri 11:40 a.m. - 12:00 p.m. | Coffee break
Fri 12:00 p.m. - 12:30 p.m. | Practical Considerations of Model Interpretability in Clinical Medicine: Stability, Causality and Actionability (Invited Talk) | Fei Wang
Fri 12:30 p.m. - 1:00 p.m. | Toward Interpretable Health Care (Invited Talk) | Alan L Yuille
Fri 1:00 p.m. - 1:10 p.m. | Poster spotlight #4 (Spotlight)
Fri 1:10 p.m. - 2:00 p.m. | Posters II and coffee break (Poster)
Fri 2:00 p.m. - 2:30 p.m. | Explainable AI for healthcare (Invited Talk) | Su-In Lee
Fri 2:30 p.m. - 2:45 p.m. | Closing remarks | Xiaoxiao Li
- | MACDA: Counterfactual Explanation with Multi-Agent Reinforcement Learning for Drug Target Prediction (Poster) | Tri Nguyen · Thomas Quinn · Thin Nguyen · Truyen Tran
- | Causal Graph Recovery for Sepsis-Associated Derangements via Interpretable Hawkes Networks (Poster) | Song Wei · Yao Xie · Rishi Kamaleswaran
- | Learning Robust Hierarchical Patterns of Human Brain across Many fMRI Studies (Poster) | Dushyant Sahoo · Christos Davatzikos
- | Quantifying Explainability in NLP and Analyzing Algorithms for Performance-Explainability Tradeoff (Poster) | Mitchell Naylor
- | Using Associative Classification and Odds Ratios for In-Hospital Mortality Risk Estimation (Poster) | Oliver Haas · Andreas Maier · Eva Rothgang
- | TransUNet: Transformers Make Strong Encoders for Medical Image Segmentation (Poster) | Jie-Neng Chen · Yongyi Lu · Qihang Yu · Xiangde Luo · Ehsan Adeli · Yan Wang · Le Lu · Alan L Yuille · Yuyin Zhou
- | Tree-based local explanations of machine learning model predictions – AraucanaXAI (Poster) | Enea Parimbelli · Giovanna Nicora · Szymon Wilk · Wojtek Michalowski · Riccardo Bellazzi
- | Counterfactual Explanations in Sequential Decision Making Under Uncertainty (Poster) | Stratis Tsirtsis · Abir De · Manuel Gomez-Rodriguez
- | Reinforcement Learning for Workflow Recognition in Surgical Videos (Poster) | Wang Wei · Jingze Zhang · Qi Dou
- | Transfer Learning with Real-World Nonverbal Vocalizations from Minimally Speaking Individuals (Poster) | Jaya Narain
- | Enhancing interpretability and reducing uncertainties in deep learning of electrocardiograms using a sub-waveform representation (Poster) | Hossein Honarvar · Chirag Agarwal · Sulaiman Somani · Girish Nadkarni · Marinka Zitnik · Fei Wang · Benjamin Glicksberg
- | Online structural kernel selection for mobile health (Poster) | Eura Shin · Predrag Klasnja · Susan Murphy · Finale Doshi-Velez
- | Hierarchical Analysis of Visual COVID-19 Features from Chest Radiographs (Poster) | Shruthi Bannur · Ozan Oktay · Melanie Bernhardt · Anton Schwaighofer · Besmira Nushi · Aditya Nori · Javier Alvarez-Valle · Daniel Coelho de Castro
- | Towards Privacy-preserving Explanations in Medical Image Analysis (Poster) | Helena Montenegro · Wilson Silva · Jaime S. Cardoso
- | Prediction of intracranial hypertension in patients with severe traumatic brain injury (Poster) | Ruud van Kaam
- | iFedAvg – Interpretable Data-Interoperability for Federated Learning (Poster) | David Roschewitz · Mary-Anne Hartley · Luca Corinzia · Martin Jaggi
- | Personalized and Reliable Decision Sets: Enhancing Interpretability in Clinical Decision Support Systems (Poster) | Francisco Valente
- | Solving inverse problems with deep neural networks driven by sparse signal decomposition in a physics-based dictionary (Poster) | Gaetan Rensonnet
- | Optimizing Clinical Early Warning Models to Meet False Alarm Constraints (Poster) | Preetish Rath · Michael Hughes
- | Identifying cell type-specific chemokine correlates with hierarchical signal extraction from single-cell transcriptomes (Poster) | Sherry Chao · Michael Brenner
- | Uncertainty Quantification for Amniotic Fluid Segmentation and Volume Prediction (Poster) | Daniel Csillag · Lucas Monteiro Paes · Thiago Ramos · João Vitor Romano · Roberto Oliveira · Paulo Orenstein
- | Evaluating subgroup disparity using epistemic uncertainty for breast density assessment in mammography (Poster) | Charlie Lu · Andreanne Lemay · Katharina Hoebel · Jayashree Kalpathy-Cramer
- | BrainNNExplainer: An Interpretable Graph Neural Network Framework for Brain Network based Disease Analysis (Poster) | Hejie Cui · Wei Dai · Yanqiao Zhu · Xiaoxiao Li · Lifang He · Carl Yang
- | Effective and Interpretable fMRI Analysis with Functional Brain Network Generation (Poster) | Xuan Kan · Hejie Cui · Ying Guo · Carl Yang
- | Using a Cross-Task Grid of Linear Probes to Interpret CNN Model Predictions On Retinal Images (Poster) | Katy Blumer · Subhashini Venugopalan · Michael Brenner · Jon Kleinberg
- | Learning sparse symbolic policies for sepsis treatment (Poster) | Jacob Pettit · Brenden Petersen · Leno da Silva · Gary An · Daniel Faissol
- | Novel disease detection using ensembles with regularized disagreement (Poster) | Alexandru Tifrea · Eric Stavarache · Fanny Yang
- | A reject option for automated sleep stage scoring (Poster) | Dries Van der Plas · Wannes Meert · Jesse Davis
- | Assessing Bias in Medical AI (Poster) | Melanie Ganz · Sune Hannibal Holm · Aasa Feragen
- | Enabling risk-aware Reinforcement Learning for medical interventions through uncertainty decomposition (Poster) | Paul Festor · Giulia Luise · Matthieu Komorowski · Aldo Faisal
- | Do You See What I See? A Comparison of Radiologist Eye Gaze to Computer Vision Saliency Maps for Chest X-ray Classification (Poster) | Jesse Kim · Helen Zhou · Zachary Lipton
- | An Interpretable Algorithm for Uveal Melanoma Subtyping from Whole Slide Cytology Images (Oral) | Haomin Chen · Alvin Liu · Catalina Gomez · Zelia Correa · Mathias Unberath
- | One Map Does Not Fit All: Evaluating Saliency Map Explanation on Multi-Modal Medical Images (Oral) | Weina Jin · Xiaoxiao Li · Ghassan Hamarneh
- | Reliable Post hoc Explanations: Modeling Uncertainty in Explainability (Oral) | Dylan Slack · Sophie Hilgard · Sameer Singh · Hima Lakkaraju
- | Gifsplanation via Latent Shift: A Simple Autoencoder Approach to Counterfactual Generation for Chest X-rays (Oral) | Joseph Paul Cohen · Rupert Brooks · Evan Zucker · Anuj Pareek · Matthew Lungren · Akshay Chaudhari
- | Variable selection via the sum of single effect neural networks with credible sets (Oral) | Wei Cheng · Sohini Ramachandran · Lorin Crawford
- | Fast Hierarchical Games for Image Explanations (Oral) | Jacopo Teneggi · Alexandre Luster · Jeremias Sulam
- | Interpretable learning-to-defer for sequential decision-making (Oral) | Shalmali Joshi · Sonali Parbhoo · Finale Doshi-Velez
- | Interactive Visual Explanations for Deep Drug Repurposing (Oral) | Qianwen Wang · Payal Chandak · Marinka Zitnik
- | An Interpretable Algorithm for Uveal Melanoma Subtyping from Whole Slide Cytology Images (Poster) | Haomin Chen · Alvin Liu · Catalina Gomez · Zelia Correa · Mathias Unberath
- | One Map Does Not Fit All: Evaluating Saliency Map Explanation on Multi-Modal Medical Images (Poster) | Weina Jin · Xiaoxiao Li · Ghassan Hamarneh
- | Reliable Post hoc Explanations: Modeling Uncertainty in Explainability (Poster) | Dylan Slack · Sophie Hilgard · Sameer Singh · Hima Lakkaraju
- | Gifsplanation via Latent Shift: A Simple Autoencoder Approach to Counterfactual Generation for Chest X-rays (Poster) | Joseph Paul Cohen · Rupert Brooks · Evan Zucker · Anuj Pareek · Matthew Lungren · Akshay Chaudhari
- | Variable selection via the sum of single effect neural networks with credible sets (Poster) | Wei Cheng · Sohini Ramachandran · Lorin Crawford
- | Fast Hierarchical Games for Image Explanations (Poster) | Jacopo Teneggi · Alexandre Luster · Jeremias Sulam
- | Interpretable learning-to-defer for sequential decision-making (Poster) | Shalmali Joshi · Sonali Parbhoo · Finale Doshi-Velez
- | Interactive Visual Explanations for Deep Drug Repurposing (Poster) | Qianwen Wang · Payal Chandak · Marinka Zitnik