Workshop
Interpretable Machine Learning in Healthcare
Yuyin Zhou · Xiaoxiao Li · Vicky Yao · Pengtao Xie · Qi Dou · Nicha Dvornek · Julia Schnabel · Judy Wawira · Yifan Peng · Ronald Summers · Alan Karthikesalingam · Lei Xing · Eric Xing

Fri Jul 23 06:15 AM -- 02:45 PM (PDT)
Event URL: https://sites.google.com/view/imlh2021/

Applying machine learning (ML) in healthcare is rapidly gaining momentum. However, the black-box nature of existing ML approaches makes clinical predictions hard to interpret and verify. As these systems are pervasively introduced into a healthcare domain that demands a high level of safety and security, it becomes critical to develop methodologies that explain their predictions. Such methodologies would make medical decisions more trustworthy and reliable for physicians, ultimately facilitating deployment. On the other hand, it is also essential to develop more interpretable and transparent ML systems. For instance, by exploiting structured knowledge or prior clinical information, one can design models that learn representations more coherent with clinical reasoning. Doing so may also help mitigate biases in the learning process and identify the variables most relevant to medical decisions.

In this workshop, we aim to bring together researchers in ML, computer vision, healthcare, medicine, NLP, and clinical fields to discuss the challenges, definitions, formalisms, and evaluation protocols of interpretable medical machine intelligence. We will also introduce possible solutions such as logic and symbolic reasoning over medical knowledge graphs, uncertainty quantification, and compositional models. We hope the workshop offers a step toward building autonomous clinical decision systems with a higher-level understanding of interpretability.

Fri 6:15 a.m. - 6:30 a.m.
Welcoming remarks and introduction (Welcome Session)   
Yuyin Zhou
Fri 6:30 a.m. - 7:00 a.m.
Quantitative epistemology: conceiving a new human-machine partnership (Invited Talk)   
Mihaela van der Schaar
Fri 7:00 a.m. - 7:30 a.m.
Integrating Convolutional Neural Networks and Probabilistic Graphical Models for Epileptic Seizure Detection and Localization (Invited Talk)   
Archana Venkataraman
Fri 7:30 a.m. - 7:40 a.m.
Poster spotlight #1 (Spotlight)
Fri 7:40 a.m. - 8:30 a.m.
Posters I and coffee break (Poster)
Fri 8:30 a.m. - 9:00 a.m.
Handling the long tail in medical imaging (Invited Talk)   
Jim Winkens, Abhijit Guha Roy
Fri 9:00 a.m. - 9:30 a.m.
In Search of Effective and Reproducible Clinical Imaging Biomarkers for Pancreatic Oncology Applications of Screening, Diagnosis and Prognosis (Invited Talk)   
Le Lu
Fri 9:30 a.m. - 9:40 a.m.
Poster spotlight #2 (Spotlight)
Fri 9:40 a.m. - 10:30 a.m.
Lunch Break (Break)
Fri 10:30 a.m. - 11:00 a.m.
Towards Robust and Reliable Model Explanations for Healthcare (Keynote)   
Hima Lakkaraju
Fri 11:00 a.m. - 11:30 a.m.
Automating deep learning to interpret human genomic variations (Invited Talk)   
Olga Troyanskaya
Fri 11:30 a.m. - 11:40 a.m.
Poster spotlight #3 (Spotlight)
Fri 11:40 a.m. - 12:00 p.m.
Coffee break (Break)
Fri 12:00 p.m. - 12:30 p.m.
Practical Considerations of Model Interpretability in Clinical Medicine: Stability, Causality and Actionability (Invited Talk)   
Fei Wang
Fri 12:30 p.m. - 1:00 p.m.
Toward Interpretable Health Care (Invited Talk)   
Alan L Yuille
Fri 1:00 p.m. - 1:10 p.m.
Poster spotlight #4 (Spotlight)
Fri 1:10 p.m. - 2:00 p.m.
Posters II and coffee break (Poster)
Fri 2:00 p.m. - 2:30 p.m.
Explainable AI for healthcare (Invited Talk)   
Su-In Lee
Fri 2:30 p.m. - 2:45 p.m.
Closing remarks   
Xiaoxiao Li
-
[ Visit Poster at Spot A0 in Virtual World ]

Most deep learning models for drug-target affinity (DTA) prediction are black boxes, making their results difficult to interpret and verify and thus risking acceptance. Explanation is necessary to make a DTA model more trustworthy. The interaction between the sub-structures of the two inputs, drug functional groups and protein residues, is an important factor in a DTA model's prediction. Explanations based on substructure interactions allow domain experts to verify the binding mechanism used by the DTA model in its prediction. We propose a multi-agent reinforcement learning framework, Multi-Agent Counterfactual Drug-target binding Affinity (MACDA), to generate counterfactual explanations for the drug-protein complex. Our proposed framework provides human-interpretable counterfactual instances while optimizing both the input drug and target for counterfactual generation at the same time. MACDA also explains the substructure interactions behind the DTA model's prediction.

Tri Nguyen, Thomas Quinn, Thin Nguyen, Truyen Tran
-
[ Visit Poster at Spot A1 in Virtual World ]

Continuous, automated surveillance systems that incorporate machine learning models are becoming increasingly common in healthcare environments. These models can capture temporally dependent changes across multiple patient variables and can enhance a clinician's situational awareness by providing an early warning alarm of an impending adverse event such as sepsis. However, most commonly used methods, e.g., XGBoost, fail to provide an interpretable mechanism for understanding why a model produced a sepsis alarm at a given time. The "black box" nature of many models is a severe limitation as it prevents clinicians from independently corroborating the physiologic features that have contributed to the sepsis alarm. To overcome this limitation, we propose a generalized linear model (GLM) approach to fit a Granger causal graph based on the physiology of several major sepsis-associated derangements (SADs). We adopt a recently developed stochastic monotone variational inequality-based estimator coupled with forward feature selection to learn the graph structure from both continuous and discrete-valued as well as regularly and irregularly sampled time series. Most importantly, we develop a non-asymptotic upper bound on the estimation error for any monotone link function in the GLM. We conduct real-data experiments and demonstrate that our proposed method can achieve comparable performance to popular and powerful prediction methods such as XGBoost while simultaneously maintaining a high level of interpretability.

Song Wei, Yao Xie, Rishi Kamaleswaran
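The core idea of Granger-causal graph learning — testing whether lagged values of one series improve prediction of another — can be sketched with a plain least-squares fit. This is an illustration only, not the paper's variational-inequality estimator or its forward feature selection:

```python
import numpy as np

# Simulate two series where x Granger-causes y with lag coefficient 0.8.
rng = np.random.default_rng(0)
T = 500
x = rng.normal(size=T)
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.8 * x[t - 1] + 0.1 * rng.normal()

# Regress y_t on lagged x, lagged y, and an intercept.
X = np.column_stack([x[:-1], y[:-1], np.ones(T - 1)])
coef, *_ = np.linalg.lstsq(X, y[1:], rcond=None)
print(coef[0])  # close to the true lag coefficient 0.8
```

A nonzero (or statistically significant) lag coefficient corresponds to a directed edge x -> y in the Granger causal graph; the paper generalizes this step to GLMs with monotone link functions and mixed-type time series.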
-
[ Visit Poster at Spot A2 in Virtual World ]

Multi-site fMRI studies face the challenge that pooling data across sites introduces systematic, non-biological site-specific variance due to differences in hardware, software, and environment. In this paper, we propose to reduce site-specific variance in the estimation of hierarchical Sparse Connectivity Patterns (hSCPs) in fMRI data via a simple yet effective matrix factorization, while preserving biologically relevant variation. Our method leverages unsupervised adversarial learning to improve the reproducibility of the components. Experiments show that the proposed method estimates components with higher accuracy and reproducibility on simulated datasets, while preserving age-related variation on a multi-center clinical dataset.

Dushyant Sahoo, Christos Davatzikos
-
[ Visit Poster at Spot A3 in Virtual World ]

The healthcare domain is one of the most exciting application areas for machine learning, but a lack of model transparency contributes to a lag in adoption within the industry. In this work, we explore the current art of explainability and interpretability within a case study in clinical text classification, using a task of mortality prediction within MIMIC-III clinical notes. We demonstrate various visualization techniques for fully interpretable methods as well as model-agnostic post hoc attributions, and we provide a generalized method for evaluating the quality of explanations using infidelity and local Lipschitz across model types from logistic regression to BERT variants. With these metrics, we introduce a framework through which practitioners and researchers can assess the frontier between a model's predictive performance and the quality of its available explanations. We make our code available to encourage continued refinement of these methods.

Mitch Naylor
-
[ Visit Poster at Spot A4 in Virtual World ]

We propose a novel method based on associative classification in combination with odds ratios, a well-understood epidemiological metric, as an interpretable method for in-hospital mortality estimation, which is influenced by thousands of clinical variables. We tested and validated the method for cases in intensive and emergency care. The resulting model achieves an area under the receiver operating characteristic curve of 0.98. The model is easy to interpret in the form of one-to-one rules and the corresponding odds ratios. This study shows that associative classification combined with epidemiological metrics can be used as effective and interpretable machine learning models in the presence of outcomes that are influenced by thousands of variables.

Oliver Haas, Andreas Maier, Eva Rothgang
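The odds ratio behind such rules is simple to compute from a 2x2 contingency table; a minimal sketch with toy counts (not the paper's data):

```python
# Odds ratio for a single association rule
# ("feature present" -> "in-hospital mortality"), from a 2x2 table.

def odds_ratio(a, b, c, d):
    """a: exposed & died, b: exposed & survived,
    c: unexposed & died, d: unexposed & survived."""
    return (a * d) / (b * c)

# Toy counts: 30 of 100 exposed patients died vs 10 of 100 unexposed.
or_value = odds_ratio(30, 70, 10, 90)
print(round(or_value, 2))  # odds of death are markedly higher when exposed
```

An odds ratio of 1 means the rule's antecedent carries no association with the outcome; values far from 1 are what make individual rules clinically interpretable.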
-
[ Visit Poster at Spot A5 in Virtual World ]

Medical image segmentation is an essential prerequisite for developing healthcare systems, especially for disease diagnosis and treatment planning. On various medical image segmentation tasks, the u-shaped architecture, also known as U-Net, has become the de-facto standard and achieved tremendous success. However, due to the intrinsic locality of convolution operations, U-Net generally demonstrates limitations in explicitly modeling long-range dependency. Transformers, designed for sequence-to-sequence prediction, have emerged as alternative architectures with innate global self-attention mechanisms, but can result in limited localization abilities due to insufficient low-level details. In this paper, we propose TransUNet, which merits both Transformers and U-Net, as a strong alternative for medical image segmentation. On one hand, the Transformer encodes tokenized image patches from a convolutional neural network (CNN) feature map as the input sequence for extracting global contexts. On the other hand, the decoder upsamples the encoded features, which are then combined with the high-resolution CNN feature maps to enable precise localization. We argue that Transformers can serve as strong encoders for medical image segmentation tasks, with the combination of U-Net to enhance finer details by recovering localized spatial information. Extensive experimental results demonstrate the benefits of TransUNet, which substantially outperforms previous convolution-based networks.

Jie-Neng Chen, Yongyi Lu, Qihang Yu, Xiangde Luo, Ehsan Adeli, Yan Wang, Le Lu, Alan L Yuille, Yuyin Zhou
-
[ Visit Poster at Spot A6 in Virtual World ]

Increasingly complex learning methods such as boosting, bagging and deep learning have made ML models more accurate, but harder to understand and interpret. A tradeoff between performance and intelligibility is often to be faced, especially in high-stakes applications like medicine. In the present article we propose a novel methodological approach for generating explanations of the predictions of a generic ML model, given a specific instance for which the prediction has been made, that can tackle both classification and regression tasks. Advantages of the proposed XAI approach include improved fidelity to the original model, ability to deal with non-linear decision boundaries, and native support to both classification and regression problems.

Enea Parimbelli, Giovanna Nicora, Szymon Wilk, Wojtek Michalowski, Riccardo Bellazzi
-
[ Visit Poster at Spot B0 in Virtual World ]

Methods to find counterfactual explanations have predominantly focused on one step decision making processes. In this work, we initiate the development of methods to find counterfactual explanations for decision making processes in which multiple, dependent actions are taken sequentially over time. We start by formally characterizing a sequence of actions and states using finite horizon Markov decision processes and the Gumbel-Max structural causal model. Building upon this characterization, we formally state the problem of finding counterfactual explanations for sequential decision making processes. In our problem formulation, the counterfactual explanation specifies an alternative sequence of actions differing in at most k actions from the observed sequence that could have led the observed process realization to a better outcome. Then, we introduce a polynomial time algorithm based on dynamic programming to build a counterfactual policy that is guaranteed to always provide the optimal counterfactual explanation on every possible realization of the counterfactual environment dynamics. We validate our algorithm using both synthetic and real data from cognitive behavioral therapy and show that the counterfactual explanations our algorithm finds can provide valuable insights to enhance sequential decision making under uncertainty.

Stratis Tsirtsis, Abir De, Manuel Gomez Rodriguez
-
[ Visit Poster at Spot B1 in Virtual World ]

Automatically recognizing surgical workflow plays a significant part in improving surgical training efficiency by providing automated skill assessment for surgeons. Building on a deep model (SV-RCNet) that mainly consists of a deep residual network (ResNet) and a long short-term memory (LSTM) network, our framework introduces reinforcement learning into surgical workflow (phase) recognition for the first time and is evaluated on the Cholec80 dataset, which contains 80 videos of cholecystectomy surgeries. In our framework, an intelligent agent is trained using a Markov Decision Process (MDP) model and the Proximal Policy Optimization (PPO) algorithm, with discriminative spatio-temporal features extracted from the SV-RCNet as input. Experiments on the Cholec80 dataset show that our framework outperforms the SV-RCNet in terms of accuracy, precision, and recall.

Wang Wei, Jingze Zhang, Qi Dou
-
[ Visit Poster at Spot B2 in Virtual World ]

We trained and evaluated several types of transfer learning to classify the affect and communication intent of nonverbal vocalizations from eight minimally speaking individuals (mv*) with autism. Datasets were recorded in real-world settings with in-the-moment labels from a close family member. We trained deep neural nets (DNNs) on six audio datasets (including our dataset of nonverbal vocalizations) and then fine-tuned the models to classify affect and intent for each individual. We also evaluated a zero-shot approach for arousal and valence regression using an acted dataset of nonverbal vocalizations that occur amidst typical speech. For two of the eight mv* communicators, fine-tuning improved model performance compared to fully personalized DNNs, and there were weak groupings in arousal values inferred using zero-shot learning. The limited success of the evaluated transfer learning approaches highlights the need for specialized datasets with mv* individuals.

Jaya Narain
-
[ Visit Poster at Spot B3 in Virtual World ]

In electrocardiogram (ECG) deep learning (DL), researchers traditionally use the full duration of waveforms, which creates redundancies in feature learning and results in inaccurate predictions with large uncertainties. In this work, we introduce a new sub-waveform representation that leverages the rhythmic pattern of ECG waveforms by aligning the heartbeats to enhance DL predictive capabilities. As a case study, we investigate the impact of waveform representations on DL predictions for identification of left ventricular dysfunction. We explain how the sub-waveform representation opens up a new space for feature learning and minimizes uncertainties. By developing a novel scoring system, we carefully examine the feature interpretation and the clinical relevance. We note that the proposed representation enhances predictive power by engineering only at the waveform level (data-centric) rather than changing the neural network architecture (model-centric). We expect that this added control over the granularity of data will improve ECG-DL modeling for developing new AI technologies in the cardiovascular space.

Hossein Honarvar, Chirag Agarwal, Sulaiman Somani, Girish Nadkarni, Marinka Zitnik, Fei Wang, Benjamin Glicksberg
-
[ Visit Poster at Spot B4 in Virtual World ]

Motivated by the need for efficient and personalized learning in mobile health, we investigate the problem of online kernel selection for Gaussian Process regression in the multi-task setting. We propose a novel generative process on the kernel composition for this purpose. Our method demonstrates that trajectories of kernel evolutions can be transferred between users to improve learning and that the kernels themselves are meaningful for the mHealth prediction goal.

Eura Shin, Predrag Klasnja, Susan Murphy, Finale Doshi-Velez
-
[ Visit Poster at Spot B5 in Virtual World ]

Chest radiography has been a recommended procedure for patient triaging and resource management in intensive care units (ICUs) throughout the COVID-19 pandemic. The machine learning efforts to augment this workflow have been long challenged due to deficiencies in reporting, model evaluation, and failure mode analysis. To address some of those shortcomings, we model radiological features with a human-interpretable class hierarchy that aligns with the radiological decision process. Also, we propose the use of a data-driven error analysis methodology to uncover the blind spots of our model, providing further transparency on its clinical utility. For example, our experiments show that model failures highly correlate with ICU imaging conditions and with the inherent difficulty in distinguishing certain types of radiological features. Also, our hierarchical interpretation and analysis facilitates the comparison with respect to radiologists' findings and inter-variability, which in return helps us to better assess the clinical applicability of models.

Shruthi Bannur, Ozan Oktay, Melanie Bernhardt, Anton Schwaighofer, Besmira Nushi, Aditya Nori, Javier Alvarez-Valle, Daniel Coelho de Castro
-
[ Visit Poster at Spot B6 in Virtual World ]

The use of Deep Learning in the medical field is hindered by the lack of interpretability. Case-based interpretability strategies can provide intuitive explanations for deep learning models' decisions, thus, enhancing trust. However, the resulting explanations threaten patient privacy, motivating the development of privacy-preserving methods compatible with the specifics of medical data. In this work, we analyze existing privacy-preserving methods and their respective capacity to anonymize medical data while preserving disease-related semantic features. We find that the PPRL-VGAN deep learning method was the best at preserving the disease-related semantic features while guaranteeing a high level of privacy among the compared state-of-the-art methods. Nevertheless, we emphasize the need to improve privacy-preserving methods for medical imaging, as we identified relevant drawbacks in all existing privacy-preserving approaches.

Helena Montenegro, Wilson Silva, Jaime S. Cardoso
-
[ Visit Poster at Spot C0 in Virtual World ]

Intracranial hypertension is a key factor in the treatment and prevention of secondary brain injury in patients with traumatic brain injury. We aimed to develop a prediction model based on changes in intracranial pressure waveform morphology. A convolutional neural network with 10 hidden layers was trained on the dominant intracranial pressure waveform, computed over 1 minute of data, from control and pre-intracranial-hypertension segments up to 1 hour prior to intracranial hypertension. The model obtained an accuracy, sensitivity, specificity, and area under the receiver operating characteristic curve of 0.70, 0.68, 0.72, and 0.74, respectively, for the time window 0-10 minutes before the onset of intracranial hypertension.

Ruud van Kaam
-
[ Visit Poster at Spot C1 in Virtual World ]

Recently, the ever-growing demand for privacy-oriented machine learning has motivated researchers to develop federated and decentralized learning techniques, allowing individual clients to train models collaboratively without disclosing their private datasets. However, widespread adoption has been limited in domains relying on high levels of user trust, where assessment of data compatibility is essential. In this work, we define and address low interoperability induced by underlying client data inconsistencies in federated learning for tabular data. The proposed method, iFedAvg, builds on federated averaging by adding local element-wise affine layers to allow for a personalized and granular understanding of the collaborative learning process. This enables the detection of outlier datasets in the federation as well as learning to compensate for local data distribution shifts without sharing any original data. We evaluate iFedAvg using several public benchmarks and a previously unstudied collection of real-world datasets from the 2014-2016 West African Ebola epidemic, jointly forming the largest such dataset in the world. In all evaluations, iFedAvg achieves competitive average performance with negligible overhead. It additionally shows substantial improvement on outlier clients, highlighting increased robustness to individual dataset shifts. Most importantly, our method provides valuable client-specific insights at a fine-grained level to guide interoperable federated learning.

David Roschewitz, Mary-Anne Hartley, Luca Corinzia, Martin Jaggi
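The local personalization layer at the heart of iFedAvg can be sketched as a per-client element-wise affine map applied around the shared model; names and shapes here are illustrative, not the authors' code:

```python
import numpy as np

# Each client keeps a local element-wise affine layer that is never
# averaged with the federation; it learns to compensate for that
# client's own feature-distribution shift.
class LocalAffine:
    def __init__(self, dim):
        self.scale = np.ones(dim)   # per-feature scale, stays on the client
        self.shift = np.zeros(dim)  # per-feature shift, stays on the client

    def __call__(self, x):
        return self.scale * x + self.shift

dim = 4
layer = LocalAffine(dim)
x = np.array([1.0, 2.0, 3.0, 4.0])
print(layer(x))  # identity before any local training
```

Because the layer is element-wise, its learned scales and shifts are directly readable per feature, which is what yields the client-specific insights the abstract mentions.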
-
[ Visit Poster at Spot A0 in Virtual World ]

In this study, we present a novel clinical decision support system and discuss its interpretability-related properties. It combines a decision set of rules with a machine learning scheme to offer global and local interpretability. More specifically, machine learning is used to predict the likelihood of each of those rules to be correct for a particular patient, which may also contribute to better predictive performances. Moreover, the reliability analysis of individual predictions is also addressed, contributing to further personalized interpretability. The combination of these several elements may be crucial to obtain the clinical stakeholders' trust, leading to a better assessment of patients' conditions and improvement of the physicians' decision-making.

Francisco Valente
-
[ Visit Poster at Spot A1 in Virtual World ]

Deep neural networks (DNN) have an impressive ability to invert very complex models, i.e. to learn the generative parameters from a model's output. Once trained, the forward pass of a DNN is often much faster than traditional, optimization-based methods used to solve inverse problems. This is however done at the cost of lower interpretability, a fundamental limitation in most medical applications. We propose an approach for solving general inverse problems which combines the efficiency of DNN and the interpretability of traditional analytical methods. The measurements are first projected onto a dense dictionary of model-based responses. The resulting sparse representation is then fed to a DNN with an architecture driven by the problem's physics for fast parameter learning. Our method can handle generative forward models that are costly to evaluate and exhibits similar performance in accuracy and computation time as a fully-learned DNN, while maintaining high interpretability and being easier to train. Concrete results are shown on an example of model-based brain parameter estimation from magnetic resonance imaging (MRI).

Gaetan Rensonnet
-
[ Visit Poster at Spot A2 in Virtual World ]

Deployed early warning systems in clinical settings often suffer from high false alarm rates that limit trustworthiness and overall utility. Despite the need to control false alarms, the dominant classifier training paradigm remains minimizing cross entropy, a loss function that has no direct relationship to false alarms. While existing efforts often use post-hoc threshold selection to address false alarms, in this paper we build on recent work to suggest a more comprehensive solution. We develop a family of tight bounds using the sigmoid function that let us maximize recall while satisfying a constraint that holds false alarms below a specified tolerance. This new differentiable objective can be easily integrated with generalized linear models, neural networks, and any other classifier trained with minibatch gradient descent. Through experiments on toy data and acute care mortality risk prediction, we demonstrate our method can satisfy a desired constraint on false alarms interpretable to clinical staff while achieving better recall than alternatives.

Preetish Rath, Michael Hughes
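The general recipe — replacing hard true-positive/false-positive counts with smooth sigmoid surrogates so a false-alarm constraint becomes differentiable — can be sketched as follows. Function names and constants are hypothetical; the paper's actual bounds are tighter than a plain sigmoid:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def surrogate_objective(scores, y, alpha=0.1, gamma=5.0, lam=10.0):
    """Maximize recall subject to a false-alarm tolerance alpha,
    via a penalty on the smooth false-positive rate."""
    pos, neg = y == 1, y == 0
    soft_tp = np.sum(sigmoid(gamma * scores[pos]))  # smooth count of caught positives
    soft_fp = np.sum(sigmoid(gamma * scores[neg]))  # smooth count of false alarms
    recall = soft_tp / max(pos.sum(), 1)
    fp_rate = soft_fp / max(neg.sum(), 1)
    return recall - lam * max(0.0, fp_rate - alpha)

scores = np.array([2.0, 1.5, -1.0, -2.0, -3.0])  # classifier margins
labels = np.array([1, 1, 0, 0, 0])
print(surrogate_objective(scores, labels))
```

Since the objective is differentiable in the scores, it can be plugged into any classifier trained with minibatch gradient descent, which is the property the abstract emphasizes.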
-
[ Visit Poster at Spot A4 in Virtual World ]

Biological data is inherently heterogeneous and high-dimensional. Single-cell sequencing of transcripts in a tissue sample generates data for thousands of cells, each of which is characterized by upwards of tens of thousands of genes. How to identify the subsets of cells and genes that are associated with a label of interest remains an open question. In this paper, we integrate a signal-extractive neural network architecture with axiomatic feature attribution to classify tissue samples based on single-cell gene expression profiles. This approach is not only interpretable but also robust to noise, requiring just 5% of genes and 23% of cells in an in silico tissue sample to encode signal in order to distinguish signal from noise with greater than 70% accuracy. We demonstrate its applicability in two real-world settings for discovering cell type-specific chemokine correlates: predicting response to immune checkpoint inhibitors in multiple tissue types and predicting DNA mismatch repair deficiency in colorectal cancer. Our approach not only significantly outperforms traditional machine learning classifiers but also presents actionable biological hypotheses of chemokine-mediated tumor immunogenicity.

Sherry Chao, Michael Brenner
-
[ Visit Poster at Spot A5 in Virtual World ]

In many medical segmentation tasks, it is crucial to provide valid confidence intervals to machine learning predictions. In the case of segmenting amniotic fluid using fetal MRIs, this allows doctors to better understand and control the segmentation masks, bound the fluid volume, and statistically detect anomalies such as cysts. In this work, we propose and evaluate different ways of creating confidence intervals for segmentation masks and volume predictions using tools from the field of conformal prediction. We show that simple but well-suited modifications of current methods, such as volume normalization and tuning of a leniency hyperparameter, lead to significant improvements, resulting in more consistent coverage and narrower confidence sets. These advances are thoroughly illustrated in the amniotic fluid segmentation problem.

Daniel Csillag, Lucas Monteiro Paes, Thiago Ramos, João Vitor Romano, Roberto Oliveira, Paulo Orenstein
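A generic split-conformal interval for a scalar volume prediction (a textbook sketch on simulated data, not the paper's normalized or leniency-tuned variant) looks like this:

```python
import numpy as np

# Simulated fluid volumes (mL) and a hypothetical model's predictions.
rng = np.random.default_rng(1)
true_vol = rng.uniform(50, 150, size=200)
pred_vol = true_vol + rng.normal(0, 5, size=200)

# Calibrate absolute residuals on a held-out split.
cal_true, cal_pred = true_vol[:100], pred_vol[:100]
residuals = np.abs(cal_true - cal_pred)

alpha = 0.1
n = len(residuals)
level = np.ceil((n + 1) * (1 - alpha)) / n  # finite-sample correction
q = np.quantile(residuals, level, method="higher")

# Interval for a new prediction: covers the truth with prob >= 1 - alpha.
new_pred = 100.0
interval = (new_pred - q, new_pred + q)
```

The paper's contribution is making such intervals well-calibrated and narrow for segmentation masks and volumes specifically, e.g. by normalizing residuals by predicted volume.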
-
[ Visit Poster at Spot A6 in Virtual World ]

As machine learning algorithms continue to expand into healthcare domains that affect decision making systems, new strategies will need to be incorporated to effectively detect and evaluate subgroup disparities in order to ensure accountability and generalizability in clinical machine learning workflows. In this paper, we explore how uncertainty can be used as one way to evaluate disparity in both patient demographics (race) and data acquisition (scanner) subgroups for breast density assessment on a dataset of 108,190 mammograms collected from over 33 clinical sites. Our results show that the choice of uncertainty quantification varies significantly at the subgroup level even if aggregate performance is comparable. We hope this analysis can promote future work on how uncertainty can be incorporated into clinical workflows to increase transparency in machine learning. The integration of predictive uncertainty can have implications for both regulation and generalizability of machine learning applications in healthcare.

Charlie Lu, Andreanne Lemay, Katharina Hoebel, Jayashree Kalpathy-Cramer
-
[ Visit Poster at Spot B0 in Virtual World ]

Interpretable brain network models for disease prediction are of great value for the advancement of neuroscience. GNNs are promising for modeling complicated network data, but they are prone to overfitting and suffer from poor interpretability, which prevents their use in decision-critical scenarios like healthcare. To bridge this gap, we propose BrainNNExplainer, an interpretable GNN framework for brain network analysis. It is mainly composed of two jointly learned modules: a backbone prediction model that is specifically designed for brain networks and an explanation generator that highlights disease-specific prominent brain network connections. Extensive experimental results with visualizations on two challenging disease prediction datasets demonstrate the unique interpretability and outstanding performance of BrainNNExplainer.

Hejie Cui, Wei Dai, Yanqiao Zhu, Xiaoxiao Li, Lifang He, Carl Yang
-
[ Visit Poster at Spot B1 in Virtual World ]

Recent studies in neuroscience show the great potential of functional brain networks constructed from fMRI data for population modeling and clinical prediction. However, existing functional brain networks are noisy and unaware of downstream prediction tasks, and are also incompatible with recent powerful machine learning models such as GNNs. In this work, we develop an end-to-end trainable pipeline to extract prominent fMRI features, generate brain networks, and make predictions with GNNs, all under the guidance of downstream prediction tasks. Preliminary experiments on the PNC fMRI data show the superior effectiveness and unique interpretability of our framework.

Xuan Kan, Hejie Cui, Ying Guo, Carl Yang
-
[ Visit Poster at Spot B2 in Virtual World ]

We analyze a dataset of retinal images using linear probes: linear regression models trained on some "target" task, using embeddings from a deep convolutional neural network (CNN) trained on some "source" task as input. We use this method across all possible pairings of 93 tasks in the UK Biobank dataset of retinal images, leading to ~164k different models. We analyze the performance of these linear probes by source and target task and by layer depth. We observe that representations from the middle layers of the network are more generalizable. We find that some target tasks are easily predicted irrespective of the source task, and that some other target tasks are more accurately predicted from correlated source tasks than from embeddings trained on the same task.

Katy Blumer, Subhashini Venugopalan, Michael Brenner, Jon Kleinberg
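A linear probe itself is just a regularized linear model fit on frozen embeddings; here is a self-contained sketch with simulated embeddings (in the paper they come from a CNN layer, and the probe quality is measured per source/target pairing):

```python
import numpy as np

# Simulated "frozen" source-task embeddings and a linearly decodable target.
rng = np.random.default_rng(0)
emb = rng.normal(size=(500, 64))               # embeddings from a source model
w = rng.normal(size=64)
target = emb @ w + 0.1 * rng.normal(size=500)  # target-task labels

X_tr, X_te = emb[:400], emb[400:]
y_tr, y_te = target[:400], target[400:]

# Closed-form ridge regression: (X'X + lam*I) coef = X'y.
lam = 1.0
A = X_tr.T @ X_tr + lam * np.eye(64)
coef = np.linalg.solve(A, X_tr.T @ y_tr)

pred = X_te @ coef
r2 = 1 - np.sum((y_te - pred) ** 2) / np.sum((y_te - y_te.mean()) ** 2)
print(round(r2, 3))  # high R^2: the target is linearly decodable here
```

Repeating this fit for every (source task, target task, layer) triple is what yields the ~164k probe models the abstract reports.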
-
[ Visit Poster at Spot B3 in Virtual World ]

Sepsis is a life-threatening organ dysfunction caused by a dysregulated host response to infection. Despite its severity, no FDA-approved drug treatment exists. Recent work controlling sepsis simulations with deep reinforcement learning has successfully discovered effective cytokine mediation strategies. However, the performance of these neural-network based policies comes at the expense of their deployability in clinical settings, where sparsity and interpretability are required characteristics. To this end, we propose a pipeline to learn simple, sparse symbolic policies represented by constants and/or succinct, human-readable expressions. We demonstrate our approach by learning a sparse symbolic policy that is efficacious on simulated sepsis patients.

Jacob Pettit, Brenden Petersen, Leno Silva, Gary An, Daniel Faissol
-
[ Visit Poster at Spot B4 in Virtual World ]

Automated medical diagnosis systems need to be able to recognize when new diseases emerge that are not represented in the training data. Even though current out-of-distribution (OOD) detection algorithms can successfully distinguish completely different datasets, they fail to reliably identify samples from novel classes that are similar to the in-distribution (ID) training data. We develop a new ensemble-based procedure that promotes model diversity and exploits regularization to limit disagreement to only OOD samples, using a batch containing an unknown mixture of ID and OOD data. We show that our procedure significantly outperforms state-of-the-art methods, including those that have access, during training, to data that is known to be OOD. We run extensive comparisons of our approach on a variety of novel-class detection scenarios, on standard image datasets as well as on new disease detection in medical image datasets.

Alexandru Tifrea, Eric Stavarache, Fanny Yang
-
[ Visit Poster at Spot B5 in Virtual World ]

In medical applications, misclassifications can result in undetected diseases or incorrect diagnoses. Hence, being cautious when the model is uncertain is important. One way to be more cautious is to include a reject option in a classifier to allow it to abstain from making a prediction if its confidence in its prediction is low. This paper proposes a model-agnostic rejector based on the Local Outlier Factor anomaly score in the context of an important medical application: sleep stage scoring. This rejector improves the model's trustworthiness by detecting observations which substantially deviate from the training set. Moreover, the method can help identify populations which are missing in the training set.
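
A reject option of this kind can be sketched as a wrapper around any classifier. Here a simplified novelty score (mean distance to the k nearest training points) stands in for the Local Outlier Factor used in the paper; the classifier and feature vectors are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical training features (e.g. per-epoch sleep-signal features).
train = rng.normal(size=(300, 4))

def novelty_score(x, train, k=10):
    """Mean distance to the k nearest training points (LOF stand-in)."""
    d = np.sqrt(((train - x) ** 2).sum(axis=1))
    return np.sort(d)[:k].mean()

# Calibrate a rejection threshold on the training set itself.
scores = np.array([novelty_score(x, train) for x in train])
tau = np.quantile(scores, 0.95)

def predict_or_reject(x, classify):
    """Abstain when x deviates substantially from the training distribution."""
    if novelty_score(x, train) > tau:
        return "REJECT"
    return classify(x)

print(predict_or_reject(np.zeros(4), classify=lambda x: "N2"))
print(predict_or_reject(10 * np.ones(4), classify=lambda x: "N2"))
```

Because the rejector only looks at the input's relation to the training set, it is model-agnostic: the wrapped `classify` function can be any sleep-stage scorer.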

Dries Van der Plas, Wannes Meert, Jesse Davis
-
[ Visit Poster at Spot B6 in Virtual World ]

Machine learning and artificial intelligence are increasingly deployed in critical societal functions such as finance, media, and healthcare. Along with their deployment come increasing reports of their failure when viewed through the lens of ethical principles such as fairness, democracy, and equal opportunity. As a result, research into fair algorithms and into mitigating bias in data and algorithms has surged in recent years. However, while it might seem clear in some applications what fairness entails and how to achieve it, established concepts do not translate directly to other domains. In this work, we consider healthcare specifically, illustrate the limitations and challenges of fair models in medical applications, and give recommendations for the development of AI in healthcare.

Melanie Ganz, Sune Hannibal Holm, Aasa Feragen
-
[ Visit Poster at Spot C0 in Virtual World ]

Reinforcement Learning (RL) is emerging as a tool for tackling complex control and decision-making problems. However, in high-risk environments such as healthcare, manufacturing, automotive, or aerospace, it is often challenging to bridge the gap between an apparently optimal policy learned by an agent and its real-world deployment, due to the uncertainties and risk associated with it. Broadly speaking, RL agents face two kinds of uncertainty: (1) aleatoric uncertainty, which reflects randomness or noise in the dynamics of the world, and (2) epistemic uncertainty, which reflects the bounded knowledge of the agent due to model limitations and the finite amount of information/data the agent has acquired about the world. These two types of uncertainty carry fundamentally different implications for the evaluation of performance and the level of risk or trust. Yet aleatoric and epistemic uncertainties are generally confounded, as standard and even distributional RL is agnostic to this difference. Here we propose how a distributional approach (UA-DQN) can be recast to decompose the net effects of each type of uncertainty. We demonstrate the operation of this method on grid world examples to build intuition, and then show a proof-of-concept application of an RL agent operating as a clinical decision support system in critical care.
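
The decomposition at the heart of such methods is the law of total variance: with an ensemble of return distributions, total variance splits into the mean within-member variance (aleatoric) plus the variance of member means (epistemic). The samples below are illustrative draws, not output of a trained agent.

```python
import numpy as np

rng = np.random.default_rng(3)

# 3 hypothetical ensemble members, each with 10k return samples.
# Members share noise scale 1.0 (aleatoric) but differ slightly in their
# mean estimate (epistemic disagreement).
ensemble_returns = rng.normal(loc=[[0.0], [0.1], [-0.1]],
                              scale=1.0, size=(3, 10000))

aleatoric = ensemble_returns.var(axis=1).mean()   # noise in the environment
epistemic = ensemble_returns.mean(axis=1).var()   # disagreement between members
print(f"aleatoric ~ {aleatoric:.2f}, epistemic ~ {epistemic:.3f}")
```

A deployed agent can then act cautiously (or defer to a clinician) specifically when the epistemic term is large, since more data would reduce it, unlike the aleatoric term.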

Paul Festor, Giulia Luise, Matthieu Komorowski, Aldo Faisal
-
[ Visit Poster at Spot C1 in Virtual World ]

We qualitatively and quantitatively compare saliency maps generated from state-of-the-art deep learning chest X-ray classification models to radiologist eye gaze data. We find that across several saliency map methods, correct predictions have saliency maps more similar to the corresponding eye gaze data than incorrect predictions do. To incorporate eye gaze data into the model training procedure, we create DenseNet-Aug, a simple augmentation of the DenseNet model which performs comparably to the state-of-the-art. Finally, we extract salient annotated regions for each label class, thereby characterizing model attribution at the dataset level. While sample-level saliency maps visibly vary, these dataset-level regional comparisons indicate that across most class labels, radiologist eye gaze, DenseNet, and DenseNet-Aug often identify similar salient regions.
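
Comparing a saliency map to a gaze heatmap reduces to a similarity measure between two 2D arrays. Normalized cross-correlation, shown below on synthetic maps, is one reasonable choice; the paper's exact metric may differ.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equally shaped heatmaps."""
    a = (a - a.mean()) / (a.std() + 1e-8)
    b = (b - b.mean()) / (b.std() + 1e-8)
    return float((a * b).mean())

rng = np.random.default_rng(4)
gaze = rng.random((16, 16))                         # hypothetical gaze heatmap
saliency_good = gaze + 0.1 * rng.random((16, 16))   # roughly tracks gaze
saliency_bad = rng.random((16, 16))                 # unrelated map

print(ncc(saliency_good, gaze), ncc(saliency_bad, gaze))
```

Aggregating this score separately over correct and incorrect predictions yields the kind of comparison the abstract reports.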

Jesse Kim, Helen Zhou, Zachary Lipton
-
[ Visit Poster at Spot C2 in Virtual World ]

Algorithmic decision support is rapidly becoming a staple of personalized medicine, especially for high-stakes recommendations in which access to certain information can drastically alter the course of treatment, and thus, patient outcome; a prominent example is radiomics for cancer subtyping. Because in these scenarios the stakes are high, it is desirable for decision systems to not only provide recommendations but supply transparent reasoning in support thereof. For learning-based systems, this can be achieved through an interpretable design of the inference pipeline. Herein we describe an automated yet interpretable system for uveal melanoma subtyping with digital cytology images from fine needle aspiration biopsies. Our method embeds every automatically segmented cell of a candidate cytology image as a point in a 2D manifold defined by many representative slides, which enables reasoning about the cell-level composition of the tissue sample, paving the way for interpretable subtyping of the biopsy. Finally, a rule-based slide-level classification algorithm is trained on the partitions of the circularly distorted 2D manifold. This process results in a simple rule set that is evaluated automatically but highly transparent for human verification. On our in-house cytology dataset of 88 uveal melanoma patients, the proposed method achieves an accuracy of 87.5%, which compares favorably to all competing approaches, including deep "black box" models. The method comes with a user interface to facilitate interaction with cell-level content, which may offer additional insights for pathological assessment.

Haomin Chen, Alvin Liu, Catalina Gomez, Zelia Correa, Mathias Unberath
-
[ Visit Poster at Spot C2 in Virtual World ]

Being able to explain predictions to clinical end-users is a necessity to leverage the power of AI models for clinical decision support. For medical images, saliency maps are the most common form of explanation. The maps highlight the features important for the AI model's prediction. Although many saliency map methods have been proposed, it is unknown how well they perform at explaining decisions on multi-modal medical images, where each modality/channel carries distinct clinical meanings of the same underlying biomedical phenomenon. Understanding such modality-dependent features is essential for clinical users' interpretation of AI decisions. To tackle this clinically important but technically ignored problem, we propose the MSFI (Modality-Specific Feature Importance) metric to examine whether saliency maps can highlight modality-specific important features. MSFI encodes the clinical requirements on modality prioritization and modality-specific feature localization. Our evaluations on 16 commonly used saliency map methods, including a clinician user study, show that although most saliency map methods captured modality importance information in general, most of them failed to highlight modality-specific important features consistently and precisely. The evaluation results guide the choice of saliency map methods and provide insights for proposing new ones targeting clinical applications.
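
A toy version of the underlying measurement: per modality, how much saliency mass falls inside that modality's ground-truth important region. This is only the localization ingredient; the actual MSFI definition in the paper also encodes modality prioritization.

```python
import numpy as np

def modality_mass_fraction(saliency, roi_mask):
    """Fraction of saliency mass inside each modality's important region.

    saliency, roi_mask: arrays of shape (modalities, H, W).
    """
    total = saliency.sum(axis=(1, 2)) + 1e-8
    inside = (saliency * roi_mask).sum(axis=(1, 2))
    return inside / total

# Two hypothetical modalities on a 4x4 image grid.
saliency = np.zeros((2, 4, 4))
saliency[0, 0, 0] = 1.0        # modality 0: all mass inside its region
saliency[1, 3, 3] = 1.0        # modality 1: all mass outside its region
roi = np.zeros((2, 4, 4))
roi[:, 0, 0] = 1.0             # the clinically important region per modality

print(modality_mass_fraction(saliency, roi))
```

A saliency method that scores high on modality 0 but low on modality 1 here would, in the paper's terms, be failing to localize modality-specific features.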

Weina Jin, Xiaoxiao Li, Ghassan Hamarneh
-
[ Visit Poster at Spot C3 in Virtual World ]

As black box explanations are increasingly being employed to establish model credibility in high stakes settings, it is important to ensure that these explanations are accurate and reliable. However, prior work demonstrates that explanations generated by state-of-the-art techniques are inconsistent, unstable, provide very little insight into their correctness and reliability, and are computationally inefficient. In this paper, we address the aforementioned challenges by developing a novel Bayesian framework for generating local explanations along with their associated uncertainty. We instantiate this framework to obtain Bayesian versions of LIME and KernelSHAP which output credible intervals for the feature importances, capturing the associated uncertainty. The resulting explanations not only enable us to make concrete inferences about their quality (e.g., there is a 95% chance that the feature importance lies within the given range), but are also highly consistent and stable. We carry out a detailed theoretical analysis that leverages the aforementioned uncertainty to estimate how many perturbations to sample, and how to sample for faster convergence. This work makes the first attempt at addressing several critical issues with popular explanation methods in one shot, thereby generating consistent, stable, and reliable explanations with guarantees in a computationally efficient manner. Experimental evaluation with multiple real world datasets and user studies demonstrates the efficacy of the proposed framework.
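
The core idea can be sketched with a conjugate Bayesian linear surrogate: fit a local linear model to black-box outputs on perturbed inputs, but keep a posterior over its coefficients so each feature importance comes with a credible interval. The black box and prior below are hypothetical; BayesLIME/BayesSHAP additionally weight perturbations by proximity and handle classification.

```python
import numpy as np

rng = np.random.default_rng(5)

def black_box(x):
    """Hypothetical model being explained (locally near-linear)."""
    return 3.0 * x[:, 0] - 2.0 * x[:, 1] + 0.05 * rng.normal(size=len(x))

X = rng.normal(size=(500, 3))           # perturbations around an instance
y = black_box(X)

# Conjugate Bayesian ridge: Gaussian prior on coefficients, known noise.
alpha, sigma2 = 1.0, 0.05 ** 2
S = np.linalg.inv(X.T @ X / sigma2 + alpha * np.eye(3))   # posterior covariance
m = S @ X.T @ y / sigma2                                  # posterior mean

sd = np.sqrt(np.diag(S))
lo, hi = m - 1.96 * sd, m + 1.96 * sd
for j in range(3):
    print(f"feature {j}: {m[j]:+.2f}  95% CI [{lo[j]:+.2f}, {hi[j]:+.2f}]")
```

Narrow intervals signal a stable explanation; wide ones signal that more perturbations should be sampled before trusting the importance estimate.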

Dylan Slack, Sophie Hilgard, Sameer Singh, Hima Lakkaraju
-
[ Visit Poster at Spot C3 in Virtual World ]

Motivation: Prediction explanation methods for neural networks trained for medical imaging tasks are important for avoiding the unintended consequences of deploying AI systems when false positive predictions can impact patient care. However, traditional image attribution methods struggle to satisfactorily explain such predictions. Thus, there is a pressing need to develop improved methods for model explainability and introspection.

Specific problem: Counterfactual explanations can transform input images to increase or decrease features which cause the prediction. However, current approaches are difficult to implement as they are monolithic or rely on GANs. These hurdles prevent wide adoption.

Our approach: Given an arbitrary classifier, we propose a simple autoencoder and gradient update (Latent Shift) that can transform the latent representation of a specific input image to exaggerate or curtail the features used for prediction. We use this method to study chest X-ray classifiers and evaluate their performance. We conduct a reader study with two radiologists assessing 240 chest X-ray predictions to identify which ones are false positives (half are) using traditional attribution maps or our proposed method.

Results: We found low overlap with ground-truth pathology masks for models with reasonably high accuracy. However, the results from our reader study indicate that these models are generally looking at the correct features. We also found that the Latent Shift explanation allows a user to have more confidence in true positive predictions compared to traditional approaches (0.15±0.95 on a 5-point scale, p=0.01), with only a small increase in false positive predictions (0.04±1.06, p=0.57).
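
The Latent Shift update itself is a one-liner: move the latent code along the gradient of the classifier output through the decoder, z' = z + λ·∂f(D(z))/∂z. Below is a minimal sketch with a hypothetical linear encoder/decoder and linear classifier so the gradient has a closed form; real use pairs a trained autoencoder with an arbitrary differentiable classifier.

```python
import numpy as np

rng = np.random.default_rng(6)

d, k = 8, 3                                # input dim, latent dim (toy sizes)
E = rng.normal(size=(k, d))                # linear "encoder"
D = np.linalg.pinv(E)                      # linear "decoder"
w = rng.normal(size=d)                     # linear "classifier": f(x) = w @ x

x = rng.normal(size=d)                     # the image being explained
z = E @ x
grad_z = D.T @ w                           # gradient of f(D(z)) w.r.t. z

# Negative lambda curtails the predicted feature, positive exaggerates it.
for lam in (-0.5, 0.0, 0.5):
    x_shift = D @ (z + lam * grad_z)
    print(f"lambda={lam:+.1f}  f={w @ x_shift:+.3f}")
```

Sweeping λ and decoding produces the counterfactual image sequence a reader inspects: the features that grow and shrink are, by construction, the ones driving the prediction.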

Joseph Paul Cohen, Rupert Brooks, Evan Zucker, Anuj Pareek, Matthew Lungren, Akshay Chaudhari
-
[ Visit Poster at Spot C4 in Virtual World ]

We propose a new method for variable selection using Bayesian neural networks. We focus on quantifying uncertainty in which variables should be selected. Our method provides posterior summaries including posterior inclusion probabilities and credible sets for variable selection. Our framework generalizes the previous Sum of Single Effect model (SuSiE) to deep learning models for incorporating non-linearity. We provide a variational algorithm with several relaxation techniques that enables scalable inference. Our model can be used for both regression and classification tasks. We show that our method has competitive performance in variable selection using simulations. The method is suited for scenarios where input variables are correlated and effect variables are sparse. We illustrate the utility of our method for genetic fine-mapping in statistical genetics with the Stock Mice dataset.

Wei Cheng, Sohini Ramachandran, Lorin Crawford
-
[ Visit Poster at Spot C4 in Virtual World ]

As modern neural networks keep breaking records and solving harder problems, their predictions also become less intelligible. The current lack of interpretability undermines the deployment of accurate machine learning tools in sensitive settings. In this work, we present a model-agnostic explanation method for image classification based on a hierarchical extension of Shapley coefficients, Hierarchical Shap (h-Shap), that resolves some limitations of current approaches. Unlike other Shapley-based explanation methods, h-Shap is scalable and does not need approximation. Under certain distributional assumptions, which are common in multiple instance learning, h-Shap retrieves the exact Shapley coefficients with an exponential improvement in computational complexity. We compare our hierarchical approach with popular Shapley-based and non-Shapley-based methods on a synthetic dataset, a medical imaging scenario, and a general computer vision problem. We show that h-Shap outperforms the state of the art in both accuracy and runtime.
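
The hierarchical trick can be sketched as recursive quadrant pruning: only descend into regions whose masking changes the model output, instead of scoring every pixel subset. The toy "model" below responds to a known patch, and the sketch only locates salient leaves; h-Shap proper additionally assigns them exact Shapley values under its multiple-instance assumptions.

```python
import numpy as np

def model(img):
    """Toy classifier: responds only to the top-left 2x2 patch."""
    return float(img[:2, :2].sum())

def salient_leaves(img, r0, r1, c0, c1, out):
    """Recursively collect pixels whose masking changes the model output."""
    masked = img.copy()
    masked[r0:r1, c0:c1] = 0
    if model(masked) == model(img):        # region irrelevant: prune subtree
        return
    if r1 - r0 == 1 and c1 - c0 == 1:      # single pixel: record it
        out.append((r0, c0))
        return
    rm, cm = (r0 + r1) // 2, (c0 + c1) // 2
    for ra, rb in ((r0, rm), (rm, r1)):    # recurse into the four quadrants
        for ca, cb in ((c0, cm), (cm, c1)):
            if ra < rb and ca < cb:
                salient_leaves(img, ra, rb, ca, cb, out)

img = np.zeros((4, 4))
img[0, 0] = 1.0
img[1, 1] = 1.0
found = []
salient_leaves(img, 0, 4, 0, 4, found)
print(sorted(found))
```

Irrelevant quadrants are discarded in one model call each, which is where the exponential savings over exhaustive Shapley computation comes from.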

Jacopo Teneggi, Alexandre Luster, Jeremias Sulam
-
[ Visit Poster at Spot C5 in Virtual World ]

We focus on the problem of learning-to-defer to an expert under non-stationary dynamics in a sequential decision-making setting, by identifying pre-emptive deferral strategies. Pre-emptive deferral strategies are desirable when delaying deferral can result in suboptimal or undesirable long term outcomes, e.g. unexpected potential side-effects of a treatment. We formalize a deferral policy as being pre-emptive if delaying deferral does not lead to improved long-term outcomes. Our method, Sequential Learning-to-Defer (SLTD), explicitly measures the (expected) value of deferring now versus later based on the underlying uncertainty in non-stationary dynamics via posterior sampling. We demonstrate that capturing this uncertainty can allow us to test whether delaying deferral can help improve mean outcomes, and also provides domain experts with an indication of when the model's performance is reliable. Finally, we show that our approach outperforms existing non-sequential learning-to-defer baselines, whilst reducing overall uncertainty on multiple synthetic and semi-synthetic (Sepsis-Diabetes) simulators.

Shalmali Joshi, Sonali Parbhoo, Finale Doshi-Velez
-
[ Visit Poster at Spot C5 in Virtual World ]

Faced with skyrocketing costs for developing new drugs from scratch, repurposing existing drugs for new uses is an enticing alternative that considerably reduces safety risks and development costs. However, successful drug repurposing has been mainly based on serendipitous discoveries. Here, we present a tool that combines a graph transformer network with interactive visual explanations to assist scientists in generating, exploring, and understanding drug repurposing predictions. Leveraging semantic attention in our graph transformer network, our tool introduces a novel way to visualize meta-path explanations that provide biomedical context for interpretation. Our results show that the tool generates accurate and interpretable drug repurposing predictions.

Qianwen Wang, Payal Chandak, Marinka Zitnik

Author Information

Yuyin Zhou (Johns Hopkins University)
Xiaoxiao Li (The University of British Columbia)
Vicky Yao (Rice University)
Pengtao Xie (Carnegie Mellon University)
DOU QI (The Chinese University of Hong Kong)
Nicha Dvornek (Yale University)
Julia Schnabel (King's College London)
Judy Wawira (Emory Radiology)
Yifan Peng (Weill Cornell Medicine)
Ronald Summers (NIH)
Alan Karthikesalingam (Google Health)
Lei Xing (Stanford University)
Eric Xing (Petuum Inc. and CMU)
