Applying machine learning (ML) in healthcare is rapidly gaining momentum. However, the black-box nature of existing ML approaches inevitably limits the interpretability and verifiability of clinical predictions. As these systems are pervasively introduced into the healthcare domain, which requires a higher level of safety and security, it becomes critical to develop methodologies to explain their predictions. Such methodologies would make medical decisions more trustworthy and reliable for physicians, which could ultimately facilitate deployment. It is also essential to develop more interpretable and transparent ML systems in the first place. For instance, by exploiting structured knowledge or prior clinical information, one can design models that learn representations more coherent with clinical reasoning. Doing so may also help mitigate biases in the learning process or identify more relevant variables for making medical decisions.
In this workshop, we aim to bring together researchers in ML, computer vision, healthcare, medicine, NLP, and clinical fields to discuss the challenges, definitions, formalisms, and evaluation protocols of interpretable medical machine intelligence. We will also cover possible solutions such as logic and symbolic reasoning over medical knowledge graphs, uncertainty quantification, composition models, etc. We hope the workshop offers a step toward building autonomous clinical decision systems with a higher-level understanding of interpretability.

Fri 6:15 a.m. - 6:30 a.m. | Welcoming remarks and introduction (Welcome Session) | Yuyin Zhou
Fri 6:30 a.m. - 7:00 a.m. | Quantitative epistemology: conceiving a new human-machine partnership (Invited Talk) | Mihaela van der Schaar
Fri 7:00 a.m. - 7:30 a.m. | Integrating Convolutional Neural Networks and Probabilistic Graphical Models for Epileptic Seizure Detection and Localization (Invited Talk) | Archana Venkataraman
Fri 7:30 a.m. - 7:40 a.m. | Poster spotlight #1 (Spotlight)
Fri 7:40 a.m. - 8:30 a.m. | Posters I and coffee break (Poster)
Fri 8:30 a.m. - 9:00 a.m. | Handling the long tail in medical imaging (Invited Talk) | Jim Winkens · Abhijit Guha Roy
Fri 9:00 a.m. - 9:30 a.m. | In Search of Effective and Reproducible Clinical Imaging Biomarkers for Pancreatic Oncology Applications of Screening, Diagnosis and Prognosis (Invited Talk) | Le Lu
Fri 9:30 a.m. - 9:40 a.m. | Poster spotlight #2 (Spotlight)
Fri 9:40 a.m. - 10:30 a.m. | Lunch Break
Fri 10:30 a.m. - 11:00 a.m. | Towards Robust and Reliable Model Explanations for Healthcare (Keynote) | Hima Lakkaraju
Fri 11:00 a.m. - 11:30 a.m. | Automating deep learning to interpret human genomic variations (Invited Talk) | Olga Troyanskaya
Fri 11:30 a.m. - 11:40 a.m. | Poster spotlight #3 (Spotlight)
Fri 11:40 a.m. - 12:00 p.m. | Coffee break
Fri 12:00 p.m. - 12:30 p.m. | Practical Considerations of Model Interpretability in Clinical Medicine: Stability, Causality and Actionability (Invited Talk) | Fei Wang
Fri 12:30 p.m. - 1:00 p.m. | Toward Interpretable Health Care (Invited Talk) | Alan L Yuille
Fri 1:00 p.m. - 1:10 p.m. | Poster spotlight #4 (Spotlight)
Fri 1:10 p.m. - 2:00 p.m. | Posters II and coffee break (Poster)
Fri 2:00 p.m. - 2:30 p.m. | Explainable AI for healthcare (Invited Talk) | Su-In Lee
Fri 2:30 p.m. - 2:45 p.m. | Closing remarks | Xiaoxiao Li

MACDA: Counterfactual Explanation with Multi-Agent Reinforcement Learning for Drug Target Prediction (Poster)
Most deep learning models for drug-target affinity (DTA) prediction are black boxes and are therefore difficult to interpret and verify, which puts their acceptance at risk. Explanation is necessary to make DTA models more trustworthy. The interaction between the sub-structures of the two inputs, drug functional groups and protein residues, is an important factor in a DTA model's prediction, and explanations based on substructure interactions allow domain experts to verify the binding mechanism the model uses. We propose a multi-agent reinforcement learning framework, Multi-Agent Counterfactual Drug-target binding Affinity (MACDA), to generate counterfactual explanations for the drug-protein complex. Our framework provides human-interpretable counterfactual instances while optimizing both the input drug and the target for counterfactual generation at the same time. MACDA also explains the substructure interactions between inputs in the DTA model's prediction.
Tri Nguyen · Thomas Quinn · Thin Nguyen · Truyen Tran

Causal Graph Recovery for Sepsis-Associated Derangements via Interpretable Hawkes Networks (Poster)
Continuous, automated surveillance systems that incorporate machine learning models are becoming increasingly common in healthcare environments. These models can capture temporally dependent changes across multiple patient variables and can enhance a clinician's situational awareness by providing an early warning alarm of an impending adverse event such as sepsis. However, most commonly used methods, e.g., XGBoost, fail to provide an interpretable mechanism for understanding why a model produced a sepsis alarm at a given time. The "black box" nature of many models is a severe limitation, as it prevents clinicians from independently corroborating the physiologic features that contributed to the sepsis alarm. To overcome this limitation, we propose a generalized linear model (GLM) approach to fit a Granger causal graph based on the physiology of several major sepsis-associated derangements (SADs). We adopt a recently developed stochastic monotone variational inequality-based estimator coupled with forward feature selection to learn the graph structure from both continuous and discrete-valued, as well as regularly and irregularly sampled, time series. Most importantly, we develop a non-asymptotic upper bound on the estimation error for any monotone link function in the GLM. We conduct real-data experiments and demonstrate that our proposed method can achieve comparable performance to popular and powerful prediction methods such as XGBoost while maintaining a high level of interpretability.
Song Wei · Yao Xie · Rishi Kamaleswaran
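
To give a feel for the GLM-based Granger-graph idea behind this poster, here is a minimal sketch that fits one sparse logistic GLM per target variable on lagged values of all variables and reads an adjacency matrix off the coefficients. The lag length, the L1 logistic regression, and the toy data are illustrative assumptions; this is not the authors' monotone variational-inequality estimator.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def granger_graph(X, lag=3, C=0.1):
    """Estimate a Granger-causal adjacency matrix from a binary multivariate
    time series X of shape (T, d) using one sparse logistic GLM per target
    variable (a simplified stand-in for the paper's estimator)."""
    T, d = X.shape
    # Lagged design matrix: row t contains X[t-lag:t] flattened.
    Z = np.asarray([X[t - lag:t].ravel() for t in range(lag, T)])
    A = np.zeros((d, d))                      # A[i, j]: influence of j on i
    for i in range(d):
        y = X[lag:, i]
        if y.min() == y.max():                # degenerate target, skip
            continue
        clf = LogisticRegression(penalty="l1", solver="liblinear", C=C)
        clf.fit(Z, y)
        # Aggregate absolute coefficients over the lag dimension.
        A[i] = np.abs(clf.coef_.reshape(lag, d)).sum(axis=0)
    return A

# Toy usage: variable 0 drives variable 1 with a one-step delay.
rng = np.random.default_rng(0)
x0 = rng.binomial(1, 0.3, size=500)
x1 = np.roll(x0, 1) | rng.binomial(1, 0.05, size=500)
X = np.stack([x0, x1, rng.binomial(1, 0.3, size=500)], axis=1)
print(granger_graph(X).round(2))
```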

Learning Robust Hierarchical Patterns of Human Brain across Many fMRI Studies (Poster)
Multi-site fMRI studies face the challenge that pooling introduces systematic, non-biological, site-specific variance due to hardware, software, and environment. In this paper, we propose to reduce site-specific variance in the estimation of hierarchical Sparsity Connectivity Patterns (hSCPs) in fMRI data via a simple yet effective matrix factorization while preserving biologically relevant variation. Our method leverages unsupervised adversarial learning to improve the reproducibility of the components. Experiments on simulated datasets show that the proposed method can estimate components with higher accuracy and reproducibility, while preserving age-related variation on a multi-center clinical dataset.
Dushyant Sahoo · Christos Davatzikos

Quantifying Explainability in NLP and Analyzing Algorithms for Performance-Explainability Tradeoff (Poster)
The healthcare domain is one of the most exciting application areas for machine learning, but a lack of model transparency contributes to a lag in adoption within the industry. In this work, we explore the current state of the art in explainability and interpretability within a case study in clinical text classification, using a mortality prediction task on MIMIC-III clinical notes. We demonstrate various visualization techniques for fully interpretable methods as well as model-agnostic post hoc attributions, and we provide a generalized method for evaluating the quality of explanations using infidelity and local Lipschitz across model types from logistic regression to BERT variants. With these metrics, we introduce a framework through which practitioners and researchers can assess the frontier between a model's predictive performance and the quality of its available explanations. We make our code available to encourage continued refinement of these methods.
Mitchell Naylor
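
The local Lipschitz stability metric mentioned in this abstract can be sketched in a few lines: perturb the input within a small ball and record the worst-case ratio of explanation change to input change. The sampling scheme, radius, and sample count below are illustrative choices, not the paper's exact protocol.

```python
import numpy as np

def local_lipschitz(explain_fn, x, radius=0.1, n_samples=50, seed=0):
    """Estimate the local Lipschitz constant of an explanation method around x:
    max_z ||e(x) - e(z)|| / ||x - z|| over perturbed neighbors z in an L2 ball."""
    rng = np.random.default_rng(seed)
    e_x = explain_fn(x)
    worst = 0.0
    for _ in range(n_samples):
        noise = rng.normal(size=x.shape)
        noise *= radius * rng.uniform() / (np.linalg.norm(noise) + 1e-12)
        z = x + noise
        ratio = np.linalg.norm(e_x - explain_fn(z)) / (np.linalg.norm(x - z) + 1e-12)
        worst = max(worst, ratio)
    return worst

# Toy usage: the "explanation" of a linear model is its constant gradient,
# so the estimated local Lipschitz constant should be ~0.
w = np.array([0.5, -1.2, 2.0])
print(local_lipschitz(lambda x: w, np.zeros(3)))
```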

Using Associative Classification and Odds Ratios for In-Hospital Mortality Risk Estimation (Poster)
Oliver Haas · Andreas Maier · Eva Rothgang

TransUNet: Transformers Make Strong Encoders for Medical Image Segmentation (Poster)
Medical image segmentation is an essential prerequisite for developing healthcare systems, especially for disease diagnosis and treatment planning. On various medical image segmentation tasks, the u-shaped architecture, also known as U-Net, has become the de-facto standard and achieved tremendous success. However, due to the intrinsic locality of convolution operations, U-Net generally demonstrates limitations in explicitly modeling long-range dependency. Transformers, designed for sequence-to-sequence prediction, have emerged as alternative architectures with innate global self-attention mechanisms, but can suffer from limited localization ability due to insufficient low-level detail. In this paper, we propose TransUNet, which combines the merits of both Transformers and U-Net, as a strong alternative for medical image segmentation. On one hand, the Transformer encodes tokenized image patches from a convolutional neural network (CNN) feature map as the input sequence for extracting global context. On the other hand, the decoder upsamples the encoded features, which are then combined with the high-resolution CNN feature maps to enable precise localization. We argue that Transformers can serve as strong encoders for medical image segmentation tasks, in combination with U-Net to enhance finer details by recovering localized spatial information. Extensive experimental results demonstrate the benefits of TransUNet, which substantially outperforms previous convolution-based networks.
Jie-Neng Chen · Yongyi Lu · Qihang Yu · Xiangde Luo · Ehsan Adeli · Yan Wang · Le Lu · Alan L Yuille · Yuyin Zhou
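
The hybrid CNN-Transformer encoder described above can be conveyed with a much smaller sketch than the actual TransUNet: CNN features are flattened into tokens, passed through a Transformer encoder for global context, then upsampled and fused with a high-resolution skip connection. The channel sizes, depths, and single skip connection are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyTransUNet(nn.Module):
    """Minimal sketch of the TransUNet idea (not the published architecture)."""
    def __init__(self, in_ch=1, n_classes=2, dim=64):
        super().__init__()
        self.stem = nn.Sequential(                        # high-resolution CNN features
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU())
        self.down = nn.Conv2d(32, dim, 3, stride=4, padding=1)   # /4 "patch" features
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4,
                                           dim_feedforward=128, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=2)
        self.fuse = nn.Conv2d(dim + 32, 32, 3, padding=1)
        self.head = nn.Conv2d(32, n_classes, 1)

    def forward(self, x):
        skip = self.stem(x)                               # (B, 32, H, W)
        feat = self.down(skip)                            # (B, dim, H/4, W/4)
        b, c, h, w = feat.shape
        tokens = feat.flatten(2).transpose(1, 2)          # (B, h*w, dim) token sequence
        tokens = self.transformer(tokens)                 # global self-attention
        feat = tokens.transpose(1, 2).reshape(b, c, h, w)
        feat = F.interpolate(feat, size=skip.shape[-2:], mode="bilinear",
                             align_corners=False)         # decoder upsampling
        out = F.relu(self.fuse(torch.cat([feat, skip], dim=1)))  # fuse with skip
        return self.head(out)                             # per-pixel logits

# Toy forward pass on a single-channel 64x64 "scan".
model = TinyTransUNet()
print(model(torch.randn(2, 1, 64, 64)).shape)             # torch.Size([2, 2, 64, 64])
```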

Tree-based local explanations of machine learning model predictions – AraucanaXAI (Poster)
Increasingly complex learning methods such as boosting, bagging, and deep learning have made ML models more accurate, but harder to understand and interpret. A tradeoff between performance and intelligibility must often be faced, especially in high-stakes applications like medicine. In this article we propose a novel methodological approach for generating explanations of the predictions of a generic ML model for a specific instance, one that can tackle both classification and regression tasks. Advantages of the proposed XAI approach include improved fidelity to the original model, the ability to deal with non-linear decision boundaries, and native support for both classification and regression problems.
Enea Parimbelli · Giovanna Nicora · Szymon Wilk · Wojtek Michalowski · Riccardo Bellazzi
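
The general recipe behind tree-based local explanations, of which AraucanaXAI is one instance, can be sketched as: sample a neighborhood around the instance, label it with the black-box model, and fit a small decision tree. The Gaussian neighborhood and the scikit-learn tree below are simplifying assumptions, not the authors' exact procedure.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

def local_tree_explanation(black_box, x, scale=0.5, n_samples=500, seed=0):
    """Fit a shallow decision tree that mimics the black-box model in a Gaussian
    neighborhood of x, yielding a human-readable local explanation."""
    rng = np.random.default_rng(seed)
    neighborhood = x + scale * rng.normal(size=(n_samples, x.shape[0]))
    labels = black_box.predict(neighborhood)          # black box acts as an oracle
    tree = DecisionTreeClassifier(max_depth=3, random_state=0)
    tree.fit(neighborhood, labels)
    return tree

# Toy usage with a random forest as the opaque model.
rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 4))
y = (X[:, 0] + X[:, 1] ** 2 > 0.5).astype(int)
rf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
tree = local_tree_explanation(rf, X[0])
print(export_text(tree, feature_names=[f"x{i}" for i in range(4)]))
```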

Counterfactual Explanations in Sequential Decision Making Under Uncertainty (Poster)
Methods to find counterfactual explanations have predominantly focused on one step decision making processes. In this work, we initiate the development of methods to find counterfactual explanations for decision making processes in which multiple, dependent actions are taken sequentially over time. We start by formally characterizing a sequence of actions and states using finite horizon Markov decision processes and the Gumbel-Max structural causal model. Building upon this characterization, we formally state the problem of finding counterfactual explanations for sequential decision making processes. In our problem formulation, the counterfactual explanation specifies an alternative sequence of actions differing in at most k actions from the observed sequence that could have led the observed process realization to a better outcome. Then, we introduce a polynomial time algorithm based on dynamic programming to build a counterfactual policy that is guaranteed to always provide the optimal counterfactual explanation on every possible realization of the counterfactual environment dynamics. We validate our algorithm using both synthetic and real data from cognitive behavioral therapy and show that the counterfactual explanations our algorithm finds can provide valuable insights to enhance sequential decision making under uncertainty.
Stratis Tsirtsis · Abir De · Manuel Gomez-Rodriguez

Reinforcement Learning for Workflow Recognition in Surgical Videos (Poster)
Automatically recognizing surgical workflow plays a significant part in improving surgical training efficiency by providing automated skill assessment for surgeons. Building on a deep model (SV-RCNet) that mainly consists of a deep residual network (ResNet) and a long short-term memory (LSTM) network, our framework introduces reinforcement learning into surgical workflow (phase) recognition for the first time and is evaluated on the Cholec80 dataset, which contains 80 videos of cholecystectomy surgeries. In our framework, an intelligent agent is trained with a Markov Decision Process (MDP) model and the Proximal Policy Optimization (PPO) algorithm, using discriminative spatio-temporal features extracted from SV-RCNet as input. Experiments on Cholec80 outperform SV-RCNet in terms of accuracy, precision, and recall.
Wang Wei · Jingze Zhang · Qi Dou

Transfer Learning with Real-World Nonverbal Vocalizations from Minimally Speaking Individuals (Poster)
We trained and evaluated several types of transfer learning to classify the affect and communication intent of nonverbal vocalizations from eight minimally speaking (mv*) individuals with autism. Datasets were recorded in real-world settings with in-the-moment labels from a close family member. We trained deep neural nets (DNNs) on six audio datasets (including our dataset of nonverbal vocalizations) and then fine-tuned the models to classify affect and intent for each individual. We also evaluated a zero-shot approach for arousal and valence regression using an acted dataset of nonverbal vocalizations that occur amidst typical speech. For two of the eight mv* communicators, fine-tuning improved model performance compared to fully personalized DNNs, and there were weak groupings in arousal values inferred using zero-shot learning. The limited success of the evaluated transfer learning approaches highlights the need for specialized datasets with mv* individuals.
Jaya Narain

Enhancing interpretability and reducing uncertainties in deep learning of electrocardiograms using a sub-waveform representation (Poster)
In electrocardiogram (ECG) deep learning (DL), researchers traditionally use the full duration of waveforms, which creates redundancies in feature learning and results in inaccurate predictions with large uncertainties. In this work, we introduce a new sub-waveform representation that leverages the rhythmic pattern of ECG waveforms by aligning the heartbeats to enhance DL predictive capabilities. As a case study, we investigate the impact of waveform representations on DL predictions for identification of left ventricular dysfunction. We explain how the sub-waveform representation opens up a new space for feature learning and minimizes uncertainties. By developing a novel scoring system, we carefully examine the feature interpretation and the clinical relevance. We note that the proposed representation enhances predictive power by engineering only at the waveform level (data-centric) rather than changing the neural network architecture (model-centric). We expect that this added control over the granularity of data will improve ECG-DL modeling for developing new AI technologies in the cardiovascular space.
Hossein Honarvar · Chirag Agarwal · Sulaiman Somani · Girish Nadkarni · Marinka Zitnik · Fei Wang · Benjamin Glicksberg

Online structural kernel selection for mobile health (Poster)
Motivated by the need for efficient and personalized learning in mobile health, we investigate the problem of online kernel selection for Gaussian Process regression in the multi-task setting. We propose a novel generative process on the kernel composition for this purpose. Our method demonstrates that trajectories of kernel evolutions can be transferred between users to improve learning and that the kernels themselves are meaningful for the mHealth prediction goal.
Eura Shin · Predag Klasnja · Susan Murphy · Finale Doshi-Velez

Hierarchical Analysis of Visual COVID-19 Features from Chest Radiographs (Poster)
Chest radiography has been a recommended procedure for patient triaging and resource management in intensive care units (ICUs) throughout the COVID-19 pandemic. Machine learning efforts to augment this workflow have long been challenged by deficiencies in reporting, model evaluation, and failure mode analysis. To address some of these shortcomings, we model radiological features with a human-interpretable class hierarchy that aligns with the radiological decision process. We also propose a data-driven error analysis methodology to uncover the blind spots of our model, providing further transparency on its clinical utility. For example, our experiments show that model failures highly correlate with ICU imaging conditions and with the inherent difficulty of distinguishing certain types of radiological features. Moreover, our hierarchical interpretation and analysis facilitate comparison with radiologists' findings and inter-rater variability, which in turn helps us better assess the clinical applicability of models.
Shruthi Bannur · Ozan Oktay · Melanie Bernhardt · Anton Schwaighofer · Besmira Nushi · Aditya Nori · Javier Alvarez-Valle · Daniel Coelho de Castro

Towards Privacy-preserving Explanations in Medical Image Analysis (Poster)
The use of Deep Learning in the medical field is hindered by the lack of interpretability. Case-based interpretability strategies can provide intuitive explanations for deep learning models' decisions, thus enhancing trust. However, the resulting explanations threaten patient privacy, motivating the development of privacy-preserving methods compatible with the specifics of medical data. In this work, we analyze existing privacy-preserving methods and their respective capacity to anonymize medical data while preserving disease-related semantic features. We find that the PPRL-VGAN deep learning method was the best at preserving disease-related semantic features while guaranteeing a high level of privacy among the compared state-of-the-art methods. Nevertheless, we emphasize the need to improve privacy-preserving methods for medical imaging, as we identified relevant drawbacks in all existing privacy-preserving approaches.
Helena Montenegro · Wilson Silva · Jaime S. Cardoso

Prediction of intracranial hypertension in patients with severe traumatic brain injury (Poster)
Intracranial hypertension is a key factor in the treatment and prevention of secondary brain injury in patients with traumatic brain injury. We aimed to develop a prediction model based on changes in intracranial pressure waveform morphology. A convolutional neural network with 10 hidden layers was trained on the dominant intracranial pressure waveform, computed over 1 minute of data, from control and pre-intracranial hypertension segments up to 1 hour prior to intracranial hypertension. The model obtained an accuracy, sensitivity, specificity and an area under the receiver operating characteristics curve of 0.70, 0.68, 0.72 and 0.74, respectively, for the time window 0-10 minutes before the onset of intracranial hypertension.
Ruud van Kaam

iFedAvg – Interpretable Data-Interoperability for Federated Learning (Poster)
Recently, the ever-growing demand for privacy-oriented machine learning has motivated researchers to develop federated and decentralized learning techniques, allowing individual clients to train models collaboratively without disclosing their private datasets. However, widespread adoption has been limited in domains relying on high levels of user trust, where assessment of data compatibility is essential. In this work, we define and address low interoperability induced by underlying client data inconsistencies in federated learning for tabular data. The proposed method, iFedAvg, builds on federated averaging by adding local element-wise affine layers to allow for a personalized and granular understanding of the collaborative learning process. This enables the detection of outlier datasets in the federation and learns to compensate for local data distribution shifts without sharing any original data. We evaluate iFedAvg using several public benchmarks and a previously unstudied collection of real-world datasets from the 2014-2016 West African Ebola epidemic, jointly forming the largest such dataset in the world. In all evaluations, iFedAvg achieves competitive average performance with negligible overhead. It additionally shows substantial improvement on outlier clients, highlighting increased robustness to individual dataset shifts. Most importantly, our method provides valuable client-specific insights at a fine-grained level to guide interoperable federated learning.
David Roschewitz · Mary-Anne Hartley · Luca Corinzia · Martin Jaggi
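
The core mechanism described above, shared weights averaged across clients while per-feature affine layers stay local, can be sketched in a few lines. The tiny linear model, training loop, and toy data below are illustrative assumptions rather than the published iFedAvg implementation.

```python
import torch
import torch.nn as nn

class PersonalizedNet(nn.Module):
    """Shared linear predictor preceded by a client-local element-wise affine layer."""
    def __init__(self, d):
        super().__init__()
        self.scale = nn.Parameter(torch.ones(d))    # local: absorbs feature scaling shifts
        self.shift = nn.Parameter(torch.zeros(d))   # local: never communicated
        self.shared = nn.Linear(d, 1)               # shared: averaged by the server

    def forward(self, x):
        return self.shared(x * self.scale + self.shift)

def federated_round(clients, data, epochs=1, lr=0.1):
    """One FedAvg round: local SGD on each client, then average only the shared layer."""
    for model, (x, y) in zip(clients, data):
        opt = torch.optim.SGD(model.parameters(), lr=lr)
        for _ in range(epochs):
            opt.zero_grad()
            logits = model(x).squeeze(-1)
            nn.functional.binary_cross_entropy_with_logits(logits, y).backward()
            opt.step()
    with torch.no_grad():
        avg_w = torch.stack([m.shared.weight for m in clients]).mean(0)
        avg_b = torch.stack([m.shared.bias for m in clients]).mean(0)
        for m in clients:
            m.shared.weight.copy_(avg_w)
            m.shared.bias.copy_(avg_b)

# Toy usage: two clients whose features are shifted versions of each other.
torch.manual_seed(0)
x1 = torch.randn(64, 5); x2 = torch.randn(64, 5) + 2.0
y1 = (x1[:, 0] > 0).float(); y2 = (x2[:, 0] > 2.0).float()
clients = [PersonalizedNet(5), PersonalizedNet(5)]
for _ in range(20):
    federated_round(clients, [(x1, y1), (x2, y2)])
print(clients[1].shift)   # inspect the learned local shift for the offset client
```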

Personalized and Reliable Decision Sets: Enhancing Interpretability in Clinical Decision Support Systems (Poster)
In this study, we present a novel clinical decision support system and discuss its interpretability-related properties. It combines a decision set of rules with a machine learning scheme to offer global and local interpretability. More specifically, machine learning is used to predict the likelihood of each of those rules to be correct for a particular patient, which may also contribute to better predictive performances. Moreover, the reliability analysis of individual predictions is also addressed, contributing to further personalized interpretability. The combination of these several elements may be crucial to obtain the clinical stakeholders' trust, leading to a better assessment of patients' conditions and improvement of the physicians' decision-making.
Francisco Valente

Solving inverse problems with deep neural networks driven by sparse signal decomposition in a physics-based dictionary (Poster)
Deep neural networks (DNNs) have an impressive ability to invert very complex models, i.e., to learn the generative parameters from a model's output. Once trained, the forward pass of a DNN is often much faster than traditional, optimization-based methods used to solve inverse problems. This, however, comes at the cost of lower interpretability, a fundamental limitation in most medical applications. We propose an approach for solving general inverse problems which combines the efficiency of DNNs and the interpretability of traditional analytical methods. The measurements are first projected onto a dense dictionary of model-based responses. The resulting sparse representation is then fed to a DNN with an architecture driven by the problem's physics for fast parameter learning. Our method can handle generative forward models that are costly to evaluate and exhibits accuracy and computation time similar to a fully-learned DNN, while maintaining high interpretability and being easier to train. Concrete results are shown on an example of model-based brain parameter estimation from magnetic resonance imaging (MRI).
Gaetan Rensonnet

Optimizing Clinical Early Warning Models to Meet False Alarm Constraints (Poster)
Deployed early warning systems in clinical settings often suffer from high false alarm rates that limit trustworthiness and overall utility. Despite the need to control false alarms, the dominant classifier training paradigm remains minimizing cross entropy, a loss function that has no direct relationship to false alarms. While existing efforts often use post-hoc threshold selection to address false alarms, in this paper we build on recent work to suggest a more comprehensive solution. We develop a family of tight bounds using the sigmoid function that let us maximize recall while satisfying a constraint that holds false alarms below a specified tolerance. This new differentiable objective can be easily integrated with generalized linear models, neural networks, and any other classifier trained with minibatch gradient descent. Through experiments on toy data and acute care mortality risk prediction, we demonstrate our method can satisfy a desired constraint on false alarms interpretable to clinical staff while achieving better recall than alternatives.
Preetish Rath · Michael Hughes
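
A rough sketch of training against a false-alarm budget with a smooth sigmoid surrogate is shown below. The penalty formulation is a simplification I am assuming for illustration; it is not the paper's family of tight bounds, and the linear scorer and toy data are placeholders.

```python
import torch
import torch.nn as nn

def recall_with_fa_budget_loss(scores, labels, alpha=0.1, sharpness=5.0, penalty=10.0):
    """Differentiable surrogate: maximize sigmoid-relaxed recall while penalizing
    any excess of the relaxed false-alarm rate over the budget alpha."""
    probs = torch.sigmoid(sharpness * scores)        # smooth "predict positive"
    pos, neg = labels == 1, labels == 0
    soft_recall = probs[pos].mean()
    soft_fa_rate = probs[neg].mean()
    violation = torch.clamp(soft_fa_rate - alpha, min=0.0)
    return -soft_recall + penalty * violation

# Toy usage: train a linear early-warning scorer under a 10% false-alarm budget.
torch.manual_seed(0)
X = torch.randn(500, 8)
y = (X[:, 0] + 0.5 * torch.randn(500) > 1.0).long()
model = nn.Linear(8, 1)
opt = torch.optim.Adam(model.parameters(), lr=0.05)
for _ in range(200):
    opt.zero_grad()
    recall_with_fa_budget_loss(model(X).squeeze(-1), y).backward()
    opt.step()
with torch.no_grad():
    pred = (model(X).squeeze(-1) > 0).long()
    fa = ((pred == 1) & (y == 0)).float().mean() / (y == 0).float().mean()
    recall = ((pred == 1) & (y == 1)).float().sum() / (y == 1).sum()
    print(f"false-alarm rate {fa.item():.2f}, recall {recall.item():.2f}")
```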

Identifying cell type-specific chemokine correlates with hierarchical signal extraction from single-cell transcriptomes (Poster)
Biological data is inherently heterogeneous and high-dimensional. Single-cell sequencing of transcripts in a tissue sample generates data for thousands of cells, each of which is characterized by upwards of tens of thousands of genes. How to identify the subsets of cells and genes that are associated with a label of interest remains an open question. In this paper, we integrate a signal-extractive neural network architecture with axiomatic feature attribution to classify tissue samples based on single-cell gene expression profiles. This approach is not only interpretable but also robust to noise, requiring just 5% of genes and 23% of cells in an in silico tissue sample to encode signal in order to distinguish signal from noise with greater than 70% accuracy. We demonstrate its applicability in two real-world settings for discovering cell type-specific chemokine correlates: predicting response to immune checkpoint inhibitors in multiple tissue types and predicting DNA mismatch repair deficiency in colorectal cancer. Our approach not only significantly outperforms traditional machine learning classifiers but also presents actionable biological hypotheses of chemokine-mediated tumor immunogenicity.
Sherry Chao · Michael Brenner

Uncertainty Quantification for Amniotic Fluid Segmentation and Volume Prediction (Poster)
In many medical segmentation tasks, it is crucial to provide valid confidence intervals to machine learning predictions. In the case of segmenting amniotic fluid using fetal MRIs, this allows doctors to better understand and control the segmentation masks, bound the fluid volume, and statistically detect anomalies such as cysts. In this work, we propose and evaluate different ways of creating confidence intervals for segmentation masks and volume predictions using tools from the field of conformal prediction. We show that simple but well-suited modifications of current methods, such as volume normalization and tuning of a leniency hyperparameter, lead to significant improvements, resulting in more consistent coverage and narrower confidence sets. These advances are thoroughly illustrated in the amniotic fluid segmentation problem.
Daniel Csillag · Lucas Monteiro Paes · Thiago Ramos · João Vitor Romano · Roberto Oliveira · Paulo Orenstein
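
Split conformal prediction for a scalar output such as fluid volume can be sketched in a few lines: calibrate the absolute residuals of any point predictor and use their corrected quantile as an interval radius. The plain absolute-residual score and toy predictor below omit the paper's volume normalization and leniency tuning.

```python
import numpy as np

def conformal_radius(predict, X_cal, y_cal, alpha=0.1):
    """Split conformal: return q such that predict(x) ± q covers the true value
    with probability >= 1 - alpha, marginally over new cases."""
    residuals = np.abs(y_cal - predict(X_cal))
    n = len(residuals)
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)   # finite-sample correction
    return np.quantile(residuals, level)

# Toy usage with a deliberately imperfect "volume predictor".
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = 2.0 * X[:, 0] + rng.normal(scale=0.5, size=500)
predict = lambda A: 1.8 * A[:, 0]                          # biased point predictor
q = conformal_radius(predict, X[:300], y[:300])
center = predict(X[300:])
coverage = np.mean((y[300:] >= center - q) & (y[300:] <= center + q))
print(f"radius {q:.2f}, empirical coverage {coverage:.2f}")
```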

Evaluating subgroup disparity using epistemic uncertainty for breast density assessment in mammography (Poster)
As machine learning algorithms continue to expand into healthcare domains that affect decision-making systems, new strategies will need to be incorporated to effectively detect and evaluate subgroup disparities, in order to ensure accountability and generalizability in clinical machine learning workflows. In this paper, we explore how uncertainty can be used as one way to evaluate disparity in both patient demographics (race) and data acquisition (scanner) subgroups for breast density assessment, on a dataset of 108,190 mammograms collected from over 33 clinical sites. Our results show that the choice of uncertainty quantification varies significantly at the subgroup level even if aggregate performance is comparable. We hope this analysis can promote future work on how uncertainty can be incorporated into clinical workflows to increase transparency in machine learning. The integration of predictive uncertainty can have implications for both regulation and generalizability of machine learning applications in healthcare.
Charlie Lu · Andreanne Lemay · Katharina Hoebel · Jayashree Kalpathy-Cramer
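
One common way to obtain the kind of per-subgroup epistemic uncertainty this abstract discusses is Monte Carlo dropout with a mutual-information summary. The small classifier, the grouping variable, and the summary statistic below are illustrative assumptions, not the paper's pipeline.

```python
import torch
import torch.nn as nn

def mc_dropout_uncertainty(model, X, n_samples=30):
    """Epistemic uncertainty via MC dropout: keep dropout active at test time,
    draw stochastic predictions, and score each sample by mutual information
    (predictive entropy minus expected entropy)."""
    model.train()                                    # keeps dropout stochastic
    with torch.no_grad():
        probs = torch.stack([torch.softmax(model(X), dim=-1)
                             for _ in range(n_samples)])      # (S, N, C)
    mean = probs.mean(0)
    H_mean = -(mean * mean.clamp_min(1e-12).log()).sum(-1)             # predictive entropy
    mean_H = -(probs * probs.clamp_min(1e-12).log()).sum(-1).mean(0)   # expected entropy
    return H_mean - mean_H                           # mutual information per sample

# Toy usage: compare average uncertainty across two "scanner" subgroups.
torch.manual_seed(0)
model = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Dropout(0.5), nn.Linear(64, 4))
X = torch.randn(200, 16)
scanner = torch.randint(0, 2, (200,))
u = mc_dropout_uncertainty(model, X)
for s in (0, 1):
    print(f"scanner {s}: mean epistemic uncertainty {u[scanner == s].mean().item():.4f}")
```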

BrainNNExplainer: An Interpretable Graph Neural Network Framework for Brain Network based Disease Analysis (Poster)
Interpretable brain network models for disease prediction are of great value for the advancement of neuroscience. GNNs are promising for modeling complicated network data, but they are prone to overfitting and suffer from poor interpretability, which prevents their usage in decision-critical scenarios like healthcare. To bridge this gap, we propose BrainNNExplainer, an interpretable GNN framework for brain network analysis. It is mainly composed of two jointly learned modules: a backbone prediction model that is specifically designed for brain networks and an explanation generator that highlights disease-specific prominent brain network connections. Extensive experimental results with visualizations on two challenging disease prediction datasets demonstrate the unique interpretability and outstanding performance of BrainNNExplainer.
Hejie Cui · Wei Dai · Yanqiao Zhu · Xiaoxiao Li · Lifang He · Carl Yang

Effective and Interpretable fMRI Analysis with Functional Brain Network Generation (Poster)
Recent studies in neuroscience show the great potential of functional brain networks constructed from fMRI data for popularity modeling and clinical predictions. However, existing functional brain networks are noisy and unaware of downstream prediction tasks, and are also incompatible with powerful recent machine learning models such as GNNs. In this work, we develop an end-to-end trainable pipeline to extract prominent fMRI features, generate brain networks, and make predictions with GNNs, all under the guidance of downstream prediction tasks. Preliminary experiments on the PNC fMRI data show the superior effectiveness and unique interpretability of our framework.
Xuan Kan · Hejie Cui · Ying Guo · Carl Yang

Using a Cross-Task Grid of Linear Probes to Interpret CNN Model Predictions On Retinal Images (Poster)
We analyze a dataset of retinal images using linear probes: linear regression models trained on some
Katy Blumer · Subhashini Venugopalan · Michael Brenner · Jon Kleinberg

Learning sparse symbolic policies for sepsis treatment (Poster)
Sepsis is a life-threatening organ dysfunction caused by a dysregulated host response to infection. Despite its severity, no FDA-approved drug treatment exists. Recent work controlling sepsis simulations with deep reinforcement learning has successfully discovered effective cytokine mediation strategies. However, the performance of these neural-network-based policies comes at the expense of their deployability in clinical settings, where sparsity and interpretability are required characteristics. To this end, we propose a pipeline to learn simple, sparse symbolic policies represented by constants and/or succinct, human-readable expressions. We demonstrate our approach by learning a sparse symbolic policy that is efficacious on simulated sepsis patients.
Jacob Pettit · Brenden Petersen · Leno da Silva · Gary An · Daniel Faissol

Novel disease detection using ensembles with regularized disagreement (Poster)
Automated medical diagnosis systems need to be able to recognize when new diseases emerge that are not represented in the in-distribution (ID) training data. Even though current out-of-distribution (OOD) detection algorithms can successfully distinguish completely different data sets, they fail to reliably identify samples from novel classes that are similar to the training data. We develop a new ensemble-based procedure that promotes model diversity and exploits regularization to limit disagreement to only OOD samples, using a batch containing an unknown mixture of ID and OOD data. We show that our procedure significantly outperforms state-of-the-art methods, including those that have access, during training, to data that is known to be OOD. We run extensive comparisons of our approach on a variety of novel-class detection scenarios, on standard image data sets as well as on new disease detection on medical image data sets.
Alexandru Tifrea · Eric Stavarache · Fanny Yang

A reject option for automated sleep stage scoring (Poster)
In medical applications, misclassifications can result in undetected diseases or incorrect diagnoses. Hence, being cautious when the model is uncertain is important. One way to be more cautious is to include a reject option in a classifier to allow it to abstain from making a prediction if its confidence in its prediction is low. This paper proposes a model-agnostic rejector based on the Local Outlier Factor anomaly score in the context of an important medical application: sleep stage scoring. This rejector improves the model's trustworthiness by detecting observations which substantially deviate from the training set. Moreover, the method can help identify populations which are missing in the training set.
Dries Van der Plas · Wannes Meert · Jesse Davis
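
A model-agnostic reject option based on the Local Outlier Factor can be sketched with scikit-learn as below; the contamination threshold and the wrapper class are illustrative conventions I am assuming, not the paper's calibration procedure.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import LocalOutlierFactor

class RejectingClassifier:
    """Wrap any classifier with a novelty-based reject option: abstain on inputs
    whose LOF score marks them as far from the training set."""
    def __init__(self, base, contamination=0.05):
        self.base = base
        self.lof = LocalOutlierFactor(novelty=True, contamination=contamination)

    def fit(self, X, y):
        self.base.fit(X, y)
        self.lof.fit(X)                        # learn the training-set density
        return self

    def predict(self, X):
        keep = self.lof.predict(X) == 1        # +1 = inlier, -1 = outlier
        out = np.full(len(X), -1)              # -1 encodes "reject / abstain"
        if keep.any():
            out[keep] = self.base.predict(X[keep])
        return out

# Toy usage: samples from an unseen, shifted population get rejected.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 4))
y_train = (X_train[:, 0] > 0).astype(int)
clf = RejectingClassifier(LogisticRegression()).fit(X_train, y_train)
X_test = np.vstack([rng.normal(size=(5, 4)), rng.normal(loc=6.0, size=(5, 4))])
print(clf.predict(X_test))                     # shifted samples come back as -1
```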

Assessing Bias in Medical AI (Poster)
Machine learning and artificial intelligence are increasingly deployed in critical societal functions such as finance, media, and healthcare. Along with their deployment come increasing reports of their failures when viewed through the lens of ethical principles such as fairness, democracy, and equal opportunity. As a result, research into fair algorithms and the mitigation of bias in data and algorithms has surged in recent years. However, while it might seem clear what fairness entails and how to achieve it in some applications, established concepts do not translate directly to other domains. In this work, we consider healthcare specifically, illustrating the limitations and challenges of fair models within medical applications, and give recommendations for the development of AI in healthcare.
Melanie Ganz · Sune Hannibal Holm · Aasa Feragen

Enabling risk-aware Reinforcement Learning for medical interventions through uncertainty decomposition (Poster)
Reinforcement Learning (RL) is emerging as a tool for tackling complex control and decision-making problems. However, in high-risk environments such as healthcare, manufacturing, automotive, or aerospace, it is often challenging to bridge the gap between an apparently optimal policy learned by an agent and its real-world deployment, due to the uncertainties and risk associated with it. Broadly speaking, RL agents face two kinds of uncertainty: (1) aleatoric uncertainty, which reflects randomness or noise in the dynamics of the world, and (2) epistemic uncertainty, which reflects the bounded knowledge of the agent due to model limitations and the finite amount of information/data the agent has acquired about the world. These two types of uncertainty carry fundamentally different implications for the evaluation of performance and the level of risk or trust. Yet aleatoric and epistemic uncertainties are generally confounded, as standard and even distributional RL is agnostic to this difference. Here we propose how a distributional approach (UA-DQN) can be recast to render uncertainties by decomposing the net effects of each uncertainty. We demonstrate the operation of this method in grid-world examples to build intuition and then show a proof-of-concept application of an RL agent operating as a clinical decision support system in critical care.
Paul Festor · Giulia Luise · Matthieu Komorowski · Aldo Faisal

Do You See What I See? A Comparison of Radiologist Eye Gaze to Computer Vision Saliency Maps for Chest X-ray Classification (Poster)
We qualitatively and quantitatively compare saliency maps generated from state-of-the-art deep learning chest X-ray classification models to radiologist eye gaze data. We find that across several saliency map methods, correct predictions have saliency maps more similar to the corresponding eye gaze data than the same for incorrect predictions. To incorporate eye gaze data into the model training procedure, we create DenseNet-Aug, a simple augmentation of the DenseNet model which performs comparably to the state-of-the-art. Finally, we extract salient annotated regions for each label class, thereby characterizing model attribution at the dataset level. While sample-level saliency maps visibly vary, these dataset-level regional comparisons indicate that across most class labels, radiologist eye gaze, DenseNet, and DenseNet-Aug often identify similar salient regions.
Jesse Kim · Helen Zhou · Zachary Lipton

An Interpretable Algorithm for Uveal Melanoma Subtyping from Whole Slide Cytology Images (Oral)
Algorithmic decision support is rapidly becoming a staple of personalized medicine, especially for high-stakes recommendations in which access to certain information can drastically alter the course of treatment, and thus, patient outcome; a prominent example is radiomics for cancer subtyping. Because in these scenarios the stakes are high, it is desirable for decision systems to not only provide recommendations but supply transparent reasoning in support thereof. For learning-based systems, this can be achieved through an interpretable design of the inference pipeline. Herein we describe an automated yet interpretable system for uveal melanoma subtyping with digital cytology images from fine needle aspiration biopsies. Our method embeds every automatically segmented cell of a candidate cytology image as a point in a 2D manifold defined by many representative slides, which enables reasoning about the cell-level composition of the tissue sample, paving the way for interpretable subtyping of the biopsy. Finally, a rule-based slide-level classification algorithm is trained on the partitions of the circularly distorted 2D manifold. This process results in a simple rule set that is evaluated automatically but highly transparent for human verification. On our in-house cytology dataset of 88 uveal melanoma patients, the proposed method achieves an accuracy of 87.5% that compares favorably to all competing approaches, including deep "black box" models. The method comes with a user interface to facilitate interaction with cell-level content, which may offer additional insights for pathological assessment.
Haomin Chen · Alvin Liu · Catalina Gomez · Zelia Correa · Mathias Unberath

One Map Does Not Fit All: Evaluating Saliency Map Explanation on Multi-Modal Medical Images (Oral)
Being able to explain predictions to clinical end-users is a necessity to leverage the power of AI models for clinical decision support. For medical images, saliency maps are the most common form of explanation: the maps highlight features important for the AI model's prediction. Although many saliency map methods have been proposed, it is unknown how well they perform at explaining decisions on multi-modal medical images, where each modality/channel carries distinct clinical meanings of the same underlying biomedical phenomenon. Understanding such modality-dependent features is essential for clinical users' interpretation of AI decisions. To tackle this clinically important but technically ignored problem, we propose the MSFI (Modality-Specific Feature Importance) metric to examine whether saliency maps can highlight modality-specific important features. MSFI encodes the clinical requirements on modality prioritization and modality-specific feature localization. Our evaluation of 16 commonly used saliency map methods, including a clinician user study, shows that although most saliency map methods captured modality importance information in general, most of them failed to highlight modality-specific important features consistently and precisely. The evaluation results guide the choice of saliency map methods and provide insights for proposing new ones targeting clinical applications.
Weina Jin · Xiaoxiao Li · Ghassan Hamarneh

Reliable Post hoc Explanations: Modeling Uncertainty in Explainability (Oral)
As black box explanations are increasingly being employed to establish model credibility in high-stakes settings, it is important to ensure that these explanations are accurate and reliable. However, prior work demonstrates that explanations generated by state-of-the-art techniques are inconsistent, unstable, provide very little insight into their correctness and reliability, and are computationally inefficient. In this paper, we address the aforementioned challenges by developing a novel Bayesian framework for generating local explanations along with their associated uncertainty. We instantiate this framework to obtain Bayesian versions of LIME and KernelSHAP which output credible intervals for the feature importances, capturing the associated uncertainty. The resulting explanations not only enable us to make concrete inferences about their quality (e.g., there is a 95% chance that the feature importance lies within the given range), but are also highly consistent and stable. We carry out a detailed theoretical analysis that leverages the aforementioned uncertainty to estimate how many perturbations to sample, and how to sample for faster convergence. This work makes the first attempt at addressing several critical issues with popular explanation methods in one shot, thereby generating consistent, stable, and reliable explanations with guarantees in a computationally efficient manner. Experimental evaluation with multiple real-world datasets and user studies demonstrates the efficacy of the proposed framework.
Dylan Slack · Sophie Hilgard · Sameer Singh · Hima Lakkaraju
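
The spirit of uncertainty-aware local explanations can be conveyed with a bootstrap over LIME-style perturbations, as in the sketch below. This is a stand-in under my own simplifying assumptions (Gaussian perturbations, an RBF proximity kernel, ridge surrogates, percentile intervals), not the paper's Bayesian formulation of LIME or KernelSHAP.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression, Ridge

def lime_with_intervals(predict_proba, x, n_perturb=300, n_boot=100, scale=0.3, seed=0):
    """LIME-style local linear explanation with percentile intervals obtained by
    bootstrapping the perturbation set."""
    rng = np.random.default_rng(seed)
    Z = x + scale * rng.normal(size=(n_perturb, x.shape[0]))       # local perturbations
    f = predict_proba(Z)[:, 1]                                     # black-box outputs
    w = np.exp(-np.sum((Z - x) ** 2, axis=1) / (2 * scale ** 2))   # proximity kernel
    coefs = []
    for _ in range(n_boot):
        idx = rng.integers(0, n_perturb, n_perturb)                # bootstrap resample
        model = Ridge(alpha=1.0).fit(Z[idx], f[idx], sample_weight=w[idx])
        coefs.append(model.coef_)
    coefs = np.array(coefs)
    lo, hi = np.percentile(coefs, [2.5, 97.5], axis=0)
    return coefs.mean(axis=0), lo, hi

# Toy usage with a logistic-regression "black box".
rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 3))
y = (X[:, 0] - 2 * X[:, 2] > 0).astype(int)
bb = LogisticRegression().fit(X, y)
mean, lo, hi = lime_with_intervals(bb.predict_proba, X[0])
for j in range(3):
    print(f"feature {j}: {mean[j]:+.3f}  [{lo[j]:+.3f}, {hi[j]:+.3f}]")
```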

Gifsplanation via Latent Shift: A Simple Autoencoder Approach to Counterfactual Generation for Chest X-rays (Oral)
Motivation: Prediction explanation methods for neural networks trained on medical imaging tasks are important for avoiding the unintended consequences of deploying AI systems when false positive predictions can impact patient care. However, traditional image attribution methods struggle to satisfactorily explain such predictions. Thus, there is a pressing need to develop improved models for model explainability and introspection. Specific problem: Counterfactual explanations can transform input images to increase or decrease the features which cause the prediction. However, current approaches are difficult to implement as they are monolithic or rely on GANs. These hurdles prevent wide adoption. Our approach: Given an arbitrary classifier, we propose a simple autoencoder and gradient update (Latent Shift) that can transform the latent representation of a specific input image to exaggerate or curtail the features used for prediction. We use this method to study chest X-ray classifiers and evaluate their performance. We conduct a reader study with two radiologists assessing 240 chest X-ray predictions to identify which ones are false positives (half are), using traditional attribution maps or our proposed method. Results: We found low overlap with ground truth pathology masks for models with reasonably high accuracy. However, the results from our reader study indicate that these models are generally looking at the correct features. We also found that the Latent Shift explanation allows a user to have more confidence in true positive predictions compared to traditional approaches (0.15±0.95 on a 5-point scale with p=0.01), with only a small increase in false positive predictions (0.04±1.06 with p=0.57).
Joseph Paul Cohen · Rupert Brooks · Evan Zucker · Anuj Pareek · Lungren Matthew · Akshay Chaudhari
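
The Latent Shift idea, moving an autoencoder's latent code along the classifier's gradient to exaggerate or curtail the predicted feature, reduces to a few lines of autograd. The tiny untrained autoencoder, classifier, and lambda values below are placeholders I am assuming for illustration; they stand in for the pretrained models used in the paper.

```python
import torch
import torch.nn as nn

def latent_shift(encoder, decoder, classifier, x, lambdas=(-100.0, 0.0, 100.0)):
    """Generate counterfactual images by shifting the latent code z along
    -d f(D(z))/dz, so the classifier's output is curtailed or exaggerated."""
    z = encoder(x).detach().requires_grad_(True)
    score = classifier(decoder(z)).sum()
    grad, = torch.autograd.grad(score, z)
    return [decoder(z - lam * grad).detach() for lam in lambdas]   # e.g. render as a GIF

# Placeholder 28x28 single-channel autoencoder and classifier (untrained).
encoder = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 32))
decoder = nn.Sequential(nn.Linear(32, 28 * 28), nn.Sigmoid(),
                        nn.Unflatten(1, (1, 28, 28)))
classifier = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 1))

x = torch.rand(1, 1, 28, 28)
frames = latent_shift(encoder, decoder, classifier, x)
print([f.shape for f in frames])
```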

Variable selection via the sum of single effect neural networks with credible sets (Oral)
We propose a new method for variable selection using Bayesian neural networks. We focus on quantifying uncertainty in which variables should be selected. Our method provides posterior summaries including posterior inclusion probabilities and credible sets for variable selection. Our framework generalizes the previous Sum of Single Effect model (SuSiE) to deep learning models for incorporating non-linearity. We provide a variational algorithm with several relaxation techniques that enables scalable inference. Our model can be used for both regression and classification tasks. We show that our method has competitive performance in variable selection using simulations. The method is suited for scenarios where input variables are correlated and effect variables are sparse. We illustrate the utility of our method for genetic fine-mapping in statistical genetics with the Stock Mice dataset.
Wei Cheng · Sohini Ramachandran · Lorin Crawford

Fast Hierarchical Games for Image Explanations (Oral)
As modern neural networks keep breaking records and solving harder problems, their predictions also become less intelligible. The current lack of interpretability undermines the deployment of accurate machine learning tools in sensitive settings. In this work, we present a model-agnostic explanation method for image classification based on a hierarchical extension of Shapley coefficients, Hierarchical Shap (h-Shap), that resolves some limitations of current approaches. Unlike other Shapley-based explanation methods, h-Shap is scalable and it does not need approximation. Under certain distributional assumptions, which are common in multiple instance learning, h-Shap retrieves the exact Shapley coefficients with an exponential improvement in computational complexity. We compare our hierarchical approach with popular Shapley-based and non-Shapley-based methods on a synthetic dataset, a medical imaging scenario, and a general computer vision problem. We show that h-Shap outperforms the state of the art in both accuracy and runtime.
Jacopo Teneggi · Alexandre Luster · Jeremias Sulam

Interpretable learning-to-defer for sequential decision-making (Oral)
We focus on the problem of learning-to-defer to an expert under non-stationary dynamics in a sequential decision-making setting, by identifying pre-emptive deferral strategies. Pre-emptive deferral strategies are desirable when delaying deferral can result in suboptimal or undesirable long term outcomes, e.g. unexpected potential side-effects of a treatment. We formalize a deferral policy as being pre-emptive if delaying deferral does not lead to improved long-term outcomes. Our method, Sequential Learning-to-Defer (SLTD), explicitly measures the (expected) value of deferring now versus later based on the underlying uncertainty in non-stationary dynamics via posterior sampling. We demonstrate that capturing this uncertainty can allow us to test whether delaying deferral can help improve mean outcomes, and also provides domain experts with an indication of when the model's performance is reliable. Finally, we show that our approach outperforms existing non-sequential learning-to-defer baselines, whilst reducing overall uncertainty on multiple synthetic and semi-synthetic (Sepsis-Diabetes) simulators.
Shalmali Joshi · Sonali Parbhoo · Finale Doshi-Velez

Interactive Visual Explanations for Deep Drug Repurposing (Oral)
Faced with skyrocketing costs for developing new drugs from scratch, repurposing existing drugs for new uses is an enticing alternative that considerably reduces safety risks and development costs. However, successful drug repurposing has been mainly based on serendipitous discoveries. Here, we present a tool that combines a graph transformer network with interactive visual explanations to assist scientists in generating, exploring, and understanding drug repurposing predictions. Leveraging semantic attention in our graph transformer network, our tool introduces a novel way to visualize meta path explanations that provide biomedical context for interpretation. Our results show that the tool generates accurate drug predictions and provides interpretable predictions.
Qianwen Wang · Payal Chandak · Marinka Zitnik
-
|
An Interpretable Algorithm for Uveal Melanoma Subtyping from Whole Slide Cytology Images
(
Poster
)
Algorithmic decision support is rapidly becoming a staple of personalized medicine, especially for high-stakes recommendations in which access to certain information can drastically alter the course of treatment, and thus, patient outcome; a prominent example is radiomics for cancer subtyping. Because in these scenarios the stakes are high, it is desirable for decision systems to not only provide recommendations but supply transparent reasoning in support thereof. For learning-based systems, this can be achieved through an interpretable design of the inference pipeline. Herein we describe an automated yet interpretable system for uveal melanoma subtyping with digital cytology images from fine needle aspiration biopsies. Our method embeds every automatically segmented cell of a candidate cytology image as a point in a 2D manifold defined by many representative slides, which enables reasoning about the cell-level composition of the tissue sample, paving the way for interpretable subtyping of the biopsy. Finally, a rule-based slide-level classification algorithm is trained on the partitions of the circularly distorted 2D manifold. This process results in a simple rule set that is evaluated automatically but highly transparent for human verification. On our in house cytology dataset of 88 uveal melanoma patients, the proposed method achieves an accuracy of 87.5% that compares favorably to all competing approaches, including deep "black box'' models. The method comes with a user interface to facilitate interaction with cell-level content, which may offer additional insights for pathological assessment. |
Haomin Chen · Alvin Liu · Catalina Gomez · Zelia Correa · Mathias Unberath 🔗 |
-
|
One Map Does Not Fit All: Evaluating Saliency Map Explanation on Multi-Modal Medical Images
(
Poster
)
Being able to explain the prediction to clinical end-users is a necessity to leverage the power of AI models for clinical decision support. For medical images, saliency maps are the most common form of explanation. The maps highlight important features for AI model's prediction. Although many saliency map methods have been proposed, it is unknown how well they perform on explaining decisions on multi-modal medical images, where each modality/channel carries distinct clinical meanings of the same underlying biomedical phenomenon. Understanding such modality-dependent features is essential for clinical users' interpretation of AI decisions. To tackle this clinically important but technically ignored problem, we propose the MSFI (Modality-Specific Feature Importance) metric to examine whether saliency maps can highlight modality-specific important features. MSFI encodes the clinical requirements on modality prioritization and modality-specific feature localization. Our evaluations on 16 commonly used saliency map methods, including a clinician user study, show that although most saliency map methods captured modality importance information in general, most of them failed to highlight modality-specific important features consistently and precisely. The evaluation results guide the choices of saliency map methods and provide insights to propose new ones targeting clinical applications. |
Weina Jin · Xiaoxiao Li · Ghassan Hamarneh 🔗 |
-
|
Reliable Post hoc Explanations: Modeling Uncertainty in Explainability
(
Poster
)
As black box explanations are increasingly being employed to establish model credibility in high stakes settings, it is important to ensure that these explanations are accurate and reliable. However, prior work demonstrates that explanations generated by state-of-the-art techniques are inconsistent, unstable, provide very little insight into their correctness and reliability, and are computationally inefficient. In this paper, we address the aforementioned challenges by developing a novel Bayesian framework for generating local explanations along with their associated uncertainty. We instantiate this framework to obtain Bayesian versions of LIME and KernelSHAP which output credible intervals for the feature importances, capturing the associated uncertainty. The resulting explanations not only enable us to make concrete inferences about their quality (e.g., there is a 95\% chance that the feature importance lies within the given range), but are also highly consistent and stable. We carry out a detailed theoretical analysis that leverages the aforementioned uncertainty to estimate how many perturbations to sample, and how to sample for faster convergence. This work makes the first attempt at addressing several critical issues with popular explanation methods in one shot, thereby generating consistent, stable, and reliable explanations with guarantees in a computationally efficient manner. Experimental evaluation with multiple real world datasets and user studies demonstrate that the efficacy of the proposed framework. |
Dylan Slack · Sophie Hilgard · Sameer Singh · Hima Lakkaraju 🔗 |
-
|
Gifsplanation via Latent Shift: A Simple Autoencoder Approach to Counterfactual Generation for Chest X-rays
(
Poster
)
Motivation:Prediction explanation methods for neural networks trained for medical imaging tasks are important for avoiding the unintended consequences of deploying AI systems when false positive predictions can impact patient care. However, traditional image attribution methods struggle to satisfactorily explain such predictions. Thus, there is a pressing need to develop improved models for model explainability and introspection. Specific problem: Counterfactual explanations can transform input images to increase or decrease features which cause the prediction. However, current approaches are difficult to implement as they are monolithic or rely on GANs. These hurdles prevent wide adoption. Our approach: Given an arbitrary classifier, we propose a simple autoencoder and gradient update (Latent Shift) that can transform the latent representation of a specific input image to exaggerate or curtail the features used for prediction. We use this method to study chest X-ray classifiers and evaluate their performance. We conduct a reader study with two radiologists assessing 240 chest X-ray predictions to identify which ones are false positives (half are) using traditional attribution maps or our proposed method. Results: We found low overlap with ground truth pathology masks for models with reasonably high accuracy. However, the results from our reader study indicate that these models are generally looking at the correct features. We also found that the Latent Shift explanation allows a user to have more confidence in true positive predictions compared to traditional approaches (0.15±0.95 in a 5 point scale with p=0.01) with only a small increase in false positive predictions (0.04±1.06 with p=0.57). |
Joseph Paul Cohen · Rupert Brooks · Evan Zucker · Anuj Pareek · Lungren Matthew · Akshay Chaudhari 🔗 |
-
|
Variable selection via the sum of single effect neural networks with credible sets
(
Poster
)
We propose a new method for variable selection using Bayesian neural networks. We focus on quantifying uncertainty in which variables should be selected. Our method provides posterior summaries including posterior inclusion probabilities and credible sets for variable selection. Our framework generalizes the previous Sum of Single Effect model (SuSiE) to deep learning models for incorporating non-linearity. We provide a variational algorithm with several relaxation techniques that enables scalable inference. Our model can be used for both regression and classification tasks. We show that our method has competitive performance in variable selection using simulations. The method is suited for scenarios where input variables are correlated and effect variables are sparse. We illustrate the utility of our method for genetic fine-mapping in statistical genetics with the Stock Mice dataset. |
Wei Cheng · Sohini Ramachandran · Lorin Crawford 🔗 |
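A central output of this approach is a credible set computed from posterior inclusion probabilities. A minimal sketch of that post-processing step, assuming `pip` holds the posterior distribution over which variable carries a given single effect, is shown below; it is not the authors' variational algorithm.

```python
import numpy as np

def credible_set(pip, coverage=0.95):
    """Smallest set of variables whose posterior inclusion probabilities
    (for one single effect) sum to at least `coverage`."""
    order = np.argsort(pip)[::-1]          # variables from most to least probable
    cum = np.cumsum(pip[order])
    k = int(np.searchsorted(cum, coverage)) + 1
    return order[:k]                       # indices forming the level-`coverage` credible set
```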
-
|
Fast Hierarchical Games for Image Explanations
(
Poster
)
As modern neural networks keep breaking records and solving harder problems, their predictions also become less intelligible. This lack of interpretability undermines the deployment of accurate machine learning tools in sensitive settings. In this work, we present a model-agnostic explanation method for image classification based on a hierarchical extension of Shapley coefficients, Hierarchical Shap (h-Shap), that resolves some limitations of current approaches. Unlike other Shapley-based explanation methods, h-Shap is scalable and does not require approximation. Under certain distributional assumptions, which are common in multiple instance learning, h-Shap retrieves the exact Shapley coefficients with an exponential improvement in computational complexity. We compare our hierarchical approach with popular Shapley-based and non-Shapley-based methods on a synthetic dataset, a medical imaging scenario, and a general computer vision problem. We show that h-Shap outperforms the state of the art in both accuracy and runtime. |
Jacopo Teneggi · Alexandre Luster · Jeremias Sulam 🔗 |
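The hierarchical idea can be illustrated with a simplified masking recursion: split the image into quadrants and refine only regions whose removal meaningfully changes the model score. The sketch below follows that spirit but does not compute exact Shapley coefficients as h-Shap does; `score_fn`, the baseline value, and the tolerance are assumptions.

```python
import numpy as np

def hierarchical_relevance(image, score_fn, tol=0.05, min_size=8, baseline=0.0):
    """Simplified hierarchical attribution sketch for a 2-D (grayscale) image:
    recursively split into quadrants, descending only into regions whose
    masking lowers the model score by more than `tol`."""
    saliency = np.zeros(image.shape[:2])
    full_score = score_fn(image)

    def recurse(r0, r1, c0, c1):
        masked = image.copy()
        masked[r0:r1, c0:c1] = baseline
        drop = full_score - score_fn(masked)      # how much this region supports the prediction
        if drop < tol:
            return                                # irrelevant region: prune the whole subtree
        if (r1 - r0) <= min_size or (c1 - c0) <= min_size:
            saliency[r0:r1, c0:c1] += drop        # leaf: assign the remaining relevance
            return
        rm, cm = (r0 + r1) // 2, (c0 + c1) // 2
        for rr in ((r0, rm), (rm, r1)):
            for cc in ((c0, cm), (cm, c1)):
                recurse(rr[0], rr[1], cc[0], cc[1])

    recurse(0, image.shape[0], 0, image.shape[1])
    return saliency
```

Pruning entire irrelevant subtrees is what gives hierarchical schemes of this kind their computational advantage over exhaustive Shapley estimation.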
-
|
Interpretable learning-to-defer for sequential decision-making
(
Poster
)
We focus on the problem of learning to defer to an expert under non-stationary dynamics in a sequential decision-making setting, by identifying pre-emptive deferral strategies. Pre-emptive deferral strategies are desirable when delaying deferral can result in suboptimal or undesirable long-term outcomes, e.g., unexpected side effects of a treatment. We formalize a deferral policy as pre-emptive if delaying deferral does not lead to improved long-term outcomes. Our method, Sequential Learning-to-Defer (SLTD), explicitly measures the (expected) value of deferring now versus later based on the underlying uncertainty in the non-stationary dynamics via posterior sampling. We demonstrate that capturing this uncertainty allows us to test whether delaying deferral can improve mean outcomes, and also provides domain experts with an indication of when the model's performance is reliable. Finally, we show that our approach outperforms existing non-sequential learning-to-defer baselines, whilst reducing overall uncertainty on multiple synthetic and semi-synthetic (Sepsis-Diabetes) simulators. |
Shalmali Joshi · Sonali Parbhoo · Finale Doshi-Velez 🔗 |
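One way to read the SLTD test is as a Monte-Carlo comparison between deferring immediately and deferring after one more model-controlled step, averaged over posterior samples of the dynamics. The sketch below illustrates that comparison under assumed interfaces (`posterior_models`, `expert_value`, `model_policy`); it is not the authors' implementation.

```python
import numpy as np

def should_defer_now(state, posterior_models, expert_value, model_policy,
                     gamma=0.99, n_samples=50, seed=0):
    """Pre-emptive deferral test (sketch): defer now if handing control to the
    expert immediately is expected to be at least as good as acting for one more
    step under sampled dynamics and deferring afterwards."""
    rng = np.random.default_rng(seed)
    value_now = expert_value(state)

    later = []
    for i in rng.integers(0, len(posterior_models), size=n_samples):
        model = posterior_models[i]                     # one posterior sample of the dynamics
        action = model_policy(state)
        next_state, reward = model.step(state, action)  # simulate one more model-controlled step
        later.append(reward + gamma * expert_value(next_state))

    return value_now >= float(np.mean(later))
```

The spread of the sampled `later` values also conveys how uncertain the dynamics are, which is the signal the paper surfaces to domain experts.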
-
|
Interactive Visual Explanations for Deep Drug Repurposing
(
Poster
)
Faced with skyrocketing costs for developing new drugs from scratch, repurposing existing drugs for new uses is an enticing alternative that considerably reduces safety risks and development costs. However, successful drug repurposing has mainly relied on serendipitous discoveries. Here, we present a tool that combines a graph transformer network with interactive visual explanations to assist scientists in generating, exploring, and understanding drug repurposing predictions. Leveraging the semantic attention in our graph transformer network, our tool introduces a novel way to visualize meta-path explanations that provide biomedical context for interpretation. Our results show that the tool generates accurate drug repurposing predictions and makes them interpretable. |
Qianwen Wang · Payal Chandak · Marinka Zitnik 🔗 |
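A small piece of this pipeline that is easy to illustrate is ranking meta paths by the attention the model places on their edges. The sketch below scores each drug-to-disease path by the product of per-edge attention weights; the path and attention data structures are assumptions, not the tool's actual API.

```python
import numpy as np

def rank_metapaths(paths, attention):
    """Score each meta path (a sequence of (head, relation, tail) edges) by the
    product of its learned attention weights and return paths from most to
    least influential (illustrative sketch)."""
    scored = []
    for path in paths:
        weights = [attention[edge] for edge in path]   # per-edge semantic attention
        scored.append((float(np.prod(weights)), path))
    scored.sort(key=lambda t: t[0], reverse=True)
    return scored
```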
Author Information
Yuyin Zhou (Johns Hopkins University)
Xiaoxiao Li (The University of British Columbia)
Vicky Yao (Rice University)
Pengtao Xie (Carnegie Mellon University)
Qi Dou (The Chinese University of Hong Kong)
Nicha Dvornek (Yale University)
Julia Schnabel (King's College London)
Judy Wawira (Emory Radiology)
Yifan Peng (Weill Cornell Medicine)
Ronald Summers (NIH)
Alan Karthikesalingam (Google Health)
Lei Xing (Stanford University)
Eric Xing (Petuum Inc. and CMU)
More from the Same Authors
-
2021 : Towards Principled Disentanglement for Domain Generalization »
Hanlin Zhang · Yi-Fan Zhang · Weiyang Liu · Adrian Weller · Bernhard Schölkopf · Eric Xing -
2021 : TransUNet: Transformers Make Strong Encoders for Medical Image Segmentation »
Jie-Neng Chen · Yongyi Lu · Qihang Yu · Xiangde Luo · Ehsan Adeli · Yan Wang · Le Lu · Alan L Yuille · Yuyin Zhou -
2021 : BrainNNExplainer: An Interpretable Graph Neural Network Framework for Brain Network based Disease Analysis »
Hejie Cui · Wei Dai · Yanqiao Zhu · Xiaoxiao Li · Lifang He · Carl Yang -
2021 : One Map Does Not Fit All: Evaluating Saliency Map Explanation on Multi-Modal Medical Images »
Weina Jin · Xiaoxiao Li · Ghassan Hamarneh -
2023 : Counterfactual Generation with Identifiability Guarantees »
Hanqi Yan · Lingjing Kong · Lin Gui · Yuejie Chi · Eric Xing · Yulan He · Kun Zhang -
2023 : Identification of Nonlinear Latent Hierarchical Causal Models »
Lingjing Kong · Biwei Huang · Feng Xie · Eric Xing · Yuejie Chi · Kun Zhang -
2023 : Making Scalable Meta Learning Practical »
Sang Keun Choe · Sanket Vaibhav Mehta · Hwijeen Ahn · Willie Neiswanger · Pengtao Xie · Emma Strubell · Eric Xing -
2023 : Mask, Stitch, and Re-Sample: Enhancing Robustness and Generalizability in Anomaly Detection through Automatic Diffusion Models »
Cosmin Bercea · Michael Neumayr · Daniel Rueckert · Julia Schnabel -
2023 : Charting the Course: A Deep Dive into the Evolution and Future Trajectory of Multimodal AI in Radiology »
Judy Wawira -
2023 Workshop: 3rd Workshop on Interpretable Machine Learning in Healthcare (IMLH) »
Weina Jin · Ramin Zabih · S. Kevin Zhou · Yuyin Zhou · Xiaoxiao Li · Yifan Peng · Zongwei Zhou · Yucheng Tang · Yuzhe Yang · Agni Kumar -
2023 Poster: Federated Adversarial Learning: A Framework with Convergence Analysis »
Xiaoxiao Li · Zhao Song · Jiaming Yang -
2023 Poster: Underspecification Presents Challenges for Credibility in Modern Machine Learning »
Alexander D'Amour · Katherine Heller · Dan Moldovan · Ben Adlam · Babak Alipanahi · Alex Beutel · Christina Chen · Jonathan Deaton · Jacob Eisenstein · Matthew Hoffman · Farhad Hormozdiari · Neil Houlsby · Shaobo Hou · Ghassen Jerfel · Alan Karthikesalingam · Mario Lucic · Yian Ma · Cory McLean · Diana Mincu · Akinori Mitani · Andrea Montanari · Zachary Nado · Vivek Natarajan · Christopher Nielson · Thomas F. Osborne · Rajiv Raman · Kim Ramasamy · Rory sayres · Jessica Schrouff · Martin Seneviratne · Shannon Sequeira · Harini Suresh · Victor Veitch · Maksym Vladymyrov · Xuezhi Wang · Kellie Webster · Steve Yadlowsky · Taedong Yun · Xiaohua Zhai · D. Sculley -
2022 Workshop: 2nd Workshop on Interpretable Machine Learning in Healthcare (IMLH) »
Ramin Zabih · S. Kevin Zhou · Weina Jin · Yuyin Zhou · Ipek Oguz · Xiaoxiao Li · Yifan Peng · Zongwei Zhou · Yucheng Tang -
2022 Workshop: The First Workshop on Pre-training: Perspectives, Pitfalls, and Paths Forward »
Huaxiu Yao · Hugo Larochelle · Percy Liang · Colin Raffel · Jian Tang · Ying WEI · Saining Xie · Eric Xing · Chelsea Finn -
2022 Poster: SDQ: Stochastic Differentiable Quantization with Mixed Precision »
Xijie Huang · Zhiqiang Shen · Shichao Li · Zechun Liu · Hu Xianghong · Jeffry Wicaksana · Eric Xing · Kwang-Ting Cheng -
2022 Spotlight: SDQ: Stochastic Differentiable Quantization with Mixed Precision »
Xijie Huang · Zhiqiang Shen · Shichao Li · Zechun Liu · Hu Xianghong · Jeffry Wicaksana · Eric Xing · Kwang-Ting Cheng -
2021 Workshop: Self-Supervised Learning for Reasoning and Perception »
Pengtao Xie · Shanghang Zhang · Ishan Misra · Pulkit Agrawal · Katerina Fragkiadaki · Ruisi Zhang · Tassilo Klein · Asli Celikyilmaz · Mihaela van der Schaar · Eric Xing -
2021 : Closing remarks »
Xiaoxiao Li -
2021 : Invited Talk: Eric P. Xing. A Data-Centric View for Composable Natural Language Processing. »
Eric Xing -
2021 : Welcoming remarks and introduction »
Yuyin Zhou -
2021 Poster: FL-NTK: A Neural Tangent Kernel-based Framework for Federated Learning Analysis »
Baihe Huang · Xiaoxiao Li · Zhao Song · Xin Yang -
2021 Spotlight: FL-NTK: A Neural Tangent Kernel-based Framework for Federated Learning Analysis »
Baihe Huang · Xiaoxiao Li · Zhao Song · Xin Yang -
2020 Poster: Adaptive Checkpoint Adjoint Method for Gradient Estimation in Neural ODE »
Juntang Zhuang · Nicha Dvornek · Xiaoxiao Li · Sekhar Tatikonda · Xenophon Papademetris · James Duncan -
2019 Workshop: Adaptive and Multitask Learning: Algorithms & Systems »
Maruan Al-Shedivat · Anthony Platanios · Otilia Stretcu · Jacob Andreas · Ameet Talwalkar · Rich Caruana · Tom Mitchell · Eric Xing -
2019 Workshop: Learning and Reasoning with Graph-Structured Representations »
Ethan Fetaya · Zhiting Hu · Thomas Kipf · Yujia Li · Xiaodan Liang · Renjie Liao · Raquel Urtasun · Hao Wang · Max Welling · Eric Xing · Richard Zemel -
2019 Poster: Theoretically Principled Trade-off between Robustness and Accuracy »
Hongyang Zhang · Yaodong Yu · Jiantao Jiao · Eric Xing · Laurent El Ghaoui · Michael Jordan -
2019 Oral: Theoretically Principled Trade-off between Robustness and Accuracy »
Hongyang Zhang · Yaodong Yu · Jiantao Jiao · Eric Xing · Laurent El Ghaoui · Michael Jordan -
2018 Poster: Orthogonality-Promoting Distance Metric Learning: Convex Relaxation and Theoretical Analysis »
Pengtao Xie · Wei Wu · Yichen Zhu · Eric Xing -
2018 Poster: Transformation Autoregressive Networks »
Junier Oliva · Kumar Avinava Dubey · Manzil Zaheer · Barnabás Póczos · Ruslan Salakhutdinov · Eric Xing · Jeff Schneider -
2018 Oral: Orthogonality-Promoting Distance Metric Learning: Convex Relaxation and Theoretical Analysis »
Pengtao Xie · Wei Wu · Yichen Zhu · Eric Xing -
2018 Oral: Transformation Autoregressive Networks »
Junier Oliva · Kumar Avinava Dubey · Manzil Zaheer · Barnabás Póczos · Ruslan Salakhutdinov · Eric Xing · Jeff Schneider -
2018 Poster: Nonoverlap-Promoting Variable Selection »
Pengtao Xie · Hongbao Zhang · Yichen Zhu · Eric Xing -
2018 Poster: DiCE: The Infinitely Differentiable Monte Carlo Estimator »
Jakob Foerster · Gregory Farquhar · Maruan Al-Shedivat · Tim Rocktäschel · Eric Xing · Shimon Whiteson -
2018 Poster: Gated Path Planning Networks »
Lisa Lee · Emilio Parisotto · Devendra Singh Chaplot · Eric Xing · Ruslan Salakhutdinov -
2018 Oral: Gated Path Planning Networks »
Lisa Lee · Emilio Parisotto · Devendra Singh Chaplot · Eric Xing · Ruslan Salakhutdinov -
2018 Oral: Nonoverlap-Promoting Variable Selection »
Pengtao Xie · Hongbao Zhang · Yichen Zhu · Eric Xing -
2018 Oral: DiCE: The Infinitely Differentiable Monte Carlo Estimator »
Jakob Foerster · Gregory Farquhar · Maruan Al-Shedivat · Tim Rocktäschel · Eric Xing · Shimon Whiteson -
2017 Poster: Toward Controlled Generation of Text »
Zhiting Hu · Zichao Yang · Xiaodan Liang · Ruslan Salakhutdinov · Eric Xing -
2017 Talk: Toward Controlled Generation of Text »
Zhiting Hu · Zichao Yang · Xiaodan Liang · Ruslan Salakhutdinov · Eric Xing -
2017 Poster: Uncorrelation and Evenness: a New Diversity-Promoting Regularizer »
Pengtao Xie · Aarti Singh · Eric Xing -
2017 Poster: Learning Latent Space Models with Angular Constraints »
Pengtao Xie · Yuntian Deng · Yi Zhou · Abhimanu Kumar · Yaoliang Yu · James Zou · Eric Xing -
2017 Talk: Learning Latent Space Models with Angular Constraints »
Pengtao Xie · Yuntian Deng · Yi Zhou · Abhimanu Kumar · Yaoliang Yu · James Zou · Eric Xing -
2017 Talk: Uncorrelation and Evenness: a New Diversity-Promoting Regularizer »
Pengtao Xie · Aarti Singh · Eric Xing -
2017 Poster: Post-Inference Prior Swapping »
Willie Neiswanger · Eric Xing -
2017 Talk: Post-Inference Prior Swapping »
Willie Neiswanger · Eric Xing