Gifsplanation via Latent Shift: A Simple Autoencoder Approach to Counterfactual Generation for Chest X-rays
Motivation: Prediction explanation methods for neural networks trained on medical imaging tasks are important for avoiding the unintended consequences of deploying AI systems, particularly when false positive predictions can impact patient care. However, traditional image attribution methods struggle to satisfactorily explain such predictions. Thus, there is a pressing need for improved methods of model explainability and introspection.
Specific problem: Counterfactual explanations can transform input images to increase or decrease the features that cause a prediction. However, current approaches are difficult to implement because they are monolithic or rely on GANs; these hurdles prevent wide adoption.
Our approach: Given an arbitrary classifier, we propose a simple autoencoder and gradient update (Latent Shift) that can transform the latent representation of a specific input image to exaggerate or curtail the features used for prediction. We use this method to study chest X-ray classifiers and evaluate their performance. We conduct a reader study in which two radiologists assess 240 chest X-ray predictions, half of which are false positives, and attempt to identify the false positives using either traditional attribution maps or our proposed method.
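The abstract describes the Latent Shift update only at this level. As a minimal PyTorch sketch, assuming a classifier clf that returns a scalar prediction and an autoencoder exposing encode/decode methods (all names and the lambda range below are hypothetical, not from the paper), the gradient update might look like:

```python
import torch

def latent_shift(clf, encode, decode, x, lambdas):
    """Generate counterfactual frames by shifting the latent code of x
    along the gradient of the classifier's prediction.

    clf: classifier mapping an image batch to a scalar prediction
    encode/decode: the autoencoder's encoder and decoder
    x: input image tensor of shape (1, C, H, W)
    lambdas: iterable of shift magnitudes; moving along the positive
        gradient direction exaggerates the predicted feature, the
        negative direction curtails it
    """
    # Encode the image and track gradients w.r.t. the latent code only.
    z = encode(x).detach().requires_grad_(True)
    # Differentiate the prediction w.r.t. z, through the decoder.
    pred = clf(decode(z))
    grad = torch.autograd.grad(pred.sum(), z)[0]
    # Decode shifted latent codes; sweeping lambda yields the frames
    # that can be rendered as an animation ("gifsplanation").
    with torch.no_grad():
        return [decode(z + lam * grad) for lam in lambdas]

# Illustrative usage with a hypothetical autoencoder object `ae`:
# frames = latent_shift(clf, ae.encode, ae.decode, x,
#                       lambdas=torch.linspace(-100, 100, 9))
```

Stringing the decoded frames together for a sweep of lambda values is what produces the counterfactual animation for a given input image.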
Results: We found low overlap with ground-truth pathology masks for models with reasonably high accuracy. However, the results of our reader study indicate that these models are generally looking at the correct features. We also found that the Latent Shift explanation gives users more confidence in true positive predictions than traditional approaches (0.15±0.95 on a 5-point scale, p=0.01), with only a small increase in confidence for false positive predictions (0.04±1.06, p=0.57).
Author Information
Joseph Paul Cohen (Stanford University)
Rupert Brooks (Nuance)
Evan Zucker (Stanford University)
Anuj Pareek (Stanford University)
Matthew Lungren (Stanford University)
Akshay Chaudhari (Stanford University)