

Poster in Workshop: Interpretable Machine Learning in Healthcare

Do You See What I See? A Comparison of Radiologist Eye Gaze to Computer Vision Saliency Maps for Chest X-ray Classification

Jesse Kim · Helen Zhou · Zachary Lipton


Abstract:

We qualitatively and quantitatively compare saliency maps generated from state-of-the-art deep learning chest X-ray classification models to radiologist eye gaze data. We find that across several saliency map methods, the saliency maps for correct predictions are more similar to the corresponding eye gaze data than those for incorrect predictions. To incorporate eye gaze data into the model training procedure, we create DenseNet-Aug, a simple augmentation of the DenseNet model that performs comparably to the state-of-the-art. Finally, we extract salient annotated regions for each label class, thereby characterizing model attribution at the dataset level. While sample-level saliency maps visibly vary, these dataset-level regional comparisons indicate that across most class labels, radiologist eye gaze, DenseNet, and DenseNet-Aug often identify similar salient regions.
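The abstract does not specify which similarity measure is used to compare saliency maps with eye gaze. The sketch below illustrates one plausible setup, assuming the gaze data is rasterized into a fixation heatmap and compared to a model saliency map via Pearson correlation and an IoU over each map's most salient pixels; the functions, parameters, and thresholds here are illustrative assumptions, not the authors' method.

```python
# Hypothetical sketch: comparing a model saliency map to a radiologist
# eye-gaze heatmap. The paper does not state its similarity metric;
# Pearson correlation and top-region IoU are illustrative choices only.
import numpy as np
from scipy.ndimage import gaussian_filter


def gaze_heatmap(fixations, shape, sigma=15.0):
    """Turn (row, col) fixation points into a smoothed density map."""
    heat = np.zeros(shape, dtype=float)
    for r, c in fixations:
        heat[int(r), int(c)] += 1.0
    return gaussian_filter(heat, sigma=sigma)


def normalize(x):
    """Scale a map to the [0, 1] range."""
    x = x - x.min()
    return x / (x.max() + 1e-8)


def pearson_similarity(saliency, gaze):
    """Pearson correlation between the two flattened maps."""
    return float(np.corrcoef(saliency.ravel(), gaze.ravel())[0, 1])


def top_region_iou(saliency, gaze, quantile=0.95):
    """IoU of the top (1 - quantile) fraction of pixels in each map."""
    s_mask = saliency >= np.quantile(saliency, quantile)
    g_mask = gaze >= np.quantile(gaze, quantile)
    inter = np.logical_and(s_mask, g_mask).sum()
    union = np.logical_or(s_mask, g_mask).sum()
    return float(inter) / max(int(union), 1)


if __name__ == "__main__":
    shape = (224, 224)
    rng = np.random.default_rng(0)
    # Stand-ins for a real saliency map (e.g., Grad-CAM) and recorded fixations.
    saliency = normalize(gaussian_filter(rng.random(shape), sigma=10))
    fixations = rng.integers(0, 224, size=(50, 2))
    gaze = normalize(gaze_heatmap(fixations, shape))
    print("pearson:", pearson_similarity(saliency, gaze))
    print("top-region IoU:", top_region_iou(saliency, gaze))
```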
