

Poster in Workshop: 3rd Workshop on Interpretable Machine Learning in Healthcare (IMLH)

Eye-tracking of clinician behaviour with explainable AI decision support: a high-fidelity simulation study

Myura Nagendran · Paul Festor · Matthieu Komorowski · Anthony Gordon · Aldo Faisal

Keywords: [ ICML ] [ Human-AI interaction ] [ clinical decision support system (CDSS) ] [ explainable AI (XAI) ] [ real world simulation ]


Abstract:

Explainable AI (XAI) is seen as important for AI-driven clinical decision support tools, but most XAI has been evaluated with non-expert populations, on proxy tasks, and in low-fidelity settings. The rise of generative AI, and the safety risk of hallucinatory AI suggestions causing patient harm, has once again highlighted the question of whether XAI can act as a safety mitigation mechanism. We studied intensive care doctors wearing eye-tracking glasses in a high-fidelity simulation suite as they performed a prescription dosing task, to better understand their interaction dynamics with XAI for both intentionally safe and intentionally unsafe (i.e. hallucinatory) AI suggestions. We show that eye-tracking is feasible in this setting and that the attention devoted to any of four types of XAI does not differ between safe and unsafe AI suggestions. This calls into question the utility of XAI as a mitigation against patient harm from clinicians erroneously following poor-quality AI advice.
