

Afternoon Poster in Workshop: Artificial Intelligence & Human Computer Interaction

How vulnerable are doctors to unsafe hallucinatory AI suggestions? A framework for evaluation of safety in clinical human-AI cooperation

Paul Festor · Myura Nagendran · Anthony Gordon · Matthieu Komorowski · Aldo Faisal


Abstract:

As artificial intelligence-based decision support systems aim to assist human specialists in high-stakes environments, studying the safety of the human-AI team as a whole is crucial, especially in light of the danger posed by hallucinatory AI treatment suggestions from now-ubiquitous large language models. In this work, we propose a method for assessing the safety of the human-AI team in high-stakes decision-making scenarios. By studying the interactions between doctors and a decision support tool in a physical intensive care simulation centre, we conclude that most unsafe (i.e. potentially hallucinatory) AI recommendations would be stopped by the clinical team. Moreover, eye-tracking-based attention measurements indicate that doctors focus more on unsafe than on safe AI suggestions.
