Poster
in
Workshop: AI for Science: Scaling in AI for Scientific Discovery

SemioLLM: Assessing Large Language Models for Semiological Analysis in Epilepsy Research

Meghal Dani · Muthu Prakash · Zeynep Akata · Stefanie Liebe

Keywords: [ Seizure Onset Zone (SOZ) ] [ AI in Healthcare ] [ Epilepsy ] [ LLMs ] [ AI For Science ] [ Neuroscience ]


Abstract:

As Large Language Models (LLMs) advance, they have shown promising ability to encode general medical knowledge. However, their potential application in clinical practice warrants rigorous evaluation on domain-specific tasks, where benchmarks are largely missing. In this study, SemioLLM, we test the ability of state-of-the-art LLMs (GPT-3.5, GPT-4, Mixtral 8x7B, and Qwen-72B-chat) to leverage their internal knowledge and reasoning for epilepsy diagnosis. Specifically, we obtain likelihood estimates linking unstructured text descriptions of seizures to the seizure-generating brain regions, using an annotated clinical database containing 1269 entries. We evaluate the LLMs' performance, confidence, reasoning, and citation abilities against clinical evaluation. Models achieve above-chance classification performance, with prompt engineering significantly improving their outcomes and some models reaching close-to-clinical performance and reasoning. Our analyses also reveal significant pitfalls: several models are highly confident despite poor performance, and some exhibit citation errors and hallucinations. In summary, our work provides the first extensive benchmark comparing current state-of-the-art LLMs in the medical domain of epilepsy and highlights their ability to leverage unstructured text from patients' medical histories to aid diagnostic processes in healthcare.
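As an illustration only (the paper's exact prompt format, region taxonomy, and scoring are not reproduced here), the evaluation step described above, turning model-assigned likelihoods over candidate seizure-onset zones into a classification and comparing it against chance level, might be sketched as follows. All region names and scores below are hypothetical:

```python
# Hypothetical sketch of the likelihood-to-classification step.
# Region names and scores are illustrative, not taken from the paper.

def normalize(scores):
    """Turn raw likelihood scores into a probability distribution."""
    total = sum(scores.values())
    return {region: s / total for region, s in scores.items()}

def top1(dist):
    """Predicted seizure-onset zone: region with the highest likelihood."""
    return max(dist, key=dist.get)

# An LLM's (hypothetical) likelihoods for one seizure description
llm_scores = {"temporal": 0.7, "frontal": 0.2, "parietal": 0.05, "occipital": 0.05}

dist = normalize(llm_scores)
prediction = top1(dist)          # "temporal"
chance = 1 / len(llm_scores)     # uniform-guess baseline: 0.25

above_chance = dist[prediction] > chance
```

Accuracy of `prediction` against the clinically annotated onset zone, aggregated over the database, would then give the above-chance classification performance the abstract refers to.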
