Can artificial intelligence diagnose seizures based on patients' descriptions? A study of GPT-4.

Joseph Ford, Nathan Pevy, Richard Grunewald, Stephen Howell, Markus Reuber

Epilepsia, published 2025-02-27. DOI: 10.1111/epi.18322
Abstract
Objective: Generalist large language models (LLMs) have shown diagnostic potential in various medical contexts but have not been explored extensively in relation to epilepsy. This paper aims to test the performance of an LLM (OpenAI's GPT-4) on the differential diagnosis of epileptic and functional/dissociative seizures (FDS) based on patients' descriptions.
Methods: GPT-4 was asked to diagnose 41 cases of epilepsy (n = 16) or FDS (n = 25) based on transcripts of patients describing their symptoms (median word count = 399). It was first asked to perform this task without additional training examples (zero-shot) before being asked to perform it having been given one, two, and three examples of each condition (one-, two-, and three-shot). As a benchmark, three experienced neurologists performed this task without access to any additional clinical or demographic information (e.g., age, gender, socioeconomic status).
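For readers unfamiliar with the zero- versus few-shot setup, the sketch below shows how such prompting might be implemented with the OpenAI Python client. The prompt wording, label strings, and the `diagnose` helper are illustrative assumptions, not the study's actual protocol.

```python
# Hypothetical sketch of zero-/few-shot diagnostic prompting
# (not the study's actual prompts or pipeline).
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SYSTEM_PROMPT = (
    "Based solely on the patient's description of their attacks, answer "
    "with exactly one word: 'epilepsy' or 'FDS'."
)

def diagnose(transcript: str, examples=()) -> str:
    """Classify one transcript.

    `examples` holds (transcript, label) pairs: empty for zero-shot,
    one pair per condition for one-shot, and so on.
    """
    messages = [{"role": "system", "content": SYSTEM_PROMPT}]
    # Few-shot examples are supplied as prior user/assistant turns.
    for ex_transcript, ex_label in examples:
        messages.append({"role": "user", "content": ex_transcript})
        messages.append({"role": "assistant", "content": ex_label})
    messages.append({"role": "user", "content": transcript})
    response = client.chat.completions.create(model="gpt-4", messages=messages)
    return response.choices[0].message.content.strip()
```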
Results: In the zero-shot condition, GPT-4's average balanced accuracy was 57% (κ = .15). Balanced accuracy improved in the one-shot condition (64%, κ = .27), but did not improve any further in the two-shot (62%, κ = .24) and three-shot (62%, κ = .23) conditions. Performance in all four conditions was worse than the mean balanced accuracy of the experienced neurologists (71%, κ = .42). However, in the subset of 18 cases that all three neurologists had "diagnosed" correctly (median word count = 684), GPT-4's balanced accuracy was 81% (κ = .66).
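The two metrics reported above are standard: balanced accuracy is the mean of per-class recall (sensitivity and specificity in the two-class case), which is robust to the 16/25 class imbalance, and Cohen's kappa measures agreement corrected for chance. A minimal sketch with scikit-learn, using invented labels rather than the study's data:

```python
# Computing balanced accuracy and Cohen's kappa on made-up labels.
from sklearn.metrics import balanced_accuracy_score, cohen_kappa_score

y_true = ["epilepsy", "fds", "fds", "epilepsy", "fds"]       # gold-standard diagnoses
y_pred = ["epilepsy", "fds", "epilepsy", "epilepsy", "fds"]  # model outputs

# Mean of per-class recall: (sensitivity + specificity) / 2 for two classes.
print(balanced_accuracy_score(y_true, y_pred))
# Agreement with the gold standard, corrected for chance agreement.
print(cohen_kappa_score(y_true, y_pred))
```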
Significance: Although its "raw" performance was poor, GPT-4 showed noticeable improvement after being given just one example each of a patient describing epilepsy and FDS. Giving two or three examples did not improve performance further, but the finding that GPT-4 did much better on those cases correctly diagnosed by all three neurologists suggests that providing more extensive clinical data and more elaborate approaches (e.g., more refined prompt engineering, fine-tuning, or retrieval-augmented generation) could unlock the full diagnostic potential of LLMs.
Journal Introduction:
Epilepsia is the leading, authoritative source for innovative clinical and basic science research for all aspects of epilepsy and seizures. In addition, Epilepsia publishes critical reviews, opinion pieces, and guidelines that foster understanding and aim to improve the diagnosis and treatment of people with seizures and epilepsy.