Application and accuracy of artificial intelligence-derived large language models in patients with age-related macular degeneration

Lorenzo Ferro Desideri, Janice Roth, Martin Zinkernagel, Rodrigo Anguita

International Journal of Retina and Vitreous 9(1):71 (published 2023-11-18). DOI: 10.1186/s40942-023-00511-7
Abstract
Introduction: Age-related macular degeneration (AMD) affects millions of people globally, leading to a surge in online searches about putative diagnoses and causing potential misinformation and anxiety in patients and their relatives. This study explores the efficacy of artificial intelligence-derived large language models (LLMs), such as ChatGPT, Bing AI, and Google Bard, in addressing AMD patients' questions.
Methods: ChatGPT 3.5 (2023), Bing AI (2023), and Google Bard (2023) were adopted as LLMs. Patients' questions were subdivided into two categories, (a) general medical advice and (b) pre- and post-intravitreal injection advice, and responses were classified as (1) accurate and sufficient, (2) partially accurate but sufficient, or (3) inaccurate and not sufficient. A non-parametric test was used to compare mean scores among the three LLMs, and analysis of variance and reliability tests were also performed across the three groups.
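As an illustration of the analysis described above, here is a minimal sketch in Python. The abstract does not name the specific non-parametric test, so a Kruskal-Wallis test (a common choice for comparing three independent groups) is assumed here, and the 1-3 ratings are hypothetical placeholders, not the study's data.

# Minimal sketch of the statistical comparison described in Methods.
# Assumptions (not from the paper): the non-parametric test is taken to be
# Kruskal-Wallis, and the ratings below are hypothetical placeholders.
import numpy as np
from scipy.stats import kruskal, f_oneway

# Hypothetical accuracy ratings (1 = accurate and sufficient,
# 2 = partially accurate but sufficient, 3 = inaccurate and not sufficient)
chatgpt = np.array([1, 1, 2, 1, 1])
bing = np.array([2, 1, 2, 2, 1])
bard = np.array([1, 2, 2, 1, 2])

# Non-parametric comparison of the three independent groups
h_stat, p_kw = kruskal(chatgpt, bing, bard)
print(f"Kruskal-Wallis: H = {h_stat:.3f}, p = {p_kw:.4f}")

# One-way analysis of variance among the three groups
f_stat, p_anova = f_oneway(chatgpt, bing, bard)
print(f"ANOVA: F = {f_stat:.3f}, p = {p_anova:.4f}")

def cronbach_alpha(ratings: np.ndarray) -> float:
    """Cronbach's alpha for a (questions x models) rating matrix."""
    k = ratings.shape[1]                           # number of "raters" (models)
    sum_item_var = ratings.var(axis=0, ddof=1).sum()
    total_var = ratings.sum(axis=1).var(ddof=1)    # variance of row totals
    return k / (k - 1) * (1 - sum_item_var / total_var)

# Treat the three models as raters scoring the same set of questions
matrix = np.column_stack([chatgpt, bing, bard])
print(f"Cronbach's alpha: {cronbach_alpha(matrix):.3f}")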
Results: In question category (a), the average score was 1.20 (± 0.41) with ChatGPT 3.5, 1.60 (± 0.63) with Bing AI, and 1.60 (± 0.73) with Google Bard, showing no significant difference among the three groups (p = 0.129). In category (b), the average score was 1.07 (± 0.27) with ChatGPT 3.5, 1.69 (± 0.63) with Bing AI, and 1.38 (± 0.63) with Google Bard, showing a significant difference among the three groups (p = 0.0042). Reliability statistics showed a Cronbach's α of 0.237 (range 0.448; 0.096-0.544).
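For reference, the reliability coefficient reported above is Cronbach's α; its standard textbook definition (not restated in the abstract) is

\[ \alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k} \sigma^2_{Y_i}}{\sigma^2_X}\right) \]

where k is the number of raters or items (here, the three LLMs), \( \sigma^2_{Y_i} \) is the variance of model i's scores across questions, and \( \sigma^2_X \) is the variance of the total scores. A value of 0.237 reflects low agreement among the three models' ratings.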
Conclusion: ChatGPT 3.5 consistently offered the most accurate and satisfactory responses, particularly for technical queries. While LLMs showed promise in providing precise information about AMD, further improvements are needed, especially for more technical questions.
About the journal
International Journal of Retina and Vitreous focuses on the ophthalmic subspecialty of vitreoretinal disorders. The journal presents original articles on new approaches to diagnosis, outcomes of clinical trials, innovations in pharmacological therapy and surgical techniques, as well as basic science advances that impact clinical practice. Topical areas include, but are not limited to:
- Imaging of the retina, choroid and vitreous
- Innovations in optical coherence tomography (OCT)
- Small-gauge vitrectomy, retinal detachment, chromovitrectomy
- Electroretinography (ERG), microperimetry, other functional tests
- Intraocular tumors
- Retinal pharmacotherapy & drug delivery
- Diabetic retinopathy & other vascular diseases
- Age-related macular degeneration (AMD) & other macular entities