Evaluation of the Appropriateness and Readability of ChatGPT-4 Responses to Patient Queries on Uveitis
Purpose
To compare the utility of ChatGPT-4 as an online uveitis patient education resource with existing patient education websites.
Design
Evaluation of technology.
Participants
Not applicable.
Methods
The term "uveitis" was entered into the Google search engine, and the first 8 nonsponsored websites were selected for inclusion in the study. Patient-oriented information about uveitis was extracted from the Healthline, Mayo Clinic, WebMD, National Eye Institute, Ocular Uveitis and Immunology Foundation, American Academy of Ophthalmology, Cleveland Clinic, and National Health Service websites. ChatGPT-4 was then prompted to generate responses about uveitis in both standard and simplified formats. To generate the simplified response, the following request was added to the prompt: "Please provide a response suitable for the average American adult, at a sixth-grade comprehension level." Three dual fellowship-trained specialists, all masked to the sources, graded the contents (extracted from the existing websites) and responses (generated by ChatGPT-4) for personal preference, comprehensiveness, and accuracy. Additionally, 5 readability indices (Flesch Reading Ease, Flesch–Kincaid Grade Level, Gunning Fog Index, Coleman–Liau Index, and the Simple Measure of Gobbledygook [SMOG] Index) were calculated using an online calculator, Readable.com, to assess the ease of comprehension of each answer.
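The abstract does not state how ChatGPT-4 was accessed (web interface or API). Purely as an illustration, a minimal sketch of how the standard and simplified response formats could be requested programmatically is shown below, assuming the openai Python client; the model identifier and the helper names (ask_gpt4, SIMPLIFIER) are assumptions, not details from the study.

```python
# Sketch of the two-format querying step, assuming the openai Python client.
# The study abstract does not specify how ChatGPT-4 was queried, so the
# client setup, model name, and helper names here are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Request appended to the prompt to obtain the simplified format,
# quoted from the Methods above.
SIMPLIFIER = ("Please provide a response suitable for the average "
              "American adult, at a sixth-grade comprehension level.")

def ask_gpt4(question: str, simplified: bool = False) -> str:
    """Return a standard or simplified ChatGPT-4 answer to a patient query."""
    prompt = f"{question} {SIMPLIFIER}" if simplified else question
    completion = client.chat.completions.create(
        model="gpt-4",  # assumed model identifier
        messages=[{"role": "user", "content": prompt}],
    )
    return completion.choices[0].message.content

standard_answer = ask_gpt4("What is uveitis?")
simplified_answer = ask_gpt4("What is uveitis?", simplified=True)
```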
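The study computed the 5 readability indices with Readable.com. An open-source approximation using the textstat Python package (an assumption for illustration, not the tool the authors used) could look like the following; exact scores may differ slightly from Readable.com because syllable counting and sentence splitting vary by tool.

```python
# Sketch of the readability scoring step using the open-source textstat
# package (pip install textstat); not the calculator used in the study.
import textstat

def readability_profile(text: str) -> dict[str, float]:
    """Compute the 5 indices named in the Methods for one answer."""
    return {
        "Flesch Reading Ease": textstat.flesch_reading_ease(text),
        "Flesch-Kincaid Grade Level": textstat.flesch_kincaid_grade(text),
        "Gunning Fog Index": textstat.gunning_fog(text),
        "Coleman-Liau Index": textstat.coleman_liau_index(text),
        # SMOG is only reliable for texts of 30+ sentences; short samples
        # may score near zero.
        "SMOG Index": textstat.smog_index(text),
    }

sample = "Uveitis is inflammation of the uvea, the middle layer of the eye."
for name, score in readability_profile(sample).items():
    print(f"{name}: {score:.1f}")
```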
Main Outcome Measures
Personal preference, accuracy, comprehensiveness, and readability of contents and responses about uveitis.
Results
A total of 497 contents and responses were recorded and graded, including 71 contents from existing websites, 213 standard responses, and 213 simplified responses from ChatGPT-4. The dually trained (uveitis and retina) specialist ophthalmologists preferred the standard ChatGPT-4 responses and perceived them to be more comprehensive than the existing websites, with a similar level of accuracy. Moreover, the simplified ChatGPT-4 responses matched almost all existing websites in terms of personal preference, accuracy, and comprehensiveness. Notably, almost all readability indices indicated that the standard ChatGPT-4 responses demanded a higher educational level for comprehension, whereas the simplified responses required a lower level of education than the existing websites.
Conclusions
This study shows that ChatGPT-4 can provide patients with an avenue to access comprehensive and accurate information about uveitis, tailored to their educational level.
Financial Disclosure(s)
The author(s) have no proprietary or commercial interest in any materials discussed in this article.