Evaluation of the Appropriateness and Readability of ChatGPT-4 Responses to Patient Queries on Uveitis

Ophthalmology Science (Q1 Ophthalmology, IF 3.2) | Pub Date: 2024-08-08 | DOI: 10.1016/j.xops.2024.100594
S. Saeed Mohammadi MD , Anadi Khatri MD , Tanya Jain MBBS, DNB , Zheng Xian Thng MD , Woong-sun Yoo MD, PhD , Negin Yavari MD , Vahid Bazojoo MD , Azadeh Mobasserian MD , Amir Akhavanrezayat MD , Ngoc Trong Tuong Than MD , Osama Elaraby MD , Battuya Ganbold MD , Dalia El Feky MD , Ba Trung Nguyen MD , Cigdem Yasar MD , Ankur Gupta MD, MS , Jia-Horung Hung MD , Quan Dong Nguyen MD, MSc

Abstract

Purpose

To compare the utility of ChatGPT-4 as an online uveitis patient education resource with existing patient education websites.

Design

Evaluation of technology.

Participants

Not applicable.

Methods

The term “uveitis” was entered into the Google search engine, and the first 8 nonsponsored websites were selected for the study. Information regarding uveitis for patients was extracted from the Healthline, Mayo Clinic, WebMD, National Eye Institute, Ocular Uveitis and Immunology Foundation, American Academy of Ophthalmology, Cleveland Clinic, and National Health Service websites. ChatGPT-4 was then prompted to generate responses about uveitis in both standard and simplified formats. To generate the simplified response, the following request was added to the prompt: “Please provide a response suitable for the average American adult, at a sixth-grade comprehension level.” Three dual fellowship-trained specialists, all masked to the sources, graded the appropriateness of the contents (extracted from the existing websites) and responses (generated by ChatGPT-4) in terms of personal preference, comprehensiveness, and accuracy. Additionally, 5 readability indices, namely the Flesch Reading Ease, Flesch–Kincaid Grade Level, Gunning Fog Index, Coleman–Liau Index, and Simple Measure of Gobbledygook (SMOG) index, were calculated using an online calculator, Readable.com, to assess the ease of comprehension of each answer.
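The study used Readable.com to compute the five indices; the underlying formulas themselves are standard and published. As a rough illustration only (not the tool used in the study), the sketch below applies those published formulas with a naive vowel-group syllable counter, so its values will differ somewhat from a dictionary-backed calculator:

```python
import math
import re


def count_syllables(word: str) -> int:
    # Naive heuristic: count contiguous vowel groups; commercial tools
    # use pronunciation dictionaries, so this is only approximate.
    groups = re.findall(r"[aeiouy]+", word.lower())
    n = len(groups)
    if word.lower().endswith("e") and n > 1:
        n -= 1  # drop a silent trailing 'e'
    return max(n, 1)


def readability(text: str) -> dict:
    """Five classic readability indices from their published formulas."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z]+", text)
    letters = sum(len(w) for w in words)
    syllables = sum(count_syllables(w) for w in words)
    polysyllables = sum(1 for w in words if count_syllables(w) >= 3)
    W, S = len(words), len(sentences)
    wps = W / S          # words per sentence
    spw = syllables / W  # syllables per word
    return {
        # Flesch Reading Ease: higher score = easier text (roughly 0-100)
        "flesch_reading_ease": 206.835 - 1.015 * wps - 84.6 * spw,
        # Flesch-Kincaid Grade Level: approximate US school grade
        "fk_grade": 0.39 * wps + 11.8 * spw - 15.59,
        # Gunning Fog Index: grade level from sentence length + complex words
        "gunning_fog": 0.4 * (wps + 100 * polysyllables / W),
        # Coleman-Liau Index: uses letters and sentences per 100 words
        "coleman_liau": 0.0588 * (100 * letters / W)
                        - 0.296 * (100 * S / W) - 15.8,
        # SMOG index (the formula formally assumes >= 30 sentences)
        "smog": 1.043 * math.sqrt(polysyllables * 30 / S) + 3.1291,
    }
```

On a short all-monosyllabic passage these formulas return a very high Flesch Reading Ease and a grade level near zero, which is the pattern the study relies on when contrasting standard and simplified responses.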

Main Outcome Measures

Personal preference, accuracy, comprehensiveness, and readability of contents and responses about uveitis.

Results

A total of 497 contents and responses, including 71 contents from existing websites, 213 standard responses, and 213 simplified responses from ChatGPT-4, were recorded and graded. Standard ChatGPT-4 responses were preferred and perceived to be more comprehensive by the dually trained (uveitis and retina) specialist ophthalmologists, while maintaining a similar accuracy level compared with existing websites. Moreover, simplified ChatGPT-4 responses matched almost all existing websites in terms of personal preference, accuracy, and comprehensiveness. Notably, almost all readability indices suggested that standard ChatGPT-4 responses demand a higher educational level for comprehension, whereas simplified responses required a lower level of education compared with the existing websites.

Conclusions

This study shows that ChatGPT can provide patients with an avenue to access comprehensive and accurate information about uveitis, tailored to their educational level.

Financial Disclosure(s)

The author(s) have no proprietary or commercial interest in any materials discussed in this article.