Presentation suitability and readability of ChatGPT's medical responses to patient questions about knee osteoarthritis.

IF 2.2 | CAS Region 3 (Medicine) | JCR Q2 HEALTH CARE SCIENCES & SERVICES | Health Informatics Journal | Pub Date: 2025-01-01 | DOI: 10.1177/14604582251315587
Myungeun Yoo, Chan Woong Jang
Health Informatics Journal, 2025, vol. 31, no. 1, article 14604582251315587.
Citations: 0

Abstract

Objective: This study aimed to evaluate the presentation suitability and readability of ChatGPT's responses to common patient questions, as well as its potential to enhance readability. Methods: We initially analyzed 30 ChatGPT responses related to knee osteoarthritis (OA) on March 20, 2023, using readability and presentation suitability metrics. Subsequently, we assessed the impact of detailed and simplified instructions provided to ChatGPT for the same responses, focusing on readability improvement. Results: The readability scores for responses related to knee OA significantly exceeded the recommended sixth-grade reading level (p < .001). While the presentation of information was rated as "adequate," the content lacked high-quality, reliable details. After the intervention, readability improved slightly for responses related to knee OA; however, there was no significant difference in readability between the groups receiving detailed versus simplified instructions. Conclusions: Although ChatGPT provides informative responses, they are often difficult to read and lack sufficient quality. Current capabilities do not effectively simplify medical information for the general public. Technological advancements are needed to improve user-friendliness and practical utility.
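The abstract does not specify which readability formula produced the grade-level scores; a common choice in this literature is the Flesch-Kincaid Grade Level, whose result is compared against the recommended sixth-grade threshold. A minimal sketch follows (the syllable counter is a naive vowel-group heuristic, not the dictionary-based counting a published study would typically use):

```python
import re

def count_syllables(word: str) -> int:
    # Naive heuristic: count runs of vowels; approximates syllable count.
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def fk_grade(text: str) -> float:
    # Flesch-Kincaid Grade Level:
    # 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z]+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * len(words) / len(sentences)
            + 11.8 * syllables / len(words)
            - 15.59)

sample = ("Knee osteoarthritis is a degenerative joint disease. "
          "It causes pain and stiffness in the knee.")
print(round(fk_grade(sample), 1))
```

A score above 6.0 means the text exceeds a sixth-grade reading level, which is the threshold the study's responses significantly exceeded.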

Source journal: Health Informatics Journal (HEALTH CARE SCIENCES & SERVICES - MEDICAL INFORMATICS)
CiteScore: 7.80
Self-citation rate: 6.70%
Annual publications: 80
Review time: 6 months
Journal description: Health Informatics Journal is an international peer-reviewed journal. All papers submitted to Health Informatics Journal are subject to peer review by members of a carefully appointed editorial board. The journal operates a conventional single-blind reviewing policy in which the reviewer's name is always concealed from the submitting author.