Can ChatGPT Reliably Answer the Most Common Patient Questions Regarding Total Shoulder Arthroplasty?

IF 2.9 · CAS Tier 2 (Medicine) · Q1 ORTHOPEDICS · Journal of Shoulder and Elbow Surgery · Pub Date: 2024-10-15 · DOI: 10.1016/j.jse.2024.08.025
Christopher A White, Yehuda A Masturov, Eric Haunschild, Evan Michaelson, Dave R Shukla, Paul J Cagle
{"title":"Can ChatGPT Reliably Answer the Most Common Patient Questions Regarding Total Shoulder Arthroplasty?","authors":"Christopher A White, Yehuda A Masturov, Eric Haunschild, Evan Michaelson, Dave R Shukla, Paul J Cagle","doi":"10.1016/j.jse.2024.08.025","DOIUrl":null,"url":null,"abstract":"<p><strong>Background: </strong>Increasingly, patients are turning to artificial intelligence (AI) programs such as ChatGPT to answer medical questions either before or after consulting a physician. Although ChatGPT's popularity implies its potential in improving patient education, concerns exist regarding the validity of the chatbot's responses. Therefore, the objective of this study was to evaluate the quality and accuracy of ChatGPT's answers to commonly asked patient questions surrounding total shoulder arthroplasty (TSA).</p><p><strong>Methods: </strong>Eleven trusted healthcare websites were searched to compose a list of the 15 most frequently asked patient questions about TSA. Each question was posed to the ChatGPT user interface, with no follow-up questions or opportunity for clarification permitted. Individual response accuracy was graded by three board-certified orthopedic surgeons using an alphabetical grading system (i.e., A-F). Overall grades, descriptive analyses, and commentary were provided for each of the ChatGPT responses.</p><p><strong>Results: </strong>Overall, ChatGPT received a cumulative grade of B-. The question responses surrounding general/preoperative and postoperative questions received a grade of B- and B-, respectively. ChatGPT's responses adequately responded to patient questions with sound recommendations. However, the chatbot neglected recent research in its responses, resulting in recommendations that warrant professional clarification. The interface deferred specific questions to orthopedic surgeons in 8/15 questions, suggesting its awareness of its own limitations. Moreover, ChatGPT often went beyond the scope of the question after the first two sentences, and generally made errors when attempting to supplement its own response.</p><p><strong>Conclusion: </strong>Overall, this is the first study to our knowledge to utilize AI to answer the most common patient questions surrounding TSA. ChatGPT achieved an overall grade of B-. Ultimately, while AI is an attractive tool for initial patient inquiries, at this time it cannot provide responses to TSA-specific questions that can substitute the knowledge of an orthopedic surgeon.</p>","PeriodicalId":50051,"journal":{"name":"Journal of Shoulder and Elbow Surgery","volume":null,"pages":null},"PeriodicalIF":2.9000,"publicationDate":"2024-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Shoulder and Elbow Surgery","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1016/j.jse.2024.08.025","RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ORTHOPEDICS","Score":null,"Total":0}
Citations: 0

Abstract

Background: Increasingly, patients are turning to artificial intelligence (AI) programs such as ChatGPT to answer medical questions before or after consulting a physician. Although ChatGPT's popularity suggests its potential to improve patient education, concerns exist regarding the validity of the chatbot's responses. Therefore, the objective of this study was to evaluate the quality and accuracy of ChatGPT's answers to commonly asked patient questions about total shoulder arthroplasty (TSA).

Methods: Eleven trusted healthcare websites were searched to compile a list of the 15 most frequently asked patient questions about TSA. Each question was posed to the ChatGPT user interface, with no follow-up questions or opportunity for clarification permitted. The accuracy of each response was graded by three board-certified orthopedic surgeons using a letter grading system (A-F). Overall grades, descriptive analyses, and commentary were provided for each of the ChatGPT responses.
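
The study posed each question through the ChatGPT web interface. Purely as an illustration of the same single-turn, no-follow-up protocol, the sketch below reproduces it programmatically with the OpenAI Python client; the model name and the example questions are assumptions, not details reported in the paper.

```python
# Hypothetical reproduction of the study's single-turn protocol using the
# OpenAI Python client. The paper used the ChatGPT web interface, so the
# model version and the example questions below are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Two of the 15 frequently asked TSA questions (wording is hypothetical).
questions = [
    "How long is the recovery after total shoulder arthroplasty?",
    "When can I drive after total shoulder arthroplasty?",
]

responses = {}
for question in questions:
    # Each question is sent in its own conversation, with no follow-up
    # or clarification, mirroring the study's single-prompt design.
    completion = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed; the paper does not report a model version
        messages=[{"role": "user", "content": question}],
    )
    responses[question] = completion.choices[0].message.content

for q, answer in responses.items():
    print(q, "->", answer[:80], "...")
```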

Results: Overall, ChatGPT received a cumulative grade of B-. Responses to general/preoperative questions and to postoperative questions each received a grade of B-. ChatGPT adequately addressed patient questions with sound recommendations. However, the chatbot neglected recent research in its responses, resulting in recommendations that warrant professional clarification. ChatGPT deferred to an orthopedic surgeon in 8 of 15 responses, suggesting an awareness of its own limitations. Moreover, ChatGPT often strayed beyond the scope of the question after the first two sentences and generally made errors when attempting to supplement its own response.
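
The abstract does not state how the individual letter grades were pooled into the cumulative B-. One plausible aggregation, shown below only as an assumption, maps letters to a GPA-style numeric scale, averages across graders and questions, and maps the mean back to the nearest letter.

```python
# Assumed GPA-style aggregation of letter grades; the paper does not
# describe its exact pooling method, so this is only an illustration.
GRADE_TO_POINTS = {"A": 4.0, "A-": 3.7, "B+": 3.3, "B": 3.0, "B-": 2.7,
                   "C+": 2.3, "C": 2.0, "C-": 1.7, "D": 1.0, "F": 0.0}

def cumulative_grade(grades: list[str]) -> str:
    """Average a list of letter grades and return the closest letter."""
    mean = sum(GRADE_TO_POINTS[g] for g in grades) / len(grades)
    # Pick the letter whose point value is nearest to the mean.
    return min(GRADE_TO_POINTS, key=lambda g: abs(GRADE_TO_POINTS[g] - mean))

# Hypothetical grades from three reviewers for a handful of questions.
example_grades = ["B", "B-", "B", "C+", "B-", "B"]
print(cumulative_grade(example_grades))  # -> "B-" for this made-up sample
```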

Conclusion: To our knowledge, this is the first study to use AI to answer the most common patient questions surrounding TSA. ChatGPT achieved an overall grade of B-. Ultimately, while AI is an attractive tool for initial patient inquiries, at this time it cannot provide responses to TSA-specific questions that can substitute for the knowledge of an orthopedic surgeon.

Source journal metrics:
CiteScore: 6.50
Self-citation rate: 23.30%
Publication volume: 604
Review time: 11.2 weeks
Journal introduction: The official publication for eight leading specialty organizations, this authoritative journal is the only publication to focus exclusively on medical, surgical, and physical techniques for treating injury/disease of the upper extremity, including the shoulder girdle, arm, and elbow. Clinically oriented and peer-reviewed, the Journal provides an international forum for the exchange of information on new techniques, instruments, and materials. Journal of Shoulder and Elbow Surgery features vivid photos, professional illustrations, and explicit diagrams that demonstrate surgical approaches and depict implant devices. Topics covered include fractures, dislocations, diseases and injuries of the rotator cuff, imaging techniques, arthritis, arthroscopy, arthroplasty, and rehabilitation.
Latest articles from this journal:
Comparable low revision rates of stemmed and stemless total anatomic shoulder arthroplasties after exclusion of metal-backed glenoid components: a collaboration between the Australian and Danish national shoulder arthroplasty registries.
Open Bankart repair plus inferior capsular shift versus isolated arthroscopic Bankart repair in collision athletes with recurrent anterior shoulder instability: a prospective study.
Glenoid track revisited.
Management of the failed Latarjet procedure.
Comparison of 3D computer-assisted planning with and without patient-specific instrumentation for severe bone defects in reverse total shoulder arthroplasty.