Comparing Provider and ChatGPT Responses to Breast Reconstruction Patient Questions in the Electronic Health Record.

IF 1.4 | Medicine Q4 | SURGERY Q3 | Annals of Plastic Surgery | Pub Date: 2024-11-01 | DOI: 10.1097/SAP.0000000000004090
Daniel Soroudi, Aileen Gozali, Jacquelyn A Knox, Nisha Parmeshwar, Ryan Sadjadi, Jasmin C Wilson, Seung Ah Lee, Merisa L Piper
{"title":"比较医护人员和 ChatGPT 对电子健康记录中乳房再造患者问题的回答。","authors":"Daniel Soroudi, Aileen Gozali, Jacquelyn A Knox, Nisha Parmeshwar, Ryan Sadjadi, Jasmin C Wilson, Seung Ah Lee, Merisa L Piper","doi":"10.1097/SAP.0000000000004090","DOIUrl":null,"url":null,"abstract":"<p><strong>Background: </strong>Patient-directed Electronic Health Record (EHR) messaging is used as an adjunct to enhance patient-physician interactions but further burdens the physician. There is a need for clear electronic patient communication in all aspects of medicine, including plastic surgery. We can potentially utilize innovative communication tools like ChatGPT. This study assesses ChatGPT's effectiveness in answering breast reconstruction queries, comparing its accuracy, empathy, and readability with healthcare providers' responses.</p><p><strong>Methods: </strong>Ten deidentified questions regarding breast reconstruction were extracted from electronic messages. They were presented to ChatGPT3, ChatGPT4, plastic surgeons, and advanced practice providers for response. ChatGPT3 and ChatGPT4 were also prompted to give brief responses. Using 1-5 Likert scoring, accuracy and empathy were graded by 2 plastic surgeons and medical students, respectively. Readability was measured using Flesch Reading Ease. Grades were compared using 2-tailed t tests.</p><p><strong>Results: </strong>Combined provider responses had better Flesch Reading Ease scores compared to all combined chatbot responses (53.3 ± 13.3 vs 36.0 ± 11.6, P < 0.001) and combined brief chatbot responses (53.3 ± 13.3 vs 34.7 ± 12.8, P < 0.001). Empathy scores were higher in all combined chatbot than in those from combined providers (2.9 ± 0.8 vs 2.0 ± 0.9, P < 0.001). There were no statistically significant differences in accuracy between combined providers and all combined chatbot responses (4.3 ± 0.9 vs 4.5 ± 0.6, P = 0.170) or combined brief chatbot responses (4.3 ± 0.9 vs 4.6 ± 0.6, P = 0.128).</p><p><strong>Conclusions: </strong>Amid the time constraints and complexities of plastic surgery decision making, our study underscores ChatGPT's potential to enhance patient communication. ChatGPT excels in empathy and accuracy, yet its readability presents limitations that should be addressed.</p>","PeriodicalId":8060,"journal":{"name":"Annals of Plastic Surgery","volume":"93 5","pages":"541-545"},"PeriodicalIF":1.4000,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Comparing Provider and ChatGPT Responses to Breast Reconstruction Patient Questions in the Electronic Health Record.\",\"authors\":\"Daniel Soroudi, Aileen Gozali, Jacquelyn A Knox, Nisha Parmeshwar, Ryan Sadjadi, Jasmin C Wilson, Seung Ah Lee, Merisa L Piper\",\"doi\":\"10.1097/SAP.0000000000004090\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><strong>Background: </strong>Patient-directed Electronic Health Record (EHR) messaging is used as an adjunct to enhance patient-physician interactions but further burdens the physician. There is a need for clear electronic patient communication in all aspects of medicine, including plastic surgery. We can potentially utilize innovative communication tools like ChatGPT. 
This study assesses ChatGPT's effectiveness in answering breast reconstruction queries, comparing its accuracy, empathy, and readability with healthcare providers' responses.</p><p><strong>Methods: </strong>Ten deidentified questions regarding breast reconstruction were extracted from electronic messages. They were presented to ChatGPT3, ChatGPT4, plastic surgeons, and advanced practice providers for response. ChatGPT3 and ChatGPT4 were also prompted to give brief responses. Using 1-5 Likert scoring, accuracy and empathy were graded by 2 plastic surgeons and medical students, respectively. Readability was measured using Flesch Reading Ease. Grades were compared using 2-tailed t tests.</p><p><strong>Results: </strong>Combined provider responses had better Flesch Reading Ease scores compared to all combined chatbot responses (53.3 ± 13.3 vs 36.0 ± 11.6, P < 0.001) and combined brief chatbot responses (53.3 ± 13.3 vs 34.7 ± 12.8, P < 0.001). Empathy scores were higher in all combined chatbot than in those from combined providers (2.9 ± 0.8 vs 2.0 ± 0.9, P < 0.001). There were no statistically significant differences in accuracy between combined providers and all combined chatbot responses (4.3 ± 0.9 vs 4.5 ± 0.6, P = 0.170) or combined brief chatbot responses (4.3 ± 0.9 vs 4.6 ± 0.6, P = 0.128).</p><p><strong>Conclusions: </strong>Amid the time constraints and complexities of plastic surgery decision making, our study underscores ChatGPT's potential to enhance patient communication. ChatGPT excels in empathy and accuracy, yet its readability presents limitations that should be addressed.</p>\",\"PeriodicalId\":8060,\"journal\":{\"name\":\"Annals of Plastic Surgery\",\"volume\":\"93 5\",\"pages\":\"541-545\"},\"PeriodicalIF\":1.4000,\"publicationDate\":\"2024-11-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Annals of Plastic Surgery\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://doi.org/10.1097/SAP.0000000000004090\",\"RegionNum\":4,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"SURGERY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Annals of Plastic Surgery","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1097/SAP.0000000000004090","RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"SURGERY","Score":null,"Total":0}
Citations: 0

Abstract


Background: Patient-directed Electronic Health Record (EHR) messaging is used as an adjunct to enhance patient-physician interactions but further burdens the physician. There is a need for clear electronic patient communication in all aspects of medicine, including plastic surgery. We can potentially utilize innovative communication tools like ChatGPT. This study assesses ChatGPT's effectiveness in answering breast reconstruction queries, comparing its accuracy, empathy, and readability with healthcare providers' responses.

Methods: Ten deidentified questions regarding breast reconstruction were extracted from electronic messages. They were presented to ChatGPT3, ChatGPT4, plastic surgeons, and advanced practice providers for response. ChatGPT3 and ChatGPT4 were also prompted to give brief responses. Using 1-5 Likert scoring, accuracy and empathy were graded by 2 plastic surgeons and medical students, respectively. Readability was measured using Flesch Reading Ease. Grades were compared using 2-tailed t tests.
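The abstract does not say which tool computed Flesch Reading Ease, so as a point of reference, the standard formula is FRE = 206.835 - 1.015 * (words/sentences) - 84.6 * (syllables/words); scores around 50-60 correspond roughly to high-school-level text, and scores in the 30s to college-level text. Below is a minimal Python sketch using a naive vowel-group syllable heuristic; this is an illustration, not the authors' pipeline (libraries such as textstat apply more careful syllable rules):

    import re

    def count_syllables(word: str) -> int:
        # Naive heuristic: one syllable per run of consecutive vowels.
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    def flesch_reading_ease(text: str) -> float:
        # FRE = 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words)
        sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
        words = re.findall(r"[A-Za-z']+", text)
        syllables = sum(count_syllables(w) for w in words)
        return (206.835
                - 1.015 * (len(words) / len(sentences))
                - 84.6 * (syllables / len(words)))

    # Example: a short, plainly worded reply scores high (easier to read).
    print(round(flesch_reading_ease(
        "Implants go under the muscle. Recovery takes a few weeks."), 1))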

Results: Combined provider responses had higher Flesch Reading Ease scores than all combined chatbot responses (53.3 ± 13.3 vs 36.0 ± 11.6, P < 0.001) and combined brief chatbot responses (53.3 ± 13.3 vs 34.7 ± 12.8, P < 0.001). Empathy scores were higher for all combined chatbot responses than for combined provider responses (2.9 ± 0.8 vs 2.0 ± 0.9, P < 0.001). There were no statistically significant differences in accuracy between combined provider responses and all combined chatbot responses (4.3 ± 0.9 vs 4.5 ± 0.6, P = 0.170) or combined brief chatbot responses (4.3 ± 0.9 vs 4.6 ± 0.6, P = 0.128).
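For illustration only, the readability comparison can be re-derived from the reported summary statistics with SciPy. The abstract gives means and SDs but not per-group sample sizes, so the nobs values below are placeholder assumptions, and Welch's test is shown even though the paper does not state whether variances were pooled:

    from scipy.stats import ttest_ind_from_stats

    # Reported Flesch Reading Ease stats: combined providers vs all chatbots.
    # nobs values are assumed for illustration; the abstract omits group sizes.
    res = ttest_ind_from_stats(
        mean1=53.3, std1=13.3, nobs1=20,  # provider responses (n assumed)
        mean2=36.0, std2=11.6, nobs2=20,  # chatbot responses (n assumed)
        equal_var=False,                  # Welch's 2-tailed t test
    )
    print(f"t = {res.statistic:.2f}, p = {res.pvalue:.4g}")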

Conclusions: Amid the time constraints and complexities of plastic surgery decision making, our study underscores ChatGPT's potential to enhance patient communication. ChatGPT excels in empathy and accuracy, yet its readability presents limitations that should be addressed.

Source journal: Annals of Plastic Surgery
CiteScore: 2.70 | Self-citation rate: 13.30% | Articles per year: 584 | Review time: 6 months
Journal overview: The only independent journal devoted to general plastic and reconstructive surgery, Annals of Plastic Surgery serves as a forum for current scientific and clinical advances in the field and a sounding board for ideas and perspectives on its future. The journal publishes peer-reviewed original articles, brief communications, case reports, and notes in all areas of interest to the practicing plastic surgeon. There are also historical and current reviews, descriptions of surgical technique, and lively editorials and letters to the editor.