Prompt engineering with a large language model to assist providers in responding to patient inquiries: a real-time implementation in the electronic health record.

IF 2.5 | Q2 | HEALTH CARE SCIENCES & SERVICES | JAMIA Open | Pub Date: 2024-08-20 | eCollection Date: 2024-10-01 | DOI: 10.1093/jamiaopen/ooae080
Majid Afshar, Yanjun Gao, Graham Wills, Jason Wang, Matthew M Churpek, Christa J Westenberger, David T Kunstman, Joel E Gordon, Cherodeep Goswami, Frank J Liao, Brian Patterson
{"title":"使用大型语言模型进行提示工程,协助医疗服务提供者回复患者咨询:在电子健康记录中的实时实施。","authors":"Majid Afshar, Yanjun Gao, Graham Wills, Jason Wang, Matthew M Churpek, Christa J Westenberger, David T Kunstman, Joel E Gordon, Cherodeep Goswami, Frank J Liao, Brian Patterson","doi":"10.1093/jamiaopen/ooae080","DOIUrl":null,"url":null,"abstract":"<p><strong>Background: </strong>Large language models (LLMs) can assist providers in drafting responses to patient inquiries. We examined a prompt engineering strategy to draft responses for providers in the electronic health record. The aim was to evaluate the change in usability after prompt engineering.</p><p><strong>Materials and methods: </strong>A pre-post study over 8 months was conducted across 27 providers. The primary outcome was the provider use of LLM-generated messages from Generative Pre-Trained Transformer 4 (GPT-4) in a mixed-effects model, and the secondary outcome was provider sentiment analysis.</p><p><strong>Results: </strong>Of the 7605 messages generated, 17.5% (<i>n</i> = 1327) were used. There was a reduction in negative sentiment with an odds ratio of 0.43 (95% CI, 0.36-0.52), but message use decreased (<i>P</i> < .01). The addition of nurses after the study period led to an increase in message use to 35.8% (<i>P</i> < .01).</p><p><strong>Discussion: </strong>The improvement in sentiment with prompt engineering suggests better content quality, but the initial decrease in usage highlights the need for integration with human factors design.</p><p><strong>Conclusion: </strong>Future studies should explore strategies for optimizing the integration of LLMs into the provider workflow to maximize both usability and effectiveness.</p>","PeriodicalId":36278,"journal":{"name":"JAMIA Open","volume":null,"pages":null},"PeriodicalIF":2.5000,"publicationDate":"2024-08-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11335368/pdf/","citationCount":"0","resultStr":"{\"title\":\"Prompt engineering with a large language model to assist providers in responding to patient inquiries: a real-time implementation in the electronic health record.\",\"authors\":\"Majid Afshar, Yanjun Gao, Graham Wills, Jason Wang, Matthew M Churpek, Christa J Westenberger, David T Kunstman, Joel E Gordon, Cherodeep Goswami, Frank J Liao, Brian Patterson\",\"doi\":\"10.1093/jamiaopen/ooae080\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><strong>Background: </strong>Large language models (LLMs) can assist providers in drafting responses to patient inquiries. We examined a prompt engineering strategy to draft responses for providers in the electronic health record. The aim was to evaluate the change in usability after prompt engineering.</p><p><strong>Materials and methods: </strong>A pre-post study over 8 months was conducted across 27 providers. The primary outcome was the provider use of LLM-generated messages from Generative Pre-Trained Transformer 4 (GPT-4) in a mixed-effects model, and the secondary outcome was provider sentiment analysis.</p><p><strong>Results: </strong>Of the 7605 messages generated, 17.5% (<i>n</i> = 1327) were used. There was a reduction in negative sentiment with an odds ratio of 0.43 (95% CI, 0.36-0.52), but message use decreased (<i>P</i> < .01). 
The addition of nurses after the study period led to an increase in message use to 35.8% (<i>P</i> < .01).</p><p><strong>Discussion: </strong>The improvement in sentiment with prompt engineering suggests better content quality, but the initial decrease in usage highlights the need for integration with human factors design.</p><p><strong>Conclusion: </strong>Future studies should explore strategies for optimizing the integration of LLMs into the provider workflow to maximize both usability and effectiveness.</p>\",\"PeriodicalId\":36278,\"journal\":{\"name\":\"JAMIA Open\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":2.5000,\"publicationDate\":\"2024-08-20\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11335368/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"JAMIA Open\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1093/jamiaopen/ooae080\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"2024/10/1 0:00:00\",\"PubModel\":\"eCollection\",\"JCR\":\"Q2\",\"JCRName\":\"HEALTH CARE SCIENCES & SERVICES\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"JAMIA Open","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1093/jamiaopen/ooae080","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2024/10/1 0:00:00","PubModel":"eCollection","JCR":"Q2","JCRName":"HEALTH CARE SCIENCES & SERVICES","Score":null,"Total":0}
Citations: 0

Abstract


Background: Large language models (LLMs) can assist providers in drafting responses to patient inquiries. We examined a prompt engineering strategy to draft responses for providers in the electronic health record. The aim was to evaluate the change in usability after prompt engineering.
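The study's actual prompt, model deployment, and EHR integration are not described in the abstract; purely as an illustration, the sketch below shows how a draft reply could be requested from GPT-4 through the OpenAI chat API. The system prompt, temperature, and function name are assumptions, not the study's configuration.

```python
# Illustrative sketch only: the study's prompt, deployment, and EHR
# integration are not reproduced here, so every name below is an assumption.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SYSTEM_PROMPT = (
    "You draft replies to patient portal messages on behalf of a clinician. "
    "Be accurate, empathetic, and concise, and flag anything that requires "
    "clinician judgment instead of guessing."
)

def draft_reply(patient_message: str) -> str:
    """Return a draft reply for the provider to review, edit, or discard."""
    response = client.chat.completions.create(
        model="gpt-4",    # the study used GPT-4; the exact deployment is unknown
        temperature=0.2,  # assumed: a low temperature for conservative drafts
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": patient_message},
        ],
    )
    return response.choices[0].message.content

# Example with a made-up inquiry
print(draft_reply("Can I take ibuprofen with the blood pressure medication you prescribed?"))
```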

Materials and methods: A pre-post study over 8 months was conducted across 27 providers. The primary outcome was the provider use of LLM-generated messages from Generative Pre-Trained Transformer 4 (GPT-4) in a mixed-effects model, and the secondary outcome was provider sentiment analysis.
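The abstract names a mixed-effects model for provider use of the drafts but gives no further specification. Below is a minimal sketch assuming a binary "draft used" outcome, a pre/post prompt-engineering indicator, and a random intercept per provider; the column names, simulated data, and variational-Bayes estimator are assumptions rather than the study's actual analysis code.

```python
# Sketch of a provider-level mixed-effects logistic model (not the study's code).
import numpy as np
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

# Simulated stand-in data: one row per generated draft message.
rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "provider_id": rng.integers(0, 27, n),  # 27 providers, as in the study
    "post": rng.integers(0, 2, n),          # 0 = before, 1 = after prompt engineering
})
provider_effect = rng.normal(0.0, 0.5, 27)[df["provider_id"]]
log_odds = -1.5 - 0.4 * df["post"] + provider_effect  # arbitrary illustrative effects
df["used"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-log_odds)))

# Fixed effect of the intervention period, random intercept per provider,
# fit by variational Bayes.
model = BinomialBayesMixedGLM.from_formula(
    "used ~ post",
    {"provider": "0 + C(provider_id)"},
    df,
)
result = model.fit_vb()
print(result.summary())
```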

Results: Of the 7605 messages generated, 17.5% (n = 1327) were used. There was a reduction in negative sentiment with an odds ratio of 0.43 (95% CI, 0.36-0.52), but message use decreased (P < .01). The addition of nurses after the study period led to an increase in message use to 35.8% (P < .01).
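An odds ratio of this kind is exp(beta) for a fitted log-odds coefficient beta, with a 95% CI of exp(beta ± 1.96·SE) when the interval is formed on the log-odds scale; assuming that convention, the reported 0.43 (0.36-0.52) can be back-computed as a quick consistency check.

```python
import math

# Values reported in the abstract
or_point, ci_low, ci_high = 0.43, 0.36, 0.52

# Back out the implied log-odds coefficient and its standard error,
# assuming a Wald-style interval on the log scale.
beta = math.log(or_point)
se = (math.log(ci_high) - math.log(ci_low)) / (2 * 1.96)
print(f"beta ~ {beta:.3f}, SE ~ {se:.3f}")  # roughly -0.844 and 0.094

# Re-exponentiating recovers the reported interval (up to rounding).
lo, hi = math.exp(beta - 1.96 * se), math.exp(beta + 1.96 * se)
print(f"recovered 95% CI: {lo:.2f}-{hi:.2f}")  # ~0.36-0.52
```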

Discussion: The improvement in sentiment with prompt engineering suggests better content quality, but the initial decrease in usage highlights the need for integration with human factors design.

Conclusion: Future studies should explore strategies for optimizing the integration of LLMs into the provider workflow to maximize both usability and effectiveness.

Source journal
JAMIA Open (Medicine - Health Informatics)
CiteScore: 4.10
Self-citation rate: 4.80%
Articles published: 102
Review time: 16 weeks