Large language models can support generation of standardized discharge summaries – A retrospective study utilizing ChatGPT-4 and electronic health records

IF 3.7 | CAS Tier 2 (Medicine) | JCR Q2 (Computer Science, Information Systems) | International Journal of Medical Informatics | Published: 2024-10-14 | DOI: 10.1016/j.ijmedinf.2024.105654
Arne Schwieger , Katrin Angst , Mateo de Bardeci , Achim Burrer , Flurin Cathomas , Stefano Ferrea , Franziska Grätz , Marius Knorr , Golo Kronenberg , Tobias Spiller , David Troi , Erich Seifritz , Samantha Weber , Sebastian Olbrich
{"title":"Large language models can support generation of standardized discharge summaries – A retrospective study utilizing ChatGPT-4 and electronic health records","authors":"Arne Schwieger ,&nbsp;Katrin Angst ,&nbsp;Mateo de Bardeci ,&nbsp;Achim Burrer ,&nbsp;Flurin Cathomas ,&nbsp;Stefano Ferrea ,&nbsp;Franziska Grätz ,&nbsp;Marius Knorr ,&nbsp;Golo Kronenberg ,&nbsp;Tobias Spiller ,&nbsp;David Troi ,&nbsp;Erich Seifritz ,&nbsp;Samantha Weber ,&nbsp;Sebastian Olbrich","doi":"10.1016/j.ijmedinf.2024.105654","DOIUrl":null,"url":null,"abstract":"<div><h3>Objective</h3><div>To evaluate whether psychiatric discharge summaries (DS) generated with ChatGPT-4 from electronic health records (EHR) can match the quality of DS written by psychiatric residents.</div></div><div><h3>Methods</h3><div>At a psychiatric primary care hospital, we compared 20 inpatient DS, written by residents, to those written with ChatGPT-4 from pseudonymized residents’ notes of the patients’ EHRs and a standardized prompt. 8 blinded psychiatry specialists rated both versions on a custom Likert scale from 1 to 5 across 15 quality subcategories. The primary outcome was the overall rating difference between the two groups. The secondary outcomes were the rating differences at the level of individual question, case, and rater.</div></div><div><h3>Results</h3><div>Human-written DS were rated significantly higher than AI (mean ratings: human 3.78, AI 3.12, p &lt; 0.05). They surpassed AI significantly in 12/15 questions and 16/20 cases and were favored significantly by 7/8 raters. For “low expected correction effort”, human DS were rated as 67 % favorable, 19 % neutral, and 14 % unfavorable, whereas AI-DS were rated as 22 % favorable, 33 % neutral, and 45 % unfavorable. Hallucinations were present in 40 % of AI-DS, with 37.5 % deemed highly clinically relevant. Minor content mistakes were found in 30 % of AI and 10 % of human DS. Raters correctly identified AI-DS with 81 % sensitivity and 75 % specificity.</div></div><div><h3>Discussion</h3><div>Overall, AI-DS did not match the quality of resident-written DS but performed similarly in 20% of cases and were rated as favorable for “low expected correction effort” in 22% of cases. AI-DS lacked most in content specificity, ability to distill key case information, and coherence but performed adequately in conciseness, adherence to formalities, relevance of included content, and form.</div></div><div><h3>Conclusion</h3><div>LLM-written DS show potential as templates for physicians to finalize, potentially saving time in the future.</div></div>","PeriodicalId":54950,"journal":{"name":"International Journal of Medical Informatics","volume":"192 ","pages":"Article 105654"},"PeriodicalIF":3.7000,"publicationDate":"2024-10-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Journal of Medical Informatics","FirstCategoryId":"3","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1386505624003174","RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Citations: 0

Abstract

Objective

To evaluate whether psychiatric discharge summaries (DS) generated with ChatGPT-4 from electronic health records (EHR) can match the quality of DS written by psychiatric residents.

Methods

At a psychiatric primary care hospital, we compared 20 inpatient DS written by residents with DS generated by ChatGPT-4 from pseudonymized residents’ notes in the patients’ EHRs using a standardized prompt. Eight blinded psychiatry specialists rated both versions on a custom Likert scale from 1 to 5 across 15 quality subcategories. The primary outcome was the overall rating difference between the two groups. The secondary outcomes were the rating differences at the level of individual questions, cases, and raters.
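The abstract does not name the statistical test used for the primary outcome. As a hedged illustration only, the paired comparison could be run as in the following sketch; the Wilcoxon signed-rank test, the data layout, and all variable names are assumptions, not the authors’ reported method.

```python
# Hypothetical sketch of the primary-outcome comparison (not the authors' code).
# Assumption: each case yields one mean rating per DS version (averaged over
# 8 raters x 15 subcategories), compared with a paired Wilcoxon signed-rank test.
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(0)
human = rng.uniform(3.0, 4.5, size=20)  # placeholder ratings, resident-written DS
ai = rng.uniform(2.5, 4.0, size=20)     # placeholder ratings, ChatGPT-4 DS

stat, p = wilcoxon(human, ai)  # pairs ratings case by case across the 20 cases
print(f"mean human = {human.mean():.2f}, mean AI = {ai.mean():.2f}, p = {p:.3f}")
```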

Results

Human-written DS were rated significantly higher than AI-DS (mean ratings: human 3.78, AI 3.12, p < 0.05). Human DS surpassed AI-DS significantly on 12/15 questions and in 16/20 cases and were favored significantly by 7/8 raters. For “low expected correction effort”, human DS were rated 67% favorable, 19% neutral, and 14% unfavorable, whereas AI-DS were rated 22% favorable, 33% neutral, and 45% unfavorable. Hallucinations were present in 40% of AI-DS, of which 37.5% were deemed highly clinically relevant. Minor content mistakes were found in 30% of AI-DS and 10% of human DS. Raters correctly identified AI-DS with 81% sensitivity and 75% specificity.
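The detection figures follow the standard definitions, treating an AI-written DS as the positive class (TP: AI-DS correctly flagged as AI; TN: human DS correctly identified as human); a brief restatement for readers less familiar with these metrics:

```latex
\text{sensitivity} = \frac{TP}{TP + FN} = 0.81,
\qquad
\text{specificity} = \frac{TN}{TN + FP} = 0.75
```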

Discussion

Overall, AI-DS did not match the quality of resident-written DS, but they performed similarly in 20% of cases and were rated favorably for “low expected correction effort” in 22% of cases. AI-DS were weakest in content specificity, distilling key case information, and coherence, but performed adequately in conciseness, adherence to formalities, relevance of included content, and form.

Conclusion

LLM-written DS show promise as templates for physicians to finalize, potentially saving time in the future.
Source Journal: International Journal of Medical Informatics (Medicine / Computer Science, Information Systems)
CiteScore: 8.90
Self-citation rate: 4.10%
Articles per year: 217
Review time: 42 days
Aims & Scope: International Journal of Medical Informatics provides an international medium for dissemination of original results and interpretative reviews concerning the field of medical informatics. The Journal emphasizes the evaluation of systems in healthcare settings. The scope of the journal covers: information systems, including national or international registration systems, hospital information systems, departmental and/or physician's office systems, document handling systems, electronic medical record systems, standardization, systems integration, etc.; computer-aided medical decision support systems using heuristic, algorithmic and/or statistical methods as exemplified in decision theory, protocol development, artificial intelligence, etc.; educational computer-based programs pertaining to medical informatics or medicine in general; and organizational, economic, social, clinical impact, ethical and cost-benefit aspects of IT applications in health care.
Latest Articles in This Journal

Editorial Board
Predicting abnormal C-reactive protein level for improving utilization by deep neural network model
Analysis of missing data in electronic health records of people with diabetes in primary care in Spain: A population-based cohort study
What information do patients pay more attention to in online physician selection? Information needs model for online medical choice decision-making based on trust theory and fuzzy decision
Systematic construction of composite radiation therapy dataset using automated data pipeline for prognosis prediction