Enhancing Adverse Event Reporting With Clinical Language Models: Inpatient Falls.

Journal of Advanced Nursing · IF 3.8 · Q1 NURSING · CAS Zone 3 (Medicine) · Pub Date: 2025-02-13 · DOI: 10.1111/jan.16812
Insook Cho, Hyunchul Park, Byeong Sun Park, Dong-Geon Lee
{"title":"Enhancing Adverse Event Reporting With Clinical Language Models: Inpatient Falls.","authors":"Insook Cho, Hyunchul Park, Byeong Sun Park, Dong-Geon Lee","doi":"10.1111/jan.16812","DOIUrl":null,"url":null,"abstract":"<p><strong>Aims: </strong>To develop a method for computationally detecting fall events using clinical language models to complement existing self-reporting mechanisms.</p><p><strong>Design: </strong>Retrospective observational study.</p><p><strong>Methods: </strong>Text data were collected from the unstructured nursing notes of three hospitals' electronic health records and the Korean national patient safety reports, totalling 34,480 records covering the period from January 2015 to December 2019. Note-level labelling was conducted by two researchers with 95% agreement. Preprocessing data anonymisation and English translation were followed by semantic validation. Five language models based on pretrained Bidirectional Encoder Representations from Transformers (BERT) and Generative Pretrained Transformer (GPT)-4 with prompt programming were explored. Model performance was assessed using F measurements. Error analysis was conducted for the GPT-4 results.</p><p><strong>Results: </strong>Fine-tuned BERT models with the English data set outperformed GPT-4, with Bio+Clinical BERT achieving the highest F1 score of 0.98. Fine-tuned Korean BERT with the Korean data set also reached an F1 score of 0.98, while GPT-4 achieved a competitive F1 score of 0.94. GPT-4 with prompt programming showed much higher F1 scores than GPT-4 with a standardised prompt for the English data set (0.85 vs. 0.39) and the Korean data set (0.94 vs. 0.03). The error analysis identified that the common misclassification patterns included fall history and homonyms, causing false positives and implicit expressions and missing contextual information, causing false negatives.</p><p><strong>Conclusion: </strong>The clinical language model approach, if used alongside the existing self-reporting, promises to increase the chance of identifying the majority of factual falls without the need for additional chart reviews.</p><p><strong>Impact: </strong>Inpatient falls are often underreported, with up to 91% of incidents missed in self-reports. Using language models, we identified a significant portion of these unreported falls, improving the accuracy of adverse event tracking while reducing the self-reporting burden on nurses.</p><p><strong>Patient or public contribution: </strong>Not applicable.</p>","PeriodicalId":54897,"journal":{"name":"Journal of Advanced Nursing","volume":" ","pages":""},"PeriodicalIF":3.8000,"publicationDate":"2025-02-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Advanced Nursing","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1111/jan.16812","RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"NURSING","Score":null,"Total":0}
Citations: 0

Abstract

Aims: To develop a method for computationally detecting fall events using clinical language models to complement existing self-reporting mechanisms.

Design: Retrospective observational study.

Methods: Text data were collected from the unstructured nursing notes of three hospitals' electronic health records and from the Korean national patient safety reports, totalling 34,480 records covering the period from January 2015 to December 2019. Note-level labelling was conducted by two researchers with 95% agreement. Preprocessing (data anonymisation and English translation) was followed by semantic validation. Five language models, based on pretrained Bidirectional Encoder Representations from Transformers (BERT) and on Generative Pretrained Transformer (GPT)-4 with prompt programming, were explored. Model performance was assessed using F measures. Error analysis was conducted for the GPT-4 results.
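
The abstract does not include code, but the fine-tuning setup it describes is the standard sequence-classification recipe. The sketch below illustrates fine-tuning a Bio+Clinical BERT checkpoint for binary note-level fall detection with the Hugging Face transformers library; the checkpoint name (emilyalsentzer/Bio_ClinicalBERT), the toy notes and all hyperparameters are illustrative assumptions, not the authors' published pipeline.

```python
# Minimal sketch (not the authors' code): fine-tuning a Bio+Clinical BERT
# checkpoint for binary note-level classification (fall event vs. no fall).
import torch
from torch.utils.data import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

CHECKPOINT = "emilyalsentzer/Bio_ClinicalBERT"  # assumed Bio+Clinical BERT weights


class NoteDataset(Dataset):
    """Tokenised (nursing note, 0/1 fall label) pairs."""

    def __init__(self, texts, labels, tokenizer, max_len=256):
        self.enc = tokenizer(texts, truncation=True, padding="max_length",
                             max_length=max_len)
        self.labels = labels

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, i):
        item = {k: torch.tensor(v[i]) for k, v in self.enc.items()}
        item["labels"] = torch.tensor(self.labels[i])
        return item


# Toy placeholders for the anonymised, note-level labelled corpus described above.
train_texts = ["Patient found lying on the floor beside the bed at 02:00.",
               "Ambulated in the hallway with a walker, no incident."]
train_labels = [1, 0]

tokenizer = AutoTokenizer.from_pretrained(CHECKPOINT)
model = AutoModelForSequenceClassification.from_pretrained(CHECKPOINT, num_labels=2)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="fall-bert",
                           num_train_epochs=3,
                           per_device_train_batch_size=8),
    train_dataset=NoteDataset(train_texts, train_labels, tokenizer),
)
trainer.train()
```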

Results: Fine-tuned BERT models on the English data set outperformed GPT-4, with Bio+Clinical BERT achieving the highest F1 score of 0.98. A fine-tuned Korean BERT on the Korean data set also reached an F1 score of 0.98, while GPT-4 achieved a competitive F1 score of 0.94. GPT-4 with prompt programming showed much higher F1 scores than GPT-4 with a standardised prompt on both the English data set (0.85 vs. 0.39) and the Korean data set (0.94 vs. 0.03). The error analysis identified common misclassification patterns: mentions of fall history and homonyms caused false positives, while implicit expressions and missing contextual information caused false negatives.
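
To make the prompt-programming comparison concrete, the sketch below shows one way note-level classification with GPT-4 through the OpenAI chat API could be wired up and scored with F1. The task-specific prompt, the toy notes and the gold labels are hypothetical illustrations; the authors' engineered prompts are not given in the abstract.

```python
# Illustrative sketch only: GPT-4 prompt-based fall classification plus F1 scoring.
# The prompt wording, notes and labels are assumptions, not the authors' materials.
from openai import OpenAI
from sklearn.metrics import f1_score

client = OpenAI()  # expects OPENAI_API_KEY in the environment

# A task-specific ("programmed") prompt with a definition and an output contract,
# as opposed to a bare, standardised instruction such as "Did the patient fall?".
SYSTEM_PROMPT = (
    "You are reviewing inpatient nursing notes. Answer FALL if the note documents "
    "an actual fall event during this admission; answer NO_FALL for fall history, "
    "fall-risk assessments, or prevention measures without an event. "
    "Reply with exactly one token: FALL or NO_FALL."
)


def classify_note(note: str) -> int:
    """Return 1 if GPT-4 labels the note as a fall event, else 0."""
    resp = client.chat.completions.create(
        model="gpt-4",
        temperature=0,
        messages=[{"role": "system", "content": SYSTEM_PROMPT},
                  {"role": "user", "content": note}],
    )
    answer = resp.choices[0].message.content.strip().upper()
    return 1 if answer.startswith("FALL") else 0


# Hypothetical mini evaluation set; gold labels would come from the manual annotation.
notes = ["Pt slipped and fell while walking to the bathroom unassisted.",
         "History of falls at home; bed alarm in place, no events this shift."]
gold = [1, 0]
preds = [classify_note(n) for n in notes]
print("F1:", f1_score(gold, preds))
```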

Conclusion: The clinical language model approach, if used alongside the existing self-reporting, promises to increase the chance of identifying the majority of factual falls without the need for additional chart reviews.

Impact: Inpatient falls are often underreported, with up to 91% of incidents missed in self-reports. Using language models, we identified a significant portion of these unreported falls, improving the accuracy of adverse event tracking while reducing the self-reporting burden on nurses.

Patient or public contribution: Not applicable.

Source journal
CiteScore: 6.40
Self-citation rate: 7.90%
Annual publications: 369
Review time: 3 months
About the journal: The Journal of Advanced Nursing (JAN) contributes to the advancement of evidence-based nursing, midwifery and healthcare by disseminating high-quality research and scholarship of contemporary relevance and with potential to advance knowledge for practice, education, management or policy. All JAN papers are required to have a sound scientific, evidential, theoretical or philosophical base and to be critical, questioning and scholarly in approach. As an international journal, JAN promotes diversity of research and scholarship in terms of culture, paradigm and healthcare context. For JAN's worldwide readership, authors are expected to make clear the wider international relevance of their work and to demonstrate sensitivity to cultural considerations and differences.