Machine learning to predict notes for chart review in the oncology setting: a proof of concept strategy for improving clinician note-writing.

Journal of the American Medical Informatics Association · Impact Factor 4.7 · JCR Q1 (Computer Science, Information Systems) · CAS Region 2 (Medicine) · Published 2024-06-20 · DOI: 10.1093/jamia/ocae092
Sharon Jiang, Barbara D Lam, Monica Agrawal, Shannon Shen, Nicholas Kurtzman, Steven Horng, David R Karger, David Sontag
{"title":"Machine learning to predict notes for chart review in the oncology setting: a proof of concept strategy for improving clinician note-writing.","authors":"Sharon Jiang, Barbara D Lam, Monica Agrawal, Shannon Shen, Nicholas Kurtzman, Steven Horng, David R Karger, David Sontag","doi":"10.1093/jamia/ocae092","DOIUrl":null,"url":null,"abstract":"<p><strong>Objective: </strong>Leverage electronic health record (EHR) audit logs to develop a machine learning (ML) model that predicts which notes a clinician wants to review when seeing oncology patients.</p><p><strong>Materials and methods: </strong>We trained logistic regression models using note metadata and a Term Frequency Inverse Document Frequency (TF-IDF) text representation. We evaluated performance with precision, recall, F1, AUC, and a clinical qualitative assessment.</p><p><strong>Results: </strong>The metadata only model achieved an AUC 0.930 and the metadata and TF-IDF model an AUC 0.937. Qualitative assessment revealed a need for better text representation and to further customize predictions for the user.</p><p><strong>Discussion: </strong>Our model effectively surfaces the top 10 notes a clinician wants to review when seeing an oncology patient. Further studies can characterize different types of clinician users and better tailor the task for different care settings.</p><p><strong>Conclusion: </strong>EHR audit logs can provide important relevance data for training ML models that assist with note-writing in the oncology setting.</p>","PeriodicalId":50016,"journal":{"name":"Journal of the American Medical Informatics Association","volume":null,"pages":null},"PeriodicalIF":4.7000,"publicationDate":"2024-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11187428/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of the American Medical Informatics Association","FirstCategoryId":"91","ListUrlMain":"https://doi.org/10.1093/jamia/ocae092","RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
引用次数: 0

Abstract

Objective: Leverage electronic health record (EHR) audit logs to develop a machine learning (ML) model that predicts which notes a clinician wants to review when seeing oncology patients.
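
A minimal sketch, under assumptions, of how audit-log access events could be turned into review labels of the kind the objective describes; the file names, column names, and the "NOTE_VIEWED" action code here are hypothetical, not from the paper:

```python
# Hypothetical label construction: mark a note as "reviewed" if the EHR audit log
# shows the clinician opened it; these labels then supervise the relevance model.
import pandas as pd

audit = pd.read_csv("audit_log.csv")   # placeholder: one row per EHR access event
notes = pd.read_csv("notes.csv")       # placeholder: one row per (clinician, note) pair

viewed = (
    audit[audit["action"] == "NOTE_VIEWED"][["clinician_id", "note_id"]]
    .drop_duplicates()
)

labeled = notes.merge(viewed, on=["clinician_id", "note_id"], how="left", indicator=True)
labeled["was_reviewed"] = (labeled["_merge"] == "both").astype(int)
labeled = labeled.drop(columns="_merge")
```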

Materials and methods: We trained logistic regression models using note metadata and a Term Frequency Inverse Document Frequency (TF-IDF) text representation. We evaluated performance with precision, recall, F1, AUC, and a clinical qualitative assessment.
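
As a rough illustration of this setup (not the authors' code), a scikit-learn pipeline that combines TF-IDF text features with simple note metadata in a logistic regression, and reports the same metrics, might look like the following; the input file and the metadata columns (note_type, author_specialty, days_since_written) are placeholders:

```python
# Hypothetical sketch: combine a TF-IDF representation of the note text with
# simple note metadata, train a logistic regression classifier to predict
# whether a clinician will review the note, and report the evaluation metrics
# named above. Column names and the input file are placeholders.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score, precision_score, recall_score, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

notes = pd.read_csv("notes_with_audit_labels.csv")  # placeholder dataset

features = ColumnTransformer(
    transformers=[
        ("text", TfidfVectorizer(max_features=20000, ngram_range=(1, 2)), "note_text"),
        ("meta", OneHotEncoder(handle_unknown="ignore"), ["note_type", "author_specialty"]),
    ],
    remainder="passthrough",  # numeric metadata such as days_since_written passes through
)

model = Pipeline([
    ("features", features),
    ("clf", LogisticRegression(max_iter=1000, class_weight="balanced")),
])

X = notes[["note_text", "note_type", "author_specialty", "days_since_written"]]
y = notes["was_reviewed"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y
)

model.fit(X_train, y_train)
probs = model.predict_proba(X_test)[:, 1]
preds = (probs >= 0.5).astype(int)

print("precision:", precision_score(y_test, preds))
print("recall:   ", recall_score(y_test, preds))
print("F1:       ", f1_score(y_test, preds))
print("AUC:      ", roc_auc_score(y_test, probs))
```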

Results: The metadata-only model achieved an AUC of 0.930 and the metadata-plus-TF-IDF model an AUC of 0.937. Qualitative assessment revealed a need for better text representation and for further customization of predictions to the individual user.

Discussion: Our model effectively surfaces the top 10 notes a clinician wants to review when seeing an oncology patient. Further studies can characterize different types of clinician users and better tailor the task for different care settings.
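
To make the "top 10 notes" step concrete, a small hypothetical helper could rank a patient's notes by the model's predicted review probability and return the ten highest-scoring ones (reusing the placeholder columns from the sketch above):

```python
# Hypothetical ranking helper: score every note in a patient's chart with the trained
# pipeline and surface the k notes the clinician is most likely to want to review.
FEATURE_COLS = ["note_text", "note_type", "author_specialty", "days_since_written"]

def top_notes(model, patient_notes, k=10):
    """Return the k notes with the highest predicted review probability."""
    scores = model.predict_proba(patient_notes[FEATURE_COLS])[:, 1]
    return patient_notes.assign(review_score=scores).nlargest(k, "review_score")
```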

Conclusion: EHR audit logs can provide important relevance data for training ML models that assist with note-writing in the oncology setting.

Source journal

Journal of the American Medical Informatics Association (Medicine – Computer Science: Interdisciplinary Applications)
CiteScore: 14.50
Self-citation rate: 7.80%
Articles per year: 230
Review time: 3-8 weeks
About the journal: JAMIA is AMIA's premier peer-reviewed journal for biomedical and health informatics. Covering the full spectrum of activities in the field, JAMIA includes informatics articles in the areas of clinical care, clinical research, translational science, implementation science, imaging, education, consumer health, public health, and policy. JAMIA's articles describe innovative informatics research and systems that help to advance biomedical science and to promote health. Case reports, perspectives and reviews also help readers stay connected with the most important informatics developments in implementation, policy and education.