Cross sectional pilot study on clinical review generation using large language models

NPJ Digital Medicine · Impact Factor 15.1 · JCR Q1 (Health Care Sciences & Services) · CAS Tier 1 (Medicine) · Publication date: 2025-03-19 · DOI: 10.1038/s41746-025-01535-z
Zining Luo, Yang Qiao, Xinyu Xu, Xiangyu Li, Mengyan Xiao, Aijia Kang, Dunrui Wang, Yueshan Pang, Xing Xie, Sijun Xie, Dachen Luo, Xuefeng Ding, Zhenglong Liu, Ying Liu, Aimin Hu, Yixing Ren, Jiebin Xie
Citation count: 0

Abstract

As the volume of medical literature grows at an accelerating pace, efficient tools are needed to synthesize evidence for clinical practice and research, and interest in leveraging large language models (LLMs) to generate clinical reviews has surged. However, there are significant concerns about the reliability of integrating LLMs into the clinical review process. This study presents a systematic comparison between LLM-generated and human-authored clinical reviews, revealing that while AI can produce reviews quickly, those reviews often contain fewer references, less comprehensive insights, and lower logical consistency, and their citations exhibit lower authenticity and accuracy. Additionally, a higher proportion of their references come from lower-tier journals. Moreover, the study uncovers a concerning inefficiency of current detection systems in identifying AI-generated content, suggesting a need for more advanced checking systems and a stronger ethical framework to ensure academic transparency. Addressing these challenges is vital for the responsible integration of LLMs into clinical research.


Source journal

CiteScore: 25.10
Self-citation rate: 3.30%
Articles per year: 170
Review time: 15 weeks
About the journal: npj Digital Medicine is an online open-access journal that publishes peer-reviewed research in the field of digital medicine. The journal covers various aspects of digital medicine, including the application and implementation of digital and mobile technologies in clinical settings, virtual healthcare, and the use of artificial intelligence and informatics. Its primary goal is to support innovation and the advancement of healthcare through the integration of new digital and mobile technologies. When determining whether a manuscript is suitable for publication, the journal considers four criteria: novelty, clinical relevance, scientific rigor, and digital innovation.