The Clinicians' Guide to Large Language Models: A General Perspective With a Focus on Hallucinations.

Interactive Journal of Medical Research · IF 1.9 · Q3 (Medicine, Research & Experimental) · Published 2025-01-28 · DOI: 10.2196/59823
Dimitri Roustan, François Bastardot
Citations: 0

Abstract

Large language models (LLMs) are artificial intelligence tools that have the prospect of profoundly changing how we practice all aspects of medicine. Given the potential of LLMs in medicine and the interest of many health care stakeholders in implementing them into routine practice, it is essential that clinicians be aware of the basic risks associated with the use of these models. In particular, a significant risk is their potential to create hallucinations. Hallucinations (false information) generated by LLMs arise from a multitude of causes, including factors related to the training dataset as well as the models' auto-regressive nature. The implications for clinical practice range from the generation of inaccurate diagnostic and therapeutic information to the reinforcement of flawed diagnostic reasoning pathways, as well as a lack of reliability if not used properly. To reduce this risk, we developed a general technical framework for approaching LLMs in clinical practice and for implementation at a larger institutional scale.
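The abstract attributes hallucinations in part to the auto-regressive nature of LLMs: at each step the model must emit a next token drawn from a probability distribution, even when that distribution reflects near-total uncertainty. The toy sketch below (not from the paper; the vocabulary and logit values are invented for illustration) shows how sampling commits to a confident-looking answer with no built-in "I don't know" output:

```python
import math
import random

def softmax(logits):
    """Convert raw next-token scores into a probability distribution."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical next-token candidates for a clinical prompt.
# Near-uniform logits mean the model is essentially guessing,
# yet auto-regressive sampling still picks exactly one token.
vocab = ["aspirin", "warfarin", "heparin", "metformin"]
logits = [0.1, 0.0, 0.05, -0.05]  # toy values, not from any real model

probs = softmax(logits)
random.seed(0)
choice = random.choices(vocab, weights=probs, k=1)[0]

print(f"max probability: {max(probs):.2f}")  # low confidence across the board
print(f"sampled token: {choice}")            # the model still asserts an answer
```

The point of the sketch: the sampled drug name arrives with the same fluent certainty whether the underlying distribution is sharply peaked or nearly flat, which is one mechanism by which plausible-sounding but unfounded clinical statements can be generated.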

Source journal
Interactive Journal of Medical Research (Medicine, Research & Experimental)
Self-citation rate: 0.00%
Articles per year: 45
Review time: 12 weeks