Comparing large language models for antibiotic prescribing in different clinical scenarios: which performs better?

Clinical Microbiology and Infection, Vol. 31(8), pp. 1336-1342. IF 8.5, Q1 (Infectious Diseases), JCR Region 1 (Medicine). Published: 2025-08-01 (Epub: 2025-03-19). DOI: 10.1016/j.cmi.2025.03.002
Andrea De Vito , Nicholas Geremia , Davide Fiore Bavaro , Susan K. Seo , Justin Laracy , Maria Mazzitelli , Andrea Marino , Alberto Enrico Maraolo , Antonio Russo , Agnese Colpani , Michele Bartoletti , Anna Maria Cattelan , Cristina Mussini , Saverio Giuseppe Parisi , Luigi Angelo Vaira , Giuseppe Nunnari , Giordano Madeddu

Abstract

Objectives

Large language models (LLMs) show promise in clinical decision-making, but comparative evaluations of their antibiotic prescribing accuracy are limited. This study assesses the performance of various LLMs in recommending antibiotic treatments across diverse clinical scenarios.

Methods

Fourteen LLMs, including standard and premium versions of ChatGPT, Claude, Copilot, Gemini, Le Chat, Grok, Perplexity, and Pi.ai, were evaluated using 60 clinical cases with antibiograms covering 10 infection types. A standardized prompt was used for antibiotic recommendations focusing on drug choice, dosage, and treatment duration. Responses were anonymized and reviewed by a blinded expert panel assessing antibiotic appropriateness, dosage correctness, and duration adequacy.
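The blinded scoring workflow described above (anonymized responses, a panel rating each one on appropriateness, dosage, and duration) can be sketched as follows. This is a hypothetical structure for illustration only, not the authors' actual code; the class and field names are assumptions.

```python
# Hypothetical sketch of a blinded evaluation pipeline: responses are keyed by
# an anonymized ID (model identity hidden from raters), each panel member
# rates the three dimensions assessed in the study, and per-dimension scores
# are aggregated by majority vote.
from dataclasses import dataclass, field


@dataclass
class Rating:
    appropriate: bool        # antibiotic choice appropriate?
    dosage_correct: bool     # dosage correct?
    duration_adequate: bool  # treatment duration adequate?


@dataclass
class Evaluation:
    # anonymized response ID -> ratings from each panel member
    ratings: dict[str, list[Rating]] = field(default_factory=dict)

    def add(self, response_id: str, rating: Rating) -> None:
        self.ratings.setdefault(response_id, []).append(rating)

    def summary(self, response_ids: list[str]) -> dict[str, float]:
        """Fraction of responses judged correct on each dimension,
        taking the panel's majority vote per response."""
        def majority(flags: list[bool]) -> bool:
            return sum(flags) > len(flags) / 2

        n = len(response_ids)
        return {
            "appropriateness": sum(
                majority([r.appropriate for r in self.ratings[i]])
                for i in response_ids) / n,
            "dosage": sum(
                majority([r.dosage_correct for r in self.ratings[i]])
                for i in response_ids) / n,
            "duration": sum(
                majority([r.duration_adequate for r in self.ratings[i]])
                for i in response_ids) / n,
        }
```

In a full run this would cover all 840 responses (14 models x 60 cases), with model identity revealed only after scoring.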

Results

A total of 840 responses were collected and analysed. ChatGPT-o1 demonstrated the highest accuracy in antibiotic prescriptions, with 71.7% (43/60) of its recommendations classified as correct and only one (1.7%) incorrect. Gemini and Claude 3 Opus had the lowest accuracy. Dosage correctness was highest for ChatGPT-o1 (96.7%, 58/60), followed by Claude 3.5 Sonnet (91.7%, 55/60) and Perplexity Pro (90.0%, 54/60). In treatment duration, Gemini provided the most appropriate recommendations (75.0%, 45/60), whereas Claude 3.5 Sonnet tended to over-prescribe duration. Performance declined with increasing case complexity, particularly for difficult-to-treat microorganisms.
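The headline percentages above follow directly from the reported raw counts (correct responses out of 60 cases per model). A minimal check, using only figures stated in the abstract (the helper function is illustrative):

```python
# Recompute each reported percentage from its raw count (n correct of 60 cases).
def pct(correct: int, total: int = 60) -> float:
    """Share of correct recommendations, rounded to one decimal place."""
    return round(100 * correct / total, 1)

results = {
    "ChatGPT-o1 antibiotic choice": pct(43),   # 71.7%
    "ChatGPT-o1 dosage": pct(58),              # 96.7%
    "Claude 3.5 Sonnet dosage": pct(55),       # 91.7%
    "Perplexity Pro dosage": pct(54),          # 90.0%
    "Gemini duration": pct(45),                # 75.0%
}

for label, value in results.items():
    print(f"{label}: {value}%")
```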

Discussion

There is significant variability among LLMs in prescribing appropriate antibiotics, dosages, and treatment durations. ChatGPT-o1 outperformed other models, indicating the potential of advanced LLMs as decision-support tools in antibiotic prescribing. However, decreased accuracy in complex cases and inconsistencies among models highlight the need for careful validation before clinical utilization.
Source journal

CiteScore: 25.30
Self-citation rate: 2.10%
Articles per year: 441
Review time: 2-4 weeks
About the journal: Clinical Microbiology and Infection (CMI) is a monthly journal published by the European Society of Clinical Microbiology and Infectious Diseases. It focuses on peer-reviewed papers covering basic and applied research in microbiology, infectious diseases, virology, parasitology, immunology, and epidemiology as they relate to therapy and diagnostics.