Comparing large language models for antibiotic prescribing in different clinical scenarios: which performs better?

Andrea De Vito, Nicholas Geremia, Davide Fiore Bavaro, Susan K. Seo, Justin Laracy, Maria Mazzitelli, Andrea Marino, Alberto Enrico Maraolo, Antonio Russo, Agnese Colpani, Michele Bartoletti, Anna Maria Cattelan, Cristina Mussini, Saverio Giuseppe Parisi, Luigi Angelo Vaira, Giuseppe Nunnari, Giordano Madeddu

Clinical Microbiology and Infection, Volume 31, Issue 8, Pages 1336-1342 (August 2025). DOI: 10.1016/j.cmi.2025.03.002
Abstract
Objectives
Large language models (LLMs) show promise in clinical decision-making, but comparative evaluations of their antibiotic prescribing accuracy are limited. This study assesses the performance of various LLMs in recommending antibiotic treatments across diverse clinical scenarios.
Methods
Fourteen LLMs, including standard and premium versions of ChatGPT, Claude, Copilot, Gemini, Le Chat, Grok, Perplexity, and Pi.ai, were evaluated using 60 clinical cases with antibiograms covering 10 infection types. A standardized prompt was used for antibiotic recommendations focusing on drug choice, dosage, and treatment duration. Responses were anonymized and reviewed by a blinded expert panel assessing antibiotic appropriateness, dosage correctness, and duration adequacy.
Results
A total of 840 responses were collected and analysed. ChatGPT-o1 demonstrated the highest accuracy in antibiotic prescriptions, with 71.7% (43/60) of its recommendations classified as correct and only one (1.7%) incorrect. Gemini and Claude 3 Opus had the lowest accuracy. Dosage correctness was highest for ChatGPT-o1 (96.7%, 58/60), followed by Claude 3.5 Sonnet (91.7%, 55/60) and Perplexity Pro (90.0%, 54/60). For treatment duration, Gemini provided the most appropriate recommendations (75.0%, 45/60), whereas Claude 3.5 Sonnet tended to recommend excessively long durations. Performance declined with increasing case complexity, particularly for difficult-to-treat microorganisms.
Discussion
There is significant variability among LLMs in prescribing appropriate antibiotics, dosages, and treatment durations. ChatGPT-o1 outperformed the other models, indicating the potential of advanced LLMs as decision-support tools in antibiotic prescribing. However, decreased accuracy in complex cases and inconsistencies among models highlight the need for careful validation before clinical use.
About the journal
Clinical Microbiology and Infection (CMI) is a monthly journal published by the European Society of Clinical Microbiology and Infectious Diseases. It focuses on peer-reviewed papers covering basic and applied research in microbiology, infectious diseases, virology, parasitology, immunology, and epidemiology as they relate to therapy and diagnostics.