Background
The use of multiple medications increases the risk of harmful drug-drug interactions (DDIs). Conventional DDI screening databases vary in coverage and often trigger low-relevance alerts, contributing to alert fatigue. Large language models (LLMs) have emerged as potential tools for DDI identification; however, their performance compared with established databases on real-world patient data remains underexplored.
Methods
In this exploratory study, we compared conventional database screening with LLM-based screening using anonymized medication lists from rheumatology patients. Lexicomp, Medscape and Drugs.com were used to compile a reference set of 204 clinically relevant interactions across 57 cases. Using identical prompts, we then queried ChatGPT, Google Gemini and Microsoft Copilot for interactions potentially requiring pharmacists' intervention. We calculated sensitivity, specificity, precision and F1 score.
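The four metrics reported in this study follow directly from confusion-matrix counts. A minimal sketch (illustrative counts only, not the study's data):

```python
def screening_metrics(tp, fp, fn, tn):
    """Compute sensitivity, specificity, precision and F1 from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)   # recall: fraction of reference DDIs the platform found
    specificity = tn / (tn + fp)   # fraction of non-interacting pairs correctly passed over
    precision = tp / (tp + fp)     # fraction of flagged interactions that are in the reference set
    f1 = 2 * precision * sensitivity / (precision + sensitivity)  # harmonic mean
    return sensitivity, specificity, precision, f1

# Hypothetical example: 120 of 204 reference DDIs found, 300 spurious flags
sens, spec, prec, f1 = screening_metrics(tp=120, fp=300, fn=84, tn=900)
```

High sensitivity with low precision, as observed for all three platforms, corresponds to a large `fp` relative to `tp`, which drags F1 down even when most reference interactions are found.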
Results
Compared with the reference set of 204 DDIs, ChatGPT identified 439 potential interactions, Gemini 1556, and Copilot 1813. While Gemini achieved the highest sensitivity (0.697), ChatGPT demonstrated higher specificity (0.868). All three platforms showed low precision. Overall, ChatGPT achieved the highest F1 score (0.2520), followed by Gemini (0.1933) and Copilot (0.1153). Our results suggest that none of the AI systems assessed achieves the balance of precision and sensitivity required for reliable clinical decision-making in DDI screening.
Conclusion
Although LLMs show promise as complementary tools in DDI screening and proved effective at identifying true interactions, they also generate clinically inaccurate information through hallucination, which limits their reliability as standalone screening tools. Consequently, while LLMs could support clinical pharmacists in polypharmacy management, their outputs must always undergo professional validation to ensure patient safety.
