Proof-of-Concept Prompted Large Language Model for Radiology Procedure Request Routing
Brian P. Triana MD, MBA; Walter F. Wiggins MD, PhD; Nicholas Befera MD; Christopher Roth MD, MMCI; Brendan Cline MD
Journal of Vascular and Interventional Radiology, Volume 36, Issue 7, Pages 1201-1207. Published March 24, 2025. DOI: 10.1016/j.jvir.2025.03.012
Abstract
Purpose
To measure the accuracy and cost of a proof-of-concept prompted large language model (LLM) to route procedure requests to the appropriate phone number or pager at a single large academic hospital.
Materials and Methods
At a large academic hospital, existing teams, pager/phone numbers, and schedules were used to create text-based rules for procedure requests. A prompted LLM was created to route procedure requests at specific days and times to the appropriate teams. The prompted LLM was tested on 250 “in-scope” requests (explicitly defined by provided rules) and 25 “out-of-scope” requests using generative pretrained transformer (GPT)–3.5-turbo and GPT-4 models from OpenAI and 4 open-weight models.
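For illustration only, the sketch below shows how a prompted routing step of this kind might be implemented with the OpenAI chat completions API. The routing rules, pager numbers, day/time format, and model choice are hypothetical placeholders, not the study's actual prompt or schedule.

```python
# Minimal sketch of a prompted LLM router (illustrative; not the authors' code).
# Routing rules and pager numbers below are made-up placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

ROUTING_RULES = """
You route radiology procedure requests to the correct team pager or phone number.
Rules (illustrative examples only):
- Weekdays 08:00-17:00: procedure requests -> IR reading room coordinator, pager 12345.
- Weekdays 17:00-08:00 and weekends: procedure requests -> on-call IR fellow, pager 67890.
- If the request does not match any rule, reply exactly: OUT_OF_SCOPE.
Answer with only the pager/phone number or OUT_OF_SCOPE.
"""

def route_request(request_text: str, day_time: str, model: str = "gpt-4") -> str:
    """Map a free-text procedure request plus a day/time to a contact number."""
    response = client.chat.completions.create(
        model=model,
        temperature=0,  # deterministic routing for repeatable answers
        messages=[
            {"role": "system", "content": ROUTING_RULES},
            {"role": "user", "content": f"Day/time: {day_time}\nRequest: {request_text}"},
        ],
    )
    return response.choices[0].message.content.strip()

print(route_request("Urgent paracentesis for inpatient with ascites", "Tuesday 14:30"))
```

In a sketch like this, the "in-scope" requests are those covered by an explicit rule in the system prompt, while "out-of-scope" requests rely on the fallback instruction; constraining the model to answer with only a contact number or a fixed token makes automated scoring of routing accuracy straightforward.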
Results
The prompted LLM correctly routed 96.4% of in-scope and 76.0% of out-of-scope requests using GPT-4, which outperformed all other models (P < .001). All models demonstrated worse performance for requests during evening and weekend hours (P < .001). OpenAI application programming interface costs were approximately $0.03 per request for GPT-4 and $0.0006 per request for GPT-3.5-turbo.
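As a rough, back-of-the-envelope check on the reported per-request costs, the calculation below assumes a routing prompt on the order of 1,000 tokens, a short completion, and approximate GPT-4 and GPT-3.5-turbo API prices from around the study period; these token counts and prices are assumptions, not figures from the paper.

```python
# Assumed values: ~1,000-token prompt, ~20-token answer, and approximate
# per-1K-token prices for GPT-4 (8k) and GPT-3.5-turbo at the time.
prompt_tokens, completion_tokens = 1_000, 20

gpt4_cost = prompt_tokens / 1_000 * 0.03 + completion_tokens / 1_000 * 0.06
gpt35_cost = prompt_tokens / 1_000 * 0.0005 + completion_tokens / 1_000 * 0.0015

print(f"GPT-4: ~${gpt4_cost:.4f} per request")          # ~$0.031
print(f"GPT-3.5-turbo: ~${gpt35_cost:.4f} per request")  # ~$0.0005
```

Under these assumptions the estimates land near the reported ~$0.03 (GPT-4) and ~$0.0006 (GPT-3.5-turbo) per request, with the prompt containing the routing rules dominating the cost.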
Conclusions
This study demonstrates that low-cost prompted LLMs can accurately route procedure requests in a large academic hospital system. A similar approach may be used to help clinicians navigate a radiology phone tree or as a tool to help reading room coordinators route requests effectively.
About the Journal
JVIR, published continuously since 1990, is an international, monthly peer-reviewed interventional radiology journal. As the official journal of the Society of Interventional Radiology, JVIR is the peer-reviewed journal of choice for interventional radiologists, radiologists, cardiologists, vascular surgeons, neurosurgeons, and other clinicians who seek current and reliable information on every aspect of vascular and interventional radiology. Each issue of JVIR covers critical and cutting-edge minimally invasive medical, clinical, basic research, radiological, pathological, and socioeconomic issues of importance to the field.