Evaluating Large Language Models for Automated CPT Code Prediction in Endovascular Neurosurgery.

Journal of Medical Systems · IF 5.7 · JCR Q1 (Health Care Sciences & Services) · CAS Zone 3 (Medicine) · Pub Date: 2025-01-24 · DOI: 10.1007/s10916-025-02149-4
Joanna M Roy, D Mitchell Self, Emily Isch, Basel Musmar, Matthews Lan, Kavantissa Keppetipola, Sravanthi Koduri, Mary-Katharine Pontarelli, Stavropoula I Tjoumakaris, M Reid Gooch, Robert H Rosenwasser, Pascal M Jabbour

Abstract

Large language models (LLMs) have been used to automate tasks such as writing discharge summaries and operative reports in neurosurgery. The present study evaluates their ability to identify Current Procedural Terminology (CPT) codes from operative reports. Three LLMs (ChatGPT 4.0, AtlasGPT, and Gemini) were evaluated on their ability to provide CPT codes for diagnostic or interventional procedures in endovascular neurosurgery at a single institution. Responses were classified as correct, partially correct, or incorrect, and the percentage of correctly identified CPT codes was calculated. The chi-square test and Kruskal-Wallis test were used to compare responses across LLMs. A total of 30 operative notes were included in the present study. AtlasGPT provided at least partially correct CPT codes for 98.3% of procedures, while ChatGPT and Gemini provided partially correct responses for 86.7% and 30% of procedures, respectively (P < 0.001). AtlasGPT identified CPT codes correctly in an average of 35.3% of procedures, followed by ChatGPT (35.1%) and Gemini (8.9%) (P < 0.001). Pairwise comparisons among the three LLMs revealed that AtlasGPT and ChatGPT outperformed Gemini. Untrained LLMs can identify partially correct CPT codes in endovascular neurosurgery. Training these models could further enhance their ability to identify CPT codes and reduce healthcare expenditure.
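The chi-square comparison described above tests whether response classifications (correct / partially correct / incorrect) are distributed differently across the three models. A minimal sketch of that computation is shown below; the counts are hypothetical placeholders for illustration only and are not the study's data, and the pure-Python statistic stands in for a library call such as SciPy's `chi2_contingency`:

```python
# Illustrative sketch (not the authors' code): Pearson chi-square statistic
# for a contingency table of LLM response classifications.

def chi_square_statistic(table):
    """Pearson chi-square statistic for a contingency table given as rows."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand_total = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            # Expected count under the null hypothesis of independence.
            expected = row_totals[i] * col_totals[j] / grand_total
            stat += (observed - expected) ** 2 / expected
    return stat

# Hypothetical counts of correct / partially correct / incorrect responses
# per model (rows: AtlasGPT, ChatGPT, Gemini) -- placeholder values only.
counts = [
    [11, 18, 1],
    [10, 16, 4],
    [3, 6, 21],
]
stat = chi_square_statistic(counts)
df = (len(counts) - 1) * (len(counts[0]) - 1)  # degrees of freedom
print(f"chi-square = {stat:.2f} on {df} df")
```

The statistic would then be compared against the chi-square distribution with the given degrees of freedom to obtain the reported P value.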

Source journal: Journal of Medical Systems (Medicine – Health Care Sciences & Services)
CiteScore: 11.60
Self-citation rate: 1.90%
Annual publications: 83
Review turnaround: 4.8 months
Journal description: Journal of Medical Systems provides a forum for the presentation and discussion of the increasingly extensive applications of new systems techniques and methods in hospital, clinic, and physician's office administration; pathology, radiology, and pharmaceutical delivery systems; medical records storage and retrieval; and ancillary patient-support systems. The journal publishes informative articles, essays, and studies across the entire scale of medical systems, from large hospital programs to novel small-scale medical services. Education is an integral part of this amalgamation of sciences, and selected articles are published in this area. Since existing medical systems are constantly being modified to fit particular circumstances and to solve specific problems, the journal includes a special section devoted to status reports on current installations.