Testing ChatGPT's Ability to Provide Patient and Physician Information on Aortic Aneurysm

Journal of Surgical Research · IF 1.8 · Q2 (Surgery) · Volume 307, Pages 129-138 · Pub Date: 2025-03-01 · DOI: 10.1016/j.jss.2025.01.015
Daniel J. Bertges MD , Adam W. Beck MD , Marc Schermerhorn MD , Mark K. Eskandari MD , Jens Eldrup-Jorgensen MD , Sean Liebscher MD , Robyn Guinto MD , Mead Ferris MD , Andy Stanley MD , Georg Steinthorsson MD , Matthew Alef MD , Salvatore T. Scali MD

Abstract

Introduction

Our objective was to test the ability of ChatGPT 4.0 to provide accurate information for patients and physicians about abdominal aortic aneurysms (AAA) and to assess its alignment with Society for Vascular Surgery (SVS) clinical practice guidelines (CPGs) for AAA care.

Material and methods

Fifteen patient-level questions, 37 questions selected to reflect 28 SVS CPGs, and 4 questions regarding AAA rupture risk were posed to ChatGPT 4.0. A single response to each question was recorded and graded for accuracy and quality by ten board-certified vascular surgeons and two fellow trainees on a 5-point Likert scale: 1 = very poor, 2 = poor, 3 = fair, 4 = good, and 5 = excellent.

Results

The mean of the means (MoM) accuracy rating across all 15 patient-level questions was 4.4 (SD 0.4, quartile range [QR] 4.2-4.7). ChatGPT 4.0 demonstrated good alignment with SVS practice guidelines (MoM 4.2, SD 0.4, QR 3.9-4.5). The accuracy of responses was consistent across guideline categories: screening or surveillance (4.2), indications for surgery (4.5), preoperative risk assessment (4.5), perioperative coronary revascularization (4.1), and perioperative management (4.2). The generative artificial intelligence bot demonstrated only fair performance in answering questions on annual AAA rupture risk (MoM 3.4, SD 1.2, QR 2.3-4.3).
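The "mean of the means" statistic reported above is computed by first averaging each question's ratings across graders, then summarizing those per-question means. A minimal sketch of that calculation, using hypothetical Likert ratings rather than the study's actual data:

```python
import statistics

# Hypothetical Likert ratings (1-5): 12 graders (10 surgeons + 2 fellows)
# x 15 patient-level questions. Illustrative values only, not the study's data.
ratings = [
    [4, 5, 4, 4, 5, 4, 3, 5, 4, 4, 5, 4, 4, 5, 4]
    for _ in range(12)
]  # for simplicity, every grader gives the same scores here

# Mean rating per question, averaged across graders.
n_graders = len(ratings)
n_questions = len(ratings[0])
question_means = [
    sum(grader[q] for grader in ratings) / n_graders
    for q in range(n_questions)
]

# "Mean of the means" (MoM) and its SD across questions.
mom = statistics.mean(question_means)
sd = statistics.stdev(question_means)

# Quartile range (QR): first and third quartiles of the per-question means.
q1, q2, q3 = statistics.quantiles(question_means, n=4)
```

With real data the per-question means would vary across graders, so the SD and QR would reflect both question difficulty and inter-rater disagreement.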

Conclusions

ChatGPT 4.0 provided accurate responses to a variety of patient-level questions regarding AAA. Responses were well-aligned with current SVS CPGs except for inaccuracies in the risk of AAA rupture at varying diameters. The emergence of generative artificial intelligence bots presents an opportunity to study applications in patient education and to determine their ability to augment the vascular specialist's knowledge base.
Source journal metrics: CiteScore 3.90 · self-citation rate 4.50% · 627 articles published per year · review time 138 days.
About the journal: The Journal of Surgical Research: Clinical and Laboratory Investigation publishes original articles concerned with clinical and laboratory investigations relevant to surgical practice and teaching. The journal emphasizes reports of clinical investigations or fundamental research bearing directly on surgical management that will be of general interest to a broad range of surgeons and surgical researchers. The articles presented need not have been the products of surgeons or of surgical laboratories. The Journal of Surgical Research also features review articles and special articles relating to educational, research, or social issues of interest to the academic surgical community.