Assessing the clinical support capabilities of ChatGPT 4o and ChatGPT 4o mini in managing lumbar disc herniation.

European Journal of Medical Research | IF 3.4 | JCR Q2, CAS Tier 3 (Medicine) | Medicine, Research & Experimental
Published: 2025-01-22 | Volume 30(1): 45 | DOI: 10.1186/s40001-025-02296-x
Suning Wang, Ying Wang, Linlin Jiang, Yong Chang, Shiji Zhang, Kun Zhao, Lu Chen, Chunzheng Gao
Citations: 0

Abstract

Purpose: This study evaluated and compared the clinical support capabilities of ChatGPT 4o and ChatGPT 4o mini in diagnosing and treating lumbar disc herniation (LDH) with radiculopathy.

Methods: Twenty-one questions (across five categories) from the NASS Clinical Guidelines were input into ChatGPT 4o and ChatGPT 4o mini. Five orthopedic surgeons rated the responses on a 5-point Likert scale for accuracy and completeness and on a 7-point scale for reliability. Flesch Reading Ease scores were calculated to assess readability. Additionally, ChatGPT 4o analyzed lumbar images from 53 patients, and its agreement with orthopedic surgeons' readings was quantified using Cohen's kappa.
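As a rough illustration of the readability measure used above: the Flesch Reading Ease score is 206.835 − 1.015 × (words per sentence) − 84.6 × (syllables per word), with lower scores meaning harder text (below 30 is conventionally "very difficult to read"). The study does not specify its tooling, so this is a minimal sketch with a naive vowel-group syllable counter, not the authors' implementation:

```python
import re

def flesch_reading_ease(text: str) -> float:
    """Flesch Reading Ease:
    206.835 - 1.015*(words/sentences) - 84.6*(syllables/words)."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n_words = max(1, len(words))

    def syllables(word: str) -> int:
        # Crude heuristic: each contiguous vowel group counts as
        # one syllable, with a minimum of one per word.
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    n_syllables = sum(syllables(w) for w in words)
    return 206.835 - 1.015 * (n_words / sentences) - 84.6 * (n_syllables / n_words)
```

With this sketch, a simple sentence such as "The cat sat on the mat." scores above 100, while dense clinical prose scores far lower, which is consistent with the "very difficult to read" finding reported for both models.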

Results: Both models demonstrated strong clinical support capabilities with no significant differences in accuracy or reliability. However, ChatGPT 4o provided more comprehensive and consistent responses. The Flesch Reading Ease scores for both models indicated that their generated content was "very difficult to read," potentially limiting patient accessibility. In evaluating lumbar disc herniation images, ChatGPT 4o achieved an overall accuracy of 0.81, with LDH recognition precision, recall, and F1 scores exceeding 0.80. The AUC was 0.80, and the Kappa value was 0.61, indicating moderate agreement between the model's predictions and actual diagnoses, though with room for improvement.
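The image-evaluation metrics reported above (accuracy, precision, recall, F1, AUC, kappa) are all standard derivations from a confusion matrix. A self-contained sketch of how they are computed, using illustrative counts rather than the study's actual data:

```python
def classification_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Accuracy, precision, recall, F1, and Cohen's kappa
    from a 2x2 confusion matrix (binary classification)."""
    total = tp + fp + fn + tn
    accuracy = (tp + tn) / total
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    # Cohen's kappa: observed agreement corrected for chance agreement.
    p_observed = accuracy
    p_chance = ((tp + fp) / total) * ((tp + fn) / total) \
             + ((fn + tn) / total) * ((fp + tn) / total)
    kappa = (p_observed - p_chance) / (1 - p_chance)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1, "kappa": kappa}

# Hypothetical counts for a 53-patient cohort (not the study's data):
metrics = classification_metrics(tp=40, fp=6, fn=4, tn=3)
```

Note that kappa can be much lower than raw accuracy when the dataset is imbalanced, because chance agreement is high; this is why the study reports 0.81 accuracy but only 0.61 kappa ("moderate to substantial" agreement on the conventional Landis–Koch scale).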

Conclusion: While both models are effective, ChatGPT 4o offers more comprehensive clinical responses, making it more suitable for high-integrity medical tasks. However, the difficulty in reading AI-generated content and occasional use of misleading terms, such as "tumor," indicate a need for further improvements to reduce patient anxiety.


Source journal
European Journal of Medical Research (Medicine – Medicine, Research & Experimental)
CiteScore: 3.20
Self-citation rate: 0.00%
Annual articles: 247
Review time: >12 weeks
About the journal: European Journal of Medical Research publishes translational and clinical research of international interest across all medical disciplines, enabling clinicians and other researchers to learn about developments and innovations within these disciplines and across the boundaries between disciplines. The journal publishes high-quality research and reviews and aims to ensure that the results of all well-conducted research are published, regardless of their outcome.