Comparison of radiological interpretation made by veterinary radiologists and state-of-the-art commercial AI software for canine and feline radiographic studies.

Frontiers in Veterinary Science · IF 2.9 · JCR Q1 (Veterinary Sciences) · CAS Zone 2 (Agricultural & Forestry Sciences) · Published: 2025-02-21 · eCollection: 2025-01-01 · DOI: 10.3389/fvets.2025.1502790
Yero S Ndiaye, Peter Cramton, Chavdar Chernev, Axel Ockenfels, Tobias Schwarz

Abstract

Introduction: As human diagnostic expertise is scarce, especially in veterinary care, artificial intelligence (AI) has increasingly been used as a remedy. AI's promise lies in improving human diagnostics or providing good diagnostics at lower cost, thereby increasing access. This study analyzed the diagnostic performance of widely used AI radiology software vs. veterinary radiologists in interpreting canine and feline radiographs. We aimed to establish whether the performance of commonly used AI matches that of a typical radiologist and can thus be relied upon. Second, we sought to identify the cases in which AI is effective.

Methods: Fifty canine and feline radiographic studies in DICOM format were anonymized, reported by 11 board-certified veterinary radiologists (ECVDI or ACVR), and processed with widely used commercial AI software dedicated to small-animal radiography (SignalRAY®, SignalPET®, Dallas, TX, USA). The AI software used a deep-learning algorithm and returned a coded abnormal or normal diagnosis for each finding in the study. The radiologists provided written reports in English. All report findings were coded into categories matching the codes from the AI software and classified as normal or abnormal. The sensitivity, specificity, and accuracy of each radiologist and of the AI software were calculated. The variance in agreement between each radiologist and the AI software was measured to quantify the ambiguity of each radiological finding.
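The metrics described above can be sketched in a few lines of Python. This is an illustrative reconstruction, not the authors' analysis code: the function names, the coding of findings as the strings "normal"/"abnormal", and the use of population variance of per-reader agreement as the ambiguity measure are all assumptions based on the Methods description.

```python
def confusion_counts(truth, pred):
    """Count TP/TN/FP/FN, treating 'abnormal' as the positive class."""
    tp = sum(t == "abnormal" and p == "abnormal" for t, p in zip(truth, pred))
    tn = sum(t == "normal" and p == "normal" for t, p in zip(truth, pred))
    fp = sum(t == "normal" and p == "abnormal" for t, p in zip(truth, pred))
    fn = sum(t == "abnormal" and p == "normal" for t, p in zip(truth, pred))
    return tp, tn, fp, fn

def diagnostic_metrics(truth, pred):
    """Sensitivity, specificity, and accuracy for one reader (or the AI)."""
    tp, tn, fp, fn = confusion_counts(truth, pred)
    return {
        "sensitivity": tp / (tp + fn) if (tp + fn) else 0.0,
        "specificity": tn / (tn + fp) if (tn + fp) else 0.0,
        "accuracy": (tp + tn) / len(truth),
    }

def finding_ambiguity(radiologist_codes, ai_code):
    """Ambiguity of one finding: variance of per-radiologist agreement
    with the AI code (1 = agree, 0 = disagree) across the 11 readers."""
    agree = [1.0 if r == ai_code else 0.0 for r in radiologist_codes]
    mean = sum(agree) / len(agree)
    return sum((a - mean) ** 2 for a in agree) / len(agree)
```

Under this coding, a finding on which all 11 radiologists agree with the AI has ambiguity 0, while a finding that splits the readers approaches the maximum variance of 0.25.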

Results: AI matched the best radiologist in accuracy and was more specific but less sensitive than the human radiologists. AI outperformed the median radiologist overall in both low- and high-ambiguity cases. In high-ambiguity cases, AI's accuracy remained high, although it was less effective at detecting abnormalities and better at identifying normal findings. The study confirmed AI's reliability, especially in low-ambiguity scenarios.

Conclusion: Our findings suggest that AI performs almost as well as the best veterinary radiologist across all settings of descriptive radiographic findings. However, its strength lies more in confirming normality than in detecting abnormalities, and it does not provide differential diagnoses. Broader use of AI could therefore reliably increase diagnostic availability but still requires human input. Given the distinct strengths of human experts and AI, and the differences in sensitivity vs. specificity and in low- vs. high-ambiguity settings, AI will likely complement rather than replace human experts.

Source journal: Frontiers in Veterinary Science (Veterinary – General Veterinary)
CiteScore: 4.80 · Self-citation rate: 9.40% · Articles published: 1870 · Review time: 14 weeks
Journal description: Frontiers in Veterinary Science is a global, peer-reviewed, Open Access journal that bridges animal and human health, brings a comparative approach to medical and surgical challenges, and advances innovative biotechnology and therapy. Veterinary research today is interdisciplinary, collaborative, and socially relevant, transforming how we understand and investigate animal health and disease. Fundamental research in emerging infectious diseases, predictive genomics, stem cell therapy, and translational modelling is grounded within the integrative social context of public and environmental health, wildlife conservation, novel biomarkers, societal well-being, and cutting-edge clinical practice and specialization. Frontiers in Veterinary Science brings a 21st-century approach, networked, collaborative, and Open Access, to communicate this progress and innovation to both the specialist and the wider audience of readers in the field. Frontiers in Veterinary Science publishes articles on outstanding discoveries across a wide spectrum of translational, foundational, and clinical research. The journal's mission is to bring all relevant veterinary sciences together on a single platform with the goal of improving animal and human health.
Latest articles in this journal:
Establishment and field validation of a rapid on-site recombinase polymerase amplification-lateral flow assay for BRSV and BVDV.
A pilot study of a novel portable mass spectrometer for rapid, simultaneous detection of multiple anesthetic drug concentrations.
Advances in the immunosuppression of porcine reproductive and respiratory syndrome virus.
Correction: Shear wave speed changes in the cervix and vulvar lips of Kivircik ewes before and after parturition.
Case Report: Renal hemangiosarcoma in a free-ranging red fox (Vulpes vulpes).