Development and Validation of a Sham-AI Model for Intracranial Aneurysm Detection at CT Angiography

Zhao Shi, Bin Hu, Mengjie Lu, Manting Zhang, Haiting Yang, Bo He, Jiyao Ma, Chunfeng Hu, Li Lu, Sheng Li, Shiyu Ren, Yonggao Zhang, Jun Li, Mayidili Nijiati, Jia-Ke Dong, Hao Wang, Zhen Zhou, Fan Dong Zhang, Chengwei Pan, Yizhou Yu, Zijian Chen, Chang Sheng Zhou, Yongyue Wei, Junlin Zhou, Long Jiang Zhang

Radiology: Artificial Intelligence, e240140. Published online March 19, 2025. DOI: 10.1148/ryai.240140.
Abstract
"Just Accepted" papers have undergone full peer review and have been accepted for publication in Radiology: Artificial Intelligence. This article will undergo copyediting, layout, and proof review before it is published in its final version. Please note that during production of the final copyedited article, errors may be discovered which could affect the content. Purpose To evaluate a Sham-AI model acting as a placebo control for a Standard-AI model for intracranial aneurysm diagnosis. Materials and Methods This retrospective crossover, blinded, multireader multicase study was conducted from November 2022 to March 2023. A Sham-AI model with near-zero sensitivity and similar specificity to a Standard-AI model was developed using 16,422 CT angiography (CTA) examinations. Digital subtraction angiography-verified CTA examinations from four hospitals were collected, half of which were processed by Standard-AI and the others by Sham-AI to generate Sequence A; Sequence B was generated reversely. Twenty-eight radiologists from seven hospitals were randomly assigned with either sequence, and then assigned with the other sequence after a washout period. The diagnostic performances of radiologists alone, radiologists with Standard-AI-assisted, and radiologists with Sham-AI-assisted were compared using sensitivity and specificity, and radiologists' susceptibility to Sham-AI suggestions was assessed. Results The testing dataset included 300 patients (median age, 61 (IQR, 52.0-67.0) years; 199 male), 50 of which had aneurysms. Standard-AI and Sham-AI performed as expected (sensitivity: 96.0% versus 0.0%, specificity: 82.0% versus 76.0%). The differences in sensitivity and specificity between Standard-AI-assisted and Sham-AIassisted readings were +20.7% (95%CI: 15.8%-25.5%, superiority) and 0.0% (95%CI: -2.0%-2.0%, noninferiority), respectively. The difference between Sham-AI-assisted readings and radiologists alone was-2.6% (95%CI: -3.8%--1.4%, noninferiority) for both sensitivity and specificity. 5.3% (44/823) of true-positive and 1.2% (7/577) of false-negative results of radiologists alone were changed following Sham-AI suggestions. Conclusion Radiologists' diagnostic performance was not compromised when aided by the proposed Sham-AI model compared with their unassisted performance. Published under a CC BY 4.0 license.
Journal description:
Radiology: Artificial Intelligence is a bi-monthly publication that focuses on the emerging applications of machine learning and artificial intelligence in the field of imaging across various disciplines. This journal is available online and accepts multiple manuscript types, including Original Research, Technical Developments, Data Resources, Review articles, Editorials, Letters to the Editor and Replies, Special Reports, and AI in Brief.