
Latest Publications in Radiology-Artificial Intelligence

Improving Automated Hemorrhage Detection at Sparse-View CT via U-Net-based Artifact Reduction.
IF 8.1 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-07-01 DOI: 10.1148/ryai.230275
Johannes Thalhammer, Manuel Schultheiß, Tina Dorosti, Tobias Lasser, Franz Pfeiffer, Daniela Pfeiffer, Florian Schaff

Purpose To explore the potential benefits of deep learning-based artifact reduction in sparse-view cranial CT scans and its impact on automated hemorrhage detection. Materials and Methods In this retrospective study, a U-Net was trained for artifact reduction on simulated sparse-view cranial CT scans in 3000 patients, obtained from a public dataset and reconstructed with varying sparse-view levels. Additionally, EfficientNet-B2 was trained on full-view CT data from 17 545 patients for automated hemorrhage detection. Detection performance was evaluated using the area under the receiver operating characteristic curve (AUC), with differences assessed using the DeLong test, along with confusion matrices. A total variation (TV) postprocessing approach, commonly applied to sparse-view CT, served as the basis for comparison. A Bonferroni-corrected significance level of .001/6 = .00017 was used to account for multiple hypothesis testing. Results Images with U-Net postprocessing outperformed unprocessed and TV-processed images in both image quality and automated hemorrhage detection. With U-Net postprocessing, the number of views could be reduced from 4096 (AUC: 0.97 [95% CI: 0.97, 0.98]) to 512 (0.97 [95% CI: 0.97, 0.98], P < .00017) and to 256 views (0.97 [95% CI: 0.96, 0.97], P < .00017) with a minimal decrease in hemorrhage detection performance. This was accompanied by mean structural similarity index measure increases of 0.0210 (95% CI: 0.0210, 0.0211) and 0.0560 (95% CI: 0.0559, 0.0560) relative to unprocessed images. Conclusion U-Net-based artifact reduction substantially enhanced automated hemorrhage detection in sparse-view cranial CT scans. Keywords: CT, Head/Neck, Hemorrhage, Diagnosis, Supervised Learning Supplemental material is available for this article. © RSNA, 2024.
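To make the reported evaluation concrete, the following is a minimal sketch, not the authors' code, of the two metrics this abstract relies on: the structural similarity index measure (SSIM) between postprocessed and full-view reference slices, and the Bonferroni-corrected significance threshold of .001/6. All arrays and detector scores below are hypothetical placeholders.

```python
# Minimal sketch of the evaluation metrics described above; not the authors'
# code. All data below are hypothetical placeholders.
import numpy as np
from skimage.metrics import structural_similarity as ssim
from sklearn.metrics import roc_auc_score

ALPHA, N_TESTS = 0.001, 6
BONFERRONI_THRESHOLD = ALPHA / N_TESTS  # .00017, as stated in the abstract

def mean_ssim(processed: np.ndarray, reference: np.ndarray) -> float:
    """Mean SSIM over a stack of 2D slices with shape (N, H, W)."""
    return float(np.mean([
        ssim(p, r, data_range=float(r.max() - r.min()))
        for p, r in zip(processed, reference)
    ]))

# Hypothetical detector scores on sparse-view images after postprocessing.
rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=500)          # ground-truth hemorrhage labels
scores = 0.6 * labels + 0.4 * rng.random(500)  # detector outputs in [0, 1]
print(f"AUC = {roc_auc_score(labels, scores):.2f}")
print(f"Bonferroni-corrected threshold = {BONFERRONI_THRESHOLD:.5f}")
```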

"刚刚接受 "的论文经过同行评审,已被接受在《放射学》上发表:人工智能》上发表。这篇文章在以最终版本发表之前,还将经过校对、排版和校对审核。请注意,在制作最终校对稿的过程中,可能会发现一些错误,从而影响文章内容。目的 探讨在稀疏视图头颅 CT 扫描中基于深度学习减少伪影的潜在好处及其对自动出血检测的影响。材料与方法 在这项回顾性研究中,对 U-Net 进行了训练,以减少从公共数据集中获取的 3000 名患者的模拟稀疏视图头颅 CT 扫描中的伪影,并以不同的稀疏视图水平进行重建。此外,EfficientNetB2 还在来自 17,545 名患者的全视角 CT 数据上进行了自动出血检测训练。检测性能采用接收器操作者特征曲线下面积(AUC)进行评估,差异采用 DeLong 检验和混淆矩阵进行评估。通常应用于稀疏视图的总变异(TV)后处理方法是比较的基础。采用 Bonferronic 校正显著性水平 0.001/6 = 0.00017,以适应多重假设检验。结果 在图像质量和出血自动检测方面,经过 U-Net 后处理的图像优于未经处理的图像和经过 TV 处理的图像。通过 U-Net 后处理,视图数量可从 4096 个(AUC:0.97;95% CI:0.97-0.98)减少到 512 个(0.97;0.97-0.98;P < .00017)和 256 个视图(0.97;0.96-0.97;P < .00017),而出血检测性能下降极小。与未经处理的图像相比,平均结构相似性指数分别增加了 0.0210 (95% CI: 0.0210-0.0211) 和 0.0560 (95% CI: 0.0559-0.0560) 。结论 基于 U-Net 的伪影去除技术大大提高了稀疏视角头颅 CT 中出血的自动检测能力。©RSNA, 2024.
{"title":"Improving Automated Hemorrhage Detection at Sparse-View CT via U-Net-based Artifact Reduction.","authors":"Johannes Thalhammer, Manuel Schultheiß, Tina Dorosti, Tobias Lasser, Franz Pfeiffer, Daniela Pfeiffer, Florian Schaff","doi":"10.1148/ryai.230275","DOIUrl":"10.1148/ryai.230275","url":null,"abstract":"<p><p>Purpose To explore the potential benefits of deep learning-based artifact reduction in sparse-view cranial CT scans and its impact on automated hemorrhage detection. Materials and Methods In this retrospective study, a U-Net was trained for artifact reduction on simulated sparse-view cranial CT scans in 3000 patients, obtained from a public dataset and reconstructed with varying sparse-view levels. Additionally, EfficientNet-B2 was trained on full-view CT data from 17 545 patients for automated hemorrhage detection. Detection performance was evaluated using the area under the receiver operating characteristic curve (AUC), with differences assessed using the DeLong test, along with confusion matrices. A total variation (TV) postprocessing approach, commonly applied to sparse-view CT, served as the basis for comparison. A Bonferroni-corrected significance level of .001/6 = .00017 was used to accommodate for multiple hypotheses testing. Results Images with U-Net postprocessing were better than unprocessed and TV-processed images with respect to image quality and automated hemorrhage detection. With U-Net postprocessing, the number of views could be reduced from 4096 (AUC: 0.97 [95% CI: 0.97, 0.98]) to 512 (0.97 [95% CI: 0.97, 0.98], <i>P</i> < .00017) and to 256 views (0.97 [95% CI: 0.96, 0.97], <i>P</i> < .00017) with a minimal decrease in hemorrhage detection performance. This was accompanied by mean structural similarity index measure increases of 0.0210 (95% CI: 0.0210, 0.0211) and 0.0560 (95% CI: 0.0559, 0.0560) relative to unprocessed images. Conclusion U-Net-based artifact reduction substantially enhanced automated hemorrhage detection in sparse-view cranial CT scans. <b>Keywords:</b> CT, Head/Neck, Hemorrhage, Diagnosis, Supervised Learning <i>Supplemental material is available for this article.</i> © RSNA, 2024.</p>","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":null,"pages":null},"PeriodicalIF":8.1,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11294955/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140877469","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Vision Transformer-based Deep Learning Models Accelerate Further Research for Predicting Neurosurgical Intervention.
IF 8.1 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-07-01 DOI: 10.1148/ryai.240117
Kengo Takahashi, Takuma Usuzaki, Ryusei Inamori
{"title":"Vision Transformer-based Deep Learning Models Accelerate Further Research for Predicting Neurosurgical Intervention.","authors":"Kengo Takahashi, Takuma Usuzaki, Ryusei Inamori","doi":"10.1148/ryai.240117","DOIUrl":"10.1148/ryai.240117","url":null,"abstract":"","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":null,"pages":null},"PeriodicalIF":8.1,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11294944/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141307009","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Bridging Pixels to Genes.
IF 8.1 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-07-01 DOI: 10.1148/ryai.240262
Mana Moassefi, Bradley J Erickson
{"title":"Bridging Pixels to Genes.","authors":"Mana Moassefi, Bradley J Erickson","doi":"10.1148/ryai.240262","DOIUrl":"10.1148/ryai.240262","url":null,"abstract":"","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":null,"pages":null},"PeriodicalIF":8.1,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11294947/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141427771","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
From Nicki Minaj to Neuroblastoma: What Rigorous Approaches to Rhythms and Radiomics Have in Common.
IF 8.1 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-07-01 DOI: 10.1148/ryai.240350
Nabile M Safdar, Alina Galaria
{"title":"From Nicki Minaj to Neuroblastoma: What Rigorous Approaches to Rhythms and Radiomics Have in Common.","authors":"Nabile M Safdar, Alina Galaria","doi":"10.1148/ryai.240350","DOIUrl":"10.1148/ryai.240350","url":null,"abstract":"","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":null,"pages":null},"PeriodicalIF":8.1,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11294945/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141627888","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Clinical, Cultural, Computational, and Regulatory Considerations to Deploy AI in Radiology: Perspectives of RSNA and MICCAI Experts.
IF 8.1 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-07-01 DOI: 10.1148/ryai.240225
Marius George Linguraru, Spyridon Bakas, Mariam Aboian, Peter D Chang, Adam E Flanders, Jayashree Kalpathy-Cramer, Felipe C Kitamura, Matthew P Lungren, John Mongan, Luciano M Prevedello, Ronald M Summers, Carol C Wu, Maruf Adewole, Charles E Kahn

The Radiological Society of North America (RSNA) and the Medical Image Computing and Computer Assisted Intervention (MICCAI) Society have led a series of joint panels and seminars focused on the present impact and future directions of artificial intelligence (AI) in radiology. These conversations have collected viewpoints from multidisciplinary experts in radiology, medical imaging, and machine learning on the current clinical penetration of AI technology in radiology and how it is affected by trust, reproducibility, explainability, and accountability. These collective points, both practical and philosophical, define the cultural changes required for radiologists and AI scientists to work together and describe the challenges ahead for AI technologies to meet broad approval. This article presents the perspectives of experts from MICCAI and RSNA on the clinical, cultural, computational, and regulatory considerations, coupled with recommended reading materials, essential to adopting AI technology successfully in radiology and, more generally, in clinical practice. The report emphasizes the importance of collaboration to improve clinical deployment, highlights the need to integrate clinical and medical imaging data, and introduces strategies to ensure smooth and incentivized integration. Keywords: Adults and Pediatrics, Computer Applications-General (Informatics), Diagnosis, Prognosis © RSNA, 2024.

"刚刚接受 "的论文经过同行评审,已被接受在《放射学》上发表:人工智能》上发表。这篇文章在以最终版本发表之前,还将经过校对、排版和校对审核。请注意,在制作最终校对稿的过程中,可能会发现一些可能影响内容的错误。北美放射学会(RSNA)和医学影像计算与计算机辅助介入学会(MICCAI)联合举办了一系列专题讨论会和研讨会,重点探讨人工智能(AI)在放射学领域的当前影响和未来发展方向。这些对话收集了来自放射学、医学影像和机器学习等多学科专家的观点,探讨了人工智能技术目前在放射学中的临床应用,以及它如何受到信任、可重复性、可解释性和问责制的影响。这些观点从实践和哲学角度共同定义了放射科医生和人工智能科学家合作的文化变革,并描述了人工智能技术要获得广泛认可所面临的挑战。本文介绍了来自 MICCAI 和 RSNA 的专家对临床、文化、计算和监管方面的考虑因素的观点,以及推荐的阅读材料,这些因素对于在放射学和更广泛的临床实践中成功采用人工智能技术至关重要。该报告强调了合作对于改进临床部署的重要性,强调了整合临床和医学影像数据的必要性,并介绍了确保顺利整合和激励整合的策略。©RSNA,2024。
{"title":"Clinical, Cultural, Computational, and Regulatory Considerations to Deploy AI in Radiology: Perspectives of RSNA and MICCAI Experts.","authors":"Marius George Linguraru, Spyridon Bakas, Mariam Aboian, Peter D Chang, Adam E Flanders, Jayashree Kalpathy-Cramer, Felipe C Kitamura, Matthew P Lungren, John Mongan, Luciano M Prevedello, Ronald M Summers, Carol C Wu, Maruf Adewole, Charles E Kahn","doi":"10.1148/ryai.240225","DOIUrl":"10.1148/ryai.240225","url":null,"abstract":"<p><p>The Radiological Society of North of America (RSNA) and the Medical Image Computing and Computer Assisted Intervention (MICCAI) Society have led a series of joint panels and seminars focused on the present impact and future directions of artificial intelligence (AI) in radiology. These conversations have collected viewpoints from multidisciplinary experts in radiology, medical imaging, and machine learning on the current clinical penetration of AI technology in radiology and how it is impacted by trust, reproducibility, explainability, and accountability. The collective points-both practical and philosophical-define the cultural changes for radiologists and AI scientists working together and describe the challenges ahead for AI technologies to meet broad approval. This article presents the perspectives of experts from MICCAI and RSNA on the clinical, cultural, computational, and regulatory considerations-coupled with recommended reading materials-essential to adopt AI technology successfully in radiology and, more generally, in clinical practice. The report emphasizes the importance of collaboration to improve clinical deployment, highlights the need to integrate clinical and medical imaging data, and introduces strategies to ensure smooth and incentivized integration. <b>Keywords:</b> Adults and Pediatrics, Computer Applications-General (Informatics), Diagnosis, Prognosis © RSNA, 2024.</p>","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":null,"pages":null},"PeriodicalIF":8.1,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11294958/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141564666","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Optimizing Performance of Transformer-based Models for Fetal Brain MR Image Segmentation.
IF 8.1 Pub Date: 2024-06-26 DOI: 10.1148/ryai.230229
Nicolò Pecco, Pasquale Anthony Della Rosa, Matteo Canini, Gianluca Nocera, Paola Scifo, Paolo Ivo Cavoretto, Massimo Candiani, Andrea Falini, Antonella Castellano, Cristina Baldoli

"Just Accepted" papers have undergone full peer review and have been accepted for publication in Radiology: Artificial Intelligence. This article will undergo copyediting, layout, and proof review before it is published in its final version. Please note that during production of the final copyedited article, errors may be discovered which could affect the content. Purpose To test transformer-based models' performance when manipulating pretraining weights, dataset size, input size and comparing the best-model with reference standard and state-of-the-art models for a resting-state functional (rs-fMRI) fetal brain extraction task. Materials and Methods An internal retrospective dataset (fetuses = 172; images = 519; collected from 2018-2022) was used to investigate influence of dataset size, pretraining approaches and image input size on Swin-UNETR and UNETR models. The internal and an external (fetuses = 131; images = 561) datasets were used to cross-validate and to assess generalization capability of the best model against state-of-the-art models on different scanner types and number of gestational weeks (GW). The Dice similarity coefficient (DSC) and the Balanced average Hausdorff distance (BAHD) were used as segmentation performance metrics. GEE multifactorial models were used to assess significant model and interaction effects of interest. Results Swin-UNETR was not affected by pretraining approach and dataset size and performed best with the mean dataset image size, with a mean DSC of 0.92 and BAHD of 0.097. The Swin-UNETR was not affected by scanner type. Generalization results on the internal dataset showed that Swin-UNETR had lower performances compared with reference standard models and comparable performances on the external dataset. Cross-validation on internal and external test sets demonstrated better and comparable performance of Swin-UNETR versus convolutional neural network architectures during the late-fetal period (GWs > 25) but lower performance during the midfetal period (GWs ≤ 25). Conclusion Swin-UNTER showed flexibility in dealing with smaller datasets, regardless of pretraining approaches. For fetal brain extraction of rs-fMRI, Swin-UNTER showed comparable performance with reference standard models during the late-fetal period and lower performance during the early GW period. ©RSNA, 2024.

"刚刚接受 "的论文经过同行评审,已被接受在《放射学》上发表:人工智能》上发表。这篇文章在以最终版本发表之前,还将经过校对、排版和校对审核。请注意,在制作最终校对稿的过程中,可能会发现影响内容的错误。目的 测试基于转换器的模型在处理预训练权重、数据集大小、输入大小时的性能,并将最佳模型与参考标准模型和最先进模型进行比较,用于静息态功能(rs-fMRI)胎儿大脑提取任务。材料与方法 使用内部回顾性数据集(胎儿 = 172;图像 = 519;收集时间为 2018-2022 年)研究数据集大小、预训练方法和图像输入大小对 Swin-UNETR 和 UNETR 模型的影响。内部数据集和外部数据集(胎儿 = 131;图像 = 561)用于交叉验证和评估最佳模型在不同扫描仪类型和孕周数(GW)上与最先进模型的泛化能力。狄斯相似系数(DSC)和平衡平均豪斯多夫距离(BAHD)被用作分割性能指标。使用 GEE 多因素模型来评估感兴趣的重要模型和交互效应。结果 Swin-UNETR 不受预训练方法和数据集大小的影响,在使用平均数据集图像大小时表现最佳,平均 DSC 为 0.92,BAHD 为 0.097。Swin-UNETR 不受扫描仪类型的影响。内部数据集的泛化结果表明,与参考标准模型相比,Swin-UNETR 的性能较低,而在外部数据集上的性能相当。内部和外部测试集的交叉验证结果表明,在胎儿晚期(GWs > 25),Swin-UNETR 与卷积神经网络架构的性能更好,两者性能相当,但在胎儿中期(GWs ≤ 25),Swin-UNETR 的性能较低。结论 无论采用哪种预训练方法,Swin-UNTER 在处理较小的数据集时都表现出了灵活性。对于 rs-fMRI 的胎儿大脑提取,Swin-UNTER 在胎儿晚期表现出与参考标准模型相当的性能,而在 GW 早期表现较差。©RSNA,2024。
{"title":"Optimizing Performance of Transformer-based Models for Fetal Brain MR Image Segmentation.","authors":"Nicolò Pecco, Pasquale Anthony Della Rosa, Matteo Canini, Gianluca Nocera, Paola Scifo, Paolo Ivo Cavoretto, Massimo Candiani, Andrea Falini, Antonella Castellano, Cristina Baldoli","doi":"10.1148/ryai.230229","DOIUrl":"https://doi.org/10.1148/ryai.230229","url":null,"abstract":"<p><p><i>\"Just Accepted\" papers have undergone full peer review and have been accepted for publication in <i>Radiology: Artificial Intelligence</i>. This article will undergo copyediting, layout, and proof review before it is published in its final version. Please note that during production of the final copyedited article, errors may be discovered which could affect the content.</i> Purpose To test transformer-based models' performance when manipulating pretraining weights, dataset size, input size and comparing the best-model with reference standard and state-of-the-art models for a resting-state functional (rs-fMRI) fetal brain extraction task. Materials and Methods An internal retrospective dataset (fetuses = 172; images = 519; collected from 2018-2022) was used to investigate influence of dataset size, pretraining approaches and image input size on Swin-UNETR and UNETR models. The internal and an external (fetuses = 131; images = 561) datasets were used to cross-validate and to assess generalization capability of the best model against state-of-the-art models on different scanner types and number of gestational weeks (GW). The Dice similarity coefficient (DSC) and the Balanced average Hausdorff distance (BAHD) were used as segmentation performance metrics. GEE multifactorial models were used to assess significant model and interaction effects of interest. Results Swin-UNETR was not affected by pretraining approach and dataset size and performed best with the mean dataset image size, with a mean DSC of 0.92 and BAHD of 0.097. The Swin-UNETR was not affected by scanner type. Generalization results on the internal dataset showed that Swin-UNETR had lower performances compared with reference standard models and comparable performances on the external dataset. Cross-validation on internal and external test sets demonstrated better and comparable performance of Swin-UNETR versus convolutional neural network architectures during the late-fetal period (GWs > 25) but lower performance during the midfetal period (GWs ≤ 25). Conclusion Swin-UNTER showed flexibility in dealing with smaller datasets, regardless of pretraining approaches. For fetal brain extraction of rs-fMRI, Swin-UNTER showed comparable performance with reference standard models during the late-fetal period and lower performance during the early GW period. ©RSNA, 2024.</p>","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":null,"pages":null},"PeriodicalIF":8.1,"publicationDate":"2024-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141451658","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Performance of an Artificial Intelligence System for Breast Cancer Detection on Screening Mammograms from BreastScreen Norway.
IF 9.8 Pub Date: 2024-05-01 DOI: 10.1148/ryai.230375
Marthe Larsen, Camilla F Olstad, Christoph I Lee, Tone Hovda, Solveig R Hoff, Marit A Martiniussen, Karl Øyvind Mikalsen, Håkon Lund-Hanssen, Helene S Solli, Marko Silberhorn, Åse Ø Sulheim, Steinar Auensen, Jan F Nygård, Solveig Hofvind

Purpose To explore the stand-alone breast cancer detection performance, at different risk score thresholds, of a commercially available artificial intelligence (AI) system. Materials and Methods This retrospective study included information from 661 695 digital mammographic examinations performed among 242 629 female individuals screened as a part of BreastScreen Norway, 2004-2018. The study sample included 3807 screen-detected cancers and 1110 interval breast cancers. A continuous examination-level risk score by the AI system was used to measure performance as the area under the receiver operating characteristic curve (AUC) with 95% CIs and cancer detection at different AI risk score thresholds. Results The AUC of the AI system was 0.93 (95% CI: 0.92, 0.93) for screen-detected cancers and interval breast cancers combined and 0.97 (95% CI: 0.97, 0.97) for screen-detected cancers. In a setting where 10% of the examinations with the highest AI risk scores were defined as positive and 90% with the lowest scores as negative, 92.0% (3502 of 3807) of the screen-detected cancers and 44.6% (495 of 1110) of the interval breast cancers were identified with AI. In this scenario, 68.5% (10 987 of 16 040) of false-positive screening results (negative recall assessment) were considered negative by AI. When 50% was used as the cutoff, 99.3% (3781 of 3807) of the screen-detected cancers and 85.2% (946 of 1110) of the interval breast cancers were identified as positive by AI, whereas 17.0% (2725 of 16 040) of the false-positive results were considered negative. Conclusion The AI system showed high performance in detecting breast cancers within 2 years of screening mammography and a potential for use to triage low-risk mammograms to reduce radiologist workload. Keywords: Mammography, Breast, Screening, Convolutional Neural Network (CNN), Deep Learning Algorithms Supplemental material is available for this article. © RSNA, 2024 See also commentary by Bahl and Do in this issue.
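The threshold analysis described above reduces to ranking examinations by AI risk score and flagging the top fraction. The sketch below, with entirely synthetic scores and prevalence, illustrates that calculation; it is not the vendor's algorithm.

```python
# Minimal sketch of the risk score threshold analysis described above,
# with entirely synthetic data; not the vendor's algorithm.
import numpy as np

def detection_at_threshold(scores, is_cancer, flag_fraction):
    """Fraction of all cancers falling in the top `flag_fraction` of scores."""
    scores = np.asarray(scores)
    is_cancer = np.asarray(is_cancer, dtype=bool)
    cutoff = np.quantile(scores, 1.0 - flag_fraction)
    flagged = scores >= cutoff
    return is_cancer[flagged].sum() / is_cancer.sum()

# Synthetic cohort: cancers tend to receive higher risk scores than negatives.
rng = np.random.default_rng(2)
is_cancer = rng.random(100_000) < 0.005
scores = np.where(is_cancer, rng.beta(5, 2, 100_000), rng.beta(2, 5, 100_000))
for frac in (0.10, 0.50):
    share = detection_at_threshold(scores, is_cancer, frac)
    print(f"Flagging top {frac:.0%} of exams captures {share:.1%} of cancers")
```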

"刚刚接受 "的论文经过同行评审,已被接受在《放射学》上发表:人工智能》上发表。这篇文章在以最终版本发表之前,还将经过校对、排版和校对审核。请注意,在制作最终校对稿的过程中,可能会发现一些错误,从而影响文章内容。目的 探讨市售人工智能(AI)系统在不同风险评分阈值下的独立乳腺癌检测性能。材料与方法 这项回顾性研究纳入了 2004-2018 年作为 x 的一部分进行筛查的 242629 名女性中进行的 661695 次数字乳腺 X 光检查的信息。研究样本包括 3807 例筛查出的癌症(SDC)和 1110 例间期乳腺癌(IC)。采用人工智能系统的连续检查水平风险评分来衡量不同人工智能风险评分阈值下的接收者操作特征曲线下面积(AUC)及 95% CIs 和癌症检出率的性能。结果 AI 系统对 SDC 和 IC 的 AUC 值分别为 0.93(95% CI:0.92-0.93)和 0.97(95% CI:0.97-0.97)。在 AI 风险评分最高的检查中有 10% 被定义为阳性,评分最低的检查中有 90% 被定义为阴性的情况下,92.0%(3502/3807)的 SDC 和 44.6%(495/1100)的 IC 是通过 AI 识别的。在这种情况下,68.5%(10 987/16 029)的假阳性筛查结果(阴性回忆评估)被人工智能视为阴性。当以 50%为临界值时,人工智能识别出 99.3%(3781/3807)的 SDC 和 85.2%(946/1100)的 IC 为阳性,而 17.0%(2725/16 029)的假阳性结果被视为阴性。结论 人工智能系统在乳腺放射摄影筛查后两年内检测出乳腺癌方面表现出很高的性能,并有可能对低风险乳腺放射摄影进行分流,以减少放射医师的工作量。©RSNA,2024。
{"title":"Performance of an Artificial Intelligence System for Breast Cancer Detection on Screening Mammograms from BreastScreen Norway.","authors":"Marthe Larsen, Camilla F Olstad, Christoph I Lee, Tone Hovda, Solveig R Hoff, Marit A Martiniussen, Karl Øyvind Mikalsen, Håkon Lund-Hanssen, Helene S Solli, Marko Silberhorn, Åse Ø Sulheim, Steinar Auensen, Jan F Nygård, Solveig Hofvind","doi":"10.1148/ryai.230375","DOIUrl":"10.1148/ryai.230375","url":null,"abstract":"<p><p>Purpose To explore the stand-alone breast cancer detection performance, at different risk score thresholds, of a commercially available artificial intelligence (AI) system. Materials and Methods This retrospective study included information from 661 695 digital mammographic examinations performed among 242 629 female individuals screened as a part of BreastScreen Norway, 2004-2018. The study sample included 3807 screen-detected cancers and 1110 interval breast cancers. A continuous examination-level risk score by the AI system was used to measure performance as the area under the receiver operating characteristic curve (AUC) with 95% CIs and cancer detection at different AI risk score thresholds. Results The AUC of the AI system was 0.93 (95% CI: 0.92, 0.93) for screen-detected cancers and interval breast cancers combined and 0.97 (95% CI: 0.97, 0.97) for screen-detected cancers. In a setting where 10% of the examinations with the highest AI risk scores were defined as positive and 90% with the lowest scores as negative, 92.0% (3502 of 3807) of the screen-detected cancers and 44.6% (495 of 1110) of the interval breast cancers were identified with AI. In this scenario, 68.5% (10 987 of 16 040) of false-positive screening results (negative recall assessment) were considered negative by AI. When 50% was used as the cutoff, 99.3% (3781 of 3807) of the screen-detected cancers and 85.2% (946 of 1110) of the interval breast cancers were identified as positive by AI, whereas 17.0% (2725 of 16 040) of the false-positive results were considered negative. Conclusion The AI system showed high performance in detecting breast cancers within 2 years of screening mammography and a potential for use to triage low-risk mammograms to reduce radiologist workload. <b>Keywords:</b> Mammography, Breast, Screening, Convolutional Neural Network (CNN), Deep Learning Algorithms <i>Supplemental material is available for this article</i>. © RSNA, 2024 See also commentary by Bahl and Do in this issue.</p>","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":null,"pages":null},"PeriodicalIF":9.8,"publicationDate":"2024-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11140504/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140862082","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Efficient Health Care: Decreasing MRI Scan Time.
IF 9.8 Pub Date: 2024-05-01 DOI: 10.1148/ryai.240174
Farid GharehMohammadi, Ronnie A Sebro
{"title":"Efficient Health Care: Decreasing MRI Scan Time.","authors":"Farid GharehMohammadi, Ronnie A Sebro","doi":"10.1148/ryai.240174","DOIUrl":"10.1148/ryai.240174","url":null,"abstract":"","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":null,"pages":null},"PeriodicalIF":9.8,"publicationDate":"2024-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11140514/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140865263","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Semi-supervised Learning for Generalizable Intracranial Hemorrhage Detection and Segmentation.
IF 9.8 Pub Date: 2024-05-01 DOI: 10.1148/ryai.230077
Emily Lin, Esther L Yuh

Purpose To develop and evaluate a semi-supervised learning model for intracranial hemorrhage detection and segmentation on an out-of-distribution head CT evaluation set. Materials and Methods This retrospective study used semi-supervised learning to bootstrap performance. An initial "teacher" deep learning model was trained on 457 pixel-labeled head CT scans collected from one U.S. institution from 2010 to 2017 and used to generate pseudo labels on a separate unlabeled corpus of 25 000 examinations from the Radiological Society of North America and American Society of Neuroradiology. A second "student" model was trained on this combined pixel- and pseudo-labeled dataset. Hyperparameter tuning was performed on a validation set of 93 scans. Testing for both classification (n = 481 examinations) and segmentation (n = 23 examinations, or 529 images) was performed on CQ500, a dataset of 481 scans performed in India, to evaluate out-of-distribution generalizability. The semi-supervised model was compared with a baseline model trained on only labeled data using area under the receiver operating characteristic curve, Dice similarity coefficient, and average precision metrics. Results The semi-supervised model achieved a statistically significant higher examination area under the receiver operating characteristic curve on CQ500 compared with the baseline (0.939 [95% CI: 0.938, 0.940] vs 0.907 [95% CI: 0.906, 0.908]; P = .009). It also achieved a higher Dice similarity coefficient (0.829 [95% CI: 0.825, 0.833] vs 0.809 [95% CI: 0.803, 0.812]; P = .012) and pixel average precision (0.848 [95% CI: 0.843, 0.853]) vs 0.828 [95% CI: 0.817, 0.828]) compared with the baseline. Conclusion The addition of unlabeled data in a semi-supervised learning framework demonstrates stronger generalizability potential for intracranial hemorrhage detection and segmentation compared with a supervised baseline. Keywords: Semi-supervised Learning, Traumatic Brain Injury, CT, Machine Learning Supplemental material is available for this article. Published under a CC BY 4.0 license. See also the commentary by Swimburne in this issue.
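For readers unfamiliar with the teacher-student setup described above, the following is a minimal PyTorch sketch of pseudo-label generation and student training. The `teacher`, `student`, and data collections are assumed stand-ins, not the authors' implementation.

```python
# Minimal sketch of teacher-student pseudo-labeling as described above;
# not the authors' implementation. `teacher` and `student` are assumed to be
# segmentation networks, and `unlabeled_loader` yields batches of CT tensors.
import torch

@torch.no_grad()
def make_pseudo_labels(teacher, unlabeled_loader, threshold=0.5):
    """Run the frozen teacher on unlabeled scans and binarize its output."""
    teacher.eval()
    pairs = []
    for images in unlabeled_loader:
        probs = torch.sigmoid(teacher(images))  # per-pixel hemorrhage probability
        pairs.append((images, (probs > threshold).float()))
    return pairs

def train_student(student, labeled_pairs, pseudo_pairs, epochs=1, lr=1e-4):
    """Train the student on the combined pixel- and pseudo-labeled data."""
    opt = torch.optim.Adam(student.parameters(), lr=lr)
    loss_fn = torch.nn.BCEWithLogitsLoss()
    student.train()
    for _ in range(epochs):
        for images, masks in labeled_pairs + pseudo_pairs:
            opt.zero_grad()
            loss = loss_fn(student(images), masks)
            loss.backward()
            opt.step()
    return student
```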

"刚刚接受 "的论文经过同行评审,已被接受在《放射学》上发表:人工智能》上发表。这篇文章在以最终版本发表之前,还将经过校对、排版和校对审核。请注意,在制作最终校对稿的过程中,可能会发现影响文章内容的错误。目的 在分布外头部 CT 评估集上开发和评估用于颅内出血检测和分割的半监督学习模型。材料与方法 这项回顾性研究使用半监督学习来引导性能。最初的 "教师 "深度学习模型是在 2010-2017 年间从一家美国机构收集的 457 个像素标记的头部 CT 扫描上训练的,并用于在来自 RSNA 和 ASNR 的 25,000 次检查的单独无标记语料库上生成伪标签。第二个 "学生 "模型是在这个像素与伪标签相结合的数据集上进行训练的。超参数调整在 93 个扫描的验证集上进行。分类(n = 481 次检查)和分割(n = 23 次检查,或 529 张图像)测试在 CQ500(印度进行的 481 次扫描的数据集)上进行,以评估分布外的通用性。使用接收者工作特征曲线下面积 (AUC)、Dice 相似性系数 (DSC) 和平均精确度 (AP) 指标,将半监督模型与仅在标记数据上训练的基线模型进行比较。结果 与基线模型相比,半监督模型在 CQ500 上的检查 AUC 明显更高(0.939 [0.938, 0.940] 对 0.907 [0.906, 0.908])(P = .009)。与基线相比,DSC(0.829 [0.825, 0.833] 对 0.809 [0.803, 0.812])(P = .012)和 Pixel AP(0.848 [0.843, 0.853])对 0.828 [0.817, 0.828])也更高。结论 与监督基线相比,在半监督学习框架中加入无标记数据,可为颅内出血检测和分割提供更强的通用性。©RSNA, 2024.
{"title":"Semi-supervised Learning for Generalizable Intracranial Hemorrhage Detection and Segmentation.","authors":"Emily Lin, Esther L Yuh","doi":"10.1148/ryai.230077","DOIUrl":"10.1148/ryai.230077","url":null,"abstract":"<p><p>Purpose To develop and evaluate a semi-supervised learning model for intracranial hemorrhage detection and segmentation on an out-of-distribution head CT evaluation set. Materials and Methods This retrospective study used semi-supervised learning to bootstrap performance. An initial \"teacher\" deep learning model was trained on 457 pixel-labeled head CT scans collected from one U.S. institution from 2010 to 2017 and used to generate pseudo labels on a separate unlabeled corpus of 25 000 examinations from the Radiological Society of North America and American Society of Neuroradiology. A second \"student\" model was trained on this combined pixel- and pseudo-labeled dataset. Hyperparameter tuning was performed on a validation set of 93 scans. Testing for both classification (<i>n</i> = 481 examinations) and segmentation (<i>n</i> = 23 examinations, or 529 images) was performed on CQ500, a dataset of 481 scans performed in India, to evaluate out-of-distribution generalizability. The semi-supervised model was compared with a baseline model trained on only labeled data using area under the receiver operating characteristic curve, Dice similarity coefficient, and average precision metrics. Results The semi-supervised model achieved a statistically significant higher examination area under the receiver operating characteristic curve on CQ500 compared with the baseline (0.939 [95% CI: 0.938, 0.940] vs 0.907 [95% CI: 0.906, 0.908]; <i>P</i> = .009). It also achieved a higher Dice similarity coefficient (0.829 [95% CI: 0.825, 0.833] vs 0.809 [95% CI: 0.803, 0.812]; <i>P</i> = .012) and pixel average precision (0.848 [95% CI: 0.843, 0.853]) vs 0.828 [95% CI: 0.817, 0.828]) compared with the baseline. Conclusion The addition of unlabeled data in a semi-supervised learning framework demonstrates stronger generalizability potential for intracranial hemorrhage detection and segmentation compared with a supervised baseline. <b>Keywords:</b> Semi-supervised Learning, Traumatic Brain Injury, CT, Machine Learning <i>Supplemental material is available for this article.</i> Published under a CC BY 4.0 license. See also the commentary by Swimburne in this issue.</p>","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":null,"pages":null},"PeriodicalIF":9.8,"publicationDate":"2024-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11140498/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140040505","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Faster, More Practical, but Still Accurate: Deep Learning for Diagnosis of Progressive Supranuclear Palsy.
IF 9.8 Pub Date: 2024-05-01 DOI: 10.1148/ryai.240181
Bahram Mohajer
{"title":"Faster, More Practical, but Still Accurate: Deep Learning for Diagnosis of Progressive Supranuclear Palsy.","authors":"Bahram Mohajer","doi":"10.1148/ryai.240181","DOIUrl":"10.1148/ryai.240181","url":null,"abstract":"","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":null,"pages":null},"PeriodicalIF":9.8,"publicationDate":"2024-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11140513/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140858206","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0