Transformer-based Deep Neural Network for Breast Cancer Classification on Digital Breast Tomosynthesis Images.

IF 8.1 | Q1 (Computer Science, Artificial Intelligence) | Radiology: Artificial Intelligence | Pub Date: 2023-05-10 | eCollection Date: 2023-05-01 | DOI: 10.1148/ryai.220159
Weonsuk Lee, Hyeonsoo Lee, Hyunjae Lee, Eun Kyung Park, Hyeonseob Nam, Thijs Kooi
{"title":"Transformer-based Deep Neural Network for Breast Cancer Classification on Digital Breast Tomosynthesis Images.","authors":"Weonsuk Lee, Hyeonsoo Lee, Hyunjae Lee, Eun Kyung Park, Hyeonseob Nam, Thijs Kooi","doi":"10.1148/ryai.220159","DOIUrl":null,"url":null,"abstract":"<p><strong>Purpose: </strong>To develop an efficient deep neural network model that incorporates context from neighboring image sections to detect breast cancer on digital breast tomosynthesis (DBT) images.</p><p><strong>Materials and methods: </strong>The authors adopted a transformer architecture that analyzes neighboring sections of the DBT stack. The proposed method was compared with two baselines: an architecture based on three-dimensional (3D) convolutions and a two-dimensional model that analyzes each section individually. The models were trained with 5174 four-view DBT studies, validated with 1000 four-view DBT studies, and tested on 655 four-view DBT studies, which were retrospectively collected from nine institutions in the United States through an external entity. Methods were compared using area under the receiver operating characteristic curve (AUC), sensitivity at a fixed specificity, and specificity at a fixed sensitivity.</p><p><strong>Results: </strong>On the test set of 655 DBT studies, both 3D models showed higher classification performance than did the per-section baseline model. The proposed transformer-based model showed a significant increase in AUC (0.88 vs 0.91, <i>P</i> = .002), sensitivity (81.0% vs 87.7%, <i>P</i> = .006), and specificity (80.5% vs 86.4%, <i>P</i> < .001) at clinically relevant operating points when compared with the single-DBT-section baseline. The transformer-based model used only 25% of the number of floating-point operations per second used by the 3D convolution model while demonstrating similar classification performance.</p><p><strong>Conclusion: </strong>A transformer-based deep neural network using data from neighboring sections improved breast cancer classification performance compared with a per-section baseline model and was more efficient than a model using 3D convolutions.<b>Keywords:</b> Breast, Tomosynthesis, Diagnosis, Supervised Learning, Convolutional Neural Network (CNN), Digital Breast Tomosynthesis, Breast Cancer, Deep Neural Networks, Transformers <i>Supplemental material is available for this article.</i> © RSNA, 2023.</p>","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":null,"pages":null},"PeriodicalIF":8.1000,"publicationDate":"2023-05-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10245183/pdf/ryai.220159.pdf","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Radiology-Artificial Intelligence","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1148/ryai.220159","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2023/5/1 0:00:00","PubModel":"eCollection","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

Purpose: To develop an efficient deep neural network model that incorporates context from neighboring image sections to detect breast cancer on digital breast tomosynthesis (DBT) images.

Materials and methods: The authors adopted a transformer architecture that analyzes neighboring sections of the DBT stack. The proposed method was compared with two baselines: an architecture based on three-dimensional (3D) convolutions and a two-dimensional model that analyzes each section individually. The models were trained with 5174 four-view DBT studies, validated with 1000 four-view DBT studies, and tested on 655 four-view DBT studies, which were retrospectively collected from nine institutions in the United States through an external entity. Methods were compared using area under the receiver operating characteristic curve (AUC), sensitivity at a fixed specificity, and specificity at a fixed sensitivity.
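
The abstract does not specify the implementation details, but the general idea (a 2D backbone extracting a feature vector per DBT section, followed by a transformer encoder that attends across neighboring sections) can be sketched as below. The backbone choice, layer sizes, number of sections, and pooling strategy are assumptions for illustration, not the authors' configuration.

```python
# Illustrative sketch only: a 2D CNN backbone produces one feature vector per
# DBT section, and a transformer encoder attends across neighboring sections
# before classification. All hyperparameters here are assumed, not the paper's.
import torch
import torch.nn as nn
from torchvision.models import resnet18


class SectionTransformerClassifier(nn.Module):
    def __init__(self, embed_dim=512, num_heads=8, num_layers=2):
        super().__init__()
        backbone = resnet18(weights=None)
        backbone.fc = nn.Identity()           # keep the 512-dim pooled feature
        self.backbone = backbone
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=embed_dim, nhead=num_heads, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=num_layers)
        self.head = nn.Linear(embed_dim, 1)   # malignancy logit per view/study

    def forward(self, sections):
        # sections: (batch, num_sections, 1, H, W) grayscale DBT slices
        b, s, c, h, w = sections.shape
        x = sections.reshape(b * s, c, h, w).repeat(1, 3, 1, 1)  # 1 -> 3 channels
        feats = self.backbone(x).reshape(b, s, -1)   # (batch, sections, 512)
        feats = self.encoder(feats)                  # attend across neighboring sections
        return self.head(feats.mean(dim=1))          # pool over sections


model = SectionTransformerClassifier()
logits = model(torch.randn(2, 16, 1, 224, 224))      # 2 studies x 16 sections
print(logits.shape)                                   # torch.Size([2, 1])
```

Compared with a 3D-convolution baseline, this style of design only runs the heavy convolutional backbone per 2D section and leaves the cross-section mixing to a comparatively lightweight attention module, which is consistent with the efficiency gap reported in the results.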

Results: On the test set of 655 DBT studies, both 3D models showed higher classification performance than did the per-section baseline model. The proposed transformer-based model showed a significant increase in AUC (0.88 vs 0.91, P = .002), sensitivity (81.0% vs 87.7%, P = .006), and specificity (80.5% vs 86.4%, P < .001) at clinically relevant operating points when compared with the single-DBT-section baseline. The transformer-based model used only 25% of the number of floating-point operations per second used by the 3D convolution model while demonstrating similar classification performance.
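
The operating-point metrics reported above (sensitivity at a fixed specificity, specificity at a fixed sensitivity) can be read off an ROC curve. A minimal sketch with scikit-learn is shown below; the labels, scores, and the 0.8 targets are toy values for illustration only.

```python
# Sketch: AUC plus sensitivity at fixed specificity / specificity at fixed
# sensitivity from an ROC curve. Data and thresholds are illustrative.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

y_true = np.array([0, 0, 1, 1, 0, 1, 0, 1])                     # toy labels
y_score = np.array([0.1, 0.4, 0.35, 0.8, 0.2, 0.9, 0.6, 0.7])   # toy model scores

auc = roc_auc_score(y_true, y_score)
fpr, tpr, _ = roc_curve(y_true, y_score)

def sensitivity_at_specificity(fpr, tpr, target_spec):
    spec = 1.0 - fpr
    ok = spec >= target_spec          # operating points meeting the specificity target
    return tpr[ok].max() if ok.any() else 0.0

def specificity_at_sensitivity(fpr, tpr, target_sens):
    ok = tpr >= target_sens           # operating points meeting the sensitivity target
    return (1.0 - fpr[ok]).max() if ok.any() else 0.0

print(f"AUC={auc:.2f}, "
      f"sens@spec0.8={sensitivity_at_specificity(fpr, tpr, 0.8):.2f}, "
      f"spec@sens0.8={specificity_at_sensitivity(fpr, tpr, 0.8):.2f}")
```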

Conclusion: A transformer-based deep neural network using data from neighboring sections improved breast cancer classification performance compared with a per-section baseline model and was more efficient than a model using 3D convolutions.

Keywords: Breast, Tomosynthesis, Diagnosis, Supervised Learning, Convolutional Neural Network (CNN), Digital Breast Tomosynthesis, Breast Cancer, Deep Neural Networks, Transformers

Supplemental material is available for this article. © RSNA, 2023.

Source journal: Radiology: Artificial Intelligence
CiteScore: 16.20
Self-citation rate: 1.00%
Articles published: 0
Journal introduction: Radiology: Artificial Intelligence is a bi-monthly publication that focuses on the emerging applications of machine learning and artificial intelligence in the field of imaging across various disciplines. This journal is available online and accepts multiple manuscript types, including Original Research, Technical Developments, Data Resources, Review articles, Editorials, Letters to the Editor and Replies, Special Reports, and AI in Brief.
Latest articles in this journal:
- Integrated Deep Learning Model for the Detection, Segmentation, and Morphologic Analysis of Intracranial Aneurysms Using CT Angiography.
- RSNA 2023 Abdominal Trauma AI Challenge Review and Outcomes Analysis.
- SCIseg: Automatic Segmentation of Intramedullary Lesions in Spinal Cord Injury on T2-weighted MRI Scans.
- Combining Biology-based and MRI Data-driven Modeling to Predict Response to Neoadjuvant Chemotherapy in Patients with Triple-Negative Breast Cancer.
- Optimizing Performance of Transformer-based Models for Fetal Brain MR Image Segmentation.