CNN-Based Cross-Modality Fusion for Enhanced Breast Cancer Detection Using Mammography and Ultrasound.

IF 2.2 | CAS Tier 4 (Medicine) | Q2 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING | Tomography 10(12): 2038-2057 | Pub Date: 2024-12-12 | DOI: 10.3390/tomography10120145 | Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11679931/pdf/
Yi-Ming Wang, Chi-Yuan Wang, Kuo-Ying Liu, Yung-Hui Huang, Tai-Been Chen, Kon-Ning Chiu, Chih-Yu Liang, Nan-Han Lu
{"title":"CNN-Based Cross-Modality Fusion for Enhanced Breast Cancer Detection Using Mammography and Ultrasound.","authors":"Yi-Ming Wang, Chi-Yuan Wang, Kuo-Ying Liu, Yung-Hui Huang, Tai-Been Chen, Kon-Ning Chiu, Chih-Yu Liang, Nan-Han Lu","doi":"10.3390/tomography10120145","DOIUrl":null,"url":null,"abstract":"<p><p><b>Background/Objectives:</b> Breast cancer is a leading cause of mortality among women in Taiwan and globally. Non-invasive imaging methods, such as mammography and ultrasound, are critical for early detection, yet standalone modalities have limitations in regard to their diagnostic accuracy. This study aims to enhance breast cancer detection through a cross-modality fusion approach combining mammography and ultrasound imaging, using advanced convolutional neural network (CNN) architectures. <b>Materials and Methods:</b> Breast images were sourced from public datasets, including the RSNA, the PAS, and Kaggle, and categorized into malignant and benign groups. Data augmentation techniques were used to address imbalances in the ultrasound dataset. Three models were developed: (1) pre-trained CNNs integrated with machine learning classifiers, (2) transfer learning-based CNNs, and (3) a custom-designed 17-layer CNN for direct classification. The performance of the models was evaluated using metrics such as accuracy and the Kappa score. <b>Results:</b> The custom 17-layer CNN outperformed the other models, achieving an accuracy of 0.964 and a Kappa score of 0.927. The transfer learning model achieved moderate performance (accuracy 0.846, Kappa 0.694), while the pre-trained CNNs with machine learning classifiers yielded the lowest results (accuracy 0.780, Kappa 0.559). Cross-modality fusion proved effective in leveraging the complementary strengths of mammography and ultrasound imaging. <b>Conclusions:</b> This study demonstrates the potential of cross-modality imaging and tailored CNN architectures to significantly improve diagnostic accuracy and reliability in breast cancer detection. The custom-designed model offers a practical solution for early detection, potentially reducing false positives and false negatives, and improving patient outcomes through timely and accurate diagnosis.</p>","PeriodicalId":51330,"journal":{"name":"Tomography","volume":"10 12","pages":"2038-2057"},"PeriodicalIF":2.2000,"publicationDate":"2024-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11679931/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Tomography","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.3390/tomography10120145","RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING","Score":null,"Total":0}
Citations: 0

Abstract

Background/Objectives: Breast cancer is a leading cause of mortality among women in Taiwan and globally. Non-invasive imaging methods, such as mammography and ultrasound, are critical for early detection, yet standalone modalities have limited diagnostic accuracy. This study aims to enhance breast cancer detection through a cross-modality fusion approach combining mammography and ultrasound imaging, using advanced convolutional neural network (CNN) architectures.

Materials and Methods: Breast images were sourced from public datasets, including the RSNA, the PAS, and Kaggle, and categorized into malignant and benign groups. Data augmentation techniques were used to address imbalances in the ultrasound dataset. Three models were developed: (1) pre-trained CNNs integrated with machine learning classifiers, (2) transfer learning-based CNNs, and (3) a custom-designed 17-layer CNN for direct classification. Model performance was evaluated using metrics such as accuracy and the Kappa score.

Results: The custom 17-layer CNN outperformed the other models, achieving an accuracy of 0.964 and a Kappa score of 0.927. The transfer learning model achieved moderate performance (accuracy 0.846, Kappa 0.694), while the pre-trained CNNs with machine learning classifiers yielded the lowest results (accuracy 0.780, Kappa 0.559). Cross-modality fusion proved effective in leveraging the complementary strengths of mammography and ultrasound imaging.

Conclusions: This study demonstrates the potential of cross-modality imaging and tailored CNN architectures to significantly improve diagnostic accuracy and reliability in breast cancer detection. The custom-designed model offers a practical solution for early detection, potentially reducing false positives and false negatives, and improving patient outcomes through timely and accurate diagnosis.
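As a quick illustration of the evaluation metrics named in the abstract, the sketch below computes accuracy and Cohen's kappa for a binary benign/malignant classification using scikit-learn. This is a minimal, hedged example, not the authors' code: the label arrays are synthetic placeholders rather than the study's mammography/ultrasound test data.

```python
# Minimal sketch (not the authors' code): computing the two metrics reported
# in the abstract -- accuracy and Cohen's kappa -- for a binary
# benign (0) / malignant (1) classifier. Labels here are synthetic placeholders.
import numpy as np
from sklearn.metrics import accuracy_score, cohen_kappa_score

rng = np.random.default_rng(seed=42)
y_true = rng.integers(0, 2, size=200)        # hypothetical ground-truth labels
flip = rng.random(200) < 0.10                # simulate ~10% disagreement
y_pred = np.where(flip, 1 - y_true, y_true)  # hypothetical model predictions

acc = accuracy_score(y_true, y_pred)         # fraction of correctly classified cases
kappa = cohen_kappa_score(y_true, y_pred)    # agreement corrected for chance
print(f"accuracy = {acc:.3f}, Cohen's kappa = {kappa:.3f}")
```

Kappa is worth reporting alongside accuracy because it discounts agreement expected by chance, which matters for class-imbalanced data such as the ultrasound set the abstract describes.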

Source journal
Tomography (Medicine — Radiology, Nuclear Medicine and Imaging)
CiteScore: 2.70
Self-citation rate: 10.50%
Articles published: 222
Journal description: Tomography™ publishes basic (technical and pre-clinical) and clinical scientific articles which involve the advancement of imaging technologies. Tomography encompasses studies that use single or multiple imaging modalities, including for example CT, US, PET, SPECT, MR and hyperpolarization technologies, as well as optical modalities (i.e., bioluminescence, photoacoustic, endomicroscopy, fiber optic imaging and optical computed tomography) in basic sciences, engineering, preclinical and clinical medicine. Tomography also welcomes studies involving exploration and refinement of contrast mechanisms and image-derived metrics within and across modalities toward the development of novel imaging probes for image-based feedback and intervention. The use of imaging in biology and medicine provides unparalleled opportunities to noninvasively interrogate tissues to obtain real-time dynamic and quantitative information required for diagnosis and response to interventions and to follow evolving pathological conditions. As multi-modal studies and the complexities of imaging technologies are ever increasing in their capacity to provide advanced information to scientists and clinicians, Tomography provides a unique publication venue allowing investigators the opportunity to more precisely communicate integrated findings related to the diverse and heterogeneous features associated with underlying anatomical, physiological, functional, metabolic and molecular genetic activities of normal and diseased tissue. Thus Tomography publishes peer-reviewed articles which involve the broad use of imaging of any tissue and disease type, including both preclinical and clinical investigations. In addition, hardware/software advances along with chemical and molecular probe advances are welcome as they are deemed to significantly contribute towards the long-term goal of improving the overall impact of imaging on scientific and clinical discovery.