Multimodal breast cancer hybrid explainable computer-aided diagnosis using medical mammograms and ultrasound Images

IF 5.3 · JCR Q1 (Engineering, Biomedical) · Medicine, Region 2 · Biocybernetics and Biomedical Engineering · Pub Date: 2024-07-01 · DOI: 10.1016/j.bbe.2024.08.007 · Full text: https://www.sciencedirect.com/science/article/pii/S0208521624000603
Citations: 0

Abstract

Breast cancer is a prevalent global disease where early detection is crucial for effective treatment and for reducing mortality rates. To address this challenge, a novel Computer-Aided Diagnosis (CAD) framework leveraging Artificial Intelligence (AI) techniques has been developed. This framework integrates capabilities for the simultaneous detection and classification of breast lesions, and is structured into two pipelines (Stage 1 and Stage 2). The first pipeline (Stage 1) handles detectable cases, where lesions are identified during the detection task; the second pipeline (Stage 2) is dedicated to cases where lesions are not initially detected. Various experimental scenarios, including binary (benign vs. malignant) and multi-class classification based on BI-RADS scores, were conducted for training and evaluation. Additionally, a verification and validation (V&V) scenario assessed the reliability of the framework on unseen multimodal datasets for both binary and multi-class tasks. For detection, recent AI detectors such as YOLO (You Only Look Once) variants were fine-tuned and optimized to localize breast lesions. For classification, hybrid AI models combining ensembles of convolutional neural networks (CNNs) with the attention mechanism of Vision Transformers were proposed to enhance prediction performance. The framework was trained and evaluated on multimodal ultrasound datasets (BUSI and US2) and mammogram datasets (MIAS, INbreast, real private mammograms, KAU-BCMD, and CBIS-DDSM), both individually and in merged form. t-SNE visualizations were used to harmonize data distributions across the ultrasound and mammogram datasets, enabling effective merging of the various datasets. Grad-CAM was used to generate visually explainable heatmaps in both pipelines (Stages 1 and 2).

These heatmaps assisted in finalizing detected boxes, especially in Stage 2 when the AI detector failed to detect breast lesions automatically. In the first pipeline, the highest evaluation metrics on the merged dataset (BUSI, INbreast, and MIAS) were 97.73% accuracy and 97.27% mAP50. In the second pipeline, the proposed CAD achieved 91.66% accuracy with 95.65% mAP50 on MIAS, and 95.65% accuracy with 96.10% mAP50 on the merged dataset (INbreast and MIAS). For the BI-RADS multi-class task, the framework achieved 87.29% accuracy, 91.68% AUC, 86.72% mAP50, and 64.75% mAP50-95 on a combined INbreast and CBIS-DDSM dataset. These results underscore the practical value of the proposed CAD framework for automatically annotating suspected lesions for radiologists.
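The Stage 2 idea of turning an explanation heatmap into a candidate bounding box (used when the detector misses a lesion) can be illustrated with a minimal sketch. This is not the authors' code: the simple thresholding rule, the toy heatmap values, and the function name `heatmap_to_box` are illustrative assumptions; real Grad-CAM maps come from a trained CNN's last convolutional layer and would typically be upsampled and post-processed before drawing a box.

```python
# Sketch: derive a bounding box from a (normalized) Grad-CAM-style
# activation map by enclosing all cells above a threshold.
def heatmap_to_box(heatmap, threshold=0.5):
    """Return (x_min, y_min, x_max, y_max) covering every cell whose
    activation is >= threshold, or None if no cell qualifies."""
    rows = [r for r, row in enumerate(heatmap)
            if any(v >= threshold for v in row)]
    cols = [c for c in range(len(heatmap[0]))
            if any(row[c] >= threshold for row in heatmap)]
    if not rows or not cols:
        return None
    return (min(cols), min(rows), max(cols), max(rows))

# Toy 4x4 heatmap with a "hot" lesion-like region in the lower right.
cam = [
    [0.1, 0.2, 0.1, 0.0],
    [0.2, 0.3, 0.2, 0.1],
    [0.1, 0.4, 0.8, 0.9],
    [0.0, 0.3, 0.7, 0.6],
]
print(heatmap_to_box(cam, threshold=0.5))  # (2, 2, 3, 3)
```

In the paper's pipeline the resulting box would then be reviewed or refined; the sketch only shows why a class-discriminative heatmap is enough to localize a missed lesion coarsely.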

Source journal metrics:
CiteScore: 16.50
Self-citation rate: 6.20%
Articles published: 77
Review time: 38 days
Journal description: Biocybernetics and Biomedical Engineering is a quarterly journal, founded in 1981, devoted to publishing the results of original, innovative and creative research in the field of biocybernetics and biomedical engineering, which bridges mathematical, physical, chemical and engineering methods and technology to analyse physiological processes in living organisms and to develop methods, devices and systems used in biology and medicine, mainly in medical diagnosis, monitoring systems and therapy. The journal's mission is to advance scientific discovery into new or improved standards of care, and to promote a wide-ranging exchange between science and its application to humans.
Latest articles in this journal:
- Automating synaptic plasticity analysis: A deep learning approach to segmenting hippocampal field potential signal
- Probabilistic and explainable modeling of Phase–Phase Cross-Frequency Coupling patterns in EEG. Application to dyslexia diagnosis
- Skin cancer diagnosis using NIR spectroscopy data of skin lesions in vivo using machine learning algorithms
- Validation of a body sensor network for cardiorespiratory monitoring during dynamic activities
- Quantitative evaluation of the effect of circle of Willis structures on cerebral hyperperfusion: A multi-scale model analysis