Riyadh M. Al-Tam, Aymen M. Al-Hejri, Sultan S. Alshamrani, Mugahed A. Al-antari, Sachin M. Narangale
{"title":"利用医学乳房 X 线照片和超声波图像进行多模态乳腺癌混合可解释计算机辅助诊断","authors":"Riyadh M. Al-Tam , Aymen M. Al-Hejri , Sultan S. Alshamrani , Mugahed A. Al-antari , Sachin M. Narangale","doi":"10.1016/j.bbe.2024.08.007","DOIUrl":null,"url":null,"abstract":"<div><p>Breast cancer is a prevalent global disease where early detection is crucial for effective treatment and reducing mortality rates. To address this challenge, a novel Computer-Aided Diagnosis (CAD) framework leveraging Artificial Intelligence (AI) techniques has been developed. This framework integrates capabilities for the simultaneous detection and classification of breast lesions. The AI-based CAD framework is meticulously structured into two pipelines (Stage 1 and Stage 2). The first pipeline (Stage 1) focuses on detectable cases where lesions are identified during the detection task. The second pipeline (Stage 2) is dedicated to cases where lesions are not initially detected. Various experimental scenarios, including binary (benign vs. malignant) and multi-class classifications based on BI-RADS scores, were conducted for training and evaluation. Additionally, a verification and validation (V&V) scenario was implemented to assess the reliability of the framework using unseen multimodal datasets for both binary and multi-class tasks. For the detection tasks, the recent AI detectors like YOLO (You Only Look Once) variants were fine-tuned and optimized to localize breast lesions. For classification tasks, hybrid AI models incorporating ensemble convolutional neural networks (CNNs) and the attention mechanism of Vision Transformers were proposed to enhance prediction performance. The proposed AI-based CAD framework was trained and evaluated using various multimodal ultrasound datasets (BUSI and US2) and mammogram datasets (MIAS, INbreast, real private mammograms, KAU-BCMD, and CBIS-DDSM), either individually or in merged forms. Visual t-SNE techniques were applied to visually harmonize data distributions across ultrasound and mammogram datasets for effective various datasets merging. To generate visually explainable heatmaps in both pipelines (stages 1 and 2), Grad-CAM was utilized. These heatmaps assisted in finalizing detected boxes, especially in stage 2 when the AI detector failed to automatically detect breast lesions. The highest evaluation metrics achieved for merged dataset (BUSI, INbreast, and MIAS) were 97.73% accuracy and 97.27% mAP50 in the first pipeline. In the second pipeline, the proposed CAD achieved 91.66% accuracy with 95.65% mAP50 on MIAS and 95.65% accuracy with 96.10% mAP50 on the merged dataset (INbreast and MIAS). Meanwhile, exceptional performance was demonstrated using BI-RADS scores, achieving 87.29% accuracy, 91.68% AUC, 86.72% mAP50, and 64.75% mAP50-95 on a combined dataset of INbreast and CBIS-DDSM. These results underscore the practical significance of the proposed CAD framework in automatically annotating suspected lesions for radiologists.</p></div>","PeriodicalId":55381,"journal":{"name":"Biocybernetics and Biomedical Engineering","volume":"44 3","pages":"Pages 731-758"},"PeriodicalIF":5.3000,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Multimodal breast cancer hybrid explainable computer-aided diagnosis using medical mammograms and ultrasound Images\",\"authors\":\"Riyadh M. Al-Tam , Aymen M. Al-Hejri , Sultan S. Alshamrani , Mugahed A. Al-antari , Sachin M. 
Narangale\",\"doi\":\"10.1016/j.bbe.2024.08.007\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>Breast cancer is a prevalent global disease where early detection is crucial for effective treatment and reducing mortality rates. To address this challenge, a novel Computer-Aided Diagnosis (CAD) framework leveraging Artificial Intelligence (AI) techniques has been developed. This framework integrates capabilities for the simultaneous detection and classification of breast lesions. The AI-based CAD framework is meticulously structured into two pipelines (Stage 1 and Stage 2). The first pipeline (Stage 1) focuses on detectable cases where lesions are identified during the detection task. The second pipeline (Stage 2) is dedicated to cases where lesions are not initially detected. Various experimental scenarios, including binary (benign vs. malignant) and multi-class classifications based on BI-RADS scores, were conducted for training and evaluation. Additionally, a verification and validation (V&V) scenario was implemented to assess the reliability of the framework using unseen multimodal datasets for both binary and multi-class tasks. For the detection tasks, the recent AI detectors like YOLO (You Only Look Once) variants were fine-tuned and optimized to localize breast lesions. For classification tasks, hybrid AI models incorporating ensemble convolutional neural networks (CNNs) and the attention mechanism of Vision Transformers were proposed to enhance prediction performance. The proposed AI-based CAD framework was trained and evaluated using various multimodal ultrasound datasets (BUSI and US2) and mammogram datasets (MIAS, INbreast, real private mammograms, KAU-BCMD, and CBIS-DDSM), either individually or in merged forms. Visual t-SNE techniques were applied to visually harmonize data distributions across ultrasound and mammogram datasets for effective various datasets merging. To generate visually explainable heatmaps in both pipelines (stages 1 and 2), Grad-CAM was utilized. These heatmaps assisted in finalizing detected boxes, especially in stage 2 when the AI detector failed to automatically detect breast lesions. The highest evaluation metrics achieved for merged dataset (BUSI, INbreast, and MIAS) were 97.73% accuracy and 97.27% mAP50 in the first pipeline. In the second pipeline, the proposed CAD achieved 91.66% accuracy with 95.65% mAP50 on MIAS and 95.65% accuracy with 96.10% mAP50 on the merged dataset (INbreast and MIAS). Meanwhile, exceptional performance was demonstrated using BI-RADS scores, achieving 87.29% accuracy, 91.68% AUC, 86.72% mAP50, and 64.75% mAP50-95 on a combined dataset of INbreast and CBIS-DDSM. 
These results underscore the practical significance of the proposed CAD framework in automatically annotating suspected lesions for radiologists.</p></div>\",\"PeriodicalId\":55381,\"journal\":{\"name\":\"Biocybernetics and Biomedical Engineering\",\"volume\":\"44 3\",\"pages\":\"Pages 731-758\"},\"PeriodicalIF\":5.3000,\"publicationDate\":\"2024-07-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Biocybernetics and Biomedical Engineering\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0208521624000603\",\"RegionNum\":2,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"ENGINEERING, BIOMEDICAL\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Biocybernetics and Biomedical Engineering","FirstCategoryId":"5","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0208521624000603","RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, BIOMEDICAL","Score":null,"Total":0}
Multimodal breast cancer hybrid explainable computer-aided diagnosis using medical mammograms and ultrasound images
Breast cancer is a prevalent global disease for which early detection is crucial to effective treatment and reduced mortality. To address this challenge, a novel Computer-Aided Diagnosis (CAD) framework leveraging Artificial Intelligence (AI) techniques has been developed. The framework integrates simultaneous detection and classification of breast lesions and is structured into two pipelines (Stage 1 and Stage 2). The first pipeline (Stage 1) handles cases in which lesions are identified during the detection task, while the second pipeline (Stage 2) is dedicated to cases in which lesions are not initially detected. Various experimental scenarios, including binary (benign vs. malignant) and multi-class classification based on BI-RADS scores, were conducted for training and evaluation. Additionally, a verification and validation (V&V) scenario was implemented to assess the reliability of the framework on unseen multimodal datasets for both binary and multi-class tasks. For the detection tasks, recent AI detectors such as YOLO (You Only Look Once) variants were fine-tuned and optimized to localize breast lesions. For the classification tasks, hybrid AI models combining ensemble convolutional neural networks (CNNs) with the attention mechanism of Vision Transformers were proposed to enhance prediction performance. The proposed AI-based CAD framework was trained and evaluated on multimodal ultrasound datasets (BUSI and US2) and mammogram datasets (MIAS, INbreast, real private mammograms, KAU-BCMD, and CBIS-DDSM), used individually or in merged form. t-SNE visualization was applied to harmonize data distributions across the ultrasound and mammogram datasets and enable effective merging of the various datasets. Grad-CAM was used to generate visually explainable heatmaps in both pipelines (Stages 1 and 2); these heatmaps assisted in finalizing the detected boxes, especially in Stage 2 when the AI detector failed to detect breast lesions automatically. The highest evaluation metrics achieved on the merged dataset (BUSI, INbreast, and MIAS) were 97.73% accuracy and 97.27% mAP50 in the first pipeline. In the second pipeline, the proposed CAD achieved 91.66% accuracy with 95.65% mAP50 on MIAS and 95.65% accuracy with 96.10% mAP50 on the merged dataset (INbreast and MIAS). Meanwhile, exceptional performance was demonstrated using BI-RADS scores, achieving 87.29% accuracy, 91.68% AUC, 86.72% mAP50, and 64.75% mAP50-95 on a combined dataset of INbreast and CBIS-DDSM. These results underscore the practical significance of the proposed CAD framework in automatically annotating suspected lesions for radiologists.
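To give a concrete sense of the kind of t-SNE-based distribution check described above, the following is a minimal, hypothetical Python sketch (not the authors' code). It flattens images from two modalities, reduces them with PCA, and embeds them with t-SNE so the overlap of the two distributions can be inspected before the datasets are merged. The loader names (load_busi, load_mias) and the flatten-plus-PCA feature step are illustrative assumptions; the paper's actual feature extraction and preprocessing are not specified here.

```python
# Minimal sketch: compare feature distributions of two imaging datasets with t-SNE
# before merging them. Assumes images are NumPy arrays already resized to a common
# shape; a flatten + PCA step stands in for a CNN feature extractor.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

def embed_datasets(ultrasound_imgs, mammogram_imgs, n_pca=50, random_state=0):
    """Project two image sets into a shared 2-D t-SNE space and return labels."""
    # Flatten each image to a feature vector; both sets must share the same shape.
    X = np.concatenate([ultrasound_imgs.reshape(len(ultrasound_imgs), -1),
                        mammogram_imgs.reshape(len(mammogram_imgs), -1)])
    labels = np.array([0] * len(ultrasound_imgs) + [1] * len(mammogram_imgs))
    # PCA first to denoise and speed up t-SNE (n_pca must be <= number of samples).
    X = PCA(n_components=n_pca, random_state=random_state).fit_transform(X)
    emb = TSNE(n_components=2, perplexity=30,
               random_state=random_state).fit_transform(X)
    return emb, labels

# Usage: scatter the embedding and inspect how strongly the two modalities overlap.
# us, mg = load_busi(), load_mias()          # hypothetical dataset loaders
# emb, labels = embed_datasets(us, mg)
# plt.scatter(emb[:, 0], emb[:, 1], c=labels, cmap="coolwarm", s=5)
# plt.title("t-SNE of ultrasound vs. mammogram feature distributions")
# plt.show()
```

If the two point clouds remain well separated in such a plot, merging the datasets without harmonization (e.g., intensity normalization or resampling) is likely to hurt a jointly trained model; strong overlap suggests the merge is reasonable.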
Journal description:
Biocybernetics and Biomedical Engineering is a quarterly journal, founded in 1981, devoted to publishing the results of original, innovative, and creative research in the field of biocybernetics and biomedical engineering, which bridges mathematical, physical, chemical, and engineering methods and technology to analyse physiological processes in living organisms, as well as to develop methods, devices, and systems used in biology and medicine, mainly in medical diagnosis, monitoring systems, and therapy. The Journal's mission is to advance scientific discovery into new or improved standards of care and to promote a wide-ranging exchange between science and its application to humans.