
Latest publications: Journal of Imaging Informatics in Medicine

Enhanced Pelvic CT Segmentation via Deep Learning: A Study on Loss Function Effects.
Pub Date : 2026-02-01 Epub Date: 2025-05-29 DOI: 10.1007/s10278-025-01550-2
Elnaz Ghaedi, Ali Asadi, Seyed Abolfazl Hosseini, Hossein Arabi

Effective radiotherapy planning requires precise delineation of organs at risk (OARs), but the traditional manual method is laborious and subject to variability. This study explores using convolutional neural networks (CNNs) for automating OAR segmentation in pelvic CT images, focusing on the bladder, prostate, rectum, and femoral heads (FHs) as an efficient alternative to manual segmentation. Utilizing the Medical Open Network for AI (MONAI) framework, we implemented and compared U-Net, ResU-Net, SegResNet, and Attention U-Net models and explored different loss functions to enhance segmentation accuracy. Our study involved 240 patients for prostate segmentation and 220 patients for the other organs. The models' performance was evaluated using metrics such as the Dice similarity coefficient (DSC), Jaccard index (JI), and the 95th percentile Hausdorff distance (95thHD), benchmarking the results against expert segmentation masks. SegResNet outperformed all models, achieving DSC values of 0.951 for the bladder, 0.829 for the prostate, 0.860 for the rectum, 0.979 for the left FH, and 0.985 for the right FH (p < 0.05 vs. U-Net and ResU-Net). Attention U-Net also excelled, particularly for bladder and rectum segmentation. Experiments with loss functions on SegResNet showed that Dice loss consistently delivered optimal or equivalent performance across OARs, while DiceCE slightly enhanced prostate segmentation (DSC = 0.845, p = 0.0138). These results indicate that advanced CNNs, especially SegResNet, paired with optimized loss functions, provide a reliable, efficient alternative to manual methods, promising improved precision in radiotherapy planning.
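The Dice similarity coefficient (DSC) used to benchmark these models can be computed directly from binary masks. A minimal numpy sketch, illustrative only and not the authors' MONAI pipeline:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice similarity coefficient between two binary segmentation masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Example: two overlapping 4x4 masks
a = np.zeros((4, 4), dtype=int); a[:2, :] = 1   # 8 foreground pixels
b = np.zeros((4, 4), dtype=int); b[:1, :] = 1   # 4 pixels, all inside a
print(round(dice_coefficient(a, b), 3))  # 2*4 / (8+4) = 0.667
```

The same quantity, expressed as a differentiable loss (1 minus soft Dice over probabilities), is what MONAI's Dice loss optimizes during training.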

Pages: 422-435. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12921067/pdf/
Citations: 0
Machine learning-based model assists in differentiating Mycobacterium avium Complex Pulmonary Disease from Pulmonary Tuberculosis: A Multicenter Study.
Pub Date : 2026-02-01 Epub Date: 2025-04-01 DOI: 10.1007/s10278-025-01486-7
Jiacheng Zhang, Tingting Huang, Xu He, Dingsheng Han, Qian Xu, Fukun Shi, Lan Zhang, Dailun Hou

The number of Mycobacterium avium-intracellulare complex pulmonary disease patients is increasing globally. Distinguishing Mycobacterium avium-intracellulare complex pulmonary disease from pulmonary tuberculosis is difficult due to similar manifestations and characteristics. We aimed to build and validate a machine learning model using clinical data and computed tomography features to differentiate them. This multicenter, retrospective study included 169 patients diagnosed with Mycobacterium avium-intracellulare complex pulmonary disease or pulmonary tuberculosis from date to date. Data were analyzed, and logistic regression, random forest, and support vector machine models were established and validated. Performance was evaluated using receiver operating characteristic and precision-recall curves. In total, 84 patients with Mycobacterium avium-intracellulare complex pulmonary disease and 85 with pulmonary tuberculosis were analyzed. Patients with Mycobacterium avium-intracellulare complex pulmonary disease were older. Hemoptysis rate, cavity number and morphology, bronchiectasis type, and distribution differed. The support vector machine model performed better: in the training set the area under the curve was 0.960, and in the validation set it was 0.885. The precision-recall curve showed high accuracy and low recall for the support vector machine model. The support vector machine-based model, which integrates clinical data and computed tomography imaging features, exhibited excellent diagnostic performance and can assist in differentiating Mycobacterium avium-intracellulare complex pulmonary disease from pulmonary tuberculosis.
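The reported areas under the curve (0.960 training, 0.885 validation) are equivalent to the Mann-Whitney statistic: the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative one. A minimal numpy sketch, illustrative rather than the study's pipeline:

```python
import numpy as np

def roc_auc(y_true, scores):
    """AUC via the Mann-Whitney U statistic: probability that a random
    positive is scored higher than a random negative (ties get half credit)."""
    y_true = np.asarray(y_true)
    scores = np.asarray(scores, dtype=float)
    pos = scores[y_true == 1]
    neg = scores[y_true == 0]
    wins = (pos[:, None] > neg[None, :]).sum()   # positive outranks negative
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

print(roc_auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # 0.75
```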

Pages: 59-70. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12921107/pdf/
Citations: 0
TMAN: A Triple Morphological Feature Attention Network for Fine-Grained Classification of Breast Ultrasound Images.
Pub Date : 2026-02-01 Epub Date: 2025-04-08 DOI: 10.1007/s10278-025-01496-5
Dongyue Wang, Min Xue, Hui Wang

Accurately diagnosing various types of breast lesions is critical for assessing breast cancer risk and predicting patient outcomes, which necessitates a fine-grained classification approach. While convolutional neural networks (CNNs) are predominantly employed in fine-grained classification tasks for breast lesions, they often struggle to effectively capture and model the intricate relationships between local and global features, an aspect that is vital for achieving high classification accuracy. Additionally, Color Doppler Flow Imaging (CDFI) and Strain Elastography (SE) are two important ultrasound imaging techniques widely used in the diagnosis of breast lesions. However, their specific contributions to fine-grained classification have not been thoroughly investigated. In this paper, we introduce a Triple Morphological Feature Attention Network (TMAN) designed to enhance fine-grained classification of breast ultrasound images. The TMAN architecture comprises three key modules: Local Margin Attention (LMA), Structured Texture Attention (STA), and Fusion Attention (FA), each focused on extracting distinct morphological features. TMAN achieved an average accuracy of 74.40%, precision of 73.18%, and specificity of 96.02%, surpassing state-of-the-art methods. The findings reveal that incorporating CDFI significantly improved classification for malignant subtypes with a 10% accuracy boost, while SE had a negligible impact. These findings highlight the effectiveness of TMAN in extracting nuanced morphological features and advancing precision in breast ultrasound diagnosis. The source code is accessible at https://github.com/windywindyw/TMAN .
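The abstract does not specify the internals of the LMA, STA, and FA modules; as a generic illustration of the spatial-attention idea they build on, the sketch below re-weights a (C, H, W) feature map by a softmax over its channel-mean activations. All names are hypothetical:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def spatial_attention(feat: np.ndarray) -> np.ndarray:
    """Weight each spatial location of a (C, H, W) feature map by a
    softmax over its channel-mean activation (a generic attention gate)."""
    c, h, w = feat.shape
    scores = feat.mean(axis=0).reshape(-1)        # one score per location
    weights = softmax(scores).reshape(1, h, w)    # sums to 1 over space
    return feat * weights * (h * w)               # rescale to keep magnitude

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 4, 4))
y = spatial_attention(x)
print(y.shape)  # (8, 4, 4)
```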

Pages: 82-102. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12921089/pdf/
Citations: 0
SSL-DA: Semi-and Self-Supervised Learning with Dual Attention for Echocardiogram Segmentation.
Pub Date : 2026-02-01 Epub Date: 2025-05-12 DOI: 10.1007/s10278-025-01532-4
Lin Lv, Xing Han, Zhengxiang Sun, Zhaoguang Li, Xiuying Wang, Tong Jiang, Yiren Liu, Tianshu Li, Jingjing Xu, Liangzhen You, Guihua Yao, Feng-Rong Sun, Jianping Xing

Echocardiogram analysis plays a crucial role in assessing and diagnosing cardiac function, providing essential data to support medical diagnoses of heart disease. A key task, accurately identifying and segmenting the left ventricle (LV) in echocardiograms, remains challenging and labor-intensive. Current automated cardiac segmentation methods often lack the necessary accuracy and reproducibility, while semi-automated or manual annotations are excessively time-consuming. To address these limitations, we propose a novel segmentation framework, semi- and self-supervised learning with dual attention (SSL-DA), for echocardiogram segmentation. We start with a temporal masking network for pre-training. This network captures valuable information, such as echocardiogram periodicity, and provides optimized initialization parameters for LV segmentation. We then employ a semi-supervised network to automatically segment the left ventricle, enhancing the model's learning with channel and spatial attention mechanisms to capture global channel dependencies and spatial dependencies across annotations. We evaluated SSL-DA on the publicly available EchoNet-Dynamic dataset, achieving a Dice similarity coefficient of 93.34% (95% CI, 93.23-93.46%), outperforming most prior CNN-based models. To further assess the generalization ability of SSL-DA, we conducted ablation experiments on the CAMUS dataset. Experimental results confirm that SSL-DA can quickly and accurately segment the left ventricle in echocardiograms, showing its potential for robust clinical application.
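Confidence intervals like the 95% CI reported for the Dice score are commonly obtained by a percentile bootstrap over per-patient scores. The abstract does not state the authors' method, so the numpy sketch below is illustrative only, with synthetic scores:

```python
import numpy as np

def bootstrap_ci(values, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for the mean of per-patient Dice scores."""
    rng = np.random.default_rng(seed)
    values = np.asarray(values, dtype=float)
    means = np.array([
        rng.choice(values, size=len(values), replace=True).mean()
        for _ in range(n_boot)
    ])
    lo, hi = np.quantile(means, [alpha / 2, 1 - alpha / 2])
    return values.mean(), lo, hi

# synthetic per-patient Dice scores around 0.93 (illustrative data)
scores = np.clip(np.random.default_rng(1).normal(0.93, 0.02, size=500), 0, 1)
mean, lo, hi = bootstrap_ci(scores)
print(f"DSC {mean:.4f} (95% CI {lo:.4f}-{hi:.4f})")
```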

Pages: 948-961. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12920964/pdf/
Citations: 0
Deep Learning on Misaligned Dual-Energy Chest X-ray Images Using Paired Cycle-Consistent Generative Adversarial Networks.
Pub Date : 2026-02-01 Epub Date: 2025-05-05 DOI: 10.1007/s10278-025-01508-4
Yasuyuki Ueda, Misato Niu, Riko Shimazaki, Asumi Yamazaki, Masashi Seki, Takayuki Ishida

Dual-energy subtraction (DES) chest X-ray images (CXRs) are often affected by motion artifacts resulting from patients' voluntary or involuntary movements, even in clinical settings. Additionally, the mediastinum and upper abdominal regions in low-energy (LE) CXRs are susceptible to signal insufficiency due to inadequate input photon numbers. Current image processing techniques for removing motion artifacts and statistical noise from DES-CXRs are insufficient, and potential algorithms for these tasks remain largely unexplored. We propose a framework based on paired cycle-consistent generative adversarial networks to effectively remove motion artifacts and statistical noise from DES-CXRs. The proposed method incorporates ensemble discriminators, differentiable augmentation, anti-aliased convolution layers, and a basic 8-layer U-Net generator. It was trained and tested on a clinical image dataset comprising 600 examinations of individuals who underwent dual-energy chest X-ray imaging for diagnostic purposes, using sixfold cross-validation. The method markedly improved motion artifact suppression: the full width at 10% of maximum for the left-lung region of interest (including the cardiac region) improved from 0.216 ± 0.0720 to 0.200 ± 0.0783. Furthermore, it outperformed a previously reported method, achieving a peak signal-to-noise ratio of 50.7 ± 3.68 and a structural similarity index of 0.997 ± 0.0152 for LE images, and a Fréchet inception distance of 85.0 ± 3.52 for bone-suppressed DES images. The proposed method significantly outperforms existing techniques for removing motion artifacts and statistical noise and shows strong potential for clinical applications in chest X-ray imaging.
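The peak signal-to-noise ratio quoted above follows the standard definition, 10 log10(MAX² / MSE). A minimal numpy sketch, illustrative and not the authors' evaluation code:

```python
import numpy as np

def psnr(img, ref, data_range=255.0):
    """Peak signal-to-noise ratio in dB between an image and a reference."""
    img = np.asarray(img, dtype=float)
    ref = np.asarray(ref, dtype=float)
    mse = np.mean((img - ref) ** 2)
    if mse == 0:
        return float("inf")   # identical images
    return 10.0 * np.log10(data_range ** 2 / mse)

ref = np.full((8, 8), 100.0)
noisy = ref + 5.0             # constant error of 5 -> MSE = 25
print(round(psnr(noisy, ref), 2))  # 10*log10(255^2/25) ≈ 34.15
```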

Pages: 827-841. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12920848/pdf/
Citations: 0
Optimized Feature Selection and Deep Neural Networks to Improve Heart Disease Prediction.
Pub Date : 2026-02-01 Epub Date: 2025-04-16 DOI: 10.1007/s10278-025-01435-4
Changming Tan, Zhaoshun Yuan, Feng Xu, Dang Xie

Heart disease remains a significant health threat due to its high mortality rate and increasing prevalence. Early prediction using basic physical markers from routine exams is crucial for timely diagnosis and intervention. However, manual analysis of large datasets can be labor-intensive and error-prone. Our goal is to rapidly and reliably predict cardiac disease from a variety of bodily signs. This research presents a unique model for heart disease prediction: a system that blends a deep convolutional neural network with a feature selection technique based on LinearSVC. This integrated feature selection method selects a subset of characteristics that are strongly linked with heart disease. We feed these features into the deep convolutional neural network that we constructed. Also, to improve the speed of the predictor and avoid gradient vanishing or explosion, the network's hyperparameters were tuned using a random search algorithm. The proposed method was evaluated using the UCI and MIT datasets. The predictor is evaluated using several indicators, such as accuracy, recall, precision, and F1 score. The results demonstrate that our model attains accuracy rates of 98.16%, 98.2%, 95.38%, and 97.84% on the UCI dataset, with an average MCC score of 90%. These results affirm the efficacy and reliability of the proposed technique for predicting heart disease.
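LinearSVC-based feature selection typically keeps the features whose absolute learned weights exceed a threshold (the idea behind scikit-learn's SelectFromModel). A dependency-light sketch with hypothetical weights, not the paper's actual model:

```python
import numpy as np

def select_by_weight(coef, threshold="mean"):
    """Keep features whose absolute linear-model weight exceeds a threshold,
    mimicking SelectFromModel over a fitted linear SVM's coefficients."""
    importance = np.abs(np.asarray(coef, dtype=float))
    if threshold == "mean":
        threshold = importance.mean()
    return np.flatnonzero(importance >= threshold)

# hypothetical weights from a fitted linear SVM over 6 features
coef = np.array([0.02, 1.3, -0.9, 0.05, 0.7, -0.01])
print(select_by_weight(coef))  # [1 2 4], the strongly weighted features
```

The selected column indices would then be used to slice the input matrix before it is fed to the downstream network.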

Pages: 908-925. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12920954/pdf/
Citations: 0
Improving Ovarian Cancer Subtyping with Computer Vision Models on Tiled Histopathological Images.
Pub Date : 2026-02-01 Epub Date: 2025-05-20 DOI: 10.1007/s10278-025-01546-y
Sterling Ramroach, Rikaard Hosein

Ovarian cancer remains one of the most challenging cancers to diagnose due to its non-specific symptoms, lack of reliable screening tests, and the complexity of detecting abnormalities. Accurate subtype classification is crucial for personalised treatment and improved patient outcomes. In this study, we developed a machine learning pipeline fine-tuning pre-trained computer vision models to classify ovarian cancer subtypes from whole slide images (WSI). Using targeted tissue masks for necrosis, stroma, and tumour regions as a proof of concept, we demonstrated the efficacy of tiling masked regions to transform a complex detection-then-classification problem into a simpler classification task. Our method achieved high accuracy in tile-level classification, with a subsequent extension to subtype classification via majority voting on tiled images. Precision exceeds 90% across subtypes, which highlights the potential of scalable, automated systems to assist in ovarian cancer diagnostics. These findings contribute to the broader field of computational pathology, paving the way for enhanced diagnostic consistency and accessibility in clinical settings.
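The tile-level-to-slide-level aggregation by majority voting can be sketched in a few lines; the subtype labels below are illustrative, not taken from the paper:

```python
from collections import Counter

def slide_label(tile_predictions):
    """Aggregate per-tile subtype predictions into one whole-slide label
    by majority vote (ties broken by first-encountered label)."""
    return Counter(tile_predictions).most_common(1)[0][0]

# hypothetical per-tile predictions for one whole slide image
tiles = ["HGSC", "HGSC", "CCOC", "HGSC", "LGSC"]
print(slide_label(tiles))  # HGSC
```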

Pages: 620-626. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12920868/pdf/
引用次数: 0
Prediction of Future Risk of Moderate to Severe Kidney Function Loss Using a Deep Learning Model-Enabled Chest Radiography.
Pub Date : 2026-02-01 Epub Date: 2025-04-02 DOI: 10.1007/s10278-025-01489-4
Kai-Chieh Chen, Shang-Yang Lee, Dung-Jang Tsai, Kai-Hsiung Ko, Yi-Chih Hsu, Wei-Chou Chang, Wen-Hui Fang, Chin Lin, Yu-Juei Hsu

Chronic kidney disease (CKD) remains a major public health concern, requiring better predictive models for early intervention. This study evaluates a deep learning model (DLM) that utilizes raw chest X-ray (CXR) data to predict moderate to severe kidney function decline. We analyzed data from 79,219 patients with an estimated Glomerular Filtration Rate (eGFR) between 65 and 120, segmented into development (n = 37,983), tuning (n = 15,346), internal validation (n = 14,113), and external validation (n = 11,777) sets. Our DLM, pretrained on CXR-report pairs, was fine-tuned with the development set. We retrospectively examined data spanning April 2011 to February 2022, with a 5-year maximum follow-up. Primary and secondary endpoints included CKD stage 3b progression, ESRD/dialysis, and mortality. The overall concordance index (C-index) values for the internal and external validation sets were 0.903 (95% CI, 0.885-0.922) and 0.851 (95% CI, 0.819-0.883), respectively. In these sets, the incidences of progression to CKD stage 3b at 5 years were 19.2% and 13.4% in the high-risk group, significantly higher than those in the median-risk (5.9% and 5.1%) and low-risk groups (0.9% and 0.9%), respectively. The sex, age, and eGFR-adjusted hazard ratios (HR) for the high-risk group compared to the low-risk group were 16.88 (95% CI, 10.84-26.28) and 7.77 (95% CI, 4.77-12.64), respectively. The high-risk group also exhibited higher probabilities of progressing to ESRD/dialysis or experiencing mortality compared to the low-risk group. Further analysis revealed that the high-risk group compared to the low/median-risk group had a higher prevalence of complications and abnormal blood/urine markers. Our findings demonstrate that a DLM utilizing CXR can effectively predict CKD stage 3b progression, offering a potential tool for early intervention in high-risk populations.
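The concordance index (C-index) reported above measures how often the model ranks patients' risks in the same order as their observed outcomes: among comparable pairs, the patient who progressed earlier should have the higher risk score. A minimal sketch of Harrell's C-index on toy data follows; the numbers are invented for illustration and this is not the study's evaluation code.

```python
from itertools import combinations

def concordance_index(times, events, risk_scores):
    """Harrell's C-index: over pairs where the earlier time has an
    observed event, count pairs in which the higher risk score goes
    with the shorter time (ties in score count as half)."""
    concordant, comparable = 0.0, 0
    for i, j in combinations(range(len(times)), 2):
        # order the pair so subject a has the earlier time
        a, b = (i, j) if times[i] < times[j] else (j, i)
        if times[a] == times[b] or not events[a]:
            continue  # pair is not comparable
        comparable += 1
        if risk_scores[a] > risk_scores[b]:
            concordant += 1.0
        elif risk_scores[a] == risk_scores[b]:
            concordant += 0.5
    return concordant / comparable

times = [2, 5, 7, 9]          # follow-up years (toy data)
events = [1, 1, 0, 1]         # 1 = progression observed, 0 = censored
risks = [0.9, 0.6, 0.4, 0.2]  # model risk scores
print(concordance_index(times, events, risks))  # -> 1.0
```

A C-index of 0.5 corresponds to random ranking and 1.0 to perfect ranking, so the reported 0.903 and 0.851 indicate strong risk discrimination.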

Journal of imaging informatics in medicine, pages 454-467.
Enhancing Burn Diagnosis through SE-ResNet18 and Confidence Filtering.
Pub Date : 2026-02-01 Epub Date: 2025-04-08 DOI: 10.1007/s10278-025-01495-6
Hanyue Mo, Ziwen Kuang, Haoxuan Wang, Xinyi Cai, Kun Cheng

Accurate classification of burn severity is crucial for effective clinical treatment; however, existing methods often fail to balance precision and real-time performance. To address this challenge, we propose a deep learning-based approach utilizing an enhanced ResNet18 architecture with integrated attention mechanisms to improve classification accuracy. The system consists of data preprocessing, classification, optimization, and post-processing modules. The optimization strategy employs an adaptive learning rate combining cosine annealing and class-specific gradient adaptation, alongside targeted adjustments for class imbalance, while an improved Adam optimizer enhances convergence stability. Post-processing incorporates confidence filtering (threshold 0.3) and selective evaluation, with weighted aggregation that integrates dynamic accuracy calculation and a moving average to refine predictions and enhance diagnostic reliability. Experimental results on a burn skin test dataset demonstrate that the proposed model achieves a classification accuracy of 99.19% ± 0.12 and a mean average precision (mAP) of 98.72% ± 0.10, highlighting its potential for real-time clinical burn assessment.
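Confidence filtering as described (accept a prediction only when the top class probability clears a threshold, here 0.3) can be sketched generically. The four-class softmax outputs below are made up for illustration, and this is a simplified reading of the technique, not the paper's implementation.

```python
import numpy as np

def filter_by_confidence(probs, threshold=0.3):
    """Split samples into confidently labelled ones and deferred
    ones, based on the maximum class probability per sample."""
    probs = np.asarray(probs)
    confidences = probs.max(axis=1)   # top softmax probability
    labels = probs.argmax(axis=1)     # predicted class index
    keep = confidences >= threshold
    # return kept labels and the indices of deferred samples
    return labels[keep], np.where(~keep)[0]

# Hypothetical softmax outputs for two samples over four classes
probs = [[0.70, 0.10, 0.10, 0.10],   # confident -> kept
         [0.26, 0.25, 0.25, 0.24]]   # below threshold -> deferred
labels, deferred = filter_by_confidence(probs)
print(labels.tolist(), deferred.tolist())  # -> [0] [1]
```

Deferred samples can then be routed to further review rather than forced into a low-confidence label, which is how filtering improves the reliability of the predictions that are kept.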

Journal of imaging informatics in medicine, pages 639-654.
A Dirichlet Distribution-Based Complex Ensemble Approach for Breast Cancer Classification from Ultrasound Images with Transfer Learning and Multiphase Spaced Repetition Method.
Pub Date : 2026-02-01 Epub Date: 2025-04-29 DOI: 10.1007/s10278-025-01515-5
Osman Güler

Breast ultrasound is a useful and rapid diagnostic tool for the early detection of breast cancer. Artificial intelligence-supported computer-aided decision systems, which assist expert radiologists and clinicians, provide reliable and rapid results. Deep learning methods and techniques are widely used in the field of health for early diagnosis, abnormality detection, and disease diagnosis. Therefore, in this study, a deep ensemble learning model based on Dirichlet distribution using pre-trained transfer learning models for breast cancer classification from ultrasound images is proposed. In the study, experiments were conducted using the Breast Ultrasound Images Dataset (BUSI). The dataset, which had an imbalanced class structure, was balanced using data augmentation techniques. DenseNet201, InceptionV3, VGG16, and ResNet152 models were used for transfer learning with fivefold cross-validation. Statistical analyses, including the ANOVA test and Tukey HSD test, were applied to evaluate the model's performance and ensure the reliability of the results. Additionally, Grad-CAM (Gradient-weighted Class Activation Mapping) was used for explainable AI (XAI), providing visual explanations of the deep learning model's decision-making process. The spaced repetition method, commonly used to improve the success of learners in educational sciences, was adapted to artificial intelligence in this study. The results of training with transfer learning models were used as input for further training, and spaced repetition was applied using previously learned information. The use of the spaced repetition method led to increased model success and reduced learning times. The weights obtained from the trained models were input into an ensemble learning system based on Dirichlet distribution with different variations. The proposed model achieved 99.60% validation accuracy on the dataset, demonstrating its effectiveness in breast cancer classification.
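One common reading of a Dirichlet-distribution-based ensemble is to sample candidate model weights from a Dirichlet prior and keep the weighting that performs best on validation data. The sketch below follows that reading under stated assumptions (toy softmax outputs from two hypothetical base models); the paper's exact weighting scheme may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

def best_dirichlet_weights(model_probs, y_val, n_draws=200):
    """Sample ensemble weights from a flat Dirichlet distribution
    and keep the draw that maximises validation accuracy."""
    stacked = np.stack(model_probs)           # (models, samples, classes)
    best_w, best_acc = None, -1.0
    for _ in range(n_draws):
        w = rng.dirichlet(np.ones(len(model_probs)))  # weights sum to 1
        blended = np.tensordot(w, stacked, axes=1)    # weighted average
        acc = float((blended.argmax(axis=1) == y_val).mean())
        if acc > best_acc:
            best_w, best_acc = w, acc
    return best_w, best_acc

# Toy class probabilities from two hypothetical base models
p1 = np.array([[0.9, 0.1], [0.2, 0.8], [0.6, 0.4]])
p2 = np.array([[0.4, 0.6], [0.1, 0.9], [0.7, 0.3]])
y_val = np.array([0, 1, 0])
weights, acc = best_dirichlet_weights([p1, p2], y_val)
print(acc)  # -> 1.0
```

Because every Dirichlet draw lies on the probability simplex, each candidate is a valid convex combination of the base models, which makes the search well-suited to blending transfer-learned classifiers.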

Journal of imaging informatics in medicine, pages 202-228.