Pub Date : 2025-12-26 · DOI : 10.1016/j.compmedimag.2025.102681
Xiaodong Zhou , Huibin Wang
In 3D reconstruction of coronary arteries from X-ray angiography, matching vessel branches across different viewpoints is a challenging task. In this study, the task is reformulated as vessel-branch instance segmentation followed by matching of branches that share the same color, and an instance segmentation network (YOLO-CAVBIS) is proposed specifically for deformed and dynamic vessels. First, because left and right coronary artery branches are difficult to distinguish, a coronary artery classification dataset is constructed and the left and right coronary arteries are classified with the YOLOv8-cls model; the classified images are then fed into two parallel YOLO-CAVBIS networks for coronary artery branch instance segmentation. Finally, branches assigned the same color in different viewpoints are matched. Experimental results show that the coronary artery classification model reaches 100% accuracy, while the proposed left and right coronary branch instance segmentation models reach mAP50 scores of 98.4% and 99.4%, respectively. In extracting features of deformed and dynamic vessels, the proposed YOLO-CAVBIS network shows greater specificity and superiority than other instance segmentation networks and can serve as a baseline model for the coronary artery branch instance segmentation task. Code repository: https://gitee.com/zaleman/ca_instance_segmentation, https://github.com/zaleman/ca_instance_segmentation.
Title : Research on X-ray coronary artery branches instance segmentation and matching task
Journal : Computerized Medical Imaging and Graphics, Volume 128, Article 102681
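The final matching step described in the abstract (pairing branches that receive the same color/class label in the two viewpoints) reduces to a simple join on shared labels. A minimal illustrative sketch, with hypothetical branch names and instance ids (the paper's exact data structures are not specified):

```python
def match_branches(view_a, view_b):
    """view_a, view_b: dicts mapping branch class name -> instance id in that view.

    Returns (class, id_in_a, id_in_b) triples for classes segmented in both
    viewpoints; branches visible in only one viewpoint are left unmatched.
    """
    common = view_a.keys() & view_b.keys()  # dict views support set intersection
    return sorted((c, view_a[c], view_b[c]) for c in common)

pairs = match_branches(
    {"LAD": 0, "LCX": 1, "D1": 2},   # hypothetical branches found in viewpoint A
    {"LAD": 3, "LCX": 4, "OM1": 5},  # hypothetical branches found in viewpoint B
)
print(pairs)  # [('LAD', 0, 3), ('LCX', 1, 4)]
```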
Pub Date : 2025-12-26 · DOI : 10.1016/j.compmedimag.2025.102695
Li Shiyan, Wang Shuqin, Gu Xin, Sun Debing
In recent years, semi-supervised learning (SSL) has attracted increasing attention in medical image analysis, showing great potential in scenarios with limited annotations. However, existing consistency regularization methods suffer from several limitations: overly uniform constraints at the output layer, lack of interaction within adversarial strategies, and reliance on external sample pools for sample estimation, which together lead to insufficient use of feature-level information and unstable training. To address these challenges, this paper proposes a novel semi-supervised framework, termed Feature-level multi-scale Consistency and Adversarial Training (FCAT). A multi-scale feature-level consistency mechanism is introduced to capture hierarchical structural representations through cross-level feature fusion, enabling robust feature alignment without relying on external sample pools. To overcome the limitation of unidirectional adversarial training, a bidirectional feature perturbation strategy is designed under a teacher–student collaboration scheme, where both models generate perturbations from their own gradients and enforce mutual consistency. In addition, an intrinsic evaluation mechanism based on entropy and complementary confidence is developed to rank unlabeled samples according to their information content, guiding the training process toward informative hard samples while reducing overfitting to trivial ones. Experiments on the balanced Pneumonia Chest X-ray and NCT-CRC-HE histopathology datasets, as well as the imbalanced ISIC 2019 dermoscopic skin lesion dataset, demonstrate that our FCAT achieves competitive performance and strong generalization across diverse imaging modalities and data distributions.
Title : Semi-supervised medical image classification via feature-level multi-scale consistency and adversarial training
Journal : Computerized Medical Imaging and Graphics, Volume 128, Article 102695
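The entropy-plus-complementary-confidence ranking that FCAT uses to prioritize informative unlabeled samples can be sketched roughly as below. The exact scoring formula is not given in the abstract, so the combination here (normalized predictive entropy plus one minus the maximum softmax probability) is an assumption for illustration:

```python
import numpy as np

def informativeness(probs, eps=1e-12):
    """probs: (N, C) softmax outputs for N unlabeled samples.

    Scores each sample by normalized predictive entropy plus a
    complementary-confidence term (1 - max probability); higher = harder,
    so ranking by descending score surfaces informative hard samples.
    """
    entropy = -np.sum(probs * np.log(probs + eps), axis=1) / np.log(probs.shape[1])
    complement = 1.0 - probs.max(axis=1)
    return entropy + complement

probs = np.array([[0.98, 0.01, 0.01],   # easy, confident sample
                  [0.40, 0.35, 0.25]])  # hard, ambiguous sample
ranking = np.argsort(-informativeness(probs))  # hardest samples first
```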
Pub Date : 2025-12-19 · DOI : 10.1016/j.compmedimag.2025.102690
Luohong Wu , Matthias Seibold , Nicola A. Cavalcanti , Giuseppe Loggia , Lisa Reissner , Bastian Sigrist , Jonas Hein , Lilian Calvet , Arnd Viehöfer , Philipp Fürnstahl
Background:
Bone surface reconstruction is an essential component of computer-assisted orthopedic surgery (CAOS), forming the foundation for both preoperative planning and intraoperative guidance. Compared to traditional imaging modalities such as computed tomography (CT) and magnetic resonance imaging (MRI), ultrasound, an emerging CAOS technology, provides a radiation-free, cost-effective, and portable alternative. While ultrasound offers new opportunities in CAOS, technical shortcomings continue to hinder its translation into surgery. In particular, due to the inherent limitations of ultrasound imaging, B-mode ultrasound typically captures only partial bone surfaces. The inter- and intra-operator variability in ultrasound scanning further increases the complexity of the data. Existing reconstruction methods struggle with such challenging data, leading to increased reconstruction errors and artifacts, such as holes and inflated structures. Effective techniques for accurately reconstructing open bone surfaces from real-world 3D ultrasound volumes remain lacking.
Methods:
We propose UltraBoneUDF, a self-supervised framework specifically designed for reconstructing open bone surfaces from ultrasound data. It learns unsigned distance functions (UDFs) from 3D ultrasound data. In addition, we present a novel loss function based on local tangent plane optimization that substantially improves surface reconstruction quality. UltraBoneUDF and competing models are benchmarked on three open-source datasets and further evaluated through ablation studies.
Results:
Qualitative results demonstrate the limitations of the state-of-the-art methods. Quantitatively, UltraBoneUDF achieves comparable or lower bi-directional Chamfer distance across three datasets with fewer parameters: 1.60 mm on the UltraBones100k dataset (≈25.5% improvement), 0.21 mm on the OpenBoneCT dataset, and 0.18 mm on the ClosedBoneCT dataset.
Conclusion:
UltraBoneUDF represents a promising solution for open bone surface reconstruction from 3D ultrasound volumes, with the potential to advance downstream applications in CAOS.
Title : UltraBoneUDF: Self-supervised bone surface reconstruction from ultrasound based on neural unsigned distance functions
Journal : Computerized Medical Imaging and Graphics, Volume 127, Article 102690
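The bi-directional Chamfer distance reported in the Results can be sketched as follows. This is one common convention (sum of the two mean nearest-neighbour distances); the paper's exact normalization is not stated in the abstract, so treat this as an illustrative definition:

```python
import numpy as np

def chamfer_bidirectional(A, B):
    """Bi-directional Chamfer distance between point sets A (n, 3) and B (m, 3):
    mean nearest-neighbour distance from A to B plus from B to A.
    Brute-force pairwise distances; fine for small point clouds."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)  # (n, m) pairwise
    return d.min(axis=1).mean() + d.min(axis=0).mean()

A = np.array([[0.0, 0, 0], [1, 0, 0]])
B = np.array([[0.0, 0, 0], [1, 0, 0], [1, 1, 0]])
# A's points all have exact matches in B; B's extra point is 1.0 away from A,
# so the distance is 0 + (0 + 0 + 1) / 3 = 1/3.
print(chamfer_bidirectional(A, B))
```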
Pub Date : 2025-12-19 · DOI : 10.1016/j.compmedimag.2025.102689
Yuheng Yang , Kun You , Haoyang He , Yuehua Zhang , Xue Feng , Fei Lu , Luping Fang , Yunling Wang , Qing Pan
Deep learning methods have been widely used in medical imaging, including rib segmentation. Nevertheless, their dependence on large annotated datasets poses a significant challenge, as expert-annotated rib Computed Tomography (CT) scans are notably scarce. An additional complication arises from domain shift, which often limits the direct applicability of models trained on public datasets to specific clinical tasks, thus requiring further resource-intensive annotations on target domains for adaptation. Although semi-supervised methods have been developed to mitigate annotation costs, the prevailing strategies largely remain at the sample level. This results in unavoidable redundancy within each annotated sample, making the process of labeling an entire CT scan exceedingly tedious and costly. To address these issues, we propose a semi-supervised approach named the Entropy-Guided Partial Annotation (EGPA) method for rib segmentation. This method actively identifies the most informative regions in images for annotation based on entropy metrics, thereby substantially reducing the workload for experts during both model training and cross-domain adaptation. By integrating contrastive learning, active learning, and self-training strategies, EGPA not only significantly saves annotation cost and time when training from scratch but also effectively addresses the challenges of migrating from source to target domains. On the public RibSegV2 dataset (source domain) and a private chest CT rib segmentation dataset (target domain), EGPA achieved Dice scores of 89.5 and 90.7, respectively, nearly matching the performance of fully supervised models (89.9 and 91.2) with only 19% and 18% of the full annotation workload. This remarkable reduction in annotation effort shortens the development timeline for reliable segmentation tools and enhances their clinical feasibility. By simplifying the creation of high-quality annotated datasets, our approach facilitates the broad deployment of rib analysis tools in varied clinical settings, promoting standardized and efficient diagnostic practices.
Title : Entropy-guided partial annotation for cross-domain rib segmentation
Journal : Computerized Medical Imaging and Graphics, Volume 127, Article 102689
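The entropy-guided region selection at the heart of EGPA can be illustrated with a toy 2D sketch: compute per-voxel predictive entropy, average it over regions, and propose the most uncertain regions for expert annotation. The patch size, binary-entropy formulation, and ranking scheme here are assumptions for illustration, since the abstract does not specify them:

```python
import numpy as np

def region_entropy_ranking(prob, patch=4, eps=1e-12):
    """prob: (H, W) predicted foreground probabilities for one slice.

    Computes per-voxel binary entropy, averages it over non-overlapping
    patch x patch regions, and returns region coordinates sorted by mean
    entropy (most uncertain first) -- candidates for partial annotation.
    """
    h = -(prob * np.log(prob + eps) + (1 - prob) * np.log(1 - prob + eps))
    H, W = prob.shape
    regions = h.reshape(H // patch, patch, W // patch, patch).mean(axis=(1, 3))
    order = np.argsort(-regions, axis=None)            # flat indices, descending
    return np.unravel_index(order, regions.shape)      # (row_idx, col_idx) arrays

prob = np.full((8, 8), 0.99)   # mostly confident predictions
prob[0:4, 4:8] = 0.5           # one highly uncertain region
rows, cols = region_entropy_ranking(prob, patch=4)
# The uncertain top-right region (0, 1) is proposed for annotation first.
```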
Pub Date : 2025-12-19 · DOI : 10.1016/j.compmedimag.2025.102692
Qiongmin Zhang, Siyi Yu, Yin Shi, Xiaowei Tan
Alzheimer’s Disease (AD) is a progressive neurodegenerative disorder characterized by cognitive decline and memory loss. Magnetic Resonance Imaging (MRI) plays a key role in detecting AD in Computer-aided Diagnosis (CAD) systems. However, variations in MRI scanners and imaging protocols introduce domain shifts, which significantly degrade model performance. Additionally, CAD models may misdiagnose unfamiliar neurodegenerative diseases not represented during training. In these complex and diverse clinical scenarios, achieving accurate AD diagnosis with closed set domain adaptation methods is substantially challenging. We propose a Hyperspherical Weighted Adversarial Learning-based Open Set Domain Adaptation (HWAL-OSDA) method for AD diagnosis. We introduce a voxel-based 3D feature extraction and fusion module to effectively capture and integrate MRI spatial features, and employ a Multi-scale and Dual Attention Aggregation block to focus on disease-sensitive regions. To overcome the dispersion of feature distributions in high-dimensional space, a hyperspherical variational auto-encoder module is incorporated to improve the learning of latent feature representations on a hypersphere. Furthermore, the spherical angular distance-based triplet loss and margin-based loss in the cross-domain alignment and separation module enhance the separability of known classes and establish a clear decision boundary between known and unknown classes. To improve the positive transfer of known samples and reduce the negative transfer of unknown samples, we design a weighted adversarial domain adaptation module that utilizes a dynamic instance-level weighting scheme, combining the Weibull distribution with entropy. Experiments on the ADNI and PPMI datasets show that HWAL-OSDA achieves average accuracies of 94.2%, 83.68%, and 77.83% on the three-way classification tasks (AD vs. CN vs. Unk, MCI vs. CN vs. Unk, and AD vs. MCI vs. Unk), outperforming traditional and state-of-the-art OSDA methods. This approach offers a practical reference for CAD of AD and other neurodegenerative diseases in open clinical settings.
Title : Improving Alzheimer’s disease diagnosis by hyperspherical weighted adversarial learning in open set domain adaptation
Journal : Computerized Medical Imaging and Graphics, Volume 127, Article 102692
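The spherical angular distance-based triplet loss mentioned in the abstract can be sketched in the following minimal form: measure geodesic (arc-length) distance between unit-normalized features and apply the usual triplet hinge. This is an illustrative sketch; the paper's exact formulation, margin, and batching are not given in the abstract:

```python
import numpy as np

def angular_triplet_loss(anchor, pos, neg, margin=0.5):
    """Triplet hinge loss using geodesic (angular) distance on the unit
    hypersphere: pulls the anchor toward the positive and pushes it from
    the negative by at least `margin` radians.  A sketch only."""
    def ang(u, v):
        # arccos of cosine similarity = arc length between unit vectors
        u, v = u / np.linalg.norm(u), v / np.linalg.norm(v)
        return np.arccos(np.clip(u @ v, -1.0, 1.0))
    return max(0.0, ang(anchor, pos) - ang(anchor, neg) + margin)

a = np.array([1.0, 0.0])   # anchor feature
p = np.array([1.0, 0.1])   # positive: nearly the same direction
n = np.array([0.0, 1.0])   # negative: orthogonal direction
loss = angular_triplet_loss(a, p, n)  # 0.0: margin already satisfied
```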