
Latest Publications in Computerized Medical Imaging and Graphics

(Title not captured) IF 4.9 | Zone 2 (Medicine) | Q1 ENGINEERING, BIOMEDICAL | Pub Date: 2026-01-01 | Vol. 129, Article 102712 | Citations: 0
(Title not captured) IF 4.9 | Zone 2 (Medicine) | Q1 ENGINEERING, BIOMEDICAL | Pub Date: 2026-01-01 | Vol. 128, Article 102716 | Citations: 0
(Title not captured) IF 4.9 | Zone 2 (Medicine) | Q1 ENGINEERING, BIOMEDICAL | Pub Date: 2026-01-01 | Vol. 128, Article 102702 | Citations: 0
(Title not captured) IF 4.9 | Zone 2 (Medicine) | Q1 ENGINEERING, BIOMEDICAL | Pub Date: 2026-01-01 | Vol. 128, Article 102715 | Citations: 0
(Title not captured) IF 4.9 | Zone 2 (Medicine) | Q1 ENGINEERING, BIOMEDICAL | Pub Date: 2026-01-01 | Vol. 128, Article 102721 | Citations: 0
Research on X-ray coronary artery branches instance segmentation and matching task
IF 4.9 | Zone 2 (Medicine) | Q1 ENGINEERING, BIOMEDICAL | Pub Date: 2025-12-26 | DOI: 10.1016/j.compmedimag.2025.102681
Xiaodong Zhou , Huibin Wang
In 3D reconstruction of X-ray coronary arteries, matching vessel branches across different viewpoints is challenging. In this study, the task is reformulated as instance segmentation of vessel branches followed by matching branches of the same color, and an instance segmentation network (YOLO-CAVBIS) is proposed specifically for deformed and dynamic vessels. First, because the left and right coronary artery branches are not easy to distinguish, a coronary artery classification dataset is produced and the left and right coronary arteries are classified using the YOLOv8-cls classification model; the classified images are then fed into two parallel YOLO-CAVBIS networks for coronary artery branch instance segmentation. Finally, branches sharing the same color across different viewpoints are matched. Experimental results show that the coronary artery classification model reaches 100% accuracy, while the proposed left and right coronary branch instance segmentation models reach mAP50 of 98.4% and 99.4%, respectively. In extracting features of deformed and dynamic vessels, the proposed YOLO-CAVBIS network demonstrates greater specificity and superiority than other instance segmentation networks and can serve as a baseline model for the task of coronary artery branch instance segmentation. Code repository: https://gitee.com/zaleman/ca_instance_segmentation, https://github.com/zaleman/ca_instance_segmentation.
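The final matching step described above can be sketched as a label lookup across viewpoints. This is an illustrative sketch, not the authors' implementation: the branch labels and the `view_a`/`view_b` dictionaries are assumed stand-ins for the per-viewpoint outputs of the two parallel segmentation networks.

```python
# Hedged sketch: after the two parallel segmentation networks assign each
# vessel branch a class label ("color"), branches observed from different
# viewpoints are paired by that shared label. The networks themselves are
# not reproduced here; inputs are illustrative.

def match_branches_by_label(view_a, view_b):
    """Pair branch instances sharing a class label across two viewpoints.

    view_a, view_b: dict mapping branch label (e.g. 'LAD', 'LCX') to an
    instance id in that viewpoint. Returns a list of (label, id_a, id_b).
    """
    common = view_a.keys() & view_b.keys()  # labels visible in both views
    return [(label, view_a[label], view_b[label]) for label in sorted(common)]

# Illustrative branch names only; real labels come from the dataset's scheme.
pairs = match_branches_by_label(
    {"LAD": 0, "LCX": 1, "D1": 2},
    {"LAD": 3, "LCX": 4},
)
```

Branches seen in only one viewpoint are simply left unmatched, which mirrors the partial visibility of vessels in single X-ray projections.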
Citations: 0
Semi-supervised medical image classification via feature-level multi-scale consistency and adversarial training
IF 4.9 | Zone 2 (Medicine) | Q1 ENGINEERING, BIOMEDICAL | Pub Date: 2025-12-26 | DOI: 10.1016/j.compmedimag.2025.102695
Li Shiyan, Wang Shuqin, Gu Xin, Sun Debing
In recent years, semi-supervised learning (SSL) has attracted increasing attention in medical image analysis, showing great potential in scenarios with limited annotations. However, existing consistency regularization methods suffer from several limitations: overly uniform constraints at the output layer, lack of interaction within adversarial strategies, and reliance on external sample pools for sample estimation, which together lead to insufficient use of feature-level information and unstable training. To address these challenges, this paper proposes a novel semi-supervised framework, termed Feature-level multi-scale Consistency and Adversarial Training (FCAT). A multi-scale feature-level consistency mechanism is introduced to capture hierarchical structural representations through cross-level feature fusion, enabling robust feature alignment without relying on external sample pools. To overcome the limitation of unidirectional adversarial training, a bidirectional feature perturbation strategy is designed under a teacher–student collaboration scheme, where both models generate perturbations from their own gradients and enforce mutual consistency. In addition, an intrinsic evaluation mechanism based on entropy and complementary confidence is developed to rank unlabeled samples according to their information content, guiding the training process toward informative hard samples while reducing overfitting to trivial ones. Experiments on the balanced Pneumonia Chest X-ray and NCT-CRC-HE histopathology datasets, as well as the imbalanced ISIC 2019 dermoscopic skin lesion dataset, demonstrate that our FCAT achieves competitive performance and strong generalization across diverse imaging modalities and data distributions.
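The entropy-and-complementary-confidence ranking of unlabeled samples can be illustrated with a minimal sketch. The combination weight `alpha` and the exact scoring form are assumptions for illustration; FCAT's actual mechanism may combine the terms differently.

```python
import math

# Hedged sketch: score each unlabeled sample's predicted class distribution by
# entropy (uncertainty) plus a complementary-confidence term, so that training
# can prioritize informative hard samples over trivially confident ones.

def entropy(probs):
    """Shannon entropy of a discrete distribution (natural log)."""
    return -sum(p * math.log(p) for p in probs if p > 0.0)

def sample_score(probs, alpha=0.5):
    # Complementary confidence: how far the top prediction is from certainty.
    comp_conf = 1.0 - max(probs)
    return alpha * entropy(probs) + (1 - alpha) * comp_conf

def rank_unlabeled(batch):
    """Return sample indices ordered most-informative first."""
    return sorted(range(len(batch)),
                  key=lambda i: sample_score(batch[i]), reverse=True)

order = rank_unlabeled([
    [0.98, 0.01, 0.01],  # confident -> least informative
    [0.34, 0.33, 0.33],  # near-uniform -> most informative
    [0.70, 0.20, 0.10],  # in between
])
```

Ranking by an intrinsic score of the model's own predictions is what removes the dependence on an external sample pool that the abstract criticizes.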
Citations: 0
UltraBoneUDF: Self-supervised bone surface reconstruction from ultrasound based on neural unsigned distance functions
IF 4.9 | Zone 2 (Medicine) | Q1 ENGINEERING, BIOMEDICAL | Pub Date: 2025-12-19 | DOI: 10.1016/j.compmedimag.2025.102690
Luohong Wu , Matthias Seibold , Nicola A. Cavalcanti , Giuseppe Loggia , Lisa Reissner , Bastian Sigrist , Jonas Hein , Lilian Calvet , Arnd Viehöfer , Philipp Fürnstahl

Background:

Bone surface reconstruction is an essential component of computer-assisted orthopedic surgery (CAOS), forming the foundation for both preoperative planning and intraoperative guidance. Compared to traditional imaging modalities such as computed tomography (CT) and magnetic resonance imaging (MRI), ultrasound, an emerging CAOS technology, provides a radiation-free, cost-effective, and portable alternative. While ultrasound offers new opportunities in CAOS, technical shortcomings continue to hinder its translation into surgery. In particular, due to the inherent limitations of ultrasound imaging, B-mode ultrasound typically captures only partial bone surfaces. The inter- and intra-operator variability in ultrasound scanning further increases the complexity of the data. Existing reconstruction methods struggle with such challenging data, leading to increased reconstruction errors and artifacts, such as holes and inflated structures. Effective techniques for accurately reconstructing open bone surfaces from real-world 3D ultrasound volumes remain lacking.

Methods:

We propose UltraBoneUDF, a self-supervised framework specifically designed for reconstructing open bone surfaces from ultrasound data. It learns unsigned distance functions (UDFs) from 3D ultrasound data. In addition, we present a novel loss function based on local tangent plane optimization that substantially improves surface reconstruction quality. UltraBoneUDF and competing models are benchmarked on three open-source datasets and further evaluated through ablation studies.

Results:

Qualitative results demonstrate the limitations of the state-of-the-art methods. Quantitatively, UltraBoneUDF achieves comparable or lower bi-directional Chamfer distance across the three datasets with fewer parameters: 1.60 mm on the UltraBones100k dataset (≈25.5% improvement), 0.21 mm on the OpenBoneCT dataset, and 0.18 mm on the ClosedBoneCT dataset.

Conclusion:

UltraBoneUDF represents a promising solution for open bone surface reconstruction from 3D ultrasound volumes, with the potential to advance downstream applications in CAOS.
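The bi-directional Chamfer distance reported in the results above can be sketched as follows. This is a minimal brute-force version; the averaging convention (mean of the two directional terms) is an assumption, since definitions vary, and dense surface evaluations would typically use a KD-tree for the nearest-neighbour queries rather than the O(n·m) loop shown here.

```python
import math

# Hedged sketch of the evaluation metric: for two point sets A and B, average
# each point's distance to its nearest neighbour in the other set, in both
# directions, then take the mean of the two directional averages.

def nearest_dist(p, points):
    """Euclidean distance from point p to its nearest neighbour in points."""
    return min(math.dist(p, q) for q in points)

def chamfer_bidirectional(a, b):
    d_ab = sum(nearest_dist(p, b) for p in a) / len(a)
    d_ba = sum(nearest_dist(q, a) for q in b) / len(b)
    return 0.5 * (d_ab + d_ba)

# Two surfaces offset by 0.1 along y (illustrative 2D points).
d = chamfer_bidirectional([(0.0, 0.0), (1.0, 0.0)],
                          [(0.0, 0.1), (1.0, 0.1)])
```

Because the metric is symmetric in the two sets, it penalizes both missing surface (holes) and spurious surface (inflated structures), the two artifact types the abstract highlights.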
Citations: 0
Entropy-guided partial annotation for cross-domain rib segmentation
IF 4.9 | Zone 2 (Medicine) | Q1 ENGINEERING, BIOMEDICAL | Pub Date: 2025-12-19 | DOI: 10.1016/j.compmedimag.2025.102689
Yuheng Yang , Kun You , Haoyang He , Yuehua Zhang , Xue Feng , Fei Lu , Luping Fang , Yunling Wang , Qing Pan
Deep learning methods have been widely used in medical imaging, including rib segmentation. Nevertheless, their dependence on large annotated datasets poses a significant challenge, as expert-annotated rib Computed Tomography (CT) scans are notably scarce. An additional complication arises from domain shift, which often limits the direct applicability of models trained on public datasets to specific clinical tasks, thus requiring further resource-intensive annotations on target domains for adaptation. Although semi-supervised methods have been developed to mitigate annotation costs, the prevailing strategies largely remain at the sample level. This results in unavoidable redundancy within each annotated sample, making the process of labeling an entire CT scan exceedingly tedious and costly. To address these issues, we propose a semi-supervised approach named the Entropy-Guided Partial Annotation (EGPA) method for rib segmentation. This method actively identifies the most informative regions in images for annotation based on entropy metrics, thereby substantially reducing the workload for experts during both model training and cross-domain adaptation. By integrating contrastive learning, active learning, and self-training strategies, EGPA not only significantly saves annotation cost and time when training from scratch but also effectively addresses the challenges of migrating from source to target domains. On the public RibSegV2 dataset (source domain) and a private chest CT rib segmentation dataset (target domain), EGPA achieved Dice scores of 89.5 and 90.7, respectively, nearly matching the performance of fully supervised models (89.9 and 91.2) with only 19% and 18% of the full annotation workload. This remarkable reduction in annotation effort shortens the development timeline for reliable segmentation tools and enhances their clinical feasibility. By simplifying the creation of high-quality annotated datasets, our approach facilitates the broad deployment of rib analysis tools in varied clinical settings, promoting standardized and efficient diagnostic practices.
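The entropy-guided region selection at the heart of EGPA can be illustrated with a minimal sketch: score candidate regions by the model's predictive uncertainty and request annotation only for the most uncertain ones. The region granularity, the binary-entropy form, and the budget `k` are illustrative assumptions, not the paper's exact formulation.

```python
import math

# Hedged sketch: given per-region mean predicted foreground (rib) probability,
# compute binary entropy as an uncertainty score and select the top-k most
# uncertain regions for expert annotation, skipping confidently predicted ones.

def binary_entropy(p):
    """Entropy of a Bernoulli(p) prediction; 0 for fully confident p."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -(p * math.log(p) + (1.0 - p) * math.log(1.0 - p))

def pick_regions(region_probs, k):
    """region_probs: dict region_id -> mean predicted rib probability.
    Returns the k region ids with highest predictive entropy."""
    return sorted(region_probs,
                  key=lambda r: binary_entropy(region_probs[r]),
                  reverse=True)[:k]

# Illustrative region ids and probabilities.
chosen = pick_regions({"r1": 0.97, "r2": 0.52, "r3": 0.05, "r4": 0.40}, k=2)
```

Regions the model already predicts with near-certainty (here `r1` and `r3`) are excluded from the annotation budget, which is how partial annotation avoids the per-sample redundancy criticized in the abstract.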
Citations: 0
Improving Alzheimer’s disease diagnosis by hyperspherical weighted adversarial learning in open set domain adaptation
IF 4.9 | Zone 2 (Medicine) | Q1 ENGINEERING, BIOMEDICAL | Pub Date: 2025-12-19 | DOI: 10.1016/j.compmedimag.2025.102692
Qiongmin Zhang, Siyi Yu, Yin Shi, Xiaowei Tan
Alzheimer’s Disease (AD) is a progressive neurodegenerative disorder characterized by cognitive decline and memory loss. Magnetic Resonance Imaging (MRI) plays a key role in detecting AD in Computer-aided Diagnosis (CAD) systems. However, variations in MRI scanners and imaging protocols introduce domain shifts, which significantly degrade model performance. Additionally, CAD models may misdiagnose unfamiliar neurodegenerative diseases not represented during training. In these complex and diverse clinical scenarios, employing closed set domain adaptation methods to achieve accurate diagnosis of AD presents substantial challenges. We propose a Hyperspherical Weighted Adversarial Learning-based Open Set Domain Adaptation (HWAL-OSDA) method for AD diagnosis. We introduce a voxel-based 3D feature extraction and fusion module to effectively capture and integrate MRI spatial features and employ a Multi-scale and Dual Attention Aggregation block to focus on disease-sensitive regions. To overcome the dispersion of feature distributions in high-dimensional space, a hyperspherical variational auto-encoder module is incorporated to improve the learning of latent feature representations on a hypersphere. Furthermore, the spherical angular distance-based triplet loss and margin-based loss in the cross-domain alignment and separation module enhance the separability of known classes and establish a clear decision boundary between known and unknown classes. To improve the positive transfer of known samples and reduce the negative transfer of unknown samples, we design a weighted adversarial domain adaptation module that utilizes a dynamic instance-level weighting scheme, combining the Weibull distribution with entropy. Experiments on the ADNI and PPMI datasets show that HWAL-OSDA achieves average accuracies of 94.2%, 83.68%, and 77.83% across three-way classification tasks (AD vs. CN vs. Unk, MCI vs. CN vs. Unk, and AD vs. MCI vs. Unk), outperforming traditional and state-of-the-art OSDA methods. This approach offers a practical reference for CAD of AD and other neurodegenerative diseases in open clinical settings.
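The spherical angular distance underlying the triplet loss above can be sketched as the arc angle between unit vectors on the hypersphere. The margin value below is an assumption for illustration, not a value reported in the paper.

```python
import math

# Hedged sketch: for unit-norm embeddings u, v on a hypersphere, the spherical
# angular distance is the arc angle arccos(<u, v>). A triplet loss built on it
# pulls anchor-positive pairs together and pushes anchor-negative pairs apart
# by at least a margin, measured in radians.

def angular_dist(u, v):
    """Arc angle between two unit vectors (radians)."""
    dot = sum(a * b for a, b in zip(u, v))
    # Clamp for floating-point safety before arccos.
    return math.acos(max(-1.0, min(1.0, dot)))

def triplet_loss(anchor, pos, neg, margin=0.5):
    """Hinge triplet loss on spherical angular distance; margin is illustrative."""
    return max(0.0, angular_dist(anchor, pos) - angular_dist(anchor, neg) + margin)
```

Measuring distance as an angle rather than a Euclidean gap is consistent with constraining the latent representations to a hypersphere, as the abstract's variational auto-encoder module does.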
Citations: 0