
Latest publications in Computerized Medical Imaging and Graphics

ScribSD+: Scribble-supervised medical image segmentation based on simultaneous multi-scale knowledge distillation and class-wise contrastive regularization
IF 5.4 · Medicine (Tier 2) · Q1 ENGINEERING, BIOMEDICAL · Pub Date: 2024-07-09 · DOI: 10.1016/j.compmedimag.2024.102416

Although deep learning has achieved state-of-the-art performance for automatic medical image segmentation, it often requires a large amount of pixel-level manual annotation for training. Obtaining these high-quality annotations is time-consuming and requires specialized knowledge, which hinders the widespread adoption of models that depend on such annotations for good segmentation performance. Using scribble annotations can substantially reduce the annotation cost, but often leads to poor segmentation performance due to insufficient supervision. In this work, we propose a novel framework named ScribSD+ that is based on multi-scale knowledge distillation and class-wise contrastive regularization for learning from scribble annotations. For a student network supervised by scribbles and a teacher updated by Exponential Moving Average (EMA), we first introduce multi-scale prediction-level Knowledge Distillation (KD) that leverages soft predictions of the teacher network to supervise the student at multiple scales, and then propose class-wise contrastive regularization, which encourages feature similarity within the same class and dissimilarity across different classes, thereby effectively improving the segmentation performance of the student network. Experimental results on the ACDC dataset for heart structure segmentation and a fetal MRI dataset for placenta and fetal brain segmentation demonstrate that our method significantly improves the student’s performance and outperforms five state-of-the-art scribble-supervised learning methods. Consequently, the method has potential for reducing the annotation cost of developing deep learning models for clinical diagnosis.
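The multi-scale prediction-level KD term can be sketched as a temperature-softened KL divergence between teacher and student predictions, averaged over decoder scales. Below is a minimal NumPy illustration; the function names and the usual T² loss scaling are our assumptions, not details from the paper:

```python
import numpy as np

def softmax(logits, T=1.0, axis=-1):
    """Temperature-scaled softmax over the class axis."""
    z = logits / T
    z = z - z.max(axis=axis, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def kd_loss_multiscale(student_logits, teacher_logits, T=2.0):
    """Average temperature-softened KL(teacher || student) over scales.

    student_logits / teacher_logits: lists of (H, W, C) logit maps,
    one per decoder scale -- a simplified stand-in for ScribSD+'s
    prediction-level distillation.
    """
    total = 0.0
    for s, t in zip(student_logits, teacher_logits):
        p_t = softmax(t, T)
        p_s = softmax(s, T)
        kl = np.sum(p_t * (np.log(p_t + 1e-8) - np.log(p_s + 1e-8)), axis=-1)
        total += kl.mean() * T * T  # T^2 keeps gradient magnitude comparable
    return total / len(student_logits)
```

The loss is zero when the student matches the teacher exactly and positive otherwise, so minimizing it pulls the student's multi-scale predictions toward the teacher's soft targets.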

Citations: 0
A comprehensive approach for evaluating lymphovascular invasion in invasive breast cancer: Leveraging multimodal MRI findings, radiomics, and deep learning analysis of intra- and peritumoral regions
IF 5.4 · Medicine (Tier 2) · Q1 ENGINEERING, BIOMEDICAL · Pub Date: 2024-07-08 · DOI: 10.1016/j.compmedimag.2024.102415

Purpose

To evaluate lymphovascular invasion (LVI) in breast cancer by comparing the diagnostic performance of preoperative multimodal magnetic resonance imaging (MRI)-based radiomics and deep-learning (DL) models.

Methods

This retrospective study included 262 patients with breast cancer—183 in the training cohort (144 LVI-negative and 39 LVI-positive cases) and 79 in the validation cohort (59 LVI-negative and 20 LVI-positive cases). Radiomics features were extracted from the intra- and peritumoral breast regions using multimodal MRI to generate gross tumor volume (GTV)_radiomics and gross tumor volume plus peritumoral volume (GPTV)_radiomics. Subsequently, DL models (GTV_DL and GPTV_DL) were constructed based on the GTV and GPTV to determine the LVI status. Finally, the most effective radiomics and DL models were integrated with imaging findings to establish a hybrid model, which was converted into a nomogram to quantify the LVI risk.
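The GPTV region described above is the tumor plus a peritumoral rim, which in practice can be obtained by morphologically dilating the GTV mask. A minimal pure-NumPy sketch; the margin and the face-connectivity structuring element are illustrative assumptions, since the exact expansion used in the study is not stated here:

```python
import numpy as np

def binary_dilate(mask):
    """One step of face-connectivity binary dilation (pure NumPy)."""
    out = mask.copy()
    for ax in range(mask.ndim):
        for shift in (1, -1):
            shifted = np.zeros_like(mask)
            src = [slice(None)] * mask.ndim
            dst = [slice(None)] * mask.ndim
            src[ax] = slice(None, -1) if shift == 1 else slice(1, None)
            dst[ax] = slice(1, None) if shift == 1 else slice(None, -1)
            shifted[tuple(dst)] = mask[tuple(src)]
            out |= shifted
    return out

def make_gptv(gtv_mask, margin_voxels=3):
    """Expand a binary GTV mask by a voxel margin to obtain a
    GPTV-style region (tumor plus peritumoral rim)."""
    out = gtv_mask.astype(bool)
    for _ in range(margin_voxels):
        out = binary_dilate(out)
    return out
```

Radiomics features extracted from `make_gptv(gtv)` would then cover both intra- and peritumoral tissue, which is the distinction between GTV_radiomics and GPTV_radiomics above.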

Results

The diagnostic efficiency of GPTV_DL was superior to that of GTV_DL (areas under the curve [AUCs], 0.771 and 0.720, respectively). Similarly, GPTV_radiomics outperformed GTV_radiomics (AUCs, 0.685 and 0.636, respectively). Univariate and multivariate logistic regression analyses identified imaging findings, such as MRI-reported axillary lymph nodes and peritumoral edema, as being associated with LVI status (AUC, 0.665). The hybrid model, which combined GPTV_DL, GPTV_radiomics, and imaging findings, achieved the highest accuracy (AUC, 0.872).
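The AUCs compared above can be computed directly as a Mann-Whitney statistic: the probability that a randomly chosen LVI-positive case receives a higher score than a randomly chosen LVI-negative one. A minimal sketch (our own helper, not the study's code):

```python
import numpy as np

def roc_auc(scores, labels):
    """AUC as P(random positive outscores random negative), with ties
    counted as half -- equivalent to the trapezoidal ROC area."""
    scores = np.asarray(scores, float)
    labels = np.asarray(labels, bool)
    pos, neg = scores[labels], scores[~labels]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))
```

On this reading, the hybrid model's 0.872 means that in roughly 87% of positive/negative patient pairs, the positive patient is ranked higher.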

Conclusion

The diagnostic efficiency of the GPTV-derived radiomics and DL models surpassed that of the GTV-derived models. Furthermore, the hybrid model, which incorporated GPTV_DL, GPTV_radiomics, and imaging findings, demonstrated the effective determination of LVI status prior to surgery in patients with breast cancer.

Citations: 0
Accurate segmentation of liver tumor from multi-modality non-contrast images using a dual-stream multi-level fusion framework
IF 5.4 · Medicine (Tier 2) · Q1 ENGINEERING, BIOMEDICAL · Pub Date: 2024-07-03 · DOI: 10.1016/j.compmedimag.2024.102414
Chenchu Xu , Xue Wu , Boyan Wang , Jie Chen , Zhifan Gao , Xiujian Liu , Heye Zhang

The use of multi-modality non-contrast images (i.e., T1FS, T2FS and DWI) for segmenting liver tumors eliminates the need for contrast agents and is crucial for clinical diagnosis. However, discovering the most useful information for fusing multi-modality images into an accurate segmentation remains challenging due to inter-modal interference. In this paper, we propose a dual-stream multi-level fusion framework (DM-FF) to, for the first time, accurately segment liver tumors directly from non-contrast multi-modality images. Our DM-FF first designs an attention-based encoder–decoder to effectively extract multi-level feature maps corresponding to a specified representation of each modality. Then, DM-FF creates two types of fusion modules: one fuses learned features to obtain a shared representation across multi-modality images, exploiting commonalities to improve performance, and the other fuses segmentation decision evidence to discover differences between modalities and prevent interference caused by modality conflicts. By integrating these three components, DM-FF enables multi-modality non-contrast images to cooperate with each other and achieves accurate segmentation. In an evaluation on 250 patients with different types of tumors scanned on two MRI scanners, DM-FF achieves a Dice of 81.20% and improves performance (Dice by at least 11%) compared with eight state-of-the-art segmentation architectures. The results indicate that our DM-FF significantly promotes the development and deployment of non-contrast liver tumor segmentation technology.
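The Dice score used as the headline metric above measures volumetric overlap between predicted and reference masks. A minimal sketch of the metric itself (not the study's code):

```python
import numpy as np

def dice(pred, target, eps=1e-8):
    """Dice overlap between two binary masks: 2|A∩B| / (|A|+|B|).
    Ranges from 0 (disjoint) to 1 (identical)."""
    pred = np.asarray(pred, bool)
    target = np.asarray(target, bool)
    inter = np.logical_and(pred, target).sum()
    return 2.0 * inter / (pred.sum() + target.sum() + eps)
```

A reported Dice of 81.20% is this quantity averaged over the evaluated cases.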

Citations: 0
Radiomic-based prediction of lesion-specific systemic treatment response in metastatic disease
IF 5.4 · Medicine (Tier 2) · Q1 ENGINEERING, BIOMEDICAL · Pub Date: 2024-06-25 · DOI: 10.1016/j.compmedimag.2024.102413
Caryn Geady , Farnoosh Abbas-Aghababazadeh , Andres Kohan , Scott Schuetze , David Shultz , Benjamin Haibe-Kains

Despite sharing the same histologic classification, individual tumors in multi-metastatic patients may present with different characteristics and varying sensitivities to anticancer therapies. In this study, we investigate the utility of radiomic biomarkers for predicting lesion-specific treatment resistance in multi-metastatic leiomyosarcoma patients. Using a dataset of n=202 lung metastases (LM) from n=80 patients, with 1648 pre-treatment computed tomography (CT) radiomics features and LM progression determined from follow-up CT, we developed a radiomic model to predict the progression of each lesion. Repeat experiments assessed the relative predictive performance across LM volume groups. Lesion-specific radiomic models indicate up to a 4.5-fold increase in predictive capacity compared with a no-skill classifier, with an area under the precision-recall curve of 0.70 for the most precise model (FDR = 0.05). Precision varied by administered drug and LM volume. The effect of LM volume was controlled by removing radiomic features at a volume-correlation coefficient threshold of 0.20. Predicting lesion-specific responses using radiomic features represents a novel strategy for assessing treatment response that acknowledges biological diversity within metastatic subclones, which could facilitate management strategies involving selective ablation of resistant clones in the setting of systemic therapy.
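The area under the precision-recall curve quoted above is commonly estimated as average precision, and the no-skill baseline for a PR curve equals the positive-class prevalence, so an "up to 4.5-fold increase" reads as a ratio against that prevalence. A minimal sketch (our own helpers, not the study's code):

```python
import numpy as np

def average_precision(scores, labels):
    """Average-precision estimate of the area under the PR curve:
    mean of the precision values at each recovered positive."""
    order = np.argsort(-np.asarray(scores, float))
    labels = np.asarray(labels, bool)[order]
    tp = np.cumsum(labels)
    precision = tp / np.arange(1, len(labels) + 1)
    return precision[labels].sum() / labels.sum()

def no_skill_baseline(labels):
    """PR baseline of a random classifier: fraction of positives."""
    return np.asarray(labels, bool).mean()
```

With, say, 20% progressing lesions, a no-skill PR-AUC is 0.20, so a model at 0.70 would be a 3.5-fold improvement under this reading.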

Citations: 0
Fragment distance-guided dual-stream learning for automatic pelvic fracture segmentation
IF 5.4 · Medicine (Tier 2) · Q1 ENGINEERING, BIOMEDICAL · Pub Date: 2024-06-19 · DOI: 10.1016/j.compmedimag.2024.102412
Bolun Zeng , Huixiang Wang , Leo Joskowicz , Xiaojun Chen

Pelvic fracture is a complex and severe injury. Accurate diagnosis and treatment planning require the segmentation of the pelvic structure and the fractured fragments from preoperative CT scans. However, this segmentation is a challenging task, as the fragments from a pelvic fracture typically exhibit considerable variability and irregularity in morphology, location, and quantity. In this study, we propose a novel dual-stream learning framework for the automatic segmentation and category labeling of pelvic fractures. Our method uniquely identifies pelvic fracture fragments in various quantities and locations using a dual-branch architecture that leverages distance learning from bone fragments. Moreover, we develop a multi-size feature fusion module that adaptively aggregates features from diverse receptive fields tailored to targets of different sizes and shapes, thus boosting segmentation performance. Extensive experiments on three pelvic fracture datasets from different medical centers demonstrated the accuracy and generalizability of the proposed method. It achieves a mean Dice coefficient and mean Sensitivity of 0.935±0.068 and 0.929±0.058 on the FracCLINIC dataset, and 0.955±0.072 and 0.912±0.125 on the FracSegData dataset, which are superior to those of other competing methods. Our method optimizes the process of pelvic fracture segmentation, potentially serving as an effective tool for preoperative planning in the clinical management of pelvic fractures.
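The fragment-distance guidance described above presumes a per-voxel map of distance to the nearest fragment voxel. A brute-force toy sketch for small masks (real pipelines would use a fast distance transform such as scipy.ndimage.distance_transform_edt; the helper name is ours):

```python
import numpy as np

def distance_to_fragment(shape, fragment_mask):
    """Brute-force Euclidean distance from every voxel to the nearest
    fragment voxel. O(N*K), so suitable only for tiny illustrative
    grids, not clinical CT volumes."""
    coords = np.argwhere(fragment_mask)                  # (K, ndim)
    grid = np.indices(shape).reshape(len(shape), -1).T   # (N, ndim)
    d2 = ((grid[:, None, :] - coords[None, :, :]) ** 2).sum(-1)
    return np.sqrt(d2.min(axis=1)).reshape(shape)
```

Such a map gives the network a smooth spatial prior: voxels far from any known fragment are unlikely to belong to one.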

Citations: 0
Precision dose prediction for breast cancer patients undergoing IMRT: The Swin-UMamba-Channel Model
IF 5.7 · Medicine (Tier 2) · Q1 Medicine · Pub Date: 2024-06-13 · DOI: 10.1016/j.compmedimag.2024.102409
Hui Xie , Hua Zhang , Zijie Chen , Tao Tan

Background

Radiation therapy is one of the crucial treatment modalities for cancer. An excellent radiation therapy plan relies heavily on an outstanding dose distribution map, which is traditionally generated through repeated trials and adjustments by experienced physicists. However, this process is both time-consuming and labor-intensive, and it comes with a degree of subjectivity. Now, with the powerful capabilities of deep learning, we are able to predict dose distribution maps more accurately, effectively overcoming these challenges.

Methods

In this study, we propose a novel Swin-UMamba-Channel prediction model specifically designed for predicting the dose distribution of patients with left breast cancer undergoing radiotherapy after total mastectomy. This model integrates anatomical position information of organs and ray angle information, significantly enhancing prediction accuracy. Through iterative training of the generator (Swin-UMamba) and discriminator, the model can generate images that closely match the actual dose, assisting physicists in quickly creating DVH curves and shortening the treatment planning cycle. Our model exhibits excellent performance in terms of prediction accuracy, computational efficiency, and practicality, and its effectiveness has been further verified through comparative experiments with similar networks.

Results

The results of the study indicate that our model can accurately predict the clinical dose of breast cancer patients undergoing intensity-modulated radiation therapy (IMRT). The predicted dose range is from 0 to 50 Gy, and compared with actual data, it shows a high accuracy with an average Dice similarity coefficient of 0.86. Specifically, the average dose change rate for the planning target volume ranges from 0.28 % to 1.515 %, while the average dose change rates for the right and left lungs are 2.113 % and 0.508 %, respectively. Notably, due to their small sizes, the heart and spinal cord exhibit relatively higher average dose change rates, reaching 3.208 % and 1.490 %, respectively. In comparison with similar dose studies, our model demonstrates superior performance. Additionally, our model possesses fewer parameters, lower computational complexity, and shorter processing time, further enhancing its practicality and efficiency. These findings provide strong evidence for the accuracy and reliability of our model in predicting doses, offering significant technical support for IMRT in breast cancer patients.
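The DVH curves mentioned in the Methods are cumulative dose-volume histograms: for each dose level, the fraction of a structure's volume receiving at least that dose. A minimal sketch of how such a curve is derived from a predicted dose map (our own helper, not the paper's code):

```python
import numpy as np

def cumulative_dvh(dose, structure_mask, bins=None):
    """Cumulative DVH for one structure: volume fraction receiving
    at least each dose level. `dose` is a dose map (e.g. in Gy) and
    `structure_mask` a same-shaped binary mask of the organ/target."""
    d = np.asarray(dose, float)[np.asarray(structure_mask, bool)]
    if bins is None:
        bins = np.linspace(0.0, d.max(), 101)
    volume_fraction = np.array([(d >= b).mean() for b in bins])
    return bins, volume_fraction
```

Plotting `volume_fraction` against `bins` for the planning target volume and each organ at risk yields the curves physicists compare during plan review.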

Conclusion

This study presents a novel Swin-UMamba-Channel dose prediction model, and its results demonstrate its precise prediction of clinical doses for the target area of left breast cancer patients undergoing total mastectomy and IMRT. These remarkable achievements provide valuable reference data for subsequent plan optimization and quality control, paving a new path for the application of deep learning in the field of radiation therapy.

Cited: 0
Enhancing trabecular CT scans based on deep learning with multi-strategy fusion
IF 5.4 Medicine (CAS Tier 2) Q1 Medicine Pub Date: 2024-06-12 DOI: 10.1016/j.compmedimag.2024.102410
Peixuan Ge , Shibo Li , Yefeng Liang , Shuwei Zhang , Lihai Zhang , Ying Hu , Liang Yao , Pak Kin Wong

Trabecular bone analysis plays a crucial role in understanding bone health and disease, with applications such as osteoporosis diagnosis. This paper presents a comprehensive study on 3D trabecular computed tomography (CT) image restoration, addressing significant challenges in this domain. The research introduces a backbone model, Cascade-SwinUNETR, for single-view 3D CT image restoration; it leverages deep layer aggregation with supervision and the capabilities of the Swin Transformer to excel at feature extraction. The study also introduces DVSR3D, a dual-view restoration model that achieves good performance through deep feature fusion with attention mechanisms and autoencoders. Furthermore, an Unsupervised Domain Adaptation (UDA) method is presented that allows the models to adapt to input data distributions without additional labels, holding significant potential for real-world medical applications and eliminating the need for invasive data collection procedures. The study also curates a new dual-view dataset for CT image restoration, addressing the scarcity of real human bone data in Micro-CT. Finally, the dual-view approach is validated through downstream medical bone microstructure measurements. Our contributions open several paths for trabecular bone analysis, promising improved clinical outcomes in bone health assessment and diagnosis.
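DVSR3D is described as fusing deep features from two views with attention mechanisms. The authors' architecture is not given in this listing; the sketch below only illustrates the general idea of attention-style fusion — per-location softmax weights blending two feature maps — where `fuse_views` and the magnitude-based confidence score are assumptions for illustration:

```python
import numpy as np

def softmax(x, axis=0):
    """Numerically stable softmax along `axis`."""
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def fuse_views(feat_a, feat_b):
    """Blend two per-view feature maps with per-location softmax weights
    derived from a simple confidence score (here, feature magnitude)."""
    scores = np.stack([np.abs(feat_a), np.abs(feat_b)])  # (2, H, W) per-view scores
    weights = softmax(scores, axis=0)                    # weights sum to 1 per location
    return weights[0] * feat_a + weights[1] * feat_b

a = np.array([[1.0, 0.2], [0.0, 3.0]])  # features from view 1
b = np.array([[0.9, 2.0], [0.1, 0.5]])  # features from view 2
fused = fuse_views(a, b)
print(fused.shape)  # → (2, 2)
```

Because the weights are a convex combination at every location, the fused value always lies between the two views' values; a learned attention module would replace the hand-crafted magnitude score.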

{"title":"Enhancing trabecular CT scans based on deep learning with multi-strategy fusion","authors":"Peixuan Ge ,&nbsp;Shibo Li ,&nbsp;Yefeng Liang ,&nbsp;Shuwei Zhang ,&nbsp;Lihai Zhang ,&nbsp;Ying Hu ,&nbsp;Liang Yao ,&nbsp;Pak Kin Wong","doi":"10.1016/j.compmedimag.2024.102410","DOIUrl":"10.1016/j.compmedimag.2024.102410","url":null,"abstract":"<div><p>Trabecular bone analysis plays a crucial role in understanding bone health and disease, with applications like osteoporosis diagnosis. This paper presents a comprehensive study on 3D trabecular computed tomography (CT) image restoration, addressing significant challenges in this domain. The research introduces a backbone model, Cascade-SwinUNETR, for single-view 3D CT image restoration. This model leverages deep layer aggregation with supervision and capabilities of Swin-Transformer to excel in feature extraction. Additionally, this study also brings DVSR3D, a dual-view restoration model, achieving good performance through deep feature fusion with attention mechanisms and Autoencoders. Furthermore, an Unsupervised Domain Adaptation (UDA) method is introduced, allowing models to adapt to input data distributions without additional labels, holding significant potential for real-world medical applications, and eliminating the need for invasive data collection procedures. The study also includes the curation of a new dual-view dataset for CT image restoration, addressing the scarcity of real human bone data in Micro-CT. Finally, the dual-view approach is validated through downstream medical bone microstructure measurements. 
Our contributions open several paths for trabecular bone analysis, promising improved clinical outcomes in bone health assessment and diagnosis.</p></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":null,"pages":null},"PeriodicalIF":5.4,"publicationDate":"2024-06-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141400693","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited: 0
An automatic radiomic-based approach for disease localization: A pilot study on COVID-19
IF 5.4 Medicine (CAS Tier 2) Q1 ENGINEERING, BIOMEDICAL Pub Date: 2024-06-12 DOI: 10.1016/j.compmedimag.2024.102411
Giulia Varriano , Vittoria Nardone , Simona Correra, Francesco Mercaldo, Antonella Santone

Radiomics is an innovative field in personalized medicine that helps medical specialists with diagnosis and prognosis. Applying radiomics to medical images chiefly requires defining and delimiting a Region Of Interest (ROI) on the image from which radiomic features are extracted. The aim of this preliminary study is to define an approach that automatically detects the specific areas indicative of a particular disease and examines them to minimize diagnostic errors associated with false positives and false negatives. The approach creates an n×n grid on the DICOM image sequence; each cell in the matrix is associated with a region from which radiomic features can be extracted.

The proposed procedure uses the Model Checking technique and outputs the patient's medical diagnosis, i.e., whether the patient under analysis is affected by a specific disease. The matrix-based method also localizes where the disease marks appear. To evaluate the performance of the proposed methodology, a case study on COVID-19 is used. The results on both disease identification and localization are very promising. Moreover, the approach yields better results than methods that extract features from the whole image as a single ROI, as evidenced by improvements in Accuracy and especially Recall. Our approach supports the advancement of knowledge, interoperability, and trust in the software tool, fostering collaboration among doctors, staff, and radiomics practitioners.
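The grid idea above — an n×n partition of each slice, with features extracted per cell — can be sketched as follows. The simple first-order statistics (`mean`, `std`, `max`) stand in for the paper's radiomic features, and `grid_features` is an illustrative name, not the authors' code:

```python
import numpy as np

def grid_features(image, n):
    """Split a 2D slice into an n x n grid and extract simple first-order
    features from each cell, keyed by the cell's (row, col) index."""
    h, w = image.shape
    features = {}
    for i in range(n):
        for j in range(n):
            cell = image[i * h // n:(i + 1) * h // n,
                         j * w // n:(j + 1) * w // n]
            features[(i, j)] = {
                "mean": float(cell.mean()),
                "std": float(cell.std()),
                "max": float(cell.max()),
            }
    return features

slice_2d = np.arange(36, dtype=float).reshape(6, 6)  # stand-in for a DICOM slice
feats = grid_features(slice_2d, n=3)
print(feats[(0, 0)]["mean"])  # → 3.5 (mean of the top-left 2x2 cell)
```

Each cell's feature vector would then be checked against the disease model, so a positive finding is localized to a specific grid cell rather than to the whole image.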

{"title":"An automatic radiomic-based approach for disease localization: A pilot study on COVID-19","authors":"Giulia Varriano ,&nbsp;Vittoria Nardone ,&nbsp;Simona Correra,&nbsp;Francesco Mercaldo,&nbsp;Antonella Santone","doi":"10.1016/j.compmedimag.2024.102411","DOIUrl":"10.1016/j.compmedimag.2024.102411","url":null,"abstract":"<div><p>Radiomics is an innovative field in Personalized Medicine to help medical specialists in diagnosis and prognosis. Mainly, the application of Radiomics to medical images requires the definition and delimitation of the Region Of Interest (ROI) on the medical image to extract radiomic features. The aim of this preliminary study is to define an approach that automatically detects the specific areas indicative of a particular disease and examines them to minimize diagnostic errors associated with false positives and false negatives. This approach aims to create a <span><math><mrow><mi>n</mi><mi>x</mi><mi>n</mi></mrow></math></span> grid on the DICOM image sequence and each cell in the matrix is associated with a region from which radiomic features can be extracted.</p><p>The proposed procedure uses the Model Checking technique and produces as output the medical diagnosis of the patient, i.e., whether the patient under analysis is affected or not by a specific disease. Furthermore, the matrix-based method also localizes where appears the disease marks. To evaluate the performance of the proposed methodology, a case study on COVID-19 disease is used. Both results on disease identification and localization seem very promising. Furthermore, this proposed approach yields better results compared to methods based on the extraction of features using the whole image as a single ROI, as evidenced by improvements in Accuracy and especially Recall. 
Our approach supports the advancement of knowledge, interoperability and trust in the software tool, fostering collaboration among doctors, staff and Radiomics.</p></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":null,"pages":null},"PeriodicalIF":5.4,"publicationDate":"2024-06-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141394565","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited: 0
PCa-RadHop: A transparent and lightweight feed-forward method for clinically significant prostate cancer segmentation
IF 5.4 Medicine (CAS Tier 2) Q1 Medicine Pub Date: 2024-06-10 DOI: 10.1016/j.compmedimag.2024.102408
Vasileios Magoulianitis , Jiaxin Yang , Yijing Yang , Jintang Xue , Masatomo Kaneko , Giovanni Cacciamani , Andre Abreu , Vinay Duddalwar , C.-C. Jay Kuo , Inderbir S. Gill , Chrysostomos Nikias

Prostate cancer is one of the most frequently occurring cancers in men, with a low survival rate if not diagnosed early. PI-RADS reading has a high false positive rate, increasing diagnostic costs and patient discomfort. Deep learning (DL) models achieve high segmentation performance but require a large model size and complexity. DL models also lack feature interpretability and are perceived as “black boxes” in the medical field. This work proposes the PCa-RadHop pipeline, which aims to provide a more transparent feature extraction process using a linear model. It adopts the recently introduced Green Learning (GL) paradigm, which offers a small model size and low complexity. PCa-RadHop consists of two stages: stage 1 extracts data-driven radiomics features from the bi-parametric Magnetic Resonance Imaging (bp-MRI) input and predicts an initial heatmap. To reduce the false positive rate, a subsequent stage 2 refines the predictions by including more contextual information and radiomics features from each already detected Region of Interest (ROI). Experiments on the largest publicly available dataset, PI-CAI, show that the proposed method performs competitively against other deep DL models, achieving an area under the curve (AUC) of 0.807 on a cohort of 1,000 patients. Moreover, PCa-RadHop has orders of magnitude smaller model size and complexity.
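The reported AUC of 0.807 is the standard area under the ROC curve. As a minimal illustration, AUC can be computed directly as the Mann-Whitney probability that a randomly chosen positive case outscores a randomly chosen negative one; `auc_score` below is a generic implementation, not part of PCa-RadHop:

```python
def auc_score(labels, scores):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the probability that a random positive outranks a random negative,
    counting ties as half a win."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    if not pos or not neg:
        raise ValueError("need at least one positive and one negative case")
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 0, 1, 0, 1]            # 1 = clinically significant lesion
scores = [0.9, 0.3, 0.7, 0.6, 0.4]  # model heatmap scores
print(auc_score(labels, scores))    # ≈ 0.833
```

The pairwise form is O(P·N) but makes the probabilistic meaning of an AUC like 0.807 explicit; library implementations use a sort-based O(n log n) variant.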

{"title":"PCa-RadHop: A transparent and lightweight feed-forward method for clinically significant prostate cancer segmentation","authors":"Vasileios Magoulianitis ,&nbsp;Jiaxin Yang ,&nbsp;Yijing Yang ,&nbsp;Jintang Xue ,&nbsp;Masatomo Kaneko ,&nbsp;Giovanni Cacciamani ,&nbsp;Andre Abreu ,&nbsp;Vinay Duddalwar ,&nbsp;C.-C. Jay Kuo ,&nbsp;Inderbir S. Gill ,&nbsp;Chrysostomos Nikias","doi":"10.1016/j.compmedimag.2024.102408","DOIUrl":"https://doi.org/10.1016/j.compmedimag.2024.102408","url":null,"abstract":"<div><p>Prostate Cancer is one of the most frequently occurring cancers in men, with a low survival rate if not early diagnosed. PI-RADS reading has a high false positive rate, thus increasing the diagnostic incurred costs and patient discomfort. Deep learning (DL) models achieve a high segmentation performance, although require a large model size and complexity. Also, DL models lack of feature interpretability and are perceived as “black-boxes” in the medical field. PCa-RadHop pipeline is proposed in this work, aiming to provide a more transparent feature extraction process using a linear model. It adopts the recently introduced Green Learning (GL) paradigm, which offers a small model size and low complexity. PCa-RadHop consists of two stages: Stage-1 extracts data-driven radiomics features from the bi-parametric Magnetic Resonance Imaging (bp-MRI) input and predicts an initial heatmap. To reduce the false positive rate, a subsequent stage-2 is introduced to refine the predictions by including more contextual information and radiomics features from each already detected Region of Interest (ROI). Experiments on the largest publicly available dataset, PI-CAI, show a competitive performance standing of the proposed method among other deep DL models, achieving an area under the curve (AUC) of 0.807 among a cohort of 1,000 patients. 
Moreover, PCa-RadHop maintains orders of magnitude smaller model size and complexity.</p></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":null,"pages":null},"PeriodicalIF":5.4,"publicationDate":"2024-06-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141438459","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited: 0
Unsupervised domain adaptation based on feature and edge alignment for femur X-ray image segmentation
IF 5.7 Medicine (CAS Tier 2) Q1 Medicine Pub Date: 2024-06-08 DOI: 10.1016/j.compmedimag.2024.102407
Xiaoming Jiang , Yongxin Yang , Tong Su , Kai Xiao , LiDan Lu , Wei Wang , Changsong Guo , Lizhi Shao , Mingjing Wang , Dong Jiang

The gold standard for diagnosing osteoporosis is bone mineral density (BMD) measurement by dual-energy X-ray absorptiometry (DXA). However, various factors during the imaging process cause domain shifts in DXA images, which lead to incorrect bone segmentation. Research shows that poor bone segmentation is one of the prime causes of inaccurate BMD measurement, severely affecting diagnosis and treatment plans for osteoporosis. In this paper, we propose a Multi-feature Joint Discriminative Domain Adaptation (MDDA) framework to improve segmentation performance and the generalization of the network on domain-shifted images. The proposed method learns domain-invariant features between the source and target domains from the perspectives of multi-scale features and edges, and is evaluated on real data from multi-center datasets. Compared with other state-of-the-art methods, the feature prior from the source domain and the edge prior enable MDDA to achieve optimal domain adaptation performance and generalization. It also demonstrates superior performance in domain adaptation tasks on small datasets, even when using only 5 or 10 images. MDDA thus provides an accurate bone segmentation tool for BMD measurement based on DXA imaging.
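MDDA aligns edge information between source and target domains. The paper's exact edge extractor is not specified in this listing; a common choice for such an edge prior is a Sobel gradient-magnitude map, sketched below (`sobel_edges` is an illustrative helper, not the authors' implementation):

```python
import numpy as np

def sobel_edges(image):
    """Edge-magnitude map from horizontal/vertical Sobel filters, the kind
    of map an edge-alignment loss could compare across domains."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = image.shape
    padded = np.pad(image, 1, mode="edge")  # replicate borders
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            patch = padded[i:i + 3, j:j + 3]
            gx[i, j] = (patch * kx).sum()
            gy[i, j] = (patch * ky).sum()
    return np.hypot(gx, gy)

img = np.zeros((5, 5))
img[:, 2:] = 1.0                  # vertical step edge, like a bone boundary
edges = sobel_edges(img)
print(edges[2, 2] > edges[2, 0])  # → True: response peaks at the boundary
```

An edge-alignment term would then penalize discrepancies between such maps (or learned edge features) for source and target images, pushing the segmenter to respect bone boundaries in both domains.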

{"title":"Unsupervised domain adaptation based on feature and edge alignment for femur X-ray image segmentation","authors":"Xiaoming Jiang ,&nbsp;Yongxin Yang ,&nbsp;Tong Su ,&nbsp;Kai Xiao ,&nbsp;LiDan Lu ,&nbsp;Wei Wang ,&nbsp;Changsong Guo ,&nbsp;Lizhi Shao ,&nbsp;Mingjing Wang ,&nbsp;Dong Jiang","doi":"10.1016/j.compmedimag.2024.102407","DOIUrl":"https://doi.org/10.1016/j.compmedimag.2024.102407","url":null,"abstract":"<div><p>The gold standard for diagnosing osteoporosis is bone mineral density (BMD) measurement by dual-energy X-ray absorptiometry (DXA). However, various factors during the imaging process cause domain shifts in DXA images, which lead to incorrect bone segmentation. Research shows that poor bone segmentation is one of the prime reasons of inaccurate BMD measurement, severely affecting the diagnosis and treatment plans for osteoporosis. In this paper, we propose a Multi-feature Joint Discriminative Domain Adaptation (MDDA) framework to improve segmentation performance and the generalization of the network in domain-shifted images. The proposed method learns domain-invariant features between the source and target domains from the perspectives of multi-scale features and edges, and is evaluated on real data from multi-center datasets. Compared to other state-of-the-art methods, the feature prior from the source domain and edge prior enable the proposed MDDA to achieve the optimal domain adaptation performance and generalization. It also demonstrates superior performance in domain adaptation tasks on small amount datasets, even using only 5 or 10 images. 
In this study, MDDA provides an accurate bone segmentation tool for BMD measurement based on DXA imaging.</p></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":null,"pages":null},"PeriodicalIF":5.7,"publicationDate":"2024-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141328854","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited: 0