
Latest publications in Computerized Medical Imaging and Graphics

Weakly supervised detection of pheochromocytomas and paragangliomas in CT using noisy data.
IF 5.4 | CAS Tier 2 (Medicine) | Q1 ENGINEERING, BIOMEDICAL | Pub Date: 2024-07-20 | DOI: 10.1016/j.compmedimag.2024.102419
David Oluigbo, Tejas Sudharshan Mathai, Bikash Santra, Pritam Mukherjee, Jianfei Liu, Abhishek Jha, Mayank Patel, Karel Pacak, Ronald M Summers

Pheochromocytomas and paragangliomas (PPGLs) are rare adrenal and extra-adrenal tumors that have metastatic potential. Management of patients with PPGLs mainly depends on the makeup of their genetic cluster: SDHx, VHL/EPAS1, kinase, and sporadic. CT is the preferred modality for precise localization of PPGLs so that their metastatic progression can be assessed. However, the variable size, morphology, and appearance of these tumors in different anatomical regions can pose challenges for radiologists. Since radiologists must routinely track changes across patient visits, manually annotating PPGLs across all axial slices in a CT volume is time-consuming and cumbersome. As such, PPGLs are only weakly annotated on axial slices by radiologists, in the form of RECIST measurements. To reduce the manual effort spent by radiologists, we propose a method for the automated detection of PPGLs in CT via a proxy segmentation task. Weak 3D annotations (derived from 2D bounding boxes) were used to train both 2D and 3D nnUNet models to detect PPGLs via segmentation. We evaluated our approaches on an in-house dataset comprising chest-abdomen-pelvis CTs of 255 patients with confirmed PPGLs. On a test set of 53 CT volumes, our 3D nnUNet model achieved a detection precision of 70% and a sensitivity of 64.1%, outperforming the 2D model, which obtained a precision of 52.7% and a sensitivity of 27.5% (p < 0.05). The SDHx and sporadic genetic clusters achieved the highest precisions of 73.1% and 72.7%, respectively. These state-of-the-art findings highlight the promising nature of the challenging task of automated PPGL detection.
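The weak 3D labels mentioned above come from 2D RECIST-style boxes on single axial slices. As a rough, hypothetical sketch (not the authors' code; the slice margin and array layout are assumptions), one way to turn such a box into a noisy 3D training mask for nnUNet-style training is to propagate the box to a few neighboring slices:

```python
import numpy as np

def weak_mask_from_recist_box(ct_shape, slice_idx, bbox_xyxy, slice_margin=2):
    """Build a noisy 3D pseudo-label from a 2D bounding box on one axial slice.

    ct_shape:     (num_slices, height, width) of the CT volume
    slice_idx:    axial slice index where the RECIST measurement was drawn
    bbox_xyxy:    (x_min, y_min, x_max, y_max) in pixel coordinates
    slice_margin: number of neighboring slices to propagate the box to
                  (a heuristic; the true craniocaudal tumor extent is unknown)
    """
    mask = np.zeros(ct_shape, dtype=np.uint8)
    x_min, y_min, x_max, y_max = bbox_xyxy
    z_lo = max(0, slice_idx - slice_margin)
    z_hi = min(ct_shape[0], slice_idx + slice_margin + 1)
    # Fill the box on the annotated slice and its neighbors; this both over-
    # and under-segments the lesion, which is exactly the "noisy label" setting.
    mask[z_lo:z_hi, y_min:y_max, x_min:x_max] = 1
    return mask

# Example: a 512x512 CT with 200 slices, box drawn on slice 87
pseudo_label = weak_mask_from_recist_box((200, 512, 512), 87, (240, 200, 290, 260))
print(pseudo_label.sum())  # number of voxels in the weak label
```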

Citations: 0
ScribSD+: Scribble-supervised medical image segmentation based on simultaneous multi-scale knowledge distillation and class-wise contrastive regularization
IF 5.4 | CAS Tier 2 (Medicine) | Q1 ENGINEERING, BIOMEDICAL | Pub Date: 2024-07-09 | DOI: 10.1016/j.compmedimag.2024.102416

Although deep learning has achieved state-of-the-art performance for automatic medical image segmentation, it often requires a large amount of pixel-level manual annotation for training. Obtaining these high-quality annotations is time-consuming and requires specialized knowledge, which hinders the widespread application of models that rely on such annotations for good segmentation performance. Using scribble annotations can substantially reduce the annotation cost, but often leads to poor segmentation performance due to insufficient supervision. In this work, we propose a novel framework named ScribSD+ that is based on multi-scale knowledge distillation and class-wise contrastive regularization for learning from scribble annotations. For a student network supervised by scribbles and a teacher network based on an Exponential Moving Average (EMA) of the student, we first introduce multi-scale prediction-level Knowledge Distillation (KD), which leverages soft predictions of the teacher network to supervise the student at multiple scales, and then propose class-wise contrastive regularization, which encourages feature similarity within the same class and dissimilarity across different classes, thereby effectively improving the segmentation performance of the student network. Experimental results on the ACDC dataset for heart structure segmentation and on a fetal MRI dataset for placenta and fetal brain segmentation demonstrate that our method significantly improves the student's performance and outperforms five state-of-the-art scribble-supervised learning methods. Consequently, the method has the potential to reduce the annotation cost of developing deep learning models for clinical diagnosis.
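As a hedged sketch of the two supervision signals described above (not the authors' implementation; the temperature, loss weighting, and tensor shapes are assumptions), the snippet below combines soft-prediction distillation across several decoder scales with a class-wise term that pulls pixel features toward their class prototype and pushes different class prototypes apart:

```python
import torch
import torch.nn.functional as F

def multiscale_kd_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between softened student and teacher predictions,
    averaged over a list of decoder scales (multi-scale prediction-level KD)."""
    loss = 0.0
    for s, t in zip(student_logits, teacher_logits):
        p_teacher = F.softmax(t / temperature, dim=1)
        log_p_student = F.log_softmax(s / temperature, dim=1)
        loss += F.kl_div(log_p_student, p_teacher, reduction="batchmean") * temperature ** 2
    return loss / len(student_logits)

def classwise_contrastive_loss(features, prob_map, margin=0.0):
    """Encourage pixel features to match their own class prototype and
    prototypes of different classes to be dissimilar.
    features: (B, C, H, W) decoder features; prob_map: (B, K, H, W) soft predictions."""
    b, c, h, w = features.shape
    k = prob_map.shape[1]
    # Soft class prototypes: probability-weighted mean feature per class.
    protos = torch.einsum("bchw,bkhw->bkc", features, prob_map)
    protos = protos / (prob_map.sum(dim=(2, 3)).unsqueeze(-1) + 1e-6)
    protos = F.normalize(protos, dim=-1)                                   # (B, K, C)
    feats = F.normalize(features.permute(0, 2, 3, 1).reshape(b, h * w, c), dim=-1)
    # Intra-class term: similarity between each pixel and its class prototype,
    # weighted by the pixel's class probability.
    pix_sim = torch.einsum("bnc,bkc->bnk", feats, protos)                  # (B, HW, K)
    pix_w = prob_map.reshape(b, k, h * w).permute(0, 2, 1)                 # (B, HW, K)
    intra = 1.0 - (pix_sim * pix_w).sum() / (pix_w.sum() + 1e-6)
    # Inter-class term: penalize similarity between different class prototypes.
    proto_sim = torch.einsum("bkc,bjc->bkj", protos, protos)
    proto_sim = proto_sim - torch.diag_embed(torch.diagonal(proto_sim, dim1=1, dim2=2))
    inter = F.relu(proto_sim - margin).mean()
    return intra + inter

# Toy shapes: two decoder scales, 4 classes, 8-channel features at the finest scale
student = [torch.randn(2, 4, 64, 64), torch.randn(2, 4, 32, 32)]
teacher = [torch.randn(2, 4, 64, 64), torch.randn(2, 4, 32, 32)]
feat = torch.randn(2, 8, 64, 64)
total = multiscale_kd_loss(student, teacher) + 0.1 * classwise_contrastive_loss(
    feat, F.softmax(student[0], dim=1))
print(total.item())
```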

Citations: 0
A comprehensive approach for evaluating lymphovascular invasion in invasive breast cancer: Leveraging multimodal MRI findings, radiomics, and deep learning analysis of intra- and peritumoral regions
IF 5.4 | CAS Tier 2 (Medicine) | Q1 ENGINEERING, BIOMEDICAL | Pub Date: 2024-07-08 | DOI: 10.1016/j.compmedimag.2024.102415

Purpose

To evaluate lymphovascular invasion (LVI) in breast cancer by comparing the diagnostic performance of preoperative multimodal magnetic resonance imaging (MRI)-based radiomics and deep-learning (DL) models.

Methods

This retrospective study included 262 patients with breast cancer—183 in the training cohort (144 LVI-negative and 39 LVI-positive cases) and 79 in the validation cohort (59 LVI-negative and 20 LVI-positive cases). Radiomics features were extracted from the intra- and peritumoral breast regions using multimodal MRI to generate gross tumor volume (GTV)_radiomics and gross tumor volume plus peritumoral volume (GPTV)_radiomics. Subsequently, DL models (GTV_DL and GPTV_DL) were constructed based on the GTV and GPTV to determine the LVI status. Finally, the most effective radiomics and DL models were integrated with imaging findings to establish a hybrid model, which was converted into a nomogram to quantify the LVI risk.

Results

The diagnostic efficiency of GPTV_DL was superior to that of GTV_DL (areas under the curve [AUCs], 0.771 and 0.720, respectively). Similarly, GPTV_radiomics outperformed GTV_radiomics (AUCs, 0.685 and 0.636, respectively). Univariate and multivariate logistic regression analyses revealed that imaging findings, such as MRI-detected axillary lymph nodes and peritumoral edema, were associated with LVI (AUC, 0.665). The hybrid model achieved the highest accuracy by combining GPTV_DL, GPTV_radiomics, and imaging findings (AUC, 0.872).

Conclusion

The diagnostic efficiency of the GPTV-derived radiomics and DL models surpassed that of the GTV-derived models. Furthermore, the hybrid model, which incorporated GPTV_DL, GPTV_radiomics, and imaging findings, demonstrated the effective determination of LVI status prior to surgery in patients with breast cancer.
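The hybrid model described in the Methods combines the DL output, the radiomics model, and imaging findings into a single risk estimate. A minimal, hypothetical sketch of that idea (synthetic placeholder data, assumed feature names, and scikit-learn logistic regression; not the authors' pipeline) might look like this:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 183  # illustrative training-cohort size from the abstract

# Hypothetical per-patient inputs: DL probability from GPTV_DL, a pooled
# radiomics score from GPTV_radiomics, and two binary imaging findings.
X = np.column_stack([
    rng.uniform(0, 1, n),          # gptv_dl_probability
    rng.normal(0, 1, n),           # gptv_radiomics_score
    rng.integers(0, 2, n),         # mri_axillary_lymph_node (0/1)
    rng.integers(0, 2, n),         # peritumoral_edema (0/1)
])
y = rng.integers(0, 2, n)          # LVI status (placeholder labels)

hybrid = LogisticRegression().fit(X, y)
# The fitted coefficients play the role of nomogram weights: each predictor's
# contribution to the log-odds of LVI positivity.
print(dict(zip(
    ["gptv_dl_prob", "radiomics_score", "axillary_ln", "peritumoral_edema"],
    hybrid.coef_[0].round(3),
)))
```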

Citations: 0
Accurate segmentation of liver tumor from multi-modality non-contrast images using a dual-stream multi-level fusion framework
IF 5.4 | CAS Tier 2 (Medicine) | Q1 ENGINEERING, BIOMEDICAL | Pub Date: 2024-07-03 | DOI: 10.1016/j.compmedimag.2024.102414
Chenchu Xu, Xue Wu, Boyan Wang, Jie Chen, Zhifan Gao, Xiujian Liu, Heye Zhang

Using multi-modality non-contrast images (i.e., T1FS, T2FS, and DWI) to segment liver tumors eliminates the need for contrast agents and is crucial for clinical diagnosis. However, discovering the most useful information for fusing multi-modality images into an accurate segmentation remains challenging due to inter-modal interference. In this paper, we propose a dual-stream multi-level fusion framework (DM-FF) to, for the first time, accurately segment liver tumors directly from non-contrast multi-modality images. Our DM-FF first designs an attention-based encoder–decoder to effectively extract multi-level feature maps corresponding to a specified representation of each modality. Then, DM-FF creates two types of fusion modules: one fuses learned features to obtain a shared representation across multi-modality images, exploiting commonalities to improve performance, and the other fuses the decision evidence of the segmentation to discover differences between modalities and prevent interference caused by modality conflict. By integrating these three components, DM-FF enables multi-modality non-contrast images to cooperate with each other for accurate segmentation. Evaluated on 250 patients with different tumor types scanned on two MRI scanners, DM-FF achieves a Dice of 81.20% and improves performance (Dice by at least 11%) over eight state-of-the-art segmentation architectures. The results indicate that our DM-FF significantly promotes the development and deployment of non-contrast liver tumor technology.
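As a hedged sketch of the general idea of fusing per-modality features into a shared representation (not the DM-FF implementation; the channel sizes and the simple concatenation-plus-channel-attention design are assumptions):

```python
import torch
import torch.nn as nn

class SharedRepresentationFusion(nn.Module):
    """Fuse per-modality feature maps (e.g., T1FS, T2FS, DWI) into a shared
    representation via channel attention over the concatenated features."""
    def __init__(self, channels_per_modality=64, num_modalities=3):
        super().__init__()
        total = channels_per_modality * num_modalities
        self.attention = nn.Sequential(
            nn.AdaptiveAvgPool3d(1),
            nn.Conv3d(total, total // 4, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(total // 4, total, kernel_size=1),
            nn.Sigmoid(),
        )
        self.project = nn.Conv3d(total, channels_per_modality, kernel_size=1)

    def forward(self, modality_features):
        # modality_features: list of (B, C, D, H, W) tensors, one per modality
        x = torch.cat(modality_features, dim=1)
        x = x * self.attention(x)          # reweight channels across modalities
        return self.project(x)             # shared representation (B, C, D, H, W)

# Example with random feature maps for three modalities
feats = [torch.randn(1, 64, 8, 32, 32) for _ in range(3)]
shared = SharedRepresentationFusion()(feats)
print(shared.shape)  # torch.Size([1, 64, 8, 32, 32])
```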

Citations: 0
Efficient multi-stage feedback attention for diverse lesion in cancer image segmentation
IF 5.4 | CAS Tier 2 (Medicine) | Q1 ENGINEERING, BIOMEDICAL | Pub Date: 2024-07-01 | DOI: 10.1016/j.compmedimag.2024.102417
D. M. S. Arsa, Talha Ilyas, Seok-Hwan Park, Leon O. Chua, Hyongsuk Kim
{"title":"Efficient multi-stage feedback attention for diverse lesion in cancer image segmentation","authors":"D. M. S. Arsa, Talha Ilyas, Seok-Hwan Park, Leon O. Chua, Hyongsuk Kim","doi":"10.1016/j.compmedimag.2024.102417","DOIUrl":"https://doi.org/10.1016/j.compmedimag.2024.102417","url":null,"abstract":"","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":null,"pages":null},"PeriodicalIF":5.4,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141716333","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Multi-objective Bayesian optimization with enhanced features for adaptively improved glioblastoma partitioning and survival prediction
IF 5.4 | CAS Tier 2 (Medicine) | Q1 ENGINEERING, BIOMEDICAL | Pub Date: 2024-07-01 | DOI: 10.1016/j.compmedimag.2024.102420
Yifan Li, Chao Li, Yiran Wei, Stephen J. Price, C. Schönlieb, Xi Chen
{"title":"Multi-objective Bayesian optimization with enhanced features for adaptively improved glioblastoma partitioning and survival prediction","authors":"Yifan Li, Chao Li, Yiran Wei, Stephen J. Price, C. Schönlieb, Xi Chen","doi":"10.1016/j.compmedimag.2024.102420","DOIUrl":"https://doi.org/10.1016/j.compmedimag.2024.102420","url":null,"abstract":"","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":null,"pages":null},"PeriodicalIF":5.4,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141838546","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
2D/3D deformable registration for endoscopic camera images using self-supervised offline learning of intraoperative pneumothorax deformation
IF 5.4 | CAS Tier 2 (Medicine) | Q1 ENGINEERING, BIOMEDICAL | Pub Date: 2024-07-01 | DOI: 10.1016/j.compmedimag.2024.102418
Tomoki Oya, Yuka Kadomatsu, T.F. Chen-Yoshikawa, Megumi Nakao
{"title":"2D/3D deformable registration for endoscopic camera images using self-supervised offline learning of intraoperative pneumothorax deformation","authors":"Tomoki Oya, Yuka Kadomatsu, T.F. Chen-Yoshikawa, Megumi Nakao","doi":"10.1016/j.compmedimag.2024.102418","DOIUrl":"https://doi.org/10.1016/j.compmedimag.2024.102418","url":null,"abstract":"","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":null,"pages":null},"PeriodicalIF":5.4,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141851662","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
TLF: Triple learning framework for intracranial aneurysms segmentation from unreliable labeled CTA scans
IF 5.4 | CAS Tier 2 (Medicine) | Q1 ENGINEERING, BIOMEDICAL | Pub Date: 2024-07-01 | DOI: 10.1016/j.compmedimag.2024.102421
Lei Chai, Shuangqian Xue, Daodao Tang, Jixin Liu, Ning Sun, Xiujuan Liu
{"title":"TLF: Triple learning framework for intracranial aneurysms segmentation from unreliable labeled CTA scans","authors":"Lei Chai, Shuangqian Xue, Daodao Tang, Jixin Liu, Ning Sun, Xiujuan Liu","doi":"10.1016/j.compmedimag.2024.102421","DOIUrl":"https://doi.org/10.1016/j.compmedimag.2024.102421","url":null,"abstract":"","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":null,"pages":null},"PeriodicalIF":5.4,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141842424","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Radiomic-based prediction of lesion-specific systemic treatment response in metastatic disease
IF 5.4 | CAS Tier 2 (Medicine) | Q1 ENGINEERING, BIOMEDICAL | Pub Date: 2024-06-25 | DOI: 10.1016/j.compmedimag.2024.102413
Caryn Geady, Farnoosh Abbas-Aghababazadeh, Andres Kohan, Scott Schuetze, David Shultz, Benjamin Haibe-Kains

Despite sharing the same histologic classification, individual tumors in multi-metastatic patients may present with different characteristics and varying sensitivities to anticancer therapies. In this study, we investigate the utility of radiomic biomarkers for prediction of lesion-specific treatment resistance in multi-metastatic leiomyosarcoma patients. Using a dataset of n=202 lung metastases (LM) from n=80 patients with 1648 pre-treatment computed tomography (CT) radiomics features and LM progression determined from follow-up CT, we developed a radiomic model to predict the progression of each lesion. Repeat experiments assessed the relative predictive performance across LM volume groups. Lesion-specific radiomic models indicate up to a 4.5-fold increase in predictive capacity compared with a no-skill classifier, with an area under the precision-recall curve of 0.70 for the most precise model (FDR = 0.05). Precision varied by administered drug and LM volume. The effect of LM volume was controlled by removing radiomic features whose correlation with volume exceeded a coefficient threshold of 0.20. Predicting lesion-specific responses using radiomic features represents a novel strategy for assessing treatment response that acknowledges biological diversity within metastatic subclones, which could facilitate management strategies involving selective ablation of resistant clones in the setting of systemic therapy.
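One straightforward way to implement the volume-correlation filter described above (a sketch under assumptions: Spearman correlation and a pandas layout are my choices, not necessarily the authors') is to drop every radiomic feature whose absolute correlation with lesion volume exceeds the threshold:

```python
import numpy as np
import pandas as pd
from scipy.stats import spearmanr

def drop_volume_correlated_features(features: pd.DataFrame,
                                    volume: pd.Series,
                                    threshold: float = 0.20) -> pd.DataFrame:
    """Remove radiomic features whose absolute Spearman correlation with
    lesion volume exceeds the threshold (0.20 in the abstract)."""
    keep = []
    for name in features.columns:
        rho, _ = spearmanr(features[name], volume)
        if abs(rho) <= threshold:
            keep.append(name)
    return features[keep]

# Toy example with random "radiomic" features and volumes
rng = np.random.default_rng(1)
volume = pd.Series(rng.lognormal(mean=2.0, sigma=0.5, size=100), name="volume_cc")
feats = pd.DataFrame({
    "shape_sphericity": rng.normal(size=100),
    "firstorder_mean_hu": rng.normal(size=100),
    "pseudo_volume_proxy": volume * (1 + 0.05 * rng.normal(size=100)),  # volume-correlated
})
filtered = drop_volume_correlated_features(feats, volume)
print(list(filtered.columns))  # the volume proxy should be removed
```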

Citations: 0
Fragment distance-guided dual-stream learning for automatic pelvic fracture segmentation
IF 5.4 | CAS Tier 2 (Medicine) | Q1 ENGINEERING, BIOMEDICAL | Pub Date: 2024-06-19 | DOI: 10.1016/j.compmedimag.2024.102412
Bolun Zeng, Huixiang Wang, Leo Joskowicz, Xiaojun Chen

Pelvic fracture is a complex and severe injury. Accurate diagnosis and treatment planning require segmentation of the pelvic structure and the fractured fragments from preoperative CT scans. However, this segmentation is a challenging task, as the fragments from a pelvic fracture typically exhibit considerable variability and irregularity in morphology, location, and quantity. In this study, we propose a novel dual-stream learning framework for the automatic segmentation and category labeling of pelvic fractures. Our method uniquely identifies pelvic fracture fragments in various quantities and locations using a dual-branch architecture that leverages distance learning from bone fragments. Moreover, we develop a multi-size feature fusion module that adaptively aggregates features from diverse receptive fields tailored to targets of different sizes and shapes, thus boosting segmentation performance. Extensive experiments on three pelvic fracture datasets from different medical centers demonstrated the accuracy and generalizability of the proposed method. It achieves a mean Dice coefficient and mean sensitivity of 0.935±0.068 and 0.929±0.058 on the FracCLINIC dataset, and 0.955±0.072 and 0.912±0.125 on the FracSegData dataset, which are superior to those of other comparison methods. Our method optimizes the process of pelvic fracture segmentation, potentially serving as an effective tool for preoperative planning in the clinical management of pelvic fractures.
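As a hedged sketch of what a multi-size feature fusion module could look like (parallel dilated 3D convolutions with a learned softmax gate; the kernel sizes, dilations, and gating scheme are assumptions, not the paper's design):

```python
import torch
import torch.nn as nn

class MultiSizeFeatureFusion(nn.Module):
    """Aggregate features from several receptive fields (via dilation) and
    weight them adaptively, so small and large fragments both contribute."""
    def __init__(self, channels=32, dilations=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv3d(channels, channels, kernel_size=3, padding=d, dilation=d)
            for d in dilations
        )
        # One gating score per branch, predicted from globally pooled features.
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool3d(1),
            nn.Flatten(),
            nn.Linear(channels, len(dilations)),
            nn.Softmax(dim=1),
        )

    def forward(self, x):
        weights = self.gate(x)                         # (B, num_branches)
        branch_outs = [b(x) for b in self.branches]    # each (B, C, D, H, W)
        fused = sum(w.view(-1, 1, 1, 1, 1) * out
                    for w, out in zip(weights.unbind(dim=1), branch_outs))
        return fused

x = torch.randn(1, 32, 16, 32, 32)
print(MultiSizeFeatureFusion()(x).shape)  # torch.Size([1, 32, 16, 32, 32])
```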

Citations: 0