
Computerized Medical Imaging and Graphics: Latest Publications

Uncertainty estimation using a 3D probabilistic U-Net for segmentation with small radiotherapy clinical trial datasets
IF 5.7 | CAS Tier 2 (Medicine) | Q1 Medicine | Pub Date: 2024-06-02 | DOI: 10.1016/j.compmedimag.2024.102403
Phillip Chlap , Hang Min , Jason Dowling , Matthew Field , Kirrily Cloak , Trevor Leong , Mark Lee , Julie Chu , Jennifer Tan , Phillip Tran , Tomas Kron , Mark Sidhom , Kirsty Wiltshire , Sarah Keats , Andrew Kneebone , Annette Haworth , Martin A. Ebert , Shalini K. Vinod , Lois Holloway

Background and objectives

Biomedical image segmentation models typically attempt to predict a single segmentation that resembles a ground-truth structure as closely as possible. However, because medical images are not perfect representations of anatomy, obtaining this ground truth is not possible. A commonly used surrogate is to have multiple expert observers define the same structure for a dataset. When multiple observers define the same structure on the same image, there can be significant differences depending on the structure, the image quality/modality and the region being defined. It is often desirable to estimate this type of aleatoric uncertainty in a segmentation model to help understand the region in which the true structure is likely to be positioned. Furthermore, obtaining these datasets is resource intensive, so such models may need to be trained using limited data. With a small dataset, differing patient anatomy is unlikely to be well represented, causing epistemic uncertainty that should also be estimated so that the cases for which the model is and is not effective can be identified.

Methods

We use a 3D probabilistic U-Net to train a model from which several segmentations can be sampled to estimate the range of uncertainty seen between multiple observers. To ensure that the regions where observers disagree most are emphasised during model training, we expand the Generalised Evidence Lower Bound (ELBO) with Constrained Optimisation (GECO) loss function with an additional contour loss term that gives attention to these regions. Ensemble and Monte-Carlo dropout (MCDO) uncertainty quantification methods are used during inference to estimate model confidence on an unseen case. We apply our methodology to two radiotherapy clinical trial datasets: a gastric cancer trial (TOPGEAR, TROG 08.08) and a post-prostatectomy prostate cancer trial (RAVES, TROG 08.03). Each dataset contains only 10 cases for model development, used to segment the clinical target volume (CTV), which was defined by multiple observers on each case. An additional 50 cases per trial, each with the CTV defined by a single observer, are available as a hold-out dataset. Up to 50 samples were generated using the probabilistic model for each case in the hold-out dataset. To assess performance, each manually defined structure was matched to the closest sampled segmentation based on commonly used metrics.
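As a concrete illustration of the MCDO step described above, the sketch below draws repeated stochastic forward passes from a segmentation network with dropout kept active at inference; the generic PyTorch model, layer types and sample count are assumptions for illustration, not the authors' implementation.

```python
import torch

def mcdo_samples(model, volume, n_samples=50):
    """Sample segmentations with dropout active at inference (MCDO sketch)."""
    model.eval()
    # Re-enable only the dropout layers, leaving normalisation in eval mode.
    for m in model.modules():
        if isinstance(m, (torch.nn.Dropout, torch.nn.Dropout3d)):
            m.train()
    with torch.no_grad():
        probs = torch.stack([torch.sigmoid(model(volume)) for _ in range(n_samples)])
    mean = probs.mean(dim=0)   # consensus probability map
    var = probs.var(dim=0)     # voxel-wise spread, a proxy for model confidence
    return mean, var
```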

Results

The TOPGEAR CTV model achieved a Dice Similarity Coefficient (DSC) of 0.70 and a Surface DSC (sDSC) of 0.43, while the RAVES model achieved 0.75 and 0.71 respectively. Segmentation quality across cases in the hold-out datasets was variable; however, both the ensemble and MCDO uncertainty estimation approaches were able to accurately estimate model confidence, with a p-value < 0.001 for both TOPGEAR and RAVES when comparing against the DSC using the Pearson correlation coefficient.
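The confidence-versus-accuracy comparison reported here can be reproduced in outline with a Pearson correlation between per-case confidence estimates and the best-match DSC values; the arrays below are placeholders, not study data.

```python
import numpy as np
from scipy.stats import pearsonr

# Placeholder per-case values; in the study these span the 50 hold-out cases per trial.
confidence = np.array([0.91, 0.84, 0.77, 0.88, 0.69])  # per-case confidence estimate
dsc = np.array([0.78, 0.74, 0.61, 0.76, 0.55])         # per-case best-match DSC

r, p_value = pearsonr(confidence, dsc)
print(f"Pearson r = {r:.2f}, p = {p_value:.3g}")  # the paper reports p < 0.001 for both trials
```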

Conclusions

Having a model estimate the confidence of its predictions is important for understanding for which unseen cases the model is likely to be useful.

Citations: 0
PFMNet: Prototype-based feature mapping network for few-shot domain adaptation in medical image segmentation
IF 5.7 | CAS Tier 2 (Medicine) | Q1 Medicine | Pub Date: 2024-05-28 | DOI: 10.1016/j.compmedimag.2024.102406
Runze Wang, Guoyan Zheng

Lack of data is one of the biggest hurdles for rare disease research using deep learning. Due to the lack of rare-disease images and annotations, training a robust network for automatic rare-disease image segmentation is very challenging. To address this challenge, few-shot domain adaptation (FSDA) has emerged as a practical research direction, aiming to leverage a limited number of annotated images from a target domain to facilitate adaptation of models trained on other large datasets in a source domain. In this paper, we present a novel prototype-based feature mapping network (PFMNet) designed for FSDA in medical image segmentation. PFMNet adopts an encoder–decoder structure for segmentation, with the prototype-based feature mapping (PFM) module positioned at the bottom of the encoder–decoder structure. The PFM module transforms high-level features from the target domain into source domain-like features that are more easily comprehensible by the decoder. By leveraging these source domain-like features, the decoder can effectively perform few-shot segmentation in the target domain and generate accurate segmentation masks. We evaluate the performance of PFMNet through experiments on three typical yet challenging few-shot medical image segmentation tasks: cross-center optic disc/cup segmentation, cross-center polyp segmentation, and cross-modality cardiac structure segmentation. We consider four different settings: 5-shot, 10-shot, 15-shot, and 20-shot. The experimental results substantiate the efficacy of our proposed approach for few-shot domain adaptation in medical image segmentation.
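To make the PFM idea concrete, here is a minimal sketch of one plausible reading: source-domain class prototypes are built by masked average pooling, and target-domain features are re-expressed as attention-weighted mixtures of those prototypes. The function names, temperature and attention form are assumptions, not the released PFMNet code.

```python
import torch
import torch.nn.functional as F

def masked_average_pooling(feats, mask):
    """feats: (B, C, H, W); mask: (B, 1, H, W) float in {0., 1.} -> (B, C) prototype."""
    mask = F.interpolate(mask, size=feats.shape[-2:], mode="nearest")
    return (feats * mask).sum(dim=(2, 3)) / mask.sum(dim=(2, 3)).clamp(min=1e-6)

def map_to_source_like(target_feats, prototypes, tau=0.1):
    """Re-express target features via source prototypes.
    target_feats: (B, C, H, W); prototypes: (K, C)."""
    B, C, H, W = target_feats.shape
    flat = F.normalize(target_feats.flatten(2).transpose(1, 2), dim=-1)  # (B, HW, C)
    protos = F.normalize(prototypes, dim=-1)                             # (K, C)
    attn = torch.softmax(flat @ protos.t() / tau, dim=-1)                # (B, HW, K)
    mapped = attn @ prototypes                                           # (B, HW, C)
    return mapped.transpose(1, 2).reshape(B, C, H, W)  # source domain-like features
```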

Citations: 0
FetalBrainAwareNet: Bridging GANs with anatomical insight for fetal ultrasound brain plane synthesis
IF 5.7 | CAS Tier 2 (Medicine) | Q1 Medicine | Pub Date: 2024-05-28 | DOI: 10.1016/j.compmedimag.2024.102405
Angelo Lasala , Maria Chiara Fiorentino , Andrea Bandini , Sara Moccia

Over the past decade, deep-learning (DL) algorithms have become a promising tool to aid clinicians in identifying fetal head standard planes (FHSPs) during ultrasound (US) examination. However, the adoption of these algorithms in clinical settings is still hindered by the lack of large annotated datasets. To overcome this barrier, we introduce FetalBrainAwareNet, an innovative framework designed to synthesize anatomically accurate images of FHSPs. FetalBrainAwareNet introduces a cutting-edge approach that utilizes class activation maps as a prior in its conditional adversarial training process. This approach fosters the presence of specific anatomical landmarks in the synthesized images. Additionally, we investigate specialized regularization terms within the adversarial training loss function to control the morphology of the fetal skull and foster the differentiation between the standard planes, ensuring that the synthetic images faithfully represent real US scans in both structure and overall appearance. The versatility of our FetalBrainAwareNet framework is highlighted by its ability to generate high-quality images of three predominant FHSPs using a single, integrated framework. Quantitative (Fréchet inception distance of 88.52) and qualitative (t-SNE) results suggest that our framework generates US images with greater variability compared to state-of-the-art methods. By using the synthetic images generated with our framework, we increase the accuracy of FHSP classifiers by 3.2% compared to training the same classifiers solely with real acquisitions. These achievements suggest that using our synthetic images to increase the training set could help enhance the performance of DL algorithms for FHSP classification that could be integrated in real clinical scenarios.
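The class-activation-map prior can be illustrated with the standard CAM construction from a classifier whose last convolutional block feeds a global-average-pooled linear head; treat the function below as a generic sketch of that construction, not the paper's exact conditioning pipeline.

```python
import torch
import torch.nn.functional as F

def class_activation_map(conv_feats, fc_weights, class_idx):
    """conv_feats: (B, C, H, W) from the last conv block;
    fc_weights: (num_classes, C) from the linear classifier head."""
    w = fc_weights[class_idx].view(1, -1, 1, 1)               # weights of the target plane class
    cam = F.relu((conv_feats * w).sum(dim=1, keepdim=True))   # (B, 1, H, W)
    cam = cam / cam.amax(dim=(2, 3), keepdim=True).clamp(min=1e-6)
    return cam  # normalised saliency map, usable as a conditioning channel for a generator
```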

Citations: 0
A brain subcortical segmentation tool based on anatomy attentional fusion network for developing macaques
IF 5.7 | CAS Tier 2 (Medicine) | Q1 Medicine | Pub Date: 2024-05-25 | DOI: 10.1016/j.compmedimag.2024.102404
Tao Zhong , Ya Wang , Xiaotong Xu , Xueyang Wu , Shujun Liang , Zhenyuan Ning , Li Wang , Yuyu Niu , Gang Li , Yu Zhang

Magnetic Resonance Imaging (MRI) plays a pivotal role in the accurate measurement of brain subcortical structures in macaques, which is crucial for unraveling the complexities of brain structure and function, thereby enhancing our understanding of neurodegenerative diseases and brain development. However, due to significant differences in brain size, structure, and imaging characteristics between humans and macaques, computational tools developed for human neuroimaging studies often encounter obstacles when applied to macaques. In this context, we propose an Anatomy Attentional Fusion Network (AAF-Net), which integrates multimodal MRI data with anatomical constraints in a multi-scale framework to address the challenges posed by the dynamic development, regional heterogeneity, and age-related size variations of the juvenile macaque brain, thus achieving precise subcortical segmentation. Specifically, we generate a Signed Distance Map (SDM), based on an initial rough segmentation of the subcortical region by a network, as an anatomical constraint providing comprehensive information on position, structure, and morphology. We then construct AAF-Net to fully fuse the SDM anatomical constraint and the multimodal images for refined segmentation. To thoroughly evaluate the performance of our proposed tool, over 700 macaque MRIs from 19 datasets were used in this study. Specifically, we employed two manually labeled longitudinal macaque datasets to develop the tool and completed four-fold cross-validation. Furthermore, we incorporated various external datasets to demonstrate the proposed tool's generalization capabilities and promise in brain development research. We have made this tool available as an open-source resource at https://github.com/TaoZhong11/Macaque_subcortical_segmentation for direct application.
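The SDM construction itself is standard and can be sketched with SciPy's Euclidean distance transform: distances are negative inside the rough segmentation, positive outside, and near zero at the surface. The sign convention is an assumption here, since papers differ on it.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance_map(mask):
    """mask: 3D boolean array holding the initial rough subcortical segmentation."""
    mask = mask.astype(bool)
    inside = distance_transform_edt(mask)    # distance to background, for foreground voxels
    outside = distance_transform_edt(~mask)  # distance to foreground, for background voxels
    return outside - inside                  # negative inside, positive outside
```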

Citations: 0
Progress and trends in neurological disorders research based on deep learning
IF 5.7 | CAS Tier 2 (Medicine) | Q1 Medicine | Pub Date: 2024-05-25 | DOI: 10.1016/j.compmedimag.2024.102400
Muhammad Shahid Iqbal , Md Belal Bin Heyat , Saba Parveen , Mohd Ammar Bin Hayat , Mohamad Roshanzamir , Roohallah Alizadehsani , Faijan Akhtar , Eram Sayeed , Sadiq Hussain , Hany S. Hussein , Mohamad Sawan

In recent years, deep learning (DL) has emerged as a powerful tool in clinical imaging, offering unprecedented opportunities for the diagnosis and treatment of neurological disorders (NDs). This comprehensive review explores the multifaceted role of DL techniques in leveraging vast datasets to advance our understanding of NDs and improve clinical outcomes. Beginning with a systematic literature review, we delve into the utilization of DL, particularly focusing on multimodal neuroimaging data analysis, a domain that has witnessed rapid progress and garnered significant scientific interest. Our study categorizes and critically analyses numerous DL models, including Convolutional Neural Networks (CNNs), LSTM-CNN, GAN, and VGG, to understand their performance across different types of neurological diseases. Through this analysis, we identify key benchmarks and datasets utilized in training and testing DL models, shedding light on the challenges and opportunities in clinical neuroimaging research. Moreover, we discuss the effectiveness of DL in real-world clinical scenarios, emphasizing its potential to revolutionize ND diagnosis and therapy. By synthesizing existing literature and describing future directions, this review not only provides insights into the current state of DL applications in ND analysis but also paves the way for the development of more efficient and accessible DL techniques. Finally, our findings underscore the transformative impact of DL in reshaping the landscape of clinical neuroimaging, offering hope for enhanced patient care and groundbreaking discoveries in the field of neurology. This review paper is beneficial for neuropathologists and new researchers in this field.

Citations: 0
A deep learning approach for virtual contrast enhancement in Contrast Enhanced Spectral Mammography
IF 5.7 | CAS Tier 2 (Medicine) | Q1 Medicine | Pub Date: 2024-05-23 | DOI: 10.1016/j.compmedimag.2024.102398
Aurora Rofena , Valerio Guarrasi , Marina Sarli , Claudia Lucia Piccolo , Matteo Sammarra , Bruno Beomonte Zobel , Paolo Soda

Contrast Enhanced Spectral Mammography (CESM) is a dual-energy mammographic imaging technique that first requires intravenously administering an iodinated contrast medium. It then collects both a low-energy image, comparable to standard mammography, and a high-energy image. The two scans are combined to obtain a recombined image showing contrast enhancement. Despite the diagnostic advantages of CESM for breast cancer diagnosis, the contrast medium can cause side effects, and CESM also exposes patients to a higher radiation dose than standard mammography. To address these limitations, this work proposes using deep generative models for virtual contrast enhancement on CESM, aiming to make CESM contrast-free and reduce the radiation dose. Our deep networks, consisting of an autoencoder and two Generative Adversarial Networks, Pix2Pix and CycleGAN, generate synthetic recombined images solely from low-energy images. We perform an extensive quantitative and qualitative analysis of the models' performance, also exploiting radiologists' assessments, on a novel CESM dataset that includes 1138 images. As a further contribution of this work, we make the dataset publicly available. The results show that CycleGAN is the most promising deep network for generating synthetic recombined images, highlighting the potential of artificial intelligence techniques for virtual contrast enhancement in this field.
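For readers unfamiliar with the unpaired setting, the generator objective of a CycleGAN for low-energy to recombined translation can be sketched as follows; G and F_inv are the two generators, and the least-squares adversarial form and the weight lam=10 are common defaults assumed here, not the paper's configuration.

```python
import torch
import torch.nn.functional as F

def cyclegan_generator_loss(G, F_inv, D_recomb, D_low, low, recomb, lam=10.0):
    """G: low-energy -> recombined; F_inv: recombined -> low-energy."""
    fake_recomb, fake_low = G(low), F_inv(recomb)
    # Least-squares adversarial terms: each synthetic image should fool its discriminator.
    adv = F.mse_loss(D_recomb(fake_recomb), torch.ones_like(D_recomb(fake_recomb))) + \
          F.mse_loss(D_low(fake_low), torch.ones_like(D_low(fake_low)))
    # Cycle consistency: translating there and back should recover the input.
    cyc = F.l1_loss(F_inv(fake_recomb), low) + F.l1_loss(G(fake_low), recomb)
    return adv + lam * cyc
```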

Citations: 0
Deep learning ensembles for detecting brain metastases in longitudinal multi-modal MRI studies
IF 5.7 | CAS Tier 2 (Medicine) | Q1 Medicine | Pub Date: 2024-05-22 | DOI: 10.1016/j.compmedimag.2024.102401
Bartosz Machura , Damian Kucharski , Oskar Bozek , Bartosz Eksner , Bartosz Kokoszka , Tomasz Pekala , Mateusz Radom , Marek Strzelczak , Lukasz Zarudzki , Benjamín Gutiérrez-Becker , Agata Krason , Jean Tessier , Jakub Nalepa

Metastatic brain cancer is a condition characterized by the migration of cancer cells to the brain from extracranial sites. Notably, metastatic brain tumors surpass primary brain tumors in prevalence by a significant factor; they exhibit an aggressive growth potential and have the capacity to spread across diverse cerebral locations simultaneously. Magnetic resonance imaging (MRI) scans of individuals afflicted with metastatic brain tumors unveil a wide spectrum of characteristics. These lesions vary in size and quantity, spanning from tiny nodules to substantial masses captured within MRI. Patients may present with a limited number of lesions or an extensive burden of hundreds of them. Moreover, longitudinal studies may depict surgical resection cavities, as well as areas of necrosis or edema. Thus, the manual analysis of such MRI scans is difficult, user-dependent and cost-inefficient, and, importantly, it lacks reproducibility. We address these challenges and propose a pipeline for detecting and analyzing brain metastases in longitudinal studies, which benefits from an ensemble of various deep learning architectures originally designed for different downstream tasks (detection and segmentation). The experiments, performed over 275 multi-modal MRI scans of 87 patients acquired at 53 sites, coupled with rigorously validated manual annotations, revealed that our pipeline, built upon open-source tools to ensure its reproducibility, offers high-quality detection and allows for precisely tracking the disease progression. To objectively quantify the generalizability of models, we introduce a new data stratification approach that accommodates the heterogeneity of the dataset and is used to elaborate training-test splits in a data-robust manner, alongside a new set of quality metrics to objectively assess algorithms. Our system provides a fully automatic and quantitative approach that may support physicians in the laborious process of disease progression tracking and evaluation of treatment efficacy.
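A simple way to picture the ensembling step is voxel-wise averaging of the member models' probability maps, with the averaged map doubling as a confidence estimate; the member list and decision threshold below are placeholders, since the paper combines heterogeneous detection and segmentation architectures in a more elaborate pipeline.

```python
import torch

def ensemble_lesion_map(models, volume, threshold=0.5):
    """Average voxel-wise metastasis probabilities across ensemble members."""
    with torch.no_grad():
        probs = torch.stack([torch.sigmoid(m(volume)) for m in models]).mean(dim=0)
    consensus_mask = probs > threshold  # binary detection map
    return consensus_mask, probs        # probs doubles as an agreement/confidence map
```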

Citations: 0
A 3D framework for segmentation of carotid artery vessel wall and identification of plaque compositions in multi-sequence MR images
IF 5.7 | CAS Tier 2 (Medicine) | Q1 Medicine | Pub Date: 2024-05-21 | DOI: 10.1016/j.compmedimag.2024.102402
Jian Wang , Fan Yu , Mengze Zhang , Jie Lu , Zhen Qian

Accurately assessing carotid artery wall thickening and identifying risky plaque components are critical for early diagnosis and risk management of carotid atherosclerosis. In this paper, we present a 3D framework for automated segmentation of the carotid artery vessel wall and identification of the compositions of carotid plaque in multi-sequence magnetic resonance (MR) images under the challenge of imperfect manual labeling. Manual labeling is commonly done in 2D slices of these multi-sequence MR images and often lacks perfect alignment across 2D slices and the multiple MR sequences, leading to labeling inaccuracies. To address such challenges, our framework is split into two parts: a segmentation subnetwork and a plaque component identification subnetwork. Initially, a 2D localization network pinpoints the carotid artery's position, extracting the region of interest (ROI) from the input images. Following that, a signed-distance-map-enabled 3D U-net (Çiçek et al., 2016), an adaptation of the nnU-net (Ronneberger and Fischer, 2015), segments the carotid artery vessel wall. This method allows for the concurrent segmentation of the vessel wall area using the signed distance map (SDM) loss (Xue et al., 2020), which regularizes the segmentation surfaces in 3D and reduces erroneous segmentation caused by imperfect manual labels. Subsequently, the ROI of the input images and the obtained vessel wall masks are extracted and combined to obtain the identification results of plaque components in the identification subnetwork. Tailored data augmentation operations are introduced into the framework to reduce the false positive rate of calcification and hemorrhage identification. We trained and tested our proposed method on a dataset consisting of 115 patients, and it achieves an accurate segmentation result of the carotid artery wall (0.8459 Dice), which is superior to the best result in published studies (0.7885 Dice). Our approach yielded accuracies of 0.82, 0.73 and 0.88 for the identification of calcification, lipid-rich core and hemorrhage components. Our proposed framework can potentially be used in clinical and research settings to help radiologists perform cumbersome reading tasks and evaluate the risk of carotid plaques.
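The cited SDM loss can be sketched as an L1 regression on the signed distance map plus a "product" term that rewards voxel-wise sign agreement between prediction and ground truth, following common implementations of Xue et al. (2020); the smoothing constant and equal weighting of the two terms are assumptions.

```python
import torch

def sdm_loss(pred_sdm, gt_sdm, smooth=1e-5):
    """Regress a predicted signed distance map against the ground-truth SDM."""
    intersect = (pred_sdm * gt_sdm).sum()
    # Product term: large when predicted and true distances agree in sign and scale.
    product = (intersect + smooth) / (
        intersect + (pred_sdm ** 2).sum() + (gt_sdm ** 2).sum() + smooth)
    l1 = (pred_sdm - gt_sdm).abs().mean()
    return l1 - product  # minimise the regression error while maximising agreement
```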

Citations: 0
Enhancing cancer prediction in challenging screen-detected incident lung nodules using time-series deep learning
IF 5.7 | CAS Tier 2 (Medicine) | Q1 Medicine | Pub Date: 2024-05-20 | DOI: 10.1016/j.compmedimag.2024.102399
Shahab Aslani , Pavan Alluri , Eyjolfur Gudmundsson , Edward Chandy , John McCabe , Anand Devaraj , Carolyn Horst , Sam M. Janes , Rahul Chakkara , Daniel C. Alexander , SUMMIT consortium, Arjun Nair , Joseph Jacob

Lung cancer screening (LCS) using annual computed tomography (CT) scanning significantly reduces mortality by detecting cancerous lung nodules at an earlier stage. Deep learning algorithms can improve nodule malignancy risk stratification. However, they have typically been used to analyse single-time-point CT data when detecting malignant nodules on either baseline or incident CT LCS rounds. Deep learning algorithms have the greatest value in two respects. First, these approaches have great potential in assessing nodule change across time-series CT scans, where subtle changes may be challenging to identify using the human eye alone. Second, they could be targeted to detect nodules developing on incident screening rounds, where cancers are generally smaller and more challenging to detect confidently.

Here, we show the performance of our Deep learning-based Computer-Aided Diagnosis model integrating Nodule and Lung imaging data with clinical Metadata Longitudinally (DeepCAD-NLM-L) for malignancy prediction. DeepCAD-NLM-L showed improved performance (AUC = 88%) against models utilizing single-time-point data alone. DeepCAD-NLM-L also demonstrated comparable and complementary performance to radiologists when interpreting the most challenging nodules typically found in LCS programs. It also demonstrated similar performance to radiologists when assessed on an out-of-distribution imaging dataset. The results emphasize the advantages of using time-series and multimodal analyses when interpreting malignancy risk in LCS.
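One plausible shape for such a longitudinal, multimodal model is sketched below: per-scan nodule embeddings are summarised by a recurrent layer and concatenated with clinical metadata before classification. The GRU, the dimensions and the fusion-by-concatenation are illustrative assumptions, not the published DeepCAD-NLM-L architecture.

```python
import torch
import torch.nn as nn

class LongitudinalMalignancyHead(nn.Module):
    """Fuse per-time-point imaging embeddings with clinical metadata."""
    def __init__(self, img_dim=256, meta_dim=8, hidden=128):
        super().__init__()
        self.temporal = nn.GRU(img_dim, hidden, batch_first=True)  # scan order matters
        self.classifier = nn.Sequential(
            nn.Linear(hidden + meta_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, scan_embeddings, metadata):
        """scan_embeddings: (B, T, img_dim); metadata: (B, meta_dim)."""
        _, h = self.temporal(scan_embeddings)            # h: (1, B, hidden)
        fused = torch.cat([h.squeeze(0), metadata], dim=1)
        return torch.sigmoid(self.classifier(fused))     # malignancy probability
```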

Citations: 0
Deep neural network for the prediction of KRAS, NRAS, and BRAF genotypes in left-sided colorectal cancer based on histopathologic images
IF 5.7 | CAS Tier 2 (Medicine) | Q1 Medicine | Pub Date: 2024-05-12 | DOI: 10.1016/j.compmedimag.2024.102384
Xuejie Li , Xianda Chi , Pinjie Huang , Qiong Liang , Jianpei Liu

Background

The KRAS, NRAS, and BRAF genotypes are critical for selecting targeted therapies for patients with metastatic colorectal cancer (mCRC). Here, we aimed to develop a deep learning model that utilizes pathologic whole-slide images (WSIs) to accurately predict the status of KRAS, NRAS, and BRAFV600E.

Methods

129 patients with left-sided colon cancer and rectal cancer from the Third Affiliated Hospital of Sun Yat-sen University were assigned to the training and testing cohorts. Utilizing three convolutional neural networks (ResNet18, ResNet50, and Inception v3), we extracted 206 pathological features from H&E-stained WSIs, serving as the foundation for constructing specific pathological models. A clinical feature model was then developed, with carcinoembryonic antigen (CEA) identified through comprehensive multiple regression analysis as the key biomarker. Subsequently, these two models were combined to create a clinical-pathological integrated model, resulting in a total of three genetic prediction models.
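In outline, the combined clinical-pathological model can be pictured as a logistic model over the aggregated WSI features plus CEA, evaluated by AUC; the random arrays below are placeholders standing in for the study's 206 CNN-derived features and mutation labels.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X_path = rng.normal(size=(103, 206))           # per-patient aggregated WSI features
cea = rng.lognormal(mean=1.0, size=(103, 1))   # carcinoembryonic antigen levels
X = np.hstack([X_path, cea])                   # clinical-pathological feature vector
y = rng.integers(0, 2, size=103)               # mutation status (placeholder labels)

clf = LogisticRegression(max_iter=5000).fit(X, y)
print("training AUC:", roc_auc_score(y, clf.predict_proba(X)[:, 1]))
```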

Results

103 patients were evaluated in the training cohort (1,782,302 image tiles), while the remaining 26 patients were enrolled in the testing cohort (489,481 image tiles). Compared with the clinical model and the pathology model, the combined model, which incorporated CEA levels and pathological signatures, showed increased predictive ability, with an area under the curve (AUC) of 0.96 in the training cohort and 0.83 in the testing cohort, accompanied by a high positive predictive value (PPV 0.92).

Conclusion

The combined model demonstrated a considerable ability to accurately predict the status of KRAS, NRAS, and BRAFV600E in patients with left-sided colorectal cancer, with potential application in assisting doctors to develop targeted treatment strategies for mCRC patients, effectively identify mutations, and eliminate the need for confirmatory genetic testing.

Citations: 0