
Latest Publications in Computerized Medical Imaging and Graphics

Enhancing trabecular CT scans based on deep learning with multi-strategy fusion
IF 5.4 | CAS Tier 2 (Medicine) | Q1 ENGINEERING, BIOMEDICAL | Pub Date: 2024-06-12 | DOI: 10.1016/j.compmedimag.2024.102410
Peixuan Ge , Shibo Li , Yefeng Liang , Shuwei Zhang , Lihai Zhang , Ying Hu , Liang Yao , Pak Kin Wong

Trabecular bone analysis plays a crucial role in understanding bone health and disease, with applications such as osteoporosis diagnosis. This paper presents a comprehensive study of 3D trabecular computed tomography (CT) image restoration, addressing significant challenges in this domain. The research introduces a backbone model, Cascade-SwinUNETR, for single-view 3D CT image restoration; it combines deeply supervised layer aggregation with the feature-extraction capabilities of the Swin Transformer. The study also presents DVSR3D, a dual-view restoration model that achieves strong performance through deep feature fusion with attention mechanisms and autoencoders. Furthermore, an unsupervised domain adaptation (UDA) method is introduced that allows the models to adapt to new input data distributions without additional labels, which holds significant potential for real-world medical applications and eliminates the need for invasive data-collection procedures. The study also curates a new dual-view dataset for CT image restoration, addressing the scarcity of real human bone data in micro-CT. Finally, the dual-view approach is validated through downstream medical bone-microstructure measurements. These contributions open several paths for trabecular bone analysis, promising improved clinical outcomes in bone health assessment and diagnosis.
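The dual-view fusion idea can be illustrated with a toy sketch. The function below (`attention_fuse`, a hypothetical helper, not the paper's DVSR3D) combines two view-specific feature vectors with a softmax attention weight; the actual model performs learned attention over deep 3D feature maps.

```python
import numpy as np

def attention_fuse(feat_a: np.ndarray, feat_b: np.ndarray) -> np.ndarray:
    """Fuse two view-specific feature vectors with a softmax attention weight.

    Toy illustration of attention-based dual-view fusion; the feature norm
    stands in for a learned gating score.
    """
    scores = np.array([np.linalg.norm(feat_a), np.linalg.norm(feat_b)])
    weights = np.exp(scores - scores.max())  # numerically stable softmax
    weights /= weights.sum()
    return weights[0] * feat_a + weights[1] * feat_b

# The stronger (higher-norm) view dominates the fused representation.
fused = attention_fuse(np.ones(4), np.zeros(4))
```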

Citations: 0
An automatic radiomic-based approach for disease localization: A pilot study on COVID-19
IF 5.4 | CAS Tier 2 (Medicine) | Q1 ENGINEERING, BIOMEDICAL | Pub Date: 2024-06-12 | DOI: 10.1016/j.compmedimag.2024.102411
Giulia Varriano , Vittoria Nardone , Simona Correra, Francesco Mercaldo, Antonella Santone

Radiomics is an innovative field in personalized medicine that helps medical specialists with diagnosis and prognosis. Applying radiomics to medical images typically requires defining and delimiting a Region of Interest (ROI) on the image from which radiomic features are extracted. The aim of this preliminary study is to define an approach that automatically detects the specific areas indicative of a particular disease and examines them to minimize diagnostic errors associated with false positives and false negatives. The approach overlays an n×n grid on the DICOM image sequence; each cell of the resulting matrix is associated with a region from which radiomic features can be extracted.

The proposed procedure uses the model checking technique and outputs the medical diagnosis of the patient, i.e., whether the patient under analysis is affected by a specific disease. Furthermore, the matrix-based method localizes where the disease markers appear. A case study on COVID-19 is used to evaluate the performance of the proposed methodology; results on both disease identification and localization are promising. Moreover, the proposed approach yields better results than methods that extract features from the whole image as a single ROI, as evidenced by improvements in accuracy and especially recall. The approach supports the advancement of knowledge, interoperability, and trust in the software tool, fostering collaboration among doctors, staff, and radiomics practitioners.
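The gridding step can be sketched as follows. `grid_features` is a hypothetical helper that partitions a 2D slice into an n×n grid and computes simple per-cell summary statistics as stand-ins for radiomic features; a real pipeline would extract proper radiomic features (texture, shape, intensity descriptors) per cell, typically with a dedicated library.

```python
import numpy as np

def grid_features(image: np.ndarray, n: int) -> dict:
    """Split a 2D slice into an n x n grid; return per-cell summary stats
    (toy stand-ins for radiomic features)."""
    h, w = image.shape
    cells = {}
    for i in range(n):
        for j in range(n):
            patch = image[i * h // n:(i + 1) * h // n,
                          j * w // n:(j + 1) * w // n]
            cells[(i, j)] = {"mean": float(patch.mean()),
                             "std": float(patch.std()),
                             "range": float(np.ptp(patch))}
    return cells

# Each cell of the 2x2 grid becomes a candidate region to classify.
img = np.arange(16, dtype=float).reshape(4, 4)
feats = grid_features(img, 2)
```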

Citations: 0
PCa-RadHop: A transparent and lightweight feed-forward method for clinically significant prostate cancer segmentation
IF 5.4 | CAS Tier 2 (Medicine) | Q1 ENGINEERING, BIOMEDICAL | Pub Date: 2024-06-10 | DOI: 10.1016/j.compmedimag.2024.102408
Vasileios Magoulianitis , Jiaxin Yang , Yijing Yang , Jintang Xue , Masatomo Kaneko , Giovanni Cacciamani , Andre Abreu , Vinay Duddalwar , C.-C. Jay Kuo , Inderbir S. Gill , Chrysostomos Nikias

Prostate cancer is one of the most frequently occurring cancers in men, with a low survival rate if not diagnosed early. PI-RADS reading has a high false-positive rate, increasing diagnostic costs and patient discomfort. Deep learning (DL) models achieve high segmentation performance but require large model sizes and high complexity. DL models also lack feature interpretability and are perceived as "black boxes" in the medical field. This work proposes the PCa-RadHop pipeline, which aims to provide a more transparent feature-extraction process using a linear model. It adopts the recently introduced Green Learning (GL) paradigm, which offers a small model size and low complexity. PCa-RadHop consists of two stages: stage 1 extracts data-driven radiomics features from the bi-parametric magnetic resonance imaging (bp-MRI) input and predicts an initial heatmap; to reduce the false-positive rate, stage 2 refines the predictions by incorporating more contextual information and radiomics features from each detected Region of Interest (ROI). Experiments on the largest publicly available dataset, PI-CAI, show that the proposed method is competitive with deep DL models, achieving an area under the curve (AUC) of 0.807 on a cohort of 1,000 patients, while maintaining an orders-of-magnitude smaller model size and complexity.
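The two-stage idea (candidate detection, then context-based refinement) can be sketched in miniature. `stage1_rois` and `stage2_refine` below are hypothetical stand-ins: stage 1 thresholds a probability heatmap into ROI seeds, and stage 2 rescores each seed from its local neighborhood, mimicking how contextual features can suppress isolated false positives; the actual pipeline uses learned RadHop features rather than raw intensities.

```python
import numpy as np

def stage1_rois(heatmap: np.ndarray, thresh: float = 0.5) -> np.ndarray:
    """Stage-1 stand-in: threshold a probability heatmap into ROI seeds."""
    return np.argwhere(heatmap > thresh)

def stage2_refine(heatmap: np.ndarray, rois: np.ndarray, win: int = 1) -> list:
    """Stage-2 stand-in: keep an ROI only if its local context (mean of a
    surrounding window) also looks suspicious, suppressing isolated hits."""
    h, w = heatmap.shape
    keep = []
    for r, c in rois:
        r0, r1 = max(r - win, 0), min(r + win + 1, h)
        c0, c1 = max(c - win, 0), min(c + win + 1, w)
        if heatmap[r0:r1, c0:c1].mean() > 0.4:
            keep.append((int(r), int(c)))
    return keep

heat = np.zeros((7, 7))
heat[0, 6] = 0.9       # isolated false positive
heat[3:6, 0:3] = 0.9   # coherent lesion-like cluster
rois = stage1_rois(heat)
refined = stage2_refine(heat, rois)
```

Stage 1 flags all ten hot pixels; stage 2 keeps the cluster interior and drops the isolated hit.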

Citations: 0
Unsupervised domain adaptation based on feature and edge alignment for femur X-ray image segmentation
IF 5.7 | CAS Tier 2 (Medicine) | Q1 ENGINEERING, BIOMEDICAL | Pub Date: 2024-06-08 | DOI: 10.1016/j.compmedimag.2024.102407
Xiaoming Jiang , Yongxin Yang , Tong Su , Kai Xiao , LiDan Lu , Wei Wang , Changsong Guo , Lizhi Shao , Mingjing Wang , Dong Jiang

The gold standard for diagnosing osteoporosis is bone mineral density (BMD) measurement by dual-energy X-ray absorptiometry (DXA). However, various factors in the imaging process cause domain shifts in DXA images, which lead to incorrect bone segmentation. Research shows that poor bone segmentation is one of the primary causes of inaccurate BMD measurement, severely affecting diagnosis and treatment planning for osteoporosis. This paper proposes a Multi-feature Joint Discriminative Domain Adaptation (MDDA) framework to improve segmentation performance and network generalization on domain-shifted images. The method learns domain-invariant features between the source and target domains from the perspectives of multi-scale features and edges, and is evaluated on real data from multi-center datasets. Compared with other state-of-the-art methods, the feature prior from the source domain and the edge prior enable MDDA to achieve the best domain-adaptation performance and generalization. It also performs well in domain-adaptation tasks on small datasets, even when using only 5 or 10 images. MDDA thus provides an accurate bone-segmentation tool for DXA-based BMD measurement.
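The edge-alignment component can be illustrated with a toy objective. `edge_map` and `edge_alignment_loss` below are hypothetical stand-ins: a horizontal Sobel response serves as the edge prior, and a mean-squared difference between edge maps serves as the alignment term; MDDA's actual objective is discriminative and learned, not this simple.

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)

def edge_map(img: np.ndarray) -> np.ndarray:
    """Horizontal Sobel response (valid correlation) as a crude edge prior."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(img[i:i + 3, j:j + 3] * SOBEL_X)
    return out

def edge_alignment_loss(pred: np.ndarray, ref: np.ndarray) -> float:
    """Mean squared difference between edge maps, a toy stand-in for the
    edge-alignment term of a domain-adaptation objective."""
    return float(np.mean((edge_map(pred) - edge_map(ref)) ** 2))

# A vertical step edge produces a strong Sobel response; identical images
# yield zero alignment loss.
step = np.zeros((4, 4))
step[:, 2:] = 1.0
loss_same = edge_alignment_loss(step, step)
```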

Citations: 0
Uncertainty estimation using a 3D probabilistic U-Net for segmentation with small radiotherapy clinical trial datasets
IF 5.7 | CAS Tier 2 (Medicine) | Q1 ENGINEERING, BIOMEDICAL | Pub Date: 2024-06-02 | DOI: 10.1016/j.compmedimag.2024.102403
Phillip Chlap , Hang Min , Jason Dowling , Matthew Field , Kirrily Cloak , Trevor Leong , Mark Lee , Julie Chu , Jennifer Tan , Phillip Tran , Tomas Kron , Mark Sidhom , Kirsty Wiltshire , Sarah Keats , Andrew Kneebone , Annette Haworth , Martin A. Ebert , Shalini K. Vinod , Lois Holloway
Background and objectives: Biomedical image segmentation models typically attempt to predict one segmentation that resembles a ground-truth structure as closely as possible. However, as medical images are not perfect representations of anatomy, obtaining this ground truth is not possible. A common surrogate is to have multiple expert observers define the same structure for a dataset. When multiple observers define the same structure on the same image, there can be significant differences depending on the structure, the image quality and modality, and the region being defined. It is often desirable to estimate this type of aleatoric uncertainty in a segmentation model to help understand the region in which the true structure is likely to be positioned. Furthermore, obtaining these datasets is resource intensive, so such models may need to be trained on limited data. With a small dataset, differing patient anatomy is likely not well represented, causing epistemic uncertainty, which should also be estimated so that the cases for which the model is effective can be determined.

Methods: We use a 3D probabilistic U-Net to train a model from which several segmentations can be sampled to estimate the range of uncertainty seen between multiple observers. To ensure that regions where observers disagree most are emphasized in model training, we extend the Generalised ELBO with Constrained Optimisation (GECO) loss function with an additional contour loss term that gives attention to these regions. Ensemble and Monte Carlo dropout (MCDO) uncertainty quantification methods are used during inference to estimate model confidence on unseen cases. We apply our methodology to two radiotherapy clinical trial datasets: a gastric cancer trial (TOPGEAR, TROG 08.08) and a post-prostatectomy prostate cancer trial (RAVES, TROG 08.03). Each dataset contains only 10 cases for model development, in which the clinical target volume (CTV) was defined by multiple observers on each case. An additional 50 cases per trial are available as a hold-out dataset, in which only one observer defined the CTV on each case. Up to 50 samples were generated with the probabilistic model for each case in the hold-out dataset. To assess performance, each manually defined structure was matched to the closest sampled segmentation based on commonly used metrics.

Results: The TOPGEAR CTV model achieved a Dice Similarity Coefficient (DSC) and surface DSC (sDSC) of 0.7 and 0.43 respectively, with the RAVES model achieving 0.75 and 0.71 respectively. Segmentation quality across cases in the hold-out datasets was variable; however, both the ensemble and MCDO uncertainty estimation approaches were able to accurately estimate model confidence, with a p-value < 0.001 for both TOPGEAR and RAVES when comparing the DSC using the Pearson correlation coefficient.

Conclusions: Enabling a model to estimate its prediction confidence is important for understanding for which unseen cases a model is likely to be useful.
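The hold-out evaluation described above, matching each manually defined structure to the closest sampled segmentation, can be sketched with a simple Dice-based matcher. `dice` and `best_matching_sample` are hypothetical helpers; the paper's evaluation also uses surface DSC.

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice Similarity Coefficient between two binary masks."""
    inter = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    return 2.0 * inter / denom if denom else 1.0

def best_matching_sample(manual: np.ndarray, samples: list) -> tuple:
    """Return (index, DSC) of the sampled segmentation closest to the
    manually defined structure."""
    scores = [dice(manual, s) for s in samples]
    i = int(np.argmax(scores))
    return i, scores[i]

# One sample overlaps partially, the other matches exactly.
manual = np.zeros((4, 4), dtype=bool)
manual[1:3, 1:3] = True
shifted = np.zeros_like(manual)
shifted[2:4, 2:4] = True
idx, dsc = best_matching_sample(manual, [shifted, manual.copy()])
```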
Citations: 0
PFMNet: Prototype-based feature mapping network for few-shot domain adaptation in medical image segmentation
IF 5.7 | CAS Tier 2 (Medicine) | Q1 ENGINEERING, BIOMEDICAL | Pub Date: 2024-05-28 | DOI: 10.1016/j.compmedimag.2024.102406
Runze Wang, Guoyan Zheng

A lack of data is one of the biggest hurdles for rare-disease research using deep learning. Because rare-disease images and annotations are scarce, training a robust network for automatic rare-disease image segmentation is very challenging. To address this challenge, few-shot domain adaptation (FSDA) has emerged as a practical research direction, aiming to leverage a limited number of annotated images from a target domain to adapt models trained on other large datasets in a source domain. This paper presents a novel prototype-based feature mapping network (PFMNet) designed for FSDA in medical image segmentation. PFMNet adopts an encoder-decoder structure for segmentation, with the prototype-based feature mapping (PFM) module positioned at the bottom of the encoder-decoder structure. The PFM module transforms high-level features from the target domain into source domain-like features that are more easily comprehensible to the decoder. By leveraging these source domain-like features, the decoder can effectively perform few-shot segmentation in the target domain and generate accurate segmentation masks. We evaluate PFMNet on three typical yet challenging few-shot medical image segmentation tasks: cross-center optic disc/cup segmentation, cross-center polyp segmentation, and cross-modality cardiac structure segmentation, under four settings: 5-shot, 10-shot, 15-shot, and 20-shot. The experimental results substantiate the efficacy of the proposed approach for few-shot domain adaptation in medical image segmentation.
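The prototype idea can be sketched outside a network. `class_prototypes` and `map_to_source_like` below are hypothetical helpers: prototypes are per-class mean feature vectors, and a target feature is pulled toward its nearest prototype to produce a "source domain-like" feature; in PFMNet this mapping is learned end-to-end inside the encoder-decoder.

```python
import numpy as np

def class_prototypes(features: np.ndarray, labels: np.ndarray) -> dict:
    """Mean feature vector per class: the 'prototype' in prototype-based
    feature mapping (toy version computed directly from source features)."""
    return {int(c): features[labels == c].mean(axis=0)
            for c in np.unique(labels)}

def map_to_source_like(target_feat: np.ndarray, prototypes: dict,
                       alpha: float = 0.5) -> np.ndarray:
    """Pull a target-domain feature toward its nearest source prototype,
    yielding a 'source domain-like' feature for the decoder."""
    dists = {c: np.linalg.norm(target_feat - p) for c, p in prototypes.items()}
    nearest = prototypes[min(dists, key=dists.get)]
    return (1 - alpha) * target_feat + alpha * nearest

# Two source classes; a target feature near class 0 is mapped toward it.
src_feats = np.array([[0., 0.], [0., 2.], [10., 10.], [10., 12.]])
src_labels = np.array([0, 0, 1, 1])
protos = class_prototypes(src_feats, src_labels)
mapped = map_to_source_like(np.array([1., 1.]), protos)
```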

Citations: 0
FetalBrainAwareNet: Bridging GANs with anatomical insight for fetal ultrasound brain plane synthesis
IF 5.7 Zone 2 (Medicine) Q1 ENGINEERING, BIOMEDICAL Pub Date : 2024-05-28 DOI: 10.1016/j.compmedimag.2024.102405
Angelo Lasala , Maria Chiara Fiorentino , Andrea Bandini , Sara Moccia

Over the past decade, deep-learning (DL) algorithms have become a promising tool to aid clinicians in identifying fetal head standard planes (FHSPs) during ultrasound (US) examination. However, the adoption of these algorithms in clinical settings is still hindered by the lack of large annotated datasets. To overcome this barrier, we introduce FetalBrainAwareNet, an innovative framework designed to synthesize anatomically accurate images of FHSPs. FetalBrainAwareNet introduces a cutting-edge approach that utilizes class activation maps as a prior in its conditional adversarial training process. This approach fosters the presence of the specific anatomical landmarks in the synthesized images. Additionally, we investigate specialized regularization terms within the adversarial training loss function to control the morphology of the fetal skull and foster the differentiation between the standard planes, ensuring that the synthetic images faithfully represent real US scans in both structure and overall appearance. The versatility of our FetalBrainAwareNet framework is highlighted by its ability to generate high-quality images of three predominant FHSPs using a singular, integrated framework. Quantitative (Fréchet inception distance of 88.52) and qualitative (t-SNE) results suggest that our framework generates US images with greater variability compared to state-of-the-art methods. By using the synthetic images generated with our framework, we increase the accuracy of FHSP classifiers by 3.2% compared to training the same classifiers solely with real acquisitions. These achievements suggest that using our synthetic images to increase the training set could provide benefits to enhance the performance of DL algorithms for FHSPs classification that could be integrated in real clinical scenarios.
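The Fréchet inception distance quoted above (88.52) is the Fréchet distance between Gaussians fitted to deep features of real and synthetic images. A minimal NumPy sketch, using random vectors in place of Inception-v3 activations and a pure-eigendecomposition matrix square root, looks like this:

```python
import numpy as np

def _sqrtm_psd(m):
    """Matrix square root of a symmetric positive semi-definite matrix."""
    vals, vecs = np.linalg.eigh(m)
    vals = np.clip(vals, 0.0, None)          # clip tiny negative eigenvalues
    return (vecs * np.sqrt(vals)) @ vecs.T

def frechet_distance(feats_a, feats_b):
    """Fréchet distance between Gaussians fitted to two feature sets —
    the quantity behind the FID score reported for generative models."""
    mu_a, mu_b = feats_a.mean(axis=0), feats_b.mean(axis=0)
    cov_a = np.cov(feats_a, rowvar=False)
    cov_b = np.cov(feats_b, rowvar=False)
    sa = _sqrtm_psd(cov_a)
    # trace of sqrtm(cov_a @ cov_b), computed through a symmetric PSD matrix
    tr_covmean = np.trace(_sqrtm_psd(sa @ cov_b @ sa))
    diff = mu_a - mu_b
    return float(diff @ diff + np.trace(cov_a) + np.trace(cov_b) - 2.0 * tr_covmean)

rng = np.random.default_rng(0)
real = rng.normal(0.0, 1.0, size=(500, 8))        # stand-ins for Inception features
fake_close = rng.normal(0.0, 1.0, size=(500, 8))  # same distribution
fake_far = rng.normal(1.0, 1.0, size=(500, 8))    # shifted distribution
print(frechet_distance(real, fake_close) < frechet_distance(real, fake_far))  # True
```

A real FID computation differs only in that the features come from a pretrained Inception-v3 network rather than a random generator.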

{"title":"FetalBrainAwareNet: Bridging GANs with anatomical insight for fetal ultrasound brain plane synthesis","authors":"Angelo Lasala ,&nbsp;Maria Chiara Fiorentino ,&nbsp;Andrea Bandini ,&nbsp;Sara Moccia","doi":"10.1016/j.compmedimag.2024.102405","DOIUrl":"10.1016/j.compmedimag.2024.102405","url":null,"abstract":"<div><p>Over the past decade, deep-learning (DL) algorithms have become a promising tool to aid clinicians in identifying fetal head standard planes (FHSPs) during ultrasound (US) examination. However, the adoption of these algorithms in clinical settings is still hindered by the lack of large annotated datasets. To overcome this barrier, we introduce FetalBrainAwareNet, an innovative framework designed to synthesize anatomically accurate images of FHSPs. FetalBrainAwareNet introduces a cutting-edge approach that utilizes class activation maps as a prior in its conditional adversarial training process. This approach fosters the presence of the specific anatomical landmarks in the synthesized images. Additionally, we investigate specialized regularization terms within the adversarial training loss function to control the morphology of the fetal skull and foster the differentiation between the standard planes, ensuring that the synthetic images faithfully represent real US scans in both structure and overall appearance. The versatility of our FetalBrainAwareNet framework is highlighted by its ability to generate high-quality images of three predominant FHSPs using a singular, integrated framework. Quantitative (Fréchet inception distance of 88.52) and qualitative (t-SNE) results suggest that our framework generates US images with greater variability compared to state-of-the-art methods. By using the synthetic images generated with our framework, we increase the accuracy of FHSP classifiers by 3.2% compared to training the same classifiers solely with real acquisitions. 
These achievements suggest that using our synthetic images to increase the training set could provide benefits to enhance the performance of DL algorithms for FHSPs classification that could be integrated in real clinical scenarios.</p></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"116 ","pages":"Article 102405"},"PeriodicalIF":5.7,"publicationDate":"2024-05-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141201020","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A brain subcortical segmentation tool based on anatomy attentional fusion network for developing macaques
IF 5.7 Zone 2 (Medicine) Q1 ENGINEERING, BIOMEDICAL Pub Date : 2024-05-25 DOI: 10.1016/j.compmedimag.2024.102404
Tao Zhong , Ya Wang , Xiaotong Xu , Xueyang Wu , Shujun Liang , Zhenyuan Ning , Li Wang , Yuyu Niu , Gang Li , Yu Zhang

Magnetic Resonance Imaging (MRI) plays a pivotal role in the accurate measurement of brain subcortical structures in macaques, which is crucial for unraveling the complexities of brain structure and function, thereby enhancing our understanding of neurodegenerative diseases and brain development. However, due to significant differences in brain size, structure, and imaging characteristics between humans and macaques, computational tools developed for human neuroimaging studies often encounter obstacles when applied to macaques. In this context, we propose an Anatomy Attentional Fusion Network (AAF-Net), which integrates multimodal MRI data with anatomical constraints in a multi-scale framework to address the challenges posed by the dynamic development, regional heterogeneity, and age-related size variations of the juvenile macaque brain, thus achieving precise subcortical segmentation. Specifically, we generate a Signed Distance Map (SDM) based on the initial rough segmentation of the subcortical region by a network as an anatomical constraint, providing comprehensive information on positions, structures, and morphology. Then we construct AAF-Net to fully fuse the SDM anatomical constraints and multimodal images for refined segmentation. To thoroughly evaluate the performance of our proposed tool, over 700 macaque MRIs from 19 datasets were used in this study. Specifically, we employed two manually labeled longitudinal macaque datasets to develop the tool and complete four-fold cross-validations. Furthermore, we incorporated various external datasets to demonstrate the proposed tool’s generalization capabilities and promise in brain development research. We have made this tool available as an open-source resource at https://github.com/TaoZhong11/Macaque_subcortical_segmentation for direct application.
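A Signed Distance Map of the kind used as an anatomical constraint here can be derived from any binary segmentation mask. Below is a toy NumPy sketch with a brute-force distance transform (one common sign convention, not necessarily the authors' exact one; at scale, scipy.ndimage.distance_transform_edt replaces the brute-force helper):

```python
import numpy as np

def _dist_to_true(binary):
    """Brute-force Euclidean distance from every pixel to the nearest True pixel
    (fine for toy images; a real pipeline would use a fast distance transform)."""
    targets = np.argwhere(binary)
    coords = np.indices(binary.shape).reshape(2, -1).T
    d = np.sqrt(((coords[:, None, :] - targets[None, :, :]) ** 2).sum(-1)).min(axis=1)
    return d.reshape(binary.shape)

def signed_distance_map(mask):
    """SDM convention: positive outside the structure, negative inside."""
    mask = mask.astype(bool)
    return _dist_to_true(mask) - _dist_to_true(~mask)

# Toy 'subcortical structure': a filled square in a 9x9 slice
mask = np.zeros((9, 9), dtype=np.uint8)
mask[3:6, 3:6] = 1
sdm = signed_distance_map(mask)
print(sdm[4, 4] < 0, sdm[0, 0] > 0)  # negative at the centre, positive far outside
```

Because the SDM encodes how far every voxel lies from the structure boundary, it gives a network positional and morphological context that a hard binary mask cannot.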

{"title":"A brain subcortical segmentation tool based on anatomy attentional fusion network for developing macaques","authors":"Tao Zhong ,&nbsp;Ya Wang ,&nbsp;Xiaotong Xu ,&nbsp;Xueyang Wu ,&nbsp;Shujun Liang ,&nbsp;Zhenyuan Ning ,&nbsp;Li Wang ,&nbsp;Yuyu Niu ,&nbsp;Gang Li ,&nbsp;Yu Zhang","doi":"10.1016/j.compmedimag.2024.102404","DOIUrl":"https://doi.org/10.1016/j.compmedimag.2024.102404","url":null,"abstract":"<div><p>Magnetic Resonance Imaging (MRI) plays a pivotal role in the accurate measurement of brain subcortical structures in macaques, which is crucial for unraveling the complexities of brain structure and function, thereby enhancing our understanding of neurodegenerative diseases and brain development. However, due to significant differences in brain size, structure, and imaging characteristics between humans and macaques, computational tools developed for human neuroimaging studies often encounter obstacles when applied to macaques. In this context, we propose an Anatomy Attentional Fusion Network (AAF-Net), which integrates multimodal MRI data with anatomical constraints in a multi-scale framework to address the challenges posed by the dynamic development, regional heterogeneity, and age-related size variations of the juvenile macaque brain, thus achieving precise subcortical segmentation. Specifically, we generate a Signed Distance Map (SDM) based on the initial rough segmentation of the subcortical region by a network as an anatomical constraint, providing comprehensive information on positions, structures, and morphology. Then we construct AAF-Net to fully fuse the SDM anatomical constraints and multimodal images for refined segmentation. To thoroughly evaluate the performance of our proposed tool, over 700 macaque MRIs from 19 datasets were used in this study. Specifically, we employed two manually labeled longitudinal macaque datasets to develop the tool and complete four-fold cross-validations. 
Furthermore, we incorporated various external datasets to demonstrate the proposed tool’s generalization capabilities and promise in brain development research. We have made this tool available as an open-source resource at <span>https://github.com/TaoZhong11/Macaque_subcortical_segmentation</span><svg><path></path></svg> for direct application.</p></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"116 ","pages":"Article 102404"},"PeriodicalIF":5.7,"publicationDate":"2024-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141313299","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Progress and trends in neurological disorders research based on deep learning
IF 5.7 Zone 2 (Medicine) Q1 ENGINEERING, BIOMEDICAL Pub Date : 2024-05-25 DOI: 10.1016/j.compmedimag.2024.102400
Muhammad Shahid Iqbal , Md Belal Bin Heyat , Saba Parveen , Mohd Ammar Bin Hayat , Mohamad Roshanzamir , Roohallah Alizadehsani , Faijan Akhtar , Eram Sayeed , Sadiq Hussain , Hany S. Hussein , Mohamad Sawan

In recent years, deep learning (DL) has emerged as a powerful tool in clinical imaging, offering unprecedented opportunities for the diagnosis and treatment of neurological disorders (NDs). This comprehensive review explores the multifaceted role of DL techniques in leveraging vast datasets to advance our understanding of NDs and improve clinical outcomes. Beginning with a systematic literature review, we delve into the utilization of DL, particularly focusing on multimodal neuroimaging data analysis—a domain that has witnessed rapid progress and garnered significant scientific interest. Our study categorizes and critically analyses numerous DL models, including Convolutional Neural Networks (CNNs), LSTM-CNN, GAN, and VGG, to understand their performance across different types of Neurology Diseases. Through particular analysis, we identify key benchmarks and datasets utilized in training and testing DL models, shedding light on the challenges and opportunities in clinical neuroimaging research. Moreover, we discuss the effectiveness of DL in real-world clinical scenarios, emphasizing its potential to revolutionize ND diagnosis and therapy. By synthesizing existing literature and describing future directions, this review not only provides insights into the current state of DL applications in ND analysis but also covers the way for the development of more efficient and accessible DL techniques. Finally, our findings underscore the transformative impact of DL in reshaping the landscape of clinical neuroimaging, offering hope for enhanced patient care and groundbreaking discoveries in the field of neurology. This review paper is beneficial for neuropathologists and new researchers in this field.

{"title":"Progress and trends in neurological disorders research based on deep learning","authors":"Muhammad Shahid Iqbal ,&nbsp;Md Belal Bin Heyat ,&nbsp;Saba Parveen ,&nbsp;Mohd Ammar Bin Hayat ,&nbsp;Mohamad Roshanzamir ,&nbsp;Roohallah Alizadehsani ,&nbsp;Faijan Akhtar ,&nbsp;Eram Sayeed ,&nbsp;Sadiq Hussain ,&nbsp;Hany S. Hussein ,&nbsp;Mohamad Sawan","doi":"10.1016/j.compmedimag.2024.102400","DOIUrl":"https://doi.org/10.1016/j.compmedimag.2024.102400","url":null,"abstract":"<div><p>In recent years, deep learning (DL) has emerged as a powerful tool in clinical imaging, offering unprecedented opportunities for the diagnosis and treatment of neurological disorders (NDs). This comprehensive review explores the multifaceted role of DL techniques in leveraging vast datasets to advance our understanding of NDs and improve clinical outcomes. Beginning with a systematic literature review, we delve into the utilization of DL, particularly focusing on multimodal neuroimaging data analysis—a domain that has witnessed rapid progress and garnered significant scientific interest. Our study categorizes and critically analyses numerous DL models, including Convolutional Neural Networks (CNNs), LSTM-CNN, GAN, and VGG, to understand their performance across different types of Neurology Diseases. Through particular analysis, we identify key benchmarks and datasets utilized in training and testing DL models, shedding light on the challenges and opportunities in clinical neuroimaging research. Moreover, we discuss the effectiveness of DL in real-world clinical scenarios, emphasizing its potential to revolutionize ND diagnosis and therapy. By synthesizing existing literature and describing future directions, this review not only provides insights into the current state of DL applications in ND analysis but also covers the way for the development of more efficient and accessible DL techniques. 
Finally, our findings underscore the transformative impact of DL in reshaping the landscape of clinical neuroimaging, offering hope for enhanced patient care and groundbreaking discoveries in the field of neurology. This review paper is beneficial for neuropathologists and new researchers in this field.</p></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"116 ","pages":"Article 102400"},"PeriodicalIF":5.7,"publicationDate":"2024-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141289967","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A deep learning approach for virtual contrast enhancement in Contrast Enhanced Spectral Mammography
IF 5.7 Zone 2 (Medicine) Q1 ENGINEERING, BIOMEDICAL Pub Date : 2024-05-23 DOI: 10.1016/j.compmedimag.2024.102398
Aurora Rofena , Valerio Guarrasi , Marina Sarli , Claudia Lucia Piccolo , Matteo Sammarra , Bruno Beomonte Zobel , Paolo Soda

Contrast Enhanced Spectral Mammography (CESM) is a dual-energy mammographic imaging technique that first requires intravenously administering an iodinated contrast medium. Then, it collects both a low-energy image, comparable to standard mammography, and a high-energy image. The two scans are combined to get a recombined image showing contrast enhancement. Despite CESM diagnostic advantages for breast cancer diagnosis, the use of contrast medium can cause side effects, and CESM also beams patients with a higher radiation dose compared to standard mammography. To address these limitations, this work proposes using deep generative models for virtual contrast enhancement on CESM, aiming to make CESM contrast-free and reduce the radiation dose. Our deep networks, consisting of an autoencoder and two Generative Adversarial Networks, the Pix2Pix, and the CycleGAN, generate synthetic recombined images solely from low-energy images. We perform an extensive quantitative and qualitative analysis of the model’s performance, also exploiting radiologists’ assessments, on a novel CESM dataset that includes 1138 images. As a further contribution to this work, we make the dataset publicly available. The results show that CycleGAN is the most promising deep network to generate synthetic recombined images, highlighting the potential of artificial intelligence techniques for virtual contrast enhancement in this field.
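CycleGAN, the best-performing generator in this study, is trained with a cycle-consistency loss on top of its adversarial terms. A toy NumPy sketch of that loss, with invertible affine intensity maps standing in for the two generator networks (our own simplification, not the paper's code), is:

```python
import numpy as np

def cycle_consistency_loss(g_forward, g_backward, low_images):
    """L1 cycle loss: low-energy -> synthetic recombined -> back to low-energy
    should reproduce the input (the core CycleGAN constraint)."""
    reconstructed = g_backward(g_forward(low_images))
    return float(np.mean(np.abs(low_images - reconstructed)))

# Toy 'generators': an invertible intensity map and its inverse
g_fwd = lambda x: 2.0 * x + 0.1        # low-energy -> 'recombined'
g_bwd = lambda x: (x - 0.1) / 2.0      # 'recombined' -> low-energy

rng = np.random.default_rng(0)
batch = rng.random((4, 32, 32))        # 4 fake low-energy images
print(cycle_consistency_loss(g_fwd, g_bwd, batch))  # ~0 for a near-perfect cycle
```

In training, this term is what lets the recombined-image generator learn from unpaired data: any information destroyed on the forward pass cannot be recovered on the way back, so the loss pushes both mappings toward being mutually consistent.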

{"title":"A deep learning approach for virtual contrast enhancement in Contrast Enhanced Spectral Mammography","authors":"Aurora Rofena ,&nbsp;Valerio Guarrasi ,&nbsp;Marina Sarli ,&nbsp;Claudia Lucia Piccolo ,&nbsp;Matteo Sammarra ,&nbsp;Bruno Beomonte Zobel ,&nbsp;Paolo Soda","doi":"10.1016/j.compmedimag.2024.102398","DOIUrl":"https://doi.org/10.1016/j.compmedimag.2024.102398","url":null,"abstract":"<div><p>Contrast Enhanced Spectral Mammography (CESM) is a dual-energy mammographic imaging technique that first requires intravenously administering an iodinated contrast medium. Then, it collects both a low-energy image, comparable to standard mammography, and a high-energy image. The two scans are combined to get a recombined image showing contrast enhancement. Despite CESM diagnostic advantages for breast cancer diagnosis, the use of contrast medium can cause side effects, and CESM also beams patients with a higher radiation dose compared to standard mammography. To address these limitations, this work proposes using deep generative models for virtual contrast enhancement on CESM, aiming to make CESM contrast-free and reduce the radiation dose. Our deep networks, consisting of an autoencoder and two Generative Adversarial Networks, the Pix2Pix, and the CycleGAN, generate synthetic recombined images solely from low-energy images. We perform an extensive quantitative and qualitative analysis of the model’s performance, also exploiting radiologists’ assessments, on a novel CESM dataset that includes 1138 images. As a further contribution to this work, we make the dataset publicly available. 
The results show that CycleGAN is the most promising deep network to generate synthetic recombined images, highlighting the potential of artificial intelligence techniques for virtual contrast enhancement in this field.</p></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"116 ","pages":"Article 102398"},"PeriodicalIF":5.7,"publicationDate":"2024-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S0895611124000752/pdfft?md5=579b15387524c47940b3088af4489328&pid=1-s2.0-S0895611124000752-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141163469","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0