
Latest Publications in Computerized Medical Imaging and Graphics

Motion correction and super-resolution for multi-slice cardiac magnetic resonance imaging via an end-to-end deep learning approach
IF 5.7 · CAS Zone 2 (Medicine) · Q1 (ENGINEERING, BIOMEDICAL) · Pub Date: 2024-04-29 · DOI: 10.1016/j.compmedimag.2024.102389
Zhennong Chen, Hui Ren, Quanzheng Li, Xiang Li

Accurate reconstruction of a high-resolution 3D volume of the heart is critical for comprehensive cardiac assessments. However, cardiac magnetic resonance (CMR) data is usually acquired as a stack of 2D short-axis (SAX) slices, which suffers from inter-slice misalignment due to cardiac motion and from data sparsity caused by the large gaps between SAX slices. Therefore, we propose an end-to-end deep learning (DL) model to address these two challenges simultaneously, employing specific model components for each challenge. The objective is to reconstruct a high-resolution 3D volume of the heart (VHR) from acquired CMR SAX slices (VLR). We define the transformation from VLR to VHR as a sequential process of motion correction and super-resolution. Accordingly, our DL model incorporates two distinct components. The first component conducts motion correction by predicting displacement vectors to re-position each SAX slice accurately. The second component takes the motion-corrected SAX slices from the first component and performs super-resolution to fill the data gaps. These two components operate sequentially, and the entire model is trained end-to-end. Our model significantly reduced inter-slice misalignment from 3.33±0.74 mm to 1.36±0.63 mm and generated accurate high-resolution 3D volumes with Dice of 0.974±0.010 for the left ventricle (LV) and 0.938±0.017 for the myocardium in a simulation dataset. When compared to the LAX contours in a real-world dataset, our model achieved Dice of 0.945±0.023 for the LV and 0.786±0.060 for the myocardium. In both datasets, our model with specific components for motion correction and super-resolution significantly enhances performance compared to the model without such design considerations. The code for our model is available at https://github.com/zhennongchen/CMR_MC_SR_End2End.
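
As a rough illustration of the two-component design described above, the following PyTorch sketch chains a displacement-predicting motion-correction module with a through-plane super-resolution module so the pair is differentiable end-to-end. The layer sizes, the normalized-translation parameterization, and the upsampling factor are illustrative assumptions; the authors' actual implementation is in the linked repository.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MotionCorrector(nn.Module):
    # Predicts an in-plane (dx, dy) displacement per SAX slice and re-positions
    # each slice with a differentiable translation (affine_grid + grid_sample).
    def __init__(self, num_slices):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten())
        self.head = nn.Linear(8, num_slices * 2)

    def forward(self, v_lr):                        # v_lr: (B, 1, S, H, W)
        b, _, s, h, w = v_lr.shape
        disp = self.head(self.encoder(v_lr)).view(b * s, 2)
        slices = v_lr.permute(0, 2, 1, 3, 4).reshape(b * s, 1, h, w)
        theta = torch.zeros(b * s, 2, 3, device=v_lr.device)
        theta[:, 0, 0] = theta[:, 1, 1] = 1.0       # identity rotation/scale
        theta[:, :, 2] = disp                       # translation (normalized)
        grid = F.affine_grid(theta, slices.shape, align_corners=False)
        moved = F.grid_sample(slices, grid, align_corners=False)
        return moved.view(b, s, 1, h, w).permute(0, 2, 1, 3, 4)

class SuperResolver(nn.Module):
    # Upsamples through-plane by z_factor, then refines with a residual 3D CNN.
    def __init__(self, z_factor=4):
        super().__init__()
        self.z_factor = z_factor
        self.refine = nn.Sequential(
            nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(),
            nn.Conv3d(8, 1, 3, padding=1))

    def forward(self, v):
        up = F.interpolate(v, scale_factor=(self.z_factor, 1, 1),
                           mode='trilinear', align_corners=False)
        return up + self.refine(up)

mc, sr = MotionCorrector(num_slices=9), SuperResolver()
v_lr = torch.randn(1, 1, 9, 64, 64)   # 9 sparse, misaligned SAX slices
v_hr = sr(mc(v_lr))                   # end-to-end differentiable pipeline
print(v_hr.shape)                     # torch.Size([1, 1, 36, 64, 64])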

Citations: 0
A deep learning-based pipeline for developing multi-rib shape generative model with populational percentiles or anthropometrics as predictors
IF 5.7 · CAS Zone 2 (Medicine) · Q1 (ENGINEERING, BIOMEDICAL) · Pub Date: 2024-04-25 · DOI: 10.1016/j.compmedimag.2024.102388
Yuan Huang, Sven A. Holcombe, Stewart C. Wang, Jisi Tang

Rib cross-sectional shapes (characterized by the outer contour and cortical bone thickness) affect the rib mechanical response under impact loading, thereby influencing rib injury patterns and risk. A statistical description of rib shapes or their correlations with anthropometrics is a prerequisite to the development of numerical human body models representing target demographics. Variational autoencoders (VAEs) as anatomical shape generators remain underexplored in terms of using latent vectors to control or interpret the representativeness of the generated results. In this paper, we propose a pipeline for developing a multi-rib cross-sectional shape generative model from CT images, which consists of the extraction of rib cross-sectional shape data from CT images using an anatomical indexing system and regular grids, and a unified framework that fits shape distributions and associates shapes with anthropometrics for different rib categories. Specifically, we collected CT images covering 3193 ribs, generated a regular surface grid for each rib based on anatomical coordinates, and characterized the rib cross-sectional shapes by nodal coordinates and cortical bone thickness. The tensor structure of the shape data based on regular grids enables the use of CNNs in the conditional variational autoencoder (CVAE). The CVAE is trained against an auxiliary classifier to decouple the low-dimensional representations of the inter- and intra-class variations and to fit each intra-class variation by a Gaussian distribution simultaneously. Random tree regressors are further leveraged to associate each continuous intra-class space with the corresponding anthropometrics of the subjects, i.e., age, height, and weight. As a result, with the rib class labels and the latent vectors sampled from Gaussian distributions or predicted from anthropometrics as the inputs, the decoder can generate valid rib cross-sectional shapes of given class labels (male/female, 2nd to 11th ribs) for arbitrary populational percentiles or specific age, height, and weight, which paves the way for future biomedical and biomechanical studies considering the diversity of rib shapes across the population.
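
A minimal sketch of the generation pathway described above, under assumed dimensions and toy training data: a decoder maps a rib-class label plus a latent vector to per-node shape features, and a random-forest regressor (standing in for the paper's random tree regressors) predicts the latent vector from age, height, and weight. The RibDecoder architecture and all sizes below are illustrative assumptions, not the paper's implementation.

import torch
import torch.nn as nn
import numpy as np
from sklearn.ensemble import RandomForestRegressor

LATENT, N_CLASSES, N_NODES = 8, 20, 64   # e.g. 10 rib levels x 2 sexes

class RibDecoder(nn.Module):
    # Decodes a latent vector plus a one-hot rib-class label into nodal
    # (x, y) coordinates and a cortical thickness value per grid node.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT + N_CLASSES, 128), nn.ReLU(),
            nn.Linear(128, N_NODES * 3))         # (x, y, thickness) per node

    def forward(self, z, label):
        onehot = torch.eye(N_CLASSES)[label]
        return self.net(torch.cat([z, onehot], dim=1)).view(-1, N_NODES, 3)

decoder = RibDecoder()

# Option 1: population sampling - draw z from the fitted Gaussian (here N(0, I)).
z = torch.randn(5, LATENT)
shapes = decoder(z, torch.full((5,), 3))         # five samples of rib class 3

# Option 2: anthropometrics - regress z from (age, height, weight).
rng = np.random.default_rng(0)
X = rng.uniform([20, 150, 50], [80, 190, 100], size=(200, 3))  # toy subjects
Z = rng.normal(size=(200, LATENT))               # stand-in latent targets
reg = RandomForestRegressor(n_estimators=50).fit(X, Z)
z_pred = torch.tensor(reg.predict([[45, 175, 80]]), dtype=torch.float32)
shape = decoder(z_pred, torch.tensor([3]))
print(shapes.shape, shape.shape)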

Citations: 0
W-DRAG: A joint framework of WGAN with data random augmentation optimized for generative networks for bone marrow edema detection in dual energy CT
IF 5.7 · CAS Zone 2 (Medicine) · Q1 (ENGINEERING, BIOMEDICAL) · Pub Date: 2024-04-24 · DOI: 10.1016/j.compmedimag.2024.102387
Chunsu Park, Jeong-Woon Kang, Doen-Eon Lee, Wookon Son, Sang-Min Lee, Chankue Park, MinWoo Kim

Dual-energy computed tomography (CT) is an excellent substitute for magnetic resonance imaging in identifying bone marrow edema. However, it is rarely used in practice owing to its low contrast. To overcome this problem, we constructed a framework based on deep learning techniques to screen for diseases using axial bone images and to identify the local positions of bone lesions. To address the limited availability of labeled samples, we developed a new generative adversarial network (GAN) that extends expressions beyond conventional augmentation (CA) methods based on geometric transformations. We theoretically and experimentally determined that combining the concepts of data augmentation optimized for GAN training (DAG) and Wasserstein GAN yields a considerably stable generation of synthetic images and effectively aligns their distribution with that of real images, thereby achieving a high degree of similarity. The classification model was trained using real and synthetic samples. Consequently, the GAN technique used in the diagnostic test improved the F1 score by approximately 7.8% compared with CA. The final F1 score was 80.24%, and the recall and precision were 84.3% and 88.7%, respectively. The results obtained using the augmented samples outperformed those obtained using purely real samples without augmentation. In addition, we adopted explainable AI techniques that leverage a class activation map (CAM) and principal component analysis to facilitate visual analysis of the network's results. The framework was designed to produce an attention map and a scatter plot that visually explain the disease predictions of the network.
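
The sketch below illustrates, under assumed network sizes and a reduced augmentation set, the spirit of combining DAG-style augmentation with a Wasserstein critic: the critic scores both raw and randomly transformed views of real and synthetic images. A faithful DAG implementation uses a separate critic head per transform; a single shared head is used here for brevity, and the gradient penalty and generator update are omitted.

import torch
import torch.nn as nn

critic = nn.Sequential(
    nn.Conv2d(1, 16, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Flatten(), nn.Linear(32 * 8 * 8, 1))

def augment(x):
    # One random geometric transform per call (horizontal flip or 90-degree
    # rotation), standing in for the paper's full augmentation set.
    if torch.rand(()) < 0.5:
        return torch.flip(x, dims=[-1])
    return torch.rot90(x, k=1, dims=[-2, -1])

def critic_loss(real, fake):
    # Wasserstein loss on the raw images plus the augmented views.
    loss = critic(fake).mean() - critic(real).mean()
    loss = loss + critic(augment(fake)).mean() - critic(augment(real)).mean()
    return loss

real = torch.randn(8, 1, 32, 32)   # stand-in axial bone patches
fake = torch.randn(8, 1, 32, 32)   # generator output would go here
print(critic_loss(real, fake).item())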

Citations: 0
Advancing post-traumatic seizure classification and biomarker identification: Information decomposition based multimodal fusion and explainable machine learning with missing neuroimaging data
IF 5.7 · CAS Zone 2 (Medicine) · Q1 (ENGINEERING, BIOMEDICAL) · Pub Date: 2024-04-19 · DOI: 10.1016/j.compmedimag.2024.102386
Md Navid Akbar, Sebastian F. Ruf, Ashutosh Singh, Razieh Faghihpirayesh, Rachael Garner, Alexis Bennett, Celina Alba, Marianna La Rocca, Tales Imbiriba, Deniz Erdoğmuş, Dominique Duncan

A late post-traumatic seizure (LPTS), a consequence of traumatic brain injury (TBI), can potentially evolve into a lifelong condition known as post-traumatic epilepsy (PTE). Presently, the mechanism that triggers epileptogenesis in TBI patients remains elusive, inspiring the epilepsy community to devise ways to predict which TBI patients will develop PTE and to identify potential biomarkers. In response to this need, our study collected comprehensive, longitudinal multimodal data from 48 TBI patients across multiple participating institutions. A supervised binary classification task was created, contrasting data from LPTS patients with those without LPTS. To accommodate missing modalities in some subjects, we took a two-pronged approach. First, we extended a graphical model-based Bayesian estimator to directly classify subjects with incomplete modalities. Second, we explored conventional imputation techniques. The imputed multimodal information was then combined, following several fusion and dimensionality reduction techniques found in the literature, and subsequently fitted to a kernel- or tree-based classifier. For this fusion, we proposed two new algorithms: recursive elimination of correlated components (RECC), which filters information based on the correlation between the already selected features, and information decomposition and selective fusion (IDSF), which effectively recombines information from decomposed multimodal features. Our cross-validation findings showed that the proposed IDSF algorithm delivers superior performance based on the area under the curve (AUC) score. Ultimately, after rigorous statistical comparisons and interpretable machine learning examination using Shapley values of the most frequently selected features, we recommend the following two magnetic resonance imaging (MRI) abnormalities as potential biomarkers: the left anterior limb of the internal capsule in diffusion MRI (dMRI), and the right middle temporal gyrus in functional MRI (fMRI).
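
To make the RECC idea concrete, the following NumPy sketch ranks candidate features and then greedily keeps a feature only if its absolute correlation with every already-selected feature stays below a threshold. The ranking score (correlation with the label) and the threshold value are illustrative assumptions, not the paper's exact settings.

import numpy as np

def recc_select(X, y, max_feats=5, corr_thresh=0.8):
    # Rank features by absolute correlation with the label (stand-in score).
    scores = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
    selected = []
    for j in np.argsort(scores)[::-1]:
        # Keep feature j only if it is not too correlated with any kept feature.
        if all(abs(np.corrcoef(X[:, j], X[:, k])[0, 1]) < corr_thresh
               for k in selected):
            selected.append(j)
        if len(selected) == max_feats:
            break
    return selected

rng = np.random.default_rng(0)
X = rng.normal(size=(48, 12))                    # 48 subjects, 12 fused features
X[:, 1] = X[:, 0] + 0.05 * rng.normal(size=48)   # a nearly duplicate feature
y = (X[:, 0] + X[:, 3] > 0).astype(float)        # toy LPTS labels
print(recc_select(X, y))                         # feature 1 is filtered as redundant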

Citations: 0
A novel approach for estimating lung tumor motion based on dynamic features in 4D-CT
IF 5.7 · CAS Zone 2 (Medicine) · Q1 (ENGINEERING, BIOMEDICAL) · Pub Date: 2024-04-18 · DOI: 10.1016/j.compmedimag.2024.102385
Ye-Jun Gong, Yue-Ke Li, Rongrong Zhou, Zhan Liang, Yingying Zhang, Tingting Cheng, Zi-Jian Zhang

Due to the high expenses involved, 4D-CT data for certain patients may only include five respiratory phases (0%, 20%, 40%, 60%, and 80%). This limitation can affect the subsequent planning of radiotherapy due to the absence of lung tumor information for the remaining five respiratory phases (10%, 30%, 50%, 70%, and 90%). This study aims to develop an interpolation method that can automatically derive tumor boundary contours for the five omitted phases using the available 5-phase 4D-CT data. The dynamic mode decomposition (DMD) method is a data-driven and model-free technique that can extract dynamic information from high-dimensional data. It enables the reconstruction of long-term dynamic patterns using only a limited number of time snapshots. The quasi-periodic motion of a deformable lung tumor caused by respiration makes it suitable for analysis with DMD. Directly applying the DMD method to analyze the respiratory motion of the tumor is impractical because the tumor is three-dimensional and spans multiple CT slices. To predict the respiratory movement of lung tumors, a method called uniform angular interval (UAI) sampling was developed to generate snapshot vectors of equal length, which are suitable for DMD analysis. The effectiveness of this approach was confirmed by applying the UAI-DMD method to the 4D-CT data of ten patients with lung cancer. The results indicate that the UAI-DMD method effectively approximates the lung tumor's deformable boundary surface and nonlinear motion trajectories. The estimated tumor centroid is within 2 mm of the manually delineated centroid, a smaller margin of error than that of the traditional B-spline interpolation method, which has a margin of 3 mm. This methodology has the potential to be extended to reconstruct the 20-phase respiratory movement of a lung tumor based on dynamic features from 10-phase 4D-CT data, thereby enabling more accurate estimation of the planning target volume (PTV).
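
The following NumPy sketch shows exact DMD applied to equal-length snapshot vectors such as the UAI-sampled boundaries described above: a reduced linear operator is fitted so that consecutive snapshots satisfy X2 ≈ A X1, and its eigendecomposition advances the dynamics to unobserved phases. The toy quasi-periodic boundary data and the truncation rank are assumptions for illustration.

import numpy as np

def dmd(snapshots, r=2):
    # Exact DMD: fit A in a rank-r SVD basis of the first snapshot block.
    X1, X2 = snapshots[:, :-1], snapshots[:, 1:]
    U, s, Vh = np.linalg.svd(X1, full_matrices=False)
    U, s, Vh = U[:, :r], s[:r], Vh[:r]
    A_tilde = U.conj().T @ X2 @ Vh.conj().T / s      # reduced operator
    eigvals, W = np.linalg.eig(A_tilde)
    modes = X2 @ Vh.conj().T @ np.diag(1 / s) @ W    # exact DMD modes
    return eigvals, modes

# Toy snapshot matrix: 40-point boundary vectors over 5 observed phases.
phases = np.linspace(0, 2 * np.pi, 11)[:5]
theta = np.linspace(0, 2 * np.pi, 40, endpoint=False)
snaps = np.array([np.cos(theta) * (1 + 0.1 * np.sin(p))
                  + 0.3 * np.sin(2 * theta) * np.cos(p) for p in phases]).T

eigvals, modes = dmd(snaps, r=2)
b = np.linalg.lstsq(modes, snaps[:, 0], rcond=None)[0]   # mode amplitudes
phase_7 = (modes @ (b * eigvals**7)).real                # advance 7 steps
print(phase_7.shape)   # (40,) boundary estimate at an unobserved phase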

Citations: 0
Hybrid dual mean-teacher network with double-uncertainty guidance for semi-supervised segmentation of magnetic resonance images
IF 5.7 · CAS Zone 2 (Medicine) · Q1 (ENGINEERING, BIOMEDICAL) · Pub Date: 2024-04-17 · DOI: 10.1016/j.compmedimag.2024.102383
Jiayi Zhu, Bart Bolsterlee, Brian V.Y. Chow, Yang Song, Erik Meijering

Semi-supervised learning has made significant progress in medical image segmentation. However, existing methods primarily utilize information from a single dimensionality, resulting in sub-optimal performance on challenging magnetic resonance imaging (MRI) data with multiple segmentation objects and anisotropic resolution. To address this issue, we present a Hybrid Dual Mean-Teacher (HD-Teacher) model with hybrid, semi-supervised, and multi-task learning to achieve effective semi-supervised segmentation. HD-Teacher employs a 2D and a 3D mean-teacher network to produce segmentation labels and signed distance fields from the hybrid information captured in both dimensionalities. This hybrid mechanism allows HD-Teacher to utilize features from 2D, 3D, or both dimensions as needed. Outputs from the 2D and 3D teacher models are dynamically combined based on confidence scores, forming a single hybrid prediction with estimated uncertainty. We propose a hybrid regularization module that encourages both student models to produce results close to the uncertainty-weighted hybrid prediction, further improving their feature extraction capability. Extensive experiments on binary and multi-class segmentation conducted on three MRI datasets demonstrated that the proposed framework can (1) significantly outperform state-of-the-art semi-supervised methods, (2) surpass a fully supervised VNet trained on substantially more annotated data, and (3) perform on par with human raters on muscle and bone segmentation tasks. Code will be available at https://github.com/ThisGame42/Hybrid-Teacher.
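
Two ingredients of the mean-teacher framework described above are sketched below with assumed toy networks: the exponential-moving-average (EMA) update that forms each teacher from its student, and a confidence-weighted voxel-wise blend of the 2D and 3D teacher probability maps into one hybrid prediction. The confidence measure (maximum softmax probability) and the single-layer networks are illustrative assumptions.

import torch
import torch.nn as nn

def ema_update(teacher, student, alpha=0.99):
    # Teacher weights follow an exponential moving average of the student's.
    with torch.no_grad():
        for t, s in zip(teacher.parameters(), student.parameters()):
            t.mul_(alpha).add_(s, alpha=1 - alpha)

student2d = nn.Conv2d(1, 2, 3, padding=1)
teacher2d = nn.Conv2d(1, 2, 3, padding=1)
ema_update(teacher2d, student2d)

def hybrid_prediction(p2d, p3d):
    # Confidence = max softmax probability per voxel; the more confident
    # teacher dominates the combined prediction voxel-wise.
    c2d = p2d.max(dim=1, keepdim=True).values
    c3d = p3d.max(dim=1, keepdim=True).values
    w = c2d / (c2d + c3d)
    return w * p2d + (1 - w) * p3d

p2d = torch.softmax(torch.randn(1, 2, 16, 16), dim=1)   # 2D teacher output
p3d = torch.softmax(torch.randn(1, 2, 16, 16), dim=1)   # resliced 3D output
pseudo = hybrid_prediction(p2d, p3d)
print(pseudo.sum(dim=1).allclose(torch.ones(1, 16, 16)))  # still a distribution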

Citations: 0
CardSegNet: An adaptive hybrid CNN-vision transformer model for heart region segmentation in cardiac MRI
IF 5.7 · CAS Zone 2 (Medicine) · Q1 (ENGINEERING, BIOMEDICAL) · Pub Date: 2024-04-16 · DOI: 10.1016/j.compmedimag.2024.102382
Hamed Aghapanah, Reza Rasti, Saeed Kermani, Faezeh Tabesh, Hossein Yousefi Banaem, Hamidreza Pour Aliakbar, Hamid Sanei, William Paul Segars

Cardiovascular MRI (CMRI) is a non-invasive imaging technique used for assessing the structure and function of the blood circulatory system. Precise image segmentation is required to measure cardiac parameters and diagnose abnormalities from CMRI data. Because of anatomical heterogeneity and image variations, cardiac image segmentation is a challenging task. Quantification of cardiac parameters requires high-performance segmentation of the left ventricle (LV), right ventricle (RV), and left ventricle myocardium from the background. The most straightforward solution is to segment the regions manually, which is a time-consuming and error-prone procedure. In this context, many semi- or fully automatic solutions have been proposed recently, among which deep learning-based methods have shown high performance in segmenting regions in CMRI data. In this study, a self-adaptive multi-attention (SMA) module is introduced to adaptively leverage multiple attention mechanisms for better segmentation. The SMA integrates convolution-based position and channel attention mechanisms with a patch-tokenization-based vision transformer (ViT) attention mechanism in a hybrid, end-to-end manner. The CNN- and ViT-based attentions mine the short- and long-range dependencies for more precise segmentation. The SMA module is applied in an encoder-decoder structure with a ResNet50 backbone, named CardSegNet. Furthermore, a deep supervision method with multi-loss functions is introduced to the CardSegNet optimizer to reduce overfitting and enhance the model's performance. The proposed model is validated on the ACDC2017 (n=100) and M&Ms (n=321) datasets and a local dataset (n=22) using 10-fold cross-validation, with promising segmentation results demonstrating that it outperforms its counterparts.
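
As a condensed illustration of a self-adaptive multi-attention block, the sketch below combines a channel-attention branch with a patch-tokenized ViT-style self-attention branch and mixes them through learnable softmax weights. The branch designs and sizes are assumptions; CardSegNet's actual module also includes a position-attention branch and differs in detail.

import torch
import torch.nn as nn

class SMABlock(nn.Module):
    def __init__(self, ch, patch=4):
        super().__init__()
        # Channel attention (squeeze-and-excitation style).
        self.se = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                nn.Linear(ch, ch), nn.Sigmoid())
        # Self-attention over non-overlapping image patches (ViT style).
        self.attn = nn.MultiheadAttention(ch * patch * patch, num_heads=4,
                                          batch_first=True)
        self.patch = patch
        self.mix = nn.Parameter(torch.zeros(2))    # adaptive branch weights

    def forward(self, x):                          # x: (B, C, H, W)
        b, c, h, w = x.shape
        chan = x * self.se(x).view(b, c, 1, 1)     # channel-attention branch
        p = self.patch
        tokens = x.unfold(2, p, p).unfold(3, p, p) # (B, C, H/p, W/p, p, p)
        tokens = tokens.reshape(b, c, -1, p * p).permute(0, 2, 1, 3).reshape(
            b, -1, c * p * p)                      # (B, N_patches, C*p*p)
        vit, _ = self.attn(tokens, tokens, tokens)
        vit = vit.reshape(b, h // p, w // p, c, p, p).permute(
            0, 3, 1, 4, 2, 5).reshape(b, c, h, w)  # back to image layout
        wts = torch.softmax(self.mix, dim=0)
        return wts[0] * chan + wts[1] * vit

out = SMABlock(ch=8)(torch.randn(2, 8, 16, 16))
print(out.shape)                                   # torch.Size([2, 8, 16, 16])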

Citations: 0
Global contextual representation via graph-transformer fusion for hepatocellular carcinoma prognosis in whole-slide images
IF 5.7 · CAS Zone 2 (Medicine) · Q1 (ENGINEERING, BIOMEDICAL) · Pub Date: 2024-04-16 · DOI: 10.1016/j.compmedimag.2024.102378
Luyu Tang, Songhui Diao, Chao Li, Miaoxia He, Kun Ru, Wenjian Qin

Current methods for digital pathology images typically employ small image patches to learn local representative features, thereby circumventing heavy computation and memory limitations. However, global contextual features are not fully considered in whole-slide images (WSIs). Here, we designed a hybrid model, called TransGNN, that utilizes a Graph Neural Network (GNN) module and a Transformer module for the representation of global contextual features. The GNN module builds a WSI-Graph over the foreground area of a WSI to explicitly capture structural features, while the Transformer module implicitly learns global context information through the self-attention mechanism. Hepatocellular carcinoma (HCC) prognostic biomarkers were used to illustrate the importance of global contextual information in cancer histopathological analysis. Our model was validated using 362 WSIs from 355 HCC patients in The Cancer Genome Atlas (TCGA). It showed impressive performance, with a Concordance Index (C-Index) of 0.7308 (95% confidence interval (CI): 0.6283–0.8333) for overall survival prediction, achieving the best performance among all models. Additionally, our model achieved an area under the curve of 0.7904, 0.8087, and 0.8004 for 1-year, 3-year, and 5-year survival predictions, respectively. We further verified the superior performance of our model in HCC risk stratification and its clinical value through Kaplan–Meier curves and univariate and multivariate Cox regression analysis. Our research demonstrated that TransGNN effectively utilizes the context information of WSIs and contributes to the clinical prognostic evaluation of HCC.
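
A minimal sketch of a graph-transformer hybrid over WSI patch features, in the spirit of TransGNN: one adjacency-normalized message-passing step supplies local structure, a Transformer encoder layer adds global self-attention, and mean pooling yields a WSI-level risk score for a survival model. The feature sizes, the distance-threshold graph, and the pooling choice are illustrative assumptions, not the paper's architecture.

import torch
import torch.nn as nn

class TransGNNSketch(nn.Module):
    def __init__(self, dim=32):
        super().__init__()
        self.gnn = nn.Linear(dim, dim)
        self.transformer = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True),
            num_layers=1)
        self.risk = nn.Linear(dim, 1)

    def forward(self, feats, adj):
        # feats: (N, dim) patch embeddings; adj: (N, N) patch neighborhood graph.
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
        h = torch.relu(self.gnn(adj @ feats / deg))    # local structure (GNN)
        h = self.transformer(h.unsqueeze(0))           # global self-attention
        return self.risk(h.mean(dim=1))                # WSI-level risk score

n = 50                                     # foreground patches of one WSI
feats = torch.randn(n, 32)
xy = torch.rand(n, 2)                      # patch centers in slide coordinates
adj = (torch.cdist(xy, xy) < 0.2).float()  # connect spatially nearby patches
print(TransGNNSketch()(feats, adj))        # scalar risk, e.g. for a Cox model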

Citations: 0
Distraction-aware hierarchical learning for vascular structure segmentation in intravascular ultrasound images
IF 5.7 · CAS Zone 2 (Medicine) · Q1 (ENGINEERING, BIOMEDICAL) · Pub Date: 2024-04-12 · DOI: 10.1016/j.compmedimag.2024.102381
Wenhao Zhong, Heye Zhang, Zhifan Gao, William Kongto Hau, Guang Yang, Xiujian Liu, Lin Xu

Vascular structure segmentation in intravascular ultrasound (IVUS) images plays an important role in the pre-procedural evaluation of percutaneous coronary intervention (PCI). However, vascular structure segmentation in IVUS images faces the challenge of structure-dependent distractions, which fall into two cases: structural intrinsic distractions and inter-structural distractions. Traditional machine learning methods often rely solely on low-level features, overlooking high-level features; this limits their generalization. Existing semantic segmentation methods integrate low-level and high-level features to enhance generalization performance, but they also introduce additional interference, which hinders the resolution of structural intrinsic distractions. Distraction cue methods attempt to address structural intrinsic distractions by removing interference from the features through a dedicated decoder, but they tend to overlook the problem of inter-structural distractions. In this paper, we propose distraction-aware hierarchical learning (DHL) for vascular structure segmentation in IVUS images. Inspired by distraction cue methods that remove interference in a decoder, the DHL is designed as a hierarchical decoder that gradually removes structure-dependent distractions. The DHL comprises a global perception process, a distraction perception process, and a structural perception process. The global and distraction perception processes remove structural intrinsic distractions; the structural perception process then removes inter-structural distractions. In the global perception process, the DHL searches for the coarse structural region of the vascular structures on each slice of the IVUS sequence. In the distraction perception process, the DHL progressively refines the coarse structural region of the vascular structures to remove structural distractions. In the structural perception process, the DHL detects regions of inter-structural distraction in the fused structure features and then separates them. Extensive experiments on 361 subjects show that the DHL is effective (e.g., the average Dice is greater than 0.95) and superior to ten state-of-the-art IVUS vascular structure segmentation methods.
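
The schematic PyTorch sketch below mirrors the coarse-to-fine decoding idea: a first head localizes a coarse vascular region (global perception), a second head refines it (distraction perception), and a third head splits the refined region into structure classes (structural perception). The three tiny heads and the two-class split are assumptions for illustration, not the DHL architecture itself.

import torch
import torch.nn as nn

class HierarchicalDecoder(nn.Module):
    def __init__(self, ch=16, n_classes=2):
        super().__init__()
        self.coarse = nn.Conv2d(ch, 1, 1)             # global perception
        self.refine = nn.Sequential(nn.Conv2d(ch + 1, ch, 3, padding=1),
                                    nn.ReLU(), nn.Conv2d(ch, 1, 1))
        self.split = nn.Conv2d(ch + 1, n_classes, 1)  # structural perception

    def forward(self, feats):
        region = torch.sigmoid(self.coarse(feats))            # coarse region
        refined = torch.sigmoid(self.refine(torch.cat([feats, region], 1)))
        masked = feats * refined                              # suppress distractors
        logits = self.split(torch.cat([masked, refined], 1))  # per-structure
        return region, refined, logits

feats = torch.randn(1, 16, 64, 64)      # encoder features of one IVUS frame
region, refined, logits = HierarchicalDecoder()(feats)
print(region.shape, logits.shape)       # intermediate masks supervise training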

Citations: 0
Sub-features orthogonal decoupling: Detecting bone wall absence via a small number of abnormal examples for temporal CT images
IF 5.7 · CAS Zone 2 (Medicine) · Q1 (ENGINEERING, BIOMEDICAL) · Pub Date: 2024-04-12 · DOI: 10.1016/j.compmedimag.2024.102380
Xiaoguang Li, Yichao Zhou, Hongxia Yin, Pengfei Zhao, Ruowei Tang, Han Lv, Yating Qin, Li Zhuo, Zhenchang Wang

The absence of the bone wall at the jugular bulb and sigmoid sinus of the temporal bone is one of the important causes of pulsatile tinnitus. Automatic and accurate detection of these abnormal signs in CT slices has important theoretical significance and clinical value. Existing deep-learning methods are greatly challenged by the shortage of abnormal samples, class imbalance, small inter-class differences, and low interpretability. In this paper, we propose a sub-features orthogonal decoupling model, which can effectively disentangle the representation features into class-specific sub-features and class-independent sub-features in a latent space. The former contain the discriminative information, while the latter preserve information for image reconstruction. In addition, the proposed method can generate image samples via category conversion by recombining class-specific and class-independent sub-features, establishing a mapping between deep features and images of specific classes. The proposed model improves the interpretability of the deep model and provides image synthesis methods for downstream tasks. The effectiveness of the method was verified in the detection of bone wall absence in the temporal bone jugular bulb and sigmoid sinus.
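
A small sketch of the decoupling idea, under assumed latent sizes: an encoder splits each image's code into a class-specific part and a class-independent part, an orthogonality penalty (here, cosine similarity) keeps the two parts decoupled, and swapping class-specific codes between images performs category conversion. The linear encoder/decoder and the loss weighting are assumptions, not the paper's design.

import torch
import torch.nn as nn

enc = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32, 16))  # 8 + 8 split
dec = nn.Linear(16, 32 * 32)
clf = nn.Linear(8, 2)                       # classify from class-specific part

def losses(x, y):
    z = enc(x)
    z_cls, z_ind = z[:, :8], z[:, 8:]       # class-specific / class-independent
    recon = dec(torch.cat([z_cls, z_ind], 1))
    l_rec = ((recon - x.flatten(1)) ** 2).mean()
    l_cls = nn.functional.cross_entropy(clf(z_cls), y)
    # Orthogonality: penalize cosine similarity between the two sub-features.
    l_orth = nn.functional.cosine_similarity(z_cls, z_ind, dim=1).abs().mean()
    return l_rec + l_cls + l_orth

x, y = torch.randn(4, 1, 32, 32), torch.tensor([0, 1, 0, 1])
print(losses(x, y).item())

# Category conversion: pair image 0's class code with image 1's content code.
z0, z1 = enc(x[:1]), enc(x[1:2])
converted = dec(torch.cat([z0[:, :8], z1[:, 8:]], 1)).view(1, 1, 32, 32)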

Citations: 0