
Latest Articles in Medical Image Analysis

Multi-contrast image super-resolution with deformable attention and neighborhood-based feature aggregation (DANCE): Applications in anatomic and metabolic MRI
IF 10.7 · CAS Region 1 (Medicine) · Q1 Computer Science, Artificial Intelligence · Pub Date: 2024-09-30 · DOI: 10.1016/j.media.2024.103359
Wenxuan Chen, Sirui Wu, Shuai Wang, Zhongsen Li, Jia Yang, Huifeng Yao, Qiyuan Tian, Xiaolei Song
Multi-contrast magnetic resonance imaging (MRI) reflects information about human tissues from different perspectives and has wide clinical applications. By utilizing the auxiliary information from reference images (Refs) in the easy-to-obtain modality, multi-contrast MRI super-resolution (SR) methods can synthesize high-resolution (HR) images from their low-resolution (LR) counterparts in the hard-to-obtain modality. In this study, we systematically discussed the potential impacts caused by cross-modal misalignments between LRs and Refs and, based on this discussion, proposed a novel deep-learning-based method with Deformable Attention and Neighborhood-based feature aggregation to be Computationally Efficient (DANCE) and insensitive to misalignments. Our method has been evaluated on two public MRI datasets, IXI and FastMRI, and an in-house MR metabolic imaging dataset with amide proton transfer weighted (APTW) images. Experimental results reveal that our method consistently outperforms baselines in various scenarios, with significant superiority observed in the misaligned group of the IXI dataset and the prospective study of the clinical dataset. The robustness study proves that our method is insensitive to misalignments, maintaining an average PSNR of 30.67 dB when faced with a maximum range of ±9° and ±9 pixels of rotation and translation on Refs. Given our method's desirable comprehensive performance, good robustness, and moderate computational complexity, it possesses substantial potential for clinical applications.
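The misalignment-tolerant aggregation idea can be illustrated with a toy sketch: for each low-resolution query position, candidate reference features are gathered from a small neighborhood and fused with attention weights, so a reference shifted by a few pixels can still contribute. This is an illustrative simplification (NumPy, plain dot-product attention, no learned offsets or deformable sampling), not the authors' DANCE implementation.

```python
import numpy as np

def neighborhood_aggregate(lr_feat, ref_feat, k=3):
    """Aggregate reference features over a k x k neighborhood around each
    position, weighted by softmax similarity to the LR query feature.
    lr_feat, ref_feat: (H, W, C) arrays. Returns an (H, W, C) array."""
    H, W, C = lr_feat.shape
    pad = k // 2
    ref_pad = np.pad(ref_feat, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    out = np.zeros_like(lr_feat)
    for i in range(H):
        for j in range(W):
            # k*k candidate reference features around (i, j)
            patch = ref_pad[i:i + k, j:j + k].reshape(-1, C)
            # scaled dot-product attention between the LR query and candidates
            logits = patch @ lr_feat[i, j] / np.sqrt(C)
            w = np.exp(logits - logits.max())
            w /= w.sum()
            out[i, j] = w @ patch
    return out
```

Because the attention weights are renormalized per position, a reference feature that appears one or two pixels away from its aligned location can still be picked up, which is the intuition behind insensitivity to small rotations and translations.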
Citations: 0
A Flow-based Truncated Denoising Diffusion Model for super-resolution Magnetic Resonance Spectroscopic Imaging
IF 10.7 · CAS Region 1 (Medicine) · Q1 Computer Science, Artificial Intelligence · Pub Date: 2024-09-27 · DOI: 10.1016/j.media.2024.103358
Siyuan Dong, Zhuotong Cai, Gilbert Hangel, Wolfgang Bogner, Georg Widhalm, Yaqing Huang, Qinghao Liang, Chenyu You, Chathura Kumaragamage, Robert K. Fulbright, Amit Mahajan, Amin Karbasi, John A. Onofrey, Robin A. de Graaf, James S. Duncan
Magnetic Resonance Spectroscopic Imaging (MRSI) is a non-invasive imaging technique for studying metabolism and has become a crucial tool for understanding neurological diseases, cancers and diabetes. High spatial resolution MRSI is needed to characterize lesions, but in practice MRSI is acquired at low resolution due to time and sensitivity restrictions caused by the low metabolite concentrations. Therefore, there is an imperative need for a post-processing approach to generate high-resolution MRSI from low-resolution data that can be acquired fast and with high sensitivity. Deep learning-based super-resolution methods have provided promising results for improving the spatial resolution of MRSI, but they still have limited capability to generate accurate and high-quality images. Recently, diffusion models have demonstrated learning capability superior to other generative models in various tasks, but sampling from diffusion models requires iterating through a large number of diffusion steps, which is time-consuming. This work introduces a Flow-based Truncated Denoising Diffusion Model (FTDDM) for super-resolution MRSI, which shortens the diffusion process by truncating the diffusion chain; the truncated steps are estimated using a normalizing-flow-based network. The network is conditioned on upscaling factors to enable multi-scale super-resolution. To train and evaluate the deep learning models, we developed a 1H-MRSI dataset acquired from 25 high-grade glioma patients. We demonstrate that FTDDM outperforms existing generative models while speeding up the sampling process by over 9-fold compared to the baseline diffusion model. Neuroradiologists' evaluations confirmed the clinical advantages of our method, which also supports uncertainty estimation and sharpness adjustment, extending its potential clinical applications.
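The truncation idea can be sketched independently of the paper's networks: a standard DDPM reverse update is run for only the last `t_trunc` steps, starting from an intermediate sample that FTDDM would obtain from its normalizing-flow network (here that sample is simply passed in as an argument). The update rule below is the generic DDPM posterior mean; the array shapes and the toy denoiser are illustrative assumptions, not the paper's model.

```python
import numpy as np

def truncated_reverse_diffusion(x_trunc, denoise_fn, betas, t_trunc):
    """Run only the last t_trunc reverse steps of a DDPM chain, starting
    from x_trunc, an intermediate noisy sample (supplied, in FTDDM, by a
    normalizing-flow network instead of pure Gaussian noise)."""
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)
    x = x_trunc
    for t in range(t_trunc - 1, -1, -1):
        eps = denoise_fn(x, t)                      # predicted noise
        coef = betas[t] / np.sqrt(1.0 - alpha_bars[t])
        x = (x - coef * eps) / np.sqrt(alphas[t])   # DDPM posterior mean
        if t > 0:                                   # no noise at the last step
            x = x + np.sqrt(betas[t]) * np.random.randn(*x.shape)
    return x
```

Compared to sampling from pure noise through the full chain of length `len(betas)`, only `t_trunc` denoiser evaluations are needed, which is where the reported sampling speed-up comes from.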
Citations: 0
Label refinement network from synthetic error augmentation for medical image segmentation
IF 10.7 · CAS Region 1 (Medicine) · Q1 Computer Science, Artificial Intelligence · Pub Date: 2024-09-27 · DOI: 10.1016/j.media.2024.103355
Shuai Chen, Antonio Garcia-Uceda, Jiahang Su, Gijs van Tulder, Lennard Wolff, Theo van Walsum, Marleen de Bruijne
Deep convolutional neural networks for image segmentation do not learn the label structure explicitly and may produce segmentations with an incorrect structure, e.g., with disconnected cylindrical structures in the segmentation of tree-like structures such as airways or blood vessels. In this paper, we propose a novel label refinement method to correct such errors from an initial segmentation, implicitly incorporating information about label structure. This method features two novel parts: (1) a model that generates synthetic structural errors, and (2) a label appearance simulation network that produces segmentations with synthetic errors that are similar in appearance to the real initial segmentations. Using these segmentations with synthetic errors and the original images, the label refinement network is trained to correct errors and improve the initial segmentations. The proposed method is validated on two segmentation tasks: airway segmentation from chest computed tomography (CT) scans and brain vessel segmentation from 3D CT angiography (CTA) images of the brain. In both applications, our method significantly outperformed a standard 3D U-Net, four previous label refinement methods, and a U-Net trained with a loss tailored for tubular structures. Improvements are even larger when additional unlabeled data is used for model training. In an ablation study, we demonstrate the value of the different components of the proposed method.
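The first component, synthetic structural-error generation, can be sketched in one dimension: cut a short gap out of a connected binary mask so a tree-like structure becomes disconnected, producing the kind of corrupted segmentation the refinement network learns to repair. The 1-D setting and the fixed gap length are illustrative assumptions, not the paper's error model.

```python
import numpy as np

def synthesize_disconnection(mask, gap=3, rng=None):
    """Simulate a structural error: cut a short gap out of a connected
    binary (1-D, for simplicity) tubular mask, so the result is
    disconnected. Returns a corrupted copy; the input is untouched."""
    rng = np.random.default_rng(rng)
    idx = np.flatnonzero(mask)
    corrupted = mask.copy()
    if idx.size > gap:
        start = rng.integers(idx[0], idx[-1] - gap + 1)
        corrupted[start:start + gap] = 0    # disconnect the structure
    return corrupted
```

Training pairs are then (corrupted mask, original mask): the refinement network sees segmentations with realistic-looking disconnections and is supervised to restore the correct label structure.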
Citations: 0
MUsculo-Skeleton-Aware (MUSA) deep learning for anatomically guided head-and-neck CT deformable registration
IF 10.7 · CAS Region 1 (Medicine) · Q1 Computer Science, Artificial Intelligence · Pub Date: 2024-09-21 · DOI: 10.1016/j.media.2024.103351
Hengjie Liu, Elizabeth McKenzie, Di Xu, Qifan Xu, Robert K. Chin, Dan Ruan, Ke Sheng
Deep-learning-based deformable image registration (DL-DIR) has demonstrated improved accuracy compared to time-consuming non-DL methods across various anatomical sites. However, DL-DIR is still challenging in heterogeneous tissue regions with large deformation. In fact, several state-of-the-art DL-DIR methods fail to capture the large, anatomically plausible deformation when tested on head-and-neck computed tomography (CT) images. These results allude to the possibility that such complex head-and-neck deformation may be beyond the capacity of a single network structure or a homogeneous smoothness regularization. To address the challenge of combined multi-scale musculoskeletal motion and soft tissue deformation in the head-and-neck region, we propose a MUsculo-Skeleton-Aware (MUSA) framework to anatomically guide DL-DIR by leveraging an explicit multiresolution strategy and the inhomogeneous deformation constraints between the bony structures and soft tissue. The proposed method decomposes the complex deformation into a bulk posture change and residual fine deformation. It can accommodate both inter- and intra-subject registration. Our results show that the MUSA framework can consistently improve registration accuracy and, more importantly, the plausibility of deformation for various network architectures. The code will be publicly available at https://github.com/HengjieLiu/DIR-MUSA.
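The decomposition into a bulk posture change plus a residual fine deformation can be written down for 2-D points: apply a rigid transform first, then add a per-point residual displacement. This is a conceptual sketch of the coarse-to-fine idea only, not the MUSA network.

```python
import numpy as np

def compose_deformation(points, rotation_deg, translation, residual):
    """Compose a 2-D deformation from a bulk rigid posture change
    (rotation + translation) and a per-point residual fine displacement.
    points: (N, 2); translation: (2,); residual: (N, 2)."""
    th = np.deg2rad(rotation_deg)
    R = np.array([[np.cos(th), -np.sin(th)],
                  [np.sin(th),  np.cos(th)]])
    # bulk posture change, then residual fine deformation on top
    return points @ R.T + translation + residual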
Citations: 0
DeepResBat: Deep residual batch harmonization accounting for covariate distribution differences
IF 10.7 · CAS Region 1 (Medicine) · Q1 Computer Science, Artificial Intelligence · Pub Date: 2024-09-21 · DOI: 10.1016/j.media.2024.103354
Lijun An, Chen Zhang, Naren Wulan, Shaoshi Zhang, Pansheng Chen, Fang Ji, Kwun Kei Ng, Christopher Chen, Juan Helen Zhou, B.T. Thomas Yeo, Alzheimer's Disease Neuroimaging Initiative, Australian Imaging Biomarkers and Lifestyle Study of Aging
Pooling MRI data from multiple datasets requires harmonization to reduce undesired inter-site variabilities, while preserving effects of biological variables (or covariates). The popular harmonization approach ComBat uses a mixed effect regression framework that explicitly accounts for covariate distribution differences across datasets. There is also significant interest in developing harmonization approaches based on deep neural networks (DNNs), such as conditional variational autoencoder (cVAE). However, current DNN approaches do not explicitly account for covariate distribution differences across datasets. Here, we provide mathematical results, suggesting that not accounting for covariates can lead to suboptimal harmonization. We propose two DNN-based covariate-aware harmonization approaches: covariate VAE (coVAE) and DeepResBat. The coVAE approach is a natural extension of cVAE by concatenating covariates and site information with site- and covariate-invariant latent representations. DeepResBat adopts a residual framework inspired by ComBat. DeepResBat first removes the effects of covariates with nonlinear regression trees, followed by eliminating site differences with cVAE. Finally, covariate effects are added back to the harmonized residuals. Using three datasets from three continents with a total of 2787 participants and 10,085 anatomical T1 scans, we find that DeepResBat and coVAE outperformed ComBat, CovBat and cVAE in terms of removing dataset differences, while enhancing biological effects of interest. However, coVAE hallucinates spurious associations between anatomical MRI and covariates even when no association exists. Future studies proposing DNN-based harmonization approaches should be aware of this false positive pitfall. Overall, our results suggest that DeepResBat is an effective deep learning alternative to ComBat. 
Code for DeepResBat can be found here: https://github.com/ThomasYeoLab/CBIG/tree/master/stable_projects/harmonization/An2024_DeepResBat.
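The three-stage residual recipe (remove covariate effects, harmonize the residuals, add the covariate effects back) can be sketched with simple stand-ins: a linear fit in place of the paper's regression trees, and per-site mean removal in place of the cVAE. The stand-ins are assumptions for illustration, not the DeepResBat components.

```python
import numpy as np

def residual_harmonize(y, covariate, site):
    """Residual-style harmonization sketch for one feature y (N,):
    (1) regress out the covariate effect (linear fit standing in for
    regression trees), (2) remove per-site offsets from the residuals
    (standing in for the cVAE), (3) add the covariate effect back."""
    X = np.column_stack([np.ones_like(covariate), covariate])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    trend = X @ beta                       # estimated covariate effect
    resid = y - trend
    for s in np.unique(site):              # crude site-effect removal
        resid[site == s] -= resid[site == s].mean()
    return resid + trend                   # covariate effect restored
```

On a toy example with a covariate slope of 2 and a constant site offset, the site difference is removed while the covariate effect survives, which is exactly the property the paper argues a harmonization method should preserve.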
Citations: 0
PSFHS challenge report: Pubic symphysis and fetal head segmentation from intrapartum ultrasound images
IF 10.7 · CAS Region 1 (Medicine) · Q1 Computer Science, Artificial Intelligence · Pub Date: 2024-09-21 · DOI: 10.1016/j.media.2024.103353
Jieyun Bai, Zihao Zhou, Zhanhong Ou, Gregor Koehler, Raphael Stock, Klaus Maier-Hein, Marawan Elbatel, Robert Martí, Xiaomeng Li, Yaoyang Qiu, Panjie Gou, Gongping Chen, Lei Zhao, Jianxun Zhang, Yu Dai, Fangyijie Wang, Guénolé Silvestre, Kathleen Curran, Hongkun Sun, Jing Xu, Karim Lekadir
Segmentation of fetal and maternal structures in intrapartum ultrasound images, advocated by the International Society of Ultrasound in Obstetrics and Gynecology (ISUOG) for monitoring labor progression, is a crucial first step for quantitative diagnosis and clinical decision-making. It requires specialized analysis by obstetrics professionals, a task that is (i) highly time- and cost-consuming and (ii) prone to inconsistent results. The utility of automatic segmentation algorithms for biometry has been proven, though existing results remain suboptimal. To push forward advancements in this area, the Grand Challenge on Pubic Symphysis-Fetal Head Segmentation (PSFHS) was held alongside the 26th International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI 2023). This challenge aimed to enhance the development of automatic segmentation algorithms at an international scale, providing the largest dataset to date, with 5,101 intrapartum ultrasound images collected from two ultrasound machines across three hospitals from two institutions. Of 179 entries from 193 registrants in the initial phase, the top 8 were selected to proceed to the competition's second stage. These algorithms have elevated the state-of-the-art in automatic PSFHS from intrapartum ultrasound images. A thorough analysis of the results pinpointed ongoing challenges in the field and outlined recommendations for future work. The top solutions and the complete dataset remain publicly available, fostering further advancements in automatic segmentation and biometry for intrapartum ultrasound imaging.
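Segmentation challenges of this kind are typically scored with overlap metrics such as the Dice similarity coefficient; the metric choice here is an assumption for illustration, not taken from the report. A minimal implementation:

```python
import numpy as np

def dice(pred, gt, eps=1e-8):
    """Dice similarity coefficient between two binary masks:
    2|A ∩ B| / (|A| + |B|), with eps guarding the empty-mask case."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return (2.0 * inter + eps) / (pred.sum() + gt.sum() + eps)
```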
Citations: 0
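Challenge entries for segmentation tasks like PSFHS are typically ranked by overlap with expert annotations, with the Dice coefficient as the standard metric. A minimal sketch of that metric (illustrative only, not the challenge's official scoring code; the toy masks are made up):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice similarity between two binary masks (1 = structure, 0 = background)."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Toy 4x4 masks: 3 overlapping pixels out of 4 predicted and 4 true pixels.
pred = np.zeros((4, 4), dtype=int)
target = np.zeros((4, 4), dtype=int)
pred[0, :4] = 1       # hypothetical predicted structure pixels
target[0, 1:4] = 1    # hypothetical ground-truth pixels
target[1, 0] = 1
print(round(dice_coefficient(pred, target), 3))  # 2*3/(4+4) = 0.75
```

A Dice of 1.0 means perfect overlap; the small `eps` keeps the metric defined when both masks are empty.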
Fourier Convolution Block with global receptive field for MRI reconstruction
IF 10.7 Tier 1 (Medicine) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-09-20 DOI: 10.1016/j.media.2024.103349
Haozhong Sun , Yuze Li , Zhongsen Li , Runyu Yang , Ziming Xu , Jiaqi Dou , Haikun Qi , Huijun Chen

Reconstructing images from under-sampled Magnetic Resonance Imaging (MRI) signals significantly reduces scan time and improves clinical practice. However, Convolutional Neural Network (CNN)-based methods, while demonstrating great performance in MRI reconstruction, may face limitations due to their restricted receptive field (RF), hindering the capture of global features. This is particularly crucial for reconstruction, as aliasing artifacts are distributed globally. Recent advancements in Vision Transformers have further emphasized the significance of a large RF. In this study, we proposed a novel global Fourier Convolution Block (FCB) with whole-image RF and low computational complexity by transforming regular spatial-domain convolutions into the frequency domain. Visualizations of the effective RF and trained kernels demonstrated that FCB improves the RF of reconstruction models in practice. The proposed FCB was evaluated on four popular CNN architectures using brain and knee MRI datasets. Models with FCB achieved higher PSNR and SSIM than baseline models and recovered more details and texture. The code is publicly available at https://github.com/Haozhoong/FCB.
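The identity behind a frequency-domain convolution block is the convolution theorem: pointwise multiplication of 2-D FFTs equals circular convolution with a kernel as large as the image, giving a whole-image receptive field at FFT cost. A NumPy sketch of that identity (not the paper's learned FCB layer, which parameterizes frequency-domain weights inside a CNN):

```python
import numpy as np

def fourier_conv2d(x, w_freq):
    """Apply a convolution as pointwise multiplication in the frequency
    domain; the effective spatial kernel spans the whole image."""
    return np.fft.ifft2(np.fft.fft2(x) * w_freq).real

# Check the equivalence against a direct circular convolution.
rng = np.random.default_rng(0)
n = 8
x = rng.standard_normal((n, n))
k = rng.standard_normal((n, n))        # spatial kernel as large as the image
w_freq = np.fft.fft2(k)                # its frequency-domain weights

y_fft = fourier_conv2d(x, w_freq)

# Reference: direct circular convolution, O(n^4) here vs O(n^2 log n) via FFT.
y_ref = np.zeros_like(x)
for i in range(n):
    for j in range(n):
        s = 0.0
        for a in range(n):
            for b in range(n):
                s += x[a, b] * k[(i - a) % n, (j - b) % n]
        y_ref[i, j] = s
print(np.allclose(y_fft, y_ref))  # True
```

In a trainable layer, `w_freq` would be the learned parameter, so every output pixel depends on every input pixel in a single operation.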

{"title":"Fourier Convolution Block with global receptive field for MRI reconstruction","authors":"Haozhong Sun ,&nbsp;Yuze Li ,&nbsp;Zhongsen Li ,&nbsp;Runyu Yang ,&nbsp;Ziming Xu ,&nbsp;Jiaqi Dou ,&nbsp;Haikun Qi ,&nbsp;Huijun Chen","doi":"10.1016/j.media.2024.103349","DOIUrl":"10.1016/j.media.2024.103349","url":null,"abstract":"<div><p>Reconstructing images from under-sampled Magnetic Resonance Imaging (MRI) signals significantly reduces scan time and improves clinical practice. However, Convolutional Neural Network (CNN)-based methods, while demonstrating great performance in MRI reconstruction, may face limitations due to their restricted receptive field (RF), hindering the capture of global features. This is particularly crucial for reconstruction, as aliasing artifacts are distributed globally. Recent advancements in Vision Transformers have further emphasized the significance of a large RF. In this study, we proposed a novel global Fourier Convolution Block (FCB) with whole image RF and low computational complexity by transforming the regular spatial domain convolutions into frequency domain. Visualizations of the effective RF and trained kernels demonstrated that FCB improves the RF of reconstruction models in practice. The proposed FCB was evaluated on four popular CNN architectures using brain and knee MRI datasets. Models with FCB achieved superior PSNR and SSIM than baseline models and exhibited more details and texture recovery. 
The code is publicly available at <span><span>https://github.com/Haozhoong/FCB</span><svg><path></path></svg></span>.</p></div>","PeriodicalId":18328,"journal":{"name":"Medical image analysis","volume":"99 ","pages":"Article 103349"},"PeriodicalIF":10.7,"publicationDate":"2024-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142272920","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Re-identification from histopathology images
IF 10.7 Tier 1 (Medicine) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-09-19 DOI: 10.1016/j.media.2024.103335
Jonathan Ganz , Jonas Ammeling , Samir Jabari , Katharina Breininger , Marc Aubreville
In numerous studies, deep learning algorithms have proven their potential for the analysis of histopathology images, for example, for revealing the subtypes of tumors or the primary origin of metastases. These models require large datasets for training, which must be anonymized to prevent possible patient identity leaks. This study demonstrates that even relatively simple deep learning algorithms can re-identify patients in large histopathology datasets with substantial accuracy. In addition, we compared a comprehensive set of state-of-the-art whole slide image classifiers and feature extractors for the given task. We evaluated our algorithms on two TCIA datasets including lung squamous cell carcinoma (LSCC) and lung adenocarcinoma (LUAD). We also demonstrate the algorithm’s performance on an in-house dataset of meningioma tissue. We predicted the source patient of a slide with F1 scores of up to 80.1% and 77.19% on the LSCC and LUAD datasets, respectively, and with 77.09% on our meningioma dataset. Based on our findings, we formulated a risk assessment scheme to estimate the risk to the patient’s privacy prior to publication.
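A minimal baseline for this kind of re-identification is nearest-neighbor matching of slide-level feature embeddings by cosine similarity. The study's actual pipeline uses trained whole slide image classifiers and feature extractors, so the toy two-dimensional features below are purely illustrative:

```python
import numpy as np

def reidentify(query_feats, gallery_feats, gallery_patients):
    """Assign each query slide to the patient of its most similar gallery
    slide by cosine similarity (a simple re-identification baseline)."""
    q = query_feats / np.linalg.norm(query_feats, axis=1, keepdims=True)
    g = gallery_feats / np.linalg.norm(gallery_feats, axis=1, keepdims=True)
    sims = q @ g.T                                # (n_query, n_gallery)
    return [gallery_patients[i] for i in sims.argmax(axis=1)]

# Hypothetical embeddings: two patients with distinct feature directions.
gallery = np.array([[1.0, 0.1], [0.9, 0.0], [0.0, 1.0]])
patients = ["patient_A", "patient_A", "patient_B"]
queries = np.array([[0.95, 0.05], [0.1, 0.9]])
print(reidentify(queries, gallery, patients))  # ['patient_A', 'patient_B']
```

Anonymization removes identifiers from metadata, but if embeddings of slides from the same patient cluster like this, the images themselves can still link records across datasets, which is the privacy risk the paper quantifies.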
{"title":"Re-identification from histopathology images","authors":"Jonathan Ganz ,&nbsp;Jonas Ammeling ,&nbsp;Samir Jabari ,&nbsp;Katharina Breininger ,&nbsp;Marc Aubreville","doi":"10.1016/j.media.2024.103335","DOIUrl":"10.1016/j.media.2024.103335","url":null,"abstract":"<div><div>In numerous studies, deep learning algorithms have proven their potential for the analysis of histopathology images, for example, for revealing the subtypes of tumors or the primary origin of metastases. These models require large datasets for training, which must be anonymized to prevent possible patient identity leaks. This study demonstrates that even relatively simple deep learning algorithms can re-identify patients in large histopathology datasets with substantial accuracy. In addition, we compared a comprehensive set of state-of-the-art whole slide image classifiers and feature extractors for the given task. We evaluated our algorithms on two TCIA datasets including lung squamous cell carcinoma (LSCC) and lung adenocarcinoma (LUAD). We also demonstrate the algorithm’s performance on an in-house dataset of meningioma tissue. We predicted the source patient of a slide with <span><math><msub><mrow><mi>F</mi></mrow><mrow><mn>1</mn></mrow></msub></math></span> scores of up to 80.1% and 77.19% on the LSCC and LUAD datasets, respectively, and with 77.09% on our meningioma dataset. 
Based on our findings, we formulated a risk assessment scheme to estimate the risk to the patient’s privacy prior to publication.</div></div>","PeriodicalId":18328,"journal":{"name":"Medical image analysis","volume":"99 ","pages":"Article 103335"},"PeriodicalIF":10.7,"publicationDate":"2024-09-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S1361841524002603/pdfft?md5=6efea46ba696d683bc55409496e68f7b&pid=1-s2.0-S1361841524002603-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142312995","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Maxillofacial bone movements-aware dual graph convolution approach for postoperative facial appearance prediction
IF 10.7 Tier 1 (Medicine) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-09-19 DOI: 10.1016/j.media.2024.103350
Xinrui Huang , Dongming He , Zhenming Li , Xiaofan Zhang , Xudong Wang
Postoperative facial appearance prediction is vital for surgeons to make orthognathic surgical plans and communicate with patients. Conventional biomechanical prediction methods require heavy computations and time-consuming manual operations, which hamper their clinical practice. Deep learning based methods have shown the potential to improve computational efficiency and achieve comparable accuracy. However, existing deep learning based methods only learn facial features from facial point clouds and process regional points independently, which constrains their ability to perceive facial surface details and topology. In addition, they predict postoperative displacements for all facial points in one step, which is vulnerable to weakly supervised training and prone to producing distorted predictions. To alleviate these limitations, we propose a novel dual graph convolution based postoperative facial appearance prediction model which considers the surface geometry by learning on two graphs constructed from the facial mesh in the Euclidean and geodesic spaces, and transfers the bone movements to facial movements in the dual spaces. We further adopt a coarse-to-fine strategy which performs coarse predictions on facial meshes with fewer vertices and then adds more vertices to obtain more robust fine predictions. Experiments on real clinical data demonstrate that our method outperforms state-of-the-art deep learning based methods qualitatively and quantitatively.
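The core operation, aggregating vertex features over two graphs built from the same mesh and fusing the results, can be sketched as a pair of GCN-style propagation steps. This is an illustrative simplification rather than the paper's architecture; the adjacency matrices and weights below are hypothetical:

```python
import numpy as np

def normalize_adj(A):
    """Symmetric normalization D^-1/2 (A + I) D^-1/2, as in GCN-style layers."""
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return d_inv_sqrt @ A_hat @ d_inv_sqrt

def dual_graph_conv(X, A_euc, A_geo, W_euc, W_geo):
    """One dual graph convolution step: aggregate vertex features over a
    Euclidean-neighborhood graph and a geodesic-neighborhood graph, then
    fuse the two branches with a sum and a ReLU."""
    h_euc = normalize_adj(A_euc) @ X @ W_euc
    h_geo = normalize_adj(A_geo) @ X @ W_geo
    return np.maximum(h_euc + h_geo, 0.0)

# Toy mesh: 4 vertices with 3-D features and hand-picked connectivity.
rng = np.random.default_rng(1)
X = rng.standard_normal((4, 3))
A_euc = np.array([[0,1,1,0],[1,0,0,1],[1,0,0,1],[0,1,1,0]], dtype=float)
A_geo = np.array([[0,1,0,0],[1,0,1,0],[0,1,0,1],[0,0,1,0]], dtype=float)
W_euc, W_geo = rng.standard_normal((3, 2)), rng.standard_normal((3, 2))
print(dual_graph_conv(X, A_euc, A_geo, W_euc, W_geo).shape)  # (4, 2)
```

The two graphs disagree exactly where Euclidean proximity and on-surface (geodesic) proximity diverge, e.g. across the lips, which is why learning on both captures surface topology that a single point-cloud neighborhood misses.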
{"title":"Maxillofacial bone movements-aware dual graph convolution approach for postoperative facial appearance prediction","authors":"Xinrui Huang ,&nbsp;Dongming He ,&nbsp;Zhenming Li ,&nbsp;Xiaofan Zhang ,&nbsp;Xudong Wang","doi":"10.1016/j.media.2024.103350","DOIUrl":"10.1016/j.media.2024.103350","url":null,"abstract":"<div><div>Postoperative facial appearance prediction is vital for surgeons to make orthognathic surgical plans and communicate with patients. Conventional biomechanical prediction methods require heavy computations and time-consuming manual operations which hamper their clinical practice. Deep learning based methods have shown the potential to improve computational efficiency and achieve comparable accuracy. However, existing deep learning based methods only learn facial features from facial point clouds and process regional points independently, which has constrains in perceiving facial surface details and topology. In addition, they predict postoperative displacements for all facial points in one step, which is vulnerable to weakly supervised training and easy to produce distorted predictions. To alleviate these limitations, we propose a novel dual graph convolution based postoperative facial appearance prediction model which considers the surface geometry by learning on two graphs constructed from the facial mesh in the Euclidean and geodesic spaces, and transfers the bone movements to facial movements in dual spaces. We further adopt a coarse-to-fine strategy which performs coarse predictions for facial meshes with fewer vertices and then adds more to obtain more robust fine predictions. 
Experiments on real clinical data demonstrate that our method outperforms state-of-the-art deep learning based methods qualitatively and quantitatively.</div></div>","PeriodicalId":18328,"journal":{"name":"Medical image analysis","volume":"99 ","pages":"Article 103350"},"PeriodicalIF":10.7,"publicationDate":"2024-09-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142322648","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
UM-Net: Rethinking ICGNet for polyp segmentation with uncertainty modeling
IF 10.7 Tier 1 (Medicine) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-09-19 DOI: 10.1016/j.media.2024.103347
Xiuquan Du , Xuebin Xu , Jiajia Chen , Xuejun Zhang , Lei Li , Heng Liu , Shuo Li
Automatic segmentation of polyps from colonoscopy images plays a critical role in the early diagnosis and treatment of colorectal cancer. Nevertheless, some bottlenecks still exist. In our previous work, we mainly focused on polyps with intra-class inconsistency and low contrast, using ICGNet to address them. Due to differences in equipment and in the specific locations and properties of polyps, the color distribution of the collected images is inconsistent. ICGNet was designed primarily around reverse-contour guide information and local–global context information, ignoring this inconsistent color distribution, which leads to overfitting and makes it difficult to focus only on beneficial image content. In addition, a trustworthy segmentation model should not only produce high-precision results but also provide a measure of uncertainty to accompany its predictions so that physicians can make informed decisions. However, ICGNet only gives the segmentation result and lacks an uncertainty measure. To cope with these bottlenecks, we further extend the original ICGNet into a comprehensive and effective network (UM-Net) with two main contributions whose substantial practical value has been proven by experiments. Firstly, we employ a color transfer operation to weaken the relationship between color and polyps, making the model focus more on the shape of the polyps. Secondly, we provide uncertainty to represent the reliability of the segmentation results and use variance to rectify the uncertainty. Our improved method is evaluated on five polyp datasets and shows competitive results compared to other advanced methods in both learning ability and generalization capability. The source code is available at https://github.com/dxqllp/UM-Net.
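A common form of color transfer is Reinhard-style per-channel statistics matching, which remaps a source image's color distribution onto a target's while leaving shape untouched; the paper's exact operation may differ. A minimal sketch under that assumption:

```python
import numpy as np

def color_transfer(source, target):
    """Remap each channel of `source` to the per-channel mean and standard
    deviation of `target` (Reinhard-style statistics matching)."""
    src = source.astype(np.float64)
    tgt = target.astype(np.float64)
    out = np.empty_like(src)
    for c in range(src.shape[-1]):
        s_mu, s_sd = src[..., c].mean(), src[..., c].std() + 1e-8
        t_mu, t_sd = tgt[..., c].mean(), tgt[..., c].std() + 1e-8
        # Standardize the source channel, then rescale to target statistics.
        out[..., c] = (src[..., c] - s_mu) / s_sd * t_sd + t_mu
    return out

# Synthetic RGB images with different color distributions.
rng = np.random.default_rng(2)
src = rng.uniform(0, 255, (16, 16, 3))
tgt = rng.uniform(50, 200, (16, 16, 3))
out = color_transfer(src, tgt)
# After transfer, per-channel means follow the target image.
print(np.allclose(out.mean(axis=(0, 1)), tgt.mean(axis=(0, 1))))  # True
```

Transferring every training image toward a shared color reference removes color as a spurious cue, which is the stated goal of weakening the relationship between color and polyps.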
{"title":"UM-Net: Rethinking ICGNet for polyp segmentation with uncertainty modeling","authors":"Xiuquan Du ,&nbsp;Xuebin Xu ,&nbsp;Jiajia Chen ,&nbsp;Xuejun Zhang ,&nbsp;Lei Li ,&nbsp;Heng Liu ,&nbsp;Shuo Li","doi":"10.1016/j.media.2024.103347","DOIUrl":"10.1016/j.media.2024.103347","url":null,"abstract":"<div><div>Automatic segmentation of polyps from colonoscopy images plays a critical role in the early diagnosis and treatment of colorectal cancer. Nevertheless, some bottlenecks still exist. In our previous work, we mainly focused on polyps with intra-class inconsistency and low contrast, using ICGNet to solve them. Due to the different equipment, specific locations and properties of polyps, the color distribution of the collected images is inconsistent. ICGNet was designed primarily with reverse-contour guide information and local–global context information, ignoring this inconsistent color distribution, which leads to overfitting problems and makes it difficult to focus only on beneficial image content. In addition, a trustworthy segmentation model should not only produce high-precision results but also provide a measure of uncertainty to accompany its predictions so that physicians can make informed decisions. However, ICGNet only gives the segmentation result and lacks the uncertainty measure. To cope with these novel bottlenecks, we further extend the original ICGNet to a comprehensive and effective network (UM-Net) with two main contributions that have been proved by experiments to have substantial practical value. Firstly, we employ a color transfer operation to weaken the relationship between color and polyps, making the model more concerned with the shape of the polyps. Secondly, we provide the uncertainty to represent the reliability of the segmentation results and use variance to rectify uncertainty. 
Our improved method is evaluated on five polyp datasets, which shows competitive results compared to other advanced methods in both learning ability and generalization capability. The source code is available at <span><span>https://github.com/dxqllp/UM-Net</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":18328,"journal":{"name":"Medical image analysis","volume":"99 ","pages":"Article 103347"},"PeriodicalIF":10.7,"publicationDate":"2024-09-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142312994","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0