NCCT-to-CECT synthesis with contrast-enhanced knowledge and anatomical perception for multi-organ segmentation in non-contrast CT images

Medical Image Analysis · Impact Factor 10.7 · CAS Tier 1 (Medicine) · JCR Q1 (Computer Science, Artificial Intelligence) · Publication date: 2024-11-26 · DOI: 10.1016/j.media.2024.103397
Liming Zhong, Ruolin Xiao, Hai Shu, Kaiyi Zheng, Xinming Li, Yuankui Wu, Jianhua Ma, Qianjin Feng, Wei Yang
{"title":"NCCT-to-CECT synthesis with contrast-enhanced knowledge and anatomical perception for multi-organ segmentation in non-contrast CT images","authors":"Liming Zhong ,&nbsp;Ruolin Xiao ,&nbsp;Hai Shu ,&nbsp;Kaiyi Zheng ,&nbsp;Xinming Li ,&nbsp;Yuankui Wu ,&nbsp;Jianhua Ma ,&nbsp;Qianjin Feng ,&nbsp;Wei Yang","doi":"10.1016/j.media.2024.103397","DOIUrl":null,"url":null,"abstract":"<div><div>Contrast-enhanced computed tomography (CECT) is constantly used for delineating organs-at-risk (OARs) in radiation therapy planning. The delineated OARs are needed to transfer from CECT to non-contrast CT (NCCT) for dose calculation. Yet, the use of iodinated contrast agents (CA) in CECT and the dose calculation errors caused by the spatial misalignment between NCCT and CECT images pose risks of adverse side effects. A promising solution is synthesizing CECT images from NCCT scans, which can improve the visibility of organs and abnormalities for more effective multi-organ segmentation in NCCT images. However, existing methods neglect the difference between tissues induced by CA and lack the ability to synthesize the details of organ edges and blood vessels. To address these issues, we propose a contrast-enhanced knowledge and anatomical perception network (CKAP-Net) for NCCT-to-CECT synthesis. CKAP-Net leverages a contrast-enhanced knowledge learning network to capture both similarities and dissimilarities in domain characteristics attributable to CA. Specifically, a CA-based perceptual loss function is introduced to enhance the synthesis of CA details. Furthermore, we design a multi-scale anatomical perception transformer that utilizes multi-scale anatomical information from NCCT images, enabling the precise synthesis of tissue details. Our CKAP-Net is evaluated on a multi-center abdominal NCCT-CECT dataset, a head an neck NCCT-CECT dataset, and an NCMRI-CEMRI dataset. It achieves a MAE of 25.96 ± 2.64, a SSIM of 0.855 ± 0.017, and a PSNR of 32.60 ± 0.02 for CECT synthesis, and a DSC of 81.21 ± 4.44 for segmentation on the internal dataset. Extensive experiments demonstrate that CKAP-Net outperforms state-of-the-art CA synthesis methods and has better generalizability across different datasets.</div></div>","PeriodicalId":18328,"journal":{"name":"Medical image analysis","volume":"100 ","pages":"Article 103397"},"PeriodicalIF":10.7000,"publicationDate":"2024-11-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Medical image analysis","FirstCategoryId":"5","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1361841524003220","RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

Contrast-enhanced computed tomography (CECT) is routinely used for delineating organs-at-risk (OARs) in radiation therapy planning. The delineated OARs then need to be transferred from CECT to non-contrast CT (NCCT) for dose calculation. However, the iodinated contrast agents (CA) used in CECT pose risks of adverse side effects, and the spatial misalignment between NCCT and CECT images introduces dose calculation errors. A promising solution is to synthesize CECT images from NCCT scans, which can improve the visibility of organs and abnormalities and enable more effective multi-organ segmentation in NCCT images. However, existing methods neglect the tissue differences induced by CA and lack the ability to synthesize the details of organ edges and blood vessels. To address these issues, we propose a contrast-enhanced knowledge and anatomical perception network (CKAP-Net) for NCCT-to-CECT synthesis. CKAP-Net leverages a contrast-enhanced knowledge learning network to capture both similarities and dissimilarities in domain characteristics attributable to CA. Specifically, a CA-based perceptual loss function is introduced to enhance the synthesis of CA details. Furthermore, we design a multi-scale anatomical perception transformer that utilizes multi-scale anatomical information from NCCT images, enabling the precise synthesis of tissue details. Our CKAP-Net is evaluated on a multi-center abdominal NCCT-CECT dataset, a head and neck NCCT-CECT dataset, and an NCMRI-CEMRI dataset. On the internal dataset, it achieves an MAE of 25.96 ± 2.64, an SSIM of 0.855 ± 0.017, and a PSNR of 32.60 ± 0.02 for CECT synthesis, and a DSC of 81.21 ± 4.44 for segmentation. Extensive experiments demonstrate that CKAP-Net outperforms state-of-the-art CA synthesis methods and generalizes better across different datasets.
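The abstract describes the CA-based perceptual loss only at a high level. The sketch below illustrates one plausible reading of such a loss, not the authors' actual implementation: a small stand-in convolutional feature extractor replaces whatever backbone the paper uses, the CA region is approximated by thresholding the CECT-minus-NCCT intensity difference, and the names and values `hu_threshold` and `ca_weight` are hypothetical choices made here for illustration.

```python
# Minimal sketch (not the authors' implementation) of a CA-weighted perceptual-style loss
# for NCCT-to-CECT synthesis. Assumptions: 2D single-channel tensors in Hounsfield-like
# units, a toy feature extractor, and a CA mask approximated by thresholding the
# CECT-minus-NCCT difference.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SimpleFeatureExtractor(nn.Module):
    """Stand-in feature network; the paper's actual backbone is not specified here."""
    def __init__(self, channels: int = 16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


def ca_weighted_perceptual_loss(
    synth_cect: torch.Tensor,    # synthesized CECT, shape (B, 1, H, W)
    real_cect: torch.Tensor,     # ground-truth CECT, shape (B, 1, H, W)
    ncct: torch.Tensor,          # paired NCCT, shape (B, 1, H, W)
    feat_net: nn.Module,
    hu_threshold: float = 30.0,  # hypothetical threshold on the enhancement difference
    ca_weight: float = 4.0,      # hypothetical extra weight on CA regions
) -> torch.Tensor:
    """L1 distance between features of synthetic and real CECT, up-weighted where the
    real CECT enhances over the NCCT (a rough proxy for contrast-agent uptake)."""
    with torch.no_grad():
        ca_mask = (real_cect - ncct > hu_threshold).float()
    weight = 1.0 + ca_weight * ca_mask
    f_synth = feat_net(synth_cect)
    f_real = feat_net(real_cect)
    # Resize the pixel-space weight map to the feature resolution before applying it.
    weight = F.interpolate(weight, size=f_synth.shape[-2:], mode="nearest")
    return (weight * (f_synth - f_real).abs()).mean()


if __name__ == "__main__":
    torch.manual_seed(0)
    feat_net = SimpleFeatureExtractor()
    ncct = torch.randn(2, 1, 64, 64) * 40
    real = ncct + torch.relu(torch.randn(2, 1, 64, 64)) * 60  # simulated enhancement
    synth = real + torch.randn(2, 1, 64, 64) * 5              # imperfect synthesis
    print(ca_weighted_perceptual_loss(synth, real, ncct, feat_net).item())
```

In practice, the feature extractor would more likely be a pretrained network and the CA mask could come from registration or segmentation rather than a fixed threshold; both are assumptions of this sketch rather than details stated in the abstract.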
Source journal: Medical Image Analysis (Engineering, Biomedical)
CiteScore: 22.10
Self-citation rate: 6.40%
Articles per year: 309
Review time: 6.6 months
Journal description: Medical Image Analysis serves as a platform for sharing new research findings in the realm of medical and biological image analysis, with a focus on applications of computer vision, virtual reality, and robotics to biomedical imaging challenges. The journal prioritizes the publication of high-quality, original papers contributing to the fundamental science of processing, analyzing, and utilizing medical and biological images. It welcomes approaches utilizing biomedical image datasets across all spatial scales, from molecular/cellular imaging to tissue/organ imaging.