SC-GAN: Structure-completion generative adversarial network for synthetic CT generation from MR images with truncated anatomy

IF 5.4 · CAS Zone 2 (Medicine) · Q1 ENGINEERING, BIOMEDICAL · Computerized Medical Imaging and Graphics · Pub Date: 2024-02-10 · DOI: 10.1016/j.compmedimag.2024.102353
Xinru Chen , Yao Zhao , Laurence E. Court , He Wang , Tinsu Pan , Jack Phan , Xin Wang , Yao Ding , Jinzhong Yang
{"title":"SC-GAN: Structure-completion generative adversarial network for synthetic CT generation from MR images with truncated anatomy","authors":"Xinru Chen ,&nbsp;Yao Zhao ,&nbsp;Laurence E. Court ,&nbsp;He Wang ,&nbsp;Tinsu Pan ,&nbsp;Jack Phan ,&nbsp;Xin Wang ,&nbsp;Yao Ding ,&nbsp;Jinzhong Yang","doi":"10.1016/j.compmedimag.2024.102353","DOIUrl":null,"url":null,"abstract":"<div><p>Creating synthetic CT (sCT) from magnetic resonance (MR) images enables MR-based treatment planning in radiation therapy. However, the MR images used for MR-guided adaptive planning are often truncated in the boundary regions due to the limited field of view and the need for sequence optimization. Consequently, the sCT generated from these truncated MR images lacks complete anatomic information, leading to dose calculation error for MR-based adaptive planning. We propose a novel structure-completion generative adversarial network (SC-GAN) to generate sCT with full anatomic details from the truncated MR images. To enable anatomy compensation, we expand input channels of the CT generator by including a body mask and introduce a truncation loss between sCT and real CT. The body mask for each patient was automatically created from the simulation CT scans and transformed to daily MR images by rigid registration as another input for our SC-GAN in addition to the MR images. The truncation loss was constructed by implementing either an auto-segmentor or an edge detector to penalize the difference in body outlines between sCT and real CT. The experimental results show that our SC-GAN achieved much improved accuracy of sCT generation in both truncated and untruncated regions compared to the original cycleGAN and conditional GAN methods.</p></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"113 ","pages":"Article 102353"},"PeriodicalIF":5.4000,"publicationDate":"2024-02-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computerized Medical Imaging and Graphics","FirstCategoryId":"5","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0895611124000302","RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, BIOMEDICAL","Score":null,"Total":0}
Citations: 0

Abstract

Creating synthetic CT (sCT) from magnetic resonance (MR) images enables MR-based treatment planning in radiation therapy. However, the MR images used for MR-guided adaptive planning are often truncated in the boundary regions due to the limited field of view and the need for sequence optimization. Consequently, the sCT generated from these truncated MR images lacks complete anatomic information, leading to dose calculation errors in MR-based adaptive planning. We propose a novel structure-completion generative adversarial network (SC-GAN) to generate sCT with full anatomic details from the truncated MR images. To enable anatomy compensation, we expand the input channels of the CT generator to include a body mask and introduce a truncation loss between sCT and real CT. The body mask for each patient was automatically created from the simulation CT scans and propagated to the daily MR images via rigid registration, serving as another input to our SC-GAN in addition to the MR images. The truncation loss was constructed by implementing either an auto-segmentor or an edge detector to penalize the difference in body outlines between sCT and real CT. Experimental results show that our SC-GAN achieved substantially improved sCT generation accuracy in both truncated and untruncated regions compared with the original cycleGAN and conditional GAN methods.
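The abstract describes two key modifications: feeding the body mask as an extra input channel to the CT generator, and a truncation loss that penalizes body-outline mismatches between sCT and real CT. The following is a minimal PyTorch-style sketch of these two ideas only. All names here (SCGenerator, soft_body_outline, truncation_loss) and the soft HU-threshold surrogate used in place of the paper's auto-segmentor or edge detector are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of (1) an sCT generator with an expanded MR + body-mask input and
# (2) a truncation loss on body outlines. Illustrative only; not the authors' code.
import torch
import torch.nn as nn


class SCGenerator(nn.Module):
    """Toy stand-in for the sCT generator with an expanded input (MR + body mask)."""

    def __init__(self, in_channels: int = 2, out_channels: int = 1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, out_channels, kernel_size=3, padding=1),
        )

    def forward(self, mr: torch.Tensor, body_mask: torch.Tensor) -> torch.Tensor:
        # Concatenate the rigidly registered body mask with the (possibly truncated) MR.
        x = torch.cat([mr, body_mask], dim=1)
        return self.net(x)


def soft_body_outline(ct: torch.Tensor, threshold: float = -500.0, scale: float = 50.0) -> torch.Tensor:
    """Differentiable surrogate for a body segmentation: sigmoid threshold on HU values.
    Assumed stand-in for the auto-segmentor / edge detector described in the abstract."""
    return torch.sigmoid((ct - threshold) / scale)


def truncation_loss(sct: torch.Tensor, real_ct: torch.Tensor) -> torch.Tensor:
    """Penalize the mismatch between sCT and real-CT body outlines (Dice-style penalty)."""
    sct_body = soft_body_outline(sct)
    real_body = soft_body_outline(real_ct)
    intersection = (sct_body * real_body).sum()
    dice = (2.0 * intersection + 1e-6) / (sct_body.sum() + real_body.sum() + 1e-6)
    return 1.0 - dice


if __name__ == "__main__":
    gen = SCGenerator()
    mr = torch.randn(1, 1, 128, 128)           # truncated MR slice
    body_mask = torch.ones(1, 1, 128, 128)     # body mask propagated from the simulation CT
    real_ct = torch.randn(1, 1, 128, 128) * 1000.0
    sct = gen(mr, body_mask)
    loss = truncation_loss(sct, real_ct)       # would be added to the usual GAN / cycle losses
    print(loss.item())
```

In a full training loop this outline penalty would be weighted and summed with the adversarial and cycle-consistency losses; the weighting and the actual segmentation or edge-detection module are design choices reported in the paper, not shown here.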

Source journal metrics
CiteScore: 10.70
Self-citation rate: 3.50%
Articles published per year: 71
Review time: 26 days
About the journal

The purpose of the journal Computerized Medical Imaging and Graphics is to act as a source for the exchange of research results concerning algorithmic advances, development, and application of digital imaging in disease detection, diagnosis, intervention, prevention, precision medicine, and population health. Included in the journal will be articles on novel computerized imaging or visualization techniques, including artificial intelligence and machine learning, augmented reality for surgical planning and guidance, big biomedical data visualization, computer-aided diagnosis, computerized-robotic surgery, image-guided therapy, imaging scanning and reconstruction, mobile and tele-imaging, radiomics, and imaging integration and modeling with other information relevant to digital health. The types of biomedical imaging include: magnetic resonance, computed tomography, ultrasound, nuclear medicine, X-ray, microwave, optical and multi-photon microscopy, video and sensory imaging, and the convergence of biomedical images with other non-imaging datasets.
Latest articles in this journal

Single color digital H&E staining with In-and-Out Net.
Cervical OCT image classification using contrastive masked autoencoders with Swin Transformer.
Circumpapillary OCT-based multi-sector analysis of retinal layer thickness in patients with glaucoma and high myopia.
Dual attention model with reinforcement learning for classification of histology whole-slide images.
CIS-UNet: Multi-class segmentation of the aorta in computed tomography angiography via context-aware shifted window self-attention.