SC-GAN: Structure-completion generative adversarial network for synthetic CT generation from MR images with truncated anatomy
Xinru Chen, Yao Zhao, Laurence E. Court, He Wang, Tinsu Pan, Jack Phan, Xin Wang, Yao Ding, Jinzhong Yang
Computerized Medical Imaging and Graphics, Volume 113, Article 102353
DOI: 10.1016/j.compmedimag.2024.102353
Published: 2024-02-10
URL: https://www.sciencedirect.com/science/article/pii/S0895611124000302
Citations: 0
Abstract
Creating synthetic CT (sCT) from magnetic resonance (MR) images enables MR-based treatment planning in radiation therapy. However, the MR images used for MR-guided adaptive planning are often truncated in the boundary regions due to the limited field of view and the need for sequence optimization. Consequently, the sCT generated from these truncated MR images lacks complete anatomic information, leading to dose calculation errors in MR-based adaptive planning. We propose a novel structure-completion generative adversarial network (SC-GAN) to generate sCT with full anatomic details from the truncated MR images. To enable anatomy compensation, we expand the input channels of the CT generator by including a body mask and introduce a truncation loss between sCT and real CT. The body mask for each patient was automatically created from the simulation CT scans and propagated to the daily MR images by rigid registration, serving as an additional input to our SC-GAN alongside the MR images. The truncation loss was constructed by implementing either an auto-segmentor or an edge detector to penalize the difference in body outlines between sCT and real CT. The experimental results show that our SC-GAN achieved substantially improved accuracy of sCT generation in both truncated and untruncated regions compared to the original cycleGAN and conditional GAN methods.
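The abstract describes two mechanisms: feeding the CT generator an extra body-mask channel alongside the MR image, and a truncation loss that penalizes body-outline differences between sCT and real CT. The sketch below illustrates these ideas in PyTorch under stated assumptions; it is not the paper's implementation. The Sobel-based edge extraction and the HU thresholding used to get body outlines are stand-ins for the auto-segmentor or edge detector mentioned in the abstract, and all names (sobel_edges, truncation_loss, gen_input) are hypothetical.

```python
# Minimal sketch, assuming 2D slices and a simple threshold-plus-Sobel outline extractor.
import torch
import torch.nn.functional as F

def sobel_edges(mask: torch.Tensor) -> torch.Tensor:
    """Approximate outlines of a binary body mask (N, 1, H, W) with Sobel filters."""
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]],
                      device=mask.device).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)
    gx = F.conv2d(mask, kx, padding=1)
    gy = F.conv2d(mask, ky, padding=1)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-8)

def truncation_loss(sct: torch.Tensor, real_ct: torch.Tensor,
                    body_threshold: float = -500.0) -> torch.Tensor:
    """L1 difference between body outlines extracted from sCT and real CT (in HU).

    Thresholding HU values is a crude surrogate for the auto-segmentor or
    edge detector used in the paper.
    """
    sct_mask = (sct > body_threshold).float()
    ct_mask = (real_ct > body_threshold).float()
    return F.l1_loss(sobel_edges(sct_mask), sobel_edges(ct_mask))

# Generator input: concatenate the truncated daily MR slice with the body mask
# propagated from the simulation CT by rigid registration (channel dimension).
mr = torch.randn(1, 1, 256, 256)               # truncated daily MR slice
body_mask = torch.ones(1, 1, 256, 256)         # registered body mask from simulation CT
gen_input = torch.cat([mr, body_mask], dim=1)  # shape (1, 2, 256, 256), fed to the generator
```

In training, a loss term like truncation_loss(sct, real_ct) would be weighted and added to the usual adversarial and cycle/consistency losses so that the generated anatomy is penalized wherever its body outline departs from the real CT, including the regions truncated in the MR input.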
Journal Introduction
The purpose of the journal Computerized Medical Imaging and Graphics is to act as a source for the exchange of research results concerning algorithmic advances, development, and application of digital imaging in disease detection, diagnosis, intervention, prevention, precision medicine, and population health. Included in the journal will be articles on novel computerized imaging or visualization techniques, including artificial intelligence and machine learning, augmented reality for surgical planning and guidance, big biomedical data visualization, computer-aided diagnosis, computerized-robotic surgery, image-guided therapy, imaging scanning and reconstruction, mobile and tele-imaging, radiomics, and imaging integration and modeling with other information relevant to digital health. The types of biomedical imaging include: magnetic resonance, computed tomography, ultrasound, nuclear medicine, X-ray, microwave, optical and multi-photon microscopy, video and sensory imaging, and the convergence of biomedical images with other non-imaging datasets.