A joint learning framework for multisite CBCT-to-CT translation using a hybrid CNN-transformer synthesizer and a registration network

Ying Hu, Mengjie Cheng, Hui Wei, Zhiwen Liang
Frontiers in Oncology, published 2024-08-08. DOI: 10.3389/fonc.2024.1440944

Abstract

Cone-beam computed tomography (CBCT) is a convenient imaging modality for adaptive radiation therapy (ART), but its application is often hindered by poor image quality. We aim to develop a unified deep learning model that consistently enhances CBCT image quality across various anatomical sites by generating synthetic CT (sCT) images.

A dataset of paired CBCT and planning CT images from 135 cancer patients, covering head and neck, chest, and abdominal tumors, was collected. This dataset, with its rich anatomical diversity and range of scanning parameters, was carefully selected to ensure comprehensive model training. Because registration is imperfect, the local structural misalignment inherent in paired datasets may lead to suboptimal model performance. To address this limitation, we propose SynREG, a supervised learning framework. SynREG integrates a hybrid CNN-transformer architecture for generating high-fidelity sCT images with a registration network that dynamically corrects local structural misalignment during training. An independent test set of 23 additional patients was used to evaluate image quality, and the results were compared with those of several benchmark models (pix2pix, CycleGAN, and SwinIR). The performance of an autosegmentation application was also assessed.

The proposed model disentangles sCT generation from anatomical correction, leading to a more rational optimization process. As a result, the model effectively suppressed noise and artifacts in multisite applications, significantly enhancing CBCT image quality. Specifically, the mean absolute error (MAE) of SynREG was reduced to 16.81 ± 8.42 HU, and the structural similarity index (SSIM) increased to 94.34 ± 2.85%, improvements over the raw CBCT data, which had an MAE of 26.74 ± 10.11 HU and an SSIM of 89.73 ± 3.46%.
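The MAE and SSIM figures above can be computed as follows. This is a minimal NumPy sketch, not the authors' evaluation code: the HU `data_range` is an assumption, and the SSIM here uses a single global window, whereas standard implementations (e.g. scikit-image) use a sliding local window and average the resulting map.

```python
import numpy as np

def mae_hu(sct, ct, mask=None):
    """Mean absolute error in Hounsfield units between an sCT and the planning CT."""
    diff = np.abs(sct.astype(np.float64) - ct.astype(np.float64))
    if mask is not None:
        diff = diff[mask]  # optionally restrict to a body mask
    return diff.mean()

def global_ssim(x, y, data_range=2000.0, k1=0.01, k2=0.03):
    """Single-window (global) SSIM; a simplification of the usual sliding-window form."""
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    c1 = (k1 * data_range) ** 2
    c2 = (k2 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (vx + vy + c2)
    )
```

For identical inputs, `mae_hu` returns 0 and `global_ssim` returns 1, which is a quick sanity check before applying either metric to real volumes.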
The enhanced image quality was particularly beneficial for organs with low contrast resolution, significantly increasing the accuracy of automatic segmentation in these regions. Notably, for the brainstem, the mean Dice similarity coefficient (DSC) increased from 0.61 to 0.89, and the mean distance to agreement (MDA) decreased from 3.72 mm to 0.98 mm, indicating a substantial improvement in segmentation accuracy and precision. SynREG can effectively alleviate residual anatomical differences between paired datasets and enhance the quality of CBCT images.
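The DSC reported for the brainstem is an overlap measure between a predicted and a reference binary mask. A minimal sketch (not the authors' pipeline) is shown below; the MDA additionally requires surface-distance computation, e.g. via Euclidean distance transforms, and is omitted here.

```python
import numpy as np

def dice_coefficient(pred, ref):
    """Dice similarity coefficient: 2*|A ∩ B| / (|A| + |B|) for binary masks."""
    pred = np.asarray(pred, dtype=bool)
    ref = np.asarray(ref, dtype=bool)
    denom = pred.sum() + ref.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, ref).sum() / denom
```

A DSC of 1.0 means perfect overlap; the brainstem improvement from 0.61 to 0.89 reported above corresponds to substantially tighter agreement with the reference contour.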