Cross-Modal PET Synthesis Method Based on Improved Edge-Aware Generative Adversarial Network

IF 0.6 | CAS Region 4, Engineering & Technology | Q4 ENGINEERING, ELECTRICAL & ELECTRONIC | Journal of Nanoelectronics and Optoelectronics | Pub Date: 2023-10-01 | DOI: 10.1166/jno.2023.3502
Liting Lei, Rui Zhang, Haifei Zhang, Xiujing Li, Yuchao Zou, Saad Aldosary, Azza S. Hassanein
{"title":"Cross-Modal PET Synthesis Method Based on Improved Edge-Aware Generative Adversarial Network","authors":"Liting Lei, Rui Zhang, Haifei Zhang, Xiujing Li, Yuchao Zou, Saad Aldosary, Azza S. Hassanein","doi":"10.1166/jno.2023.3502","DOIUrl":null,"url":null,"abstract":"Current cross-modal synthesis techniques for medical imaging have limits in their ability to accurately capture the structural information of human tissue, leading to problems such edge information loss and poor signal-to-noise ratio in the generated images. In order to synthesize PET pictures from Magnetic Resonance (MR) images, a novel approach for cross-modal synthesis of medical images is thus suggested. The foundation of this approach is an enhanced Edge-aware Generative Adversarial Network (Ea-GAN), which integrates an edge detector into the GAN framework to better capture local texture and edge information in the pictures. The Convolutional Block Attention Module (CBAM) is added in the generator portion of the GAN to prioritize important characteristics in the pictures. In order to improve the Ea-GAN discriminator, its receptive field is shrunk to concentrate more on the tiny features of brain tissue in the pictures, boosting the generator’s performance. The edge loss between actual PET pictures and synthetic PET images is also included into the algorithm’s loss function, further enhancing the generator’s performance. The suggested PET image synthesis algorithm, which is based on the enhanced Ea-GAN, outperforms different current approaches in terms of both quantitative and qualitative assessments, according to experimental findings. The architecture of the brain tissue are effectively preserved in the synthetic PET pictures, which also aesthetically nearly resemble genuine images.","PeriodicalId":16446,"journal":{"name":"Journal of Nanoelectronics and Optoelectronics","volume":"16 1","pages":""},"PeriodicalIF":0.6000,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Nanoelectronics and Optoelectronics","FirstCategoryId":"5","ListUrlMain":"https://doi.org/10.1166/jno.2023.3502","RegionNum":4,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q4","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Citations: 0

Abstract

Current cross-modal synthesis techniques for medical imaging are limited in their ability to accurately capture the structural information of human tissue, leading to problems such as edge information loss and poor signal-to-noise ratio in the generated images. A novel cross-modal medical image synthesis approach is therefore proposed to synthesize PET images from Magnetic Resonance (MR) images. The approach builds on an improved Edge-aware Generative Adversarial Network (Ea-GAN), which integrates an edge detector into the GAN framework to better capture local texture and edge information. A Convolutional Block Attention Module (CBAM) is added to the generator to emphasize the most informative features. The receptive field of the Ea-GAN discriminator is reduced so that it concentrates on fine brain-tissue details, which in turn boosts the generator's performance. An edge loss between real and synthetic PET images is also incorporated into the loss function, further improving synthesis quality. Experimental results show that the proposed PET image synthesis algorithm, based on the improved Ea-GAN, outperforms existing approaches in both quantitative and qualitative evaluations. The synthesized PET images effectively preserve brain-tissue structure and are visually close to real images.
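The abstract states that an edge loss between real and synthetic PET images is added to the objective. Below is a minimal PyTorch-style sketch of such an edge-aware generator loss, assuming a Sobel-filter edge extractor and illustrative weights lambda_l1 and lambda_edge; the actual edge detector, loss weights, and network details used in the paper are not given here, so this is only an illustration of the idea, not the authors' implementation.

```python
# Sketch of an edge-aware generator loss: adversarial term + voxel-wise L1 +
# L1 between Sobel edge maps of real and synthetic PET slices.
# lambda_l1 and lambda_edge are illustrative values, not taken from the paper.
import torch
import torch.nn.functional as F

def sobel_edges(img: torch.Tensor) -> torch.Tensor:
    """Edge-magnitude map of a (N, 1, H, W) image via Sobel filters."""
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]],
                      device=img.device).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)
    gx = F.conv2d(img, kx, padding=1)
    gy = F.conv2d(img, ky, padding=1)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-6)

def generator_loss(fake_pet, real_pet, d_fake_logits,
                   lambda_l1=100.0, lambda_edge=10.0):
    """Adversarial + L1 + edge-L1 loss for the PET-synthesis generator."""
    adv = F.binary_cross_entropy_with_logits(
        d_fake_logits, torch.ones_like(d_fake_logits))
    l1 = F.l1_loss(fake_pet, real_pet)
    edge = F.l1_loss(sobel_edges(fake_pet), sobel_edges(real_pet))
    return adv + lambda_l1 * l1 + lambda_edge * edge
```

A reduced-receptive-field (patch-based) discriminator, as described in the abstract, would score local patches of the synthetic PET image, and its logits would feed d_fake_logits above.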
Source journal
Journal of Nanoelectronics and Optoelectronics (Engineering: Electrical & Electronic)
Self-citation rate: 16.70%
Articles per year: 48
Review time: 12.5 months