A cross-type multi-dimensional network based on feature enhancement and triple interactive attention for LDCT denoising.

IF 1.4 · Q3 (INSTRUMENTS & INSTRUMENTATION) · Region 3 (Medicine) · Journal of X-Ray Science and Technology · Pub Date: 2025-03-01 · Epub Date: 2025-01-29 · DOI: 10.1177/08953996241306696
Lina Jia, Beibei Jia, Zongyang Li, Yizhuo Zhang, Zhiguo Gui
Journal of X-Ray Science and Technology, pp. 393-404.
Citations: 0

Abstract

Background: Numerous deep learning methods for low-dose computed tomography (CT) image denoising have been proposed, achieving impressive results. However, issues such as loss of structure and edge information and low denoising efficiency remain.

Objective: To improve image denoising quality, this paper proposes an enhanced multi-dimensional hybrid-attention LDCT image denoising network based on edge detection.

Methods: The network employs a trainable Sobel convolution to build an edge enhancement module, and fuses an enhanced triplet attention network (ETAN) after each 3×3 convolutional layer to extract richer features more comprehensively and suppress useless information. During training, a strategy combining total variation loss (TVLoss) with mean squared error (MSE) loss is adopted to reduce high-frequency artifacts in image reconstruction and to balance denoising against detail preservation.

Results: Compared with other advanced algorithms (CT-former, REDCNN and EDCNN), the proposed model achieves the best PSNR and SSIM values on abdominal CT images: 34.8211 and 0.9131, respectively.

Conclusion: Comparative experiments with related algorithms show that the proposed algorithm achieves significant improvements in both subjective visual quality and objective metrics.
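The Methods section mentions a trainable Sobel convolution for edge enhancement. The paper's exact module is not given here; as a minimal sketch, the fixed 3×3 Sobel kernels below are scaled by a `scale` parameter that stands in for the learnable weight a trainable Sobel layer would optimize (a simplifying assumption, not the authors' implementation):

```python
import numpy as np

# Classic 3x3 Sobel kernels; a trainable Sobel layer typically keeps this
# structure but multiplies it by a learnable scaling parameter.
SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=np.float64)
SOBEL_Y = SOBEL_X.T

def conv2d_valid(img, kernel):
    """Plain 'valid' 2-D cross-correlation (no padding, stride 1)."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def sobel_edge_map(img, scale=1.0):
    """Gradient magnitude from scaled Sobel kernels.

    `scale` stands in for the learnable parameter of a trainable Sobel
    convolution (hypothetical simplification of the paper's module).
    """
    gx = conv2d_valid(img, scale * SOBEL_X)
    gy = conv2d_valid(img, scale * SOBEL_Y)
    return np.sqrt(gx ** 2 + gy ** 2)
```

On a synthetic image with a vertical step edge, the response is concentrated along the edge, which is the signal such a module feeds back into the denoising branch.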
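The training strategy combines TVLoss with MSE loss. A minimal NumPy sketch of that combination follows; the anisotropic TV formulation and the weight `tv_weight=1e-4` are illustrative placeholders, since the abstract does not specify the paper's exact formulation or weighting:

```python
import numpy as np

def mse_loss(pred, target):
    """Mean squared error between the denoised output and the target."""
    return np.mean((pred - target) ** 2)

def tv_loss(img):
    """Anisotropic total variation: mean absolute difference between
    neighbouring pixels, which penalises high-frequency artifacts."""
    dh = np.abs(np.diff(img, axis=0))  # vertical neighbour differences
    dw = np.abs(np.diff(img, axis=1))  # horizontal neighbour differences
    return (dh.sum() + dw.sum()) / img.size

def combined_loss(pred, target, tv_weight=1e-4):
    """MSE plus weighted TV, in the spirit of the paper's strategy.

    `tv_weight` is a hypothetical value; the paper's weighting is not
    given in the abstract.
    """
    return mse_loss(pred, target) + tv_weight * tv_loss(pred)
```

The TV term is zero on a perfectly smooth prediction and grows with pixel-to-pixel oscillation, so the weight trades off noise suppression against detail preservation, which matches the balancing role described above.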

Source journal metrics:
CiteScore: 4.90
Self-citation rate: 23.30%
Articles per year: 150
Review time: 3 months
Journal introduction: Research areas within the scope of the journal include:
- Interaction of x-rays with matter: x-ray phenomena, biological effects of radiation, radiation safety and optical constants
- X-ray sources: x-rays from synchrotrons, x-ray lasers, plasmas, and other sources, conventional or unconventional
- Optical elements: grazing incidence optics, multilayer mirrors, zone plates, gratings, other diffraction optics
- Optical instruments: interferometers, spectrometers, microscopes, telescopes, microprobes
Latest articles in this journal:
- Time-resolved tomography algorithm using one projection per time step: Non-monotonic case.
- Two-stage universal liver cancer segmentation network for 3D dual-modality abdominal nuclear medical images based on mixed-label and multi-type training strategy.
- Branchless distance-driven and hybrid projectors in iterative cone beam CT.
- UMamba-Dual: A dual-branch model based on UMamba for cesarean scar disorder segmentation.
- Image quality and radiation dose assessment of Thai-made 2D and 3D dental extraoral imaging scanners.