Deep residual learning with Anscombe transformation for low-dose digital tomosynthesis

IF 0.8 · JCR Q3 (PHYSICS, MULTIDISCIPLINARY) · CAS Zone 4 (Physics & Astronomy) · Journal of the Korean Physical Society · Pub Date: 2024-06-17 · DOI: 10.1007/s40042-024-01117-4
Youngjin Lee, Seungwan Lee, Chanrok Park
Journal of the Korean Physical Society, vol. 85, no. 4, pp. 333–341.
Citations: 0

Abstract

Deep learning-based convolutional neural networks (CNNs) have been proposed for enhancing the quality of digital tomosynthesis (DTS) images. However, direct applications of conventional CNNs to low-dose DTS imaging are limited in providing acceptable image quality due to inaccurate recognition of complex texture patterns. In this study, a deep residual learning network combined with the Anscombe transformation was proposed for simplifying the complex texture and restoring low-dose DTS image quality. The proposed network consisted of convolution layers, max-pooling layers, up-sampling layers, and skip connections. The network was trained to learn the residual images between the ground-truth and low-dose projections, which were converted using the Anscombe transformation. As a result, the proposed network enhanced the quantitative accuracy and noise characteristics of DTS images by 1.01–1.27 and 1.14–1.71 times, respectively, in comparison to low-dose DTS images and other deep learning networks. The spatial resolution of the DTS image restored using the proposed network was 1.12 times higher than that obtained using a deep image learning network. In conclusion, the proposed network can restore low-dose DTS image quality and provides an optimal model for low-dose DTS imaging.
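The training scheme described above (learning residuals between Anscombe-transformed ground-truth and low-dose projections) can be sketched in NumPy. This is a minimal illustration, not the authors' code: the Anscombe transformation is the standard variance-stabilizing formula for Poisson noise, and the network is replaced by its ideal output (the residual target itself) to show how restoration would proceed.

```python
import numpy as np

def anscombe(x):
    """Anscombe transformation: approximately stabilizes the variance of
    Poisson-distributed counts so the noise behaves like unit-variance
    Gaussian noise."""
    return 2.0 * np.sqrt(x + 3.0 / 8.0)

def inverse_anscombe(y):
    """Simple algebraic inverse of the Anscombe transformation.
    (An exact unbiased inverse adds small correction terms; the plain
    inverse suffices to illustrate the round trip.)"""
    return (y / 2.0) ** 2 - 3.0 / 8.0

# Hypothetical stand-ins for a ground-truth projection and its
# Poisson-noisy low-dose counterpart.
rng = np.random.default_rng(0)
ground_truth = rng.uniform(10.0, 100.0, size=(8, 8))
low_dose = rng.poisson(ground_truth).astype(float)

# The network is trained to predict this residual in the transformed domain:
residual_target = anscombe(ground_truth) - anscombe(low_dose)

# A perfect network would output residual_target exactly; restoration adds
# the predicted residual back and inverts the transformation.
restored = inverse_anscombe(anscombe(low_dose) + residual_target)

assert np.allclose(restored, ground_truth)
```

The key design point is that learning happens after the transformation: Poisson noise, whose variance scales with signal intensity, becomes approximately signal-independent, which simplifies the texture statistics the network must model.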

Source journal: Journal of the Korean Physical Society (PHYSICS, MULTIDISCIPLINARY)
CiteScore: 1.20 · Self-citation rate: 16.70% · Articles per year: 276 · Review time: 5.5 months
About the journal: The Journal of the Korean Physical Society (JKPS) covers all fields of physics, from statistical physics and condensed matter physics to particle physics. Manuscripts published in JKPS are required to demonstrate originality, significance, and completeness. The journal comprises Full Paper, Letters, and Brief sections. In addition, featured articles with outstanding results are selected by the Editorial Board and introduced in the online version. To emphasize the journal's international scope, several internationally distinguished researchers serve on the Editorial Board. High-quality papers may be express-published when recommended or requested.
Latest articles in this journal:
- Improved electrical conductivity of graphene film using thermal expansion-assisted hot pressing method
- A study on the effect of correlated data on predictive capabilities
- A customized template matching classification system
- Erratum: Comparative analysis of single and triple material 10 nm Tri-gate FinFET
- Revisit to the fluid Love numbers and the permanent tide of the Earth