CT Image Super Resolution Based On Improved SRGAN

Xuhao Jiang, Yifei Xu, Pingping Wei, Zhuming Zhou
{"title":"CT Image Super Resolution Based On Improved SRGAN","authors":"Xuhao Jiang, Yifei Xu, Pingping Wei, Zhuming Zhou","doi":"10.1109/ICCCS49078.2020.9118497","DOIUrl":null,"url":null,"abstract":"CT images are commonly used in medical clinical diagnosis. However, due to factors such as hardware and scanning time, CT images in real scenes are limited by spatial resolution so that doctors cannot perform accurate disease analysis on tiny lesion areas and pathological features. An image super-resolution (SR) method based on deep learning is a good way to solve this problem. Although many excellent networks have been proposed, but they all pay more attention to image quality indicators than image visual perception quality. Unlike other networks that focus more on image evaluation metrics, the super resolution generative adversarial network (SRGAN) has achieved tremendous improvements in image perception quality. Based on the above, this paper proposes a CT image super-resolution algorithm based on improved SRGAN. In order to improve the visual quality of CT images, a dilated convolution module is introduced. At the same time, in order to improve the overall visual effect of the image, the mean structural similarity (MSSIM) loss is also introduced to improve the perceptual loss function. Experimental results on the public CT image dataset demonstrate that our model is better than the baseline method SRGAN not only in mean opinion score(MOS), but also in peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) values.","PeriodicalId":105556,"journal":{"name":"2020 5th International Conference on Computer and Communication Systems (ICCCS)","volume":"43 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"8","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 5th International Conference on Computer and Communication Systems (ICCCS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICCCS49078.2020.9118497","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 8

Abstract

CT images are widely used in clinical diagnosis. However, due to factors such as hardware and scanning time, CT images acquired in real scenes are limited in spatial resolution, which prevents doctors from accurately analyzing tiny lesion areas and pathological features. Deep-learning-based image super-resolution (SR) is an effective way to address this problem. Although many excellent networks have been proposed, they pay more attention to image quality metrics than to perceived visual quality. Unlike networks that focus mainly on evaluation metrics, the super-resolution generative adversarial network (SRGAN) achieves a substantial improvement in perceptual quality. Building on this, this paper proposes a CT image super-resolution algorithm based on an improved SRGAN. To improve the visual quality of CT images, a dilated convolution module is introduced. In addition, to improve the overall visual effect of the image, the mean structural similarity (MSSIM) loss is added to the perceptual loss function. Experimental results on a public CT image dataset demonstrate that our model outperforms the baseline SRGAN not only in mean opinion score (MOS), but also in peak signal-to-noise ratio (PSNR) and structural similarity (SSIM).
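
The abstract does not give implementation details, but the two modifications it names (a dilated convolution module in the generator and an MSSIM term added to the perceptual loss) can be sketched roughly as follows. This is a minimal PyTorch sketch, not the authors' code: the block structure, channel counts, the single-scale uniform-window SSIM approximation, and the loss weights w_adv and w_mssim are all assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DilatedResidualBlock(nn.Module):
    """Residual block mixing a plain 3x3 conv with a dilated 3x3 conv (assumed design)."""
    def __init__(self, channels: int = 64, dilation: int = 2):
        super().__init__()
        # padding=dilation keeps the spatial size of the dilated branch unchanged
        self.conv = nn.Conv2d(channels, channels, 3, padding=1)
        self.dilated = nn.Conv2d(channels, channels, 3, padding=dilation, dilation=dilation)
        self.fuse = nn.Conv2d(2 * channels, channels, 1)
        self.act = nn.PReLU()

    def forward(self, x):
        a = self.act(self.conv(x))     # local features
        b = self.act(self.dilated(x))  # larger receptive field at the same resolution
        return x + self.fuse(torch.cat([a, b], dim=1))

def mssim_loss(sr, hr, window_size: int = 11):
    """1 - mean SSIM, single scale, uniform window; a simplified stand-in for the paper's MSSIM."""
    C1, C2 = 0.01 ** 2, 0.03 ** 2  # standard SSIM constants for images scaled to [0, 1]
    pad = window_size // 2
    mu_x = F.avg_pool2d(sr, window_size, 1, pad)
    mu_y = F.avg_pool2d(hr, window_size, 1, pad)
    sigma_x = F.avg_pool2d(sr * sr, window_size, 1, pad) - mu_x ** 2
    sigma_y = F.avg_pool2d(hr * hr, window_size, 1, pad) - mu_y ** 2
    sigma_xy = F.avg_pool2d(sr * hr, window_size, 1, pad) - mu_x * mu_y
    ssim_map = ((2 * mu_x * mu_y + C1) * (2 * sigma_xy + C2)) / \
               ((mu_x ** 2 + mu_y ** 2 + C1) * (sigma_x + sigma_y + C2))
    return 1.0 - ssim_map.mean()

def generator_loss(vgg_sr, vgg_hr, d_fake, sr, hr, w_adv=1e-3, w_mssim=1e-2):
    """SRGAN-style perceptual loss plus an MSSIM term; the weights here are assumptions."""
    # d_fake is the discriminator's sigmoid output on the super-resolved images
    content = F.mse_loss(vgg_sr, vgg_hr)                                   # VGG feature (content) loss
    adversarial = F.binary_cross_entropy(d_fake, torch.ones_like(d_fake))  # fool the discriminator
    return content + w_adv * adversarial + w_mssim * mssim_loss(sr, hr)
```

In SRGAN the generator objective is a VGG-feature content loss plus a small adversarial term; the sketch simply adds a 1 − MSSIM term on top, which pushes the generator toward structurally consistent reconstructions in addition to perceptual similarity.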