Inpainting non-anatomical objects in brain imaging using enhanced deep convolutional autoencoder network

Puranam Revanth Kumar, B Shilpa, Rajesh Kumar Jha, B Deevena Raju, Thayyaba Khatoon Mohammed
{"title":"利用增强型深度卷积自动编码器网络为大脑成像中的非解剖对象涂色","authors":"Puranam Revanth Kumar, B Shilpa, Rajesh Kumar Jha, B Deevena Raju, Thayyaba Khatoon Mohammed","doi":"10.1007/s12046-024-02536-6","DOIUrl":null,"url":null,"abstract":"<p>Medical diagnosis can be severely hindered by distorted medical images, especially in the analysis of Magnetic Resonance Imaging (MRI) and Computed Tomography (CT) images. Therefore, enhancing the accuracy of diagnostic imaging and inpainting damaged areas are essential for medical diagnosis. Over the past decade, image inpainting techniques have advanced due to deep learning and multimedia information. In this paper, we proposed a deep convolutional autoencoder network with improved parameters as a robust method for inpainting non-anatomical objects in MRI and CT images. Traditional approaches based on the exemplar methods are much less effective than deep learning methods in capturing high-level features. However, the inpainted regions would appear blurr and with global inconsistency. To handle the fuzzy problem, we enhanced the network model by introducing skip connections between mirrored layers in the encoder and decoder stacks. This allowed the generative process of the inpainting region to directly use the low-level feature information of the processed image. To provide both pixel-accurate and local-global contents consistency, the proposed model is trained with a combination of the typical pixel-wise reconstruction loss and two adversarial losses, which makes the inpainted output seem more realistic and consistent with its surrounding contexts. As a result, the proposed approach is much faster than existing methods while providing unprecedented qualitative and quantitative evaluation with a high inpainting inception score of 10.58, peak signal-to-noise ratio (PSNR) 52.44, structural similarity index (SSIM) 0.95, universal image quality index (UQI) 0.96, and mean squared error (MSE) 40.73 for CT and MRI images. This offers a promising avenue for enhancing image fidelity, potentially advancing clinical decision-making and patient care in neuroimaging practice.</p>","PeriodicalId":21498,"journal":{"name":"Sādhanā","volume":"22 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-05-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Inpainting non-anatomical objects in brain imaging using enhanced deep convolutional autoencoder network\",\"authors\":\"Puranam Revanth Kumar, B Shilpa, Rajesh Kumar Jha, B Deevena Raju, Thayyaba Khatoon Mohammed\",\"doi\":\"10.1007/s12046-024-02536-6\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>Medical diagnosis can be severely hindered by distorted medical images, especially in the analysis of Magnetic Resonance Imaging (MRI) and Computed Tomography (CT) images. Therefore, enhancing the accuracy of diagnostic imaging and inpainting damaged areas are essential for medical diagnosis. Over the past decade, image inpainting techniques have advanced due to deep learning and multimedia information. In this paper, we proposed a deep convolutional autoencoder network with improved parameters as a robust method for inpainting non-anatomical objects in MRI and CT images. Traditional approaches based on the exemplar methods are much less effective than deep learning methods in capturing high-level features. However, the inpainted regions would appear blurr and with global inconsistency. 
To handle the fuzzy problem, we enhanced the network model by introducing skip connections between mirrored layers in the encoder and decoder stacks. This allowed the generative process of the inpainting region to directly use the low-level feature information of the processed image. To provide both pixel-accurate and local-global contents consistency, the proposed model is trained with a combination of the typical pixel-wise reconstruction loss and two adversarial losses, which makes the inpainted output seem more realistic and consistent with its surrounding contexts. As a result, the proposed approach is much faster than existing methods while providing unprecedented qualitative and quantitative evaluation with a high inpainting inception score of 10.58, peak signal-to-noise ratio (PSNR) 52.44, structural similarity index (SSIM) 0.95, universal image quality index (UQI) 0.96, and mean squared error (MSE) 40.73 for CT and MRI images. This offers a promising avenue for enhancing image fidelity, potentially advancing clinical decision-making and patient care in neuroimaging practice.</p>\",\"PeriodicalId\":21498,\"journal\":{\"name\":\"Sādhanā\",\"volume\":\"22 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-05-18\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Sādhanā\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1007/s12046-024-02536-6\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Sādhanā","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1007/s12046-024-02536-6","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Medical diagnosis can be severely hindered by distorted medical images, especially in the analysis of Magnetic Resonance Imaging (MRI) and Computed Tomography (CT) images. Enhancing the accuracy of diagnostic imaging and inpainting damaged regions are therefore essential for medical diagnosis. Over the past decade, image inpainting techniques have advanced considerably, driven by deep learning and the growth of multimedia data. In this paper, we propose a deep convolutional autoencoder network with improved parameters as a robust method for inpainting non-anatomical objects in MRI and CT images. Traditional exemplar-based approaches are far less effective than deep learning methods at capturing high-level features; however, deep-learning-based inpainting often yields blurry regions with global inconsistencies. To address this blurring, we enhance the network by introducing skip connections between mirrored layers in the encoder and decoder stacks, allowing the generative process for the inpainted region to draw directly on low-level feature information from the processed image. To achieve both pixel-level accuracy and local-global content consistency, the model is trained with a combination of the standard pixel-wise reconstruction loss and two adversarial losses, which makes the inpainted output more realistic and consistent with its surrounding context. As a result, the proposed approach is substantially faster than existing methods while delivering strong qualitative and quantitative results on CT and MRI images: an inpainting inception score of 10.58, peak signal-to-noise ratio (PSNR) of 52.44, structural similarity index (SSIM) of 0.95, universal image quality index (UQI) of 0.96, and mean squared error (MSE) of 40.73. This offers a promising avenue for enhancing image fidelity, potentially advancing clinical decision-making and patient care in neuroimaging practice.
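
The abstract describes the network as a convolutional autoencoder with skip connections between mirrored layers of the encoder and decoder stacks, so the decoder can reuse low-level features when synthesizing the masked region. The paper's implementation is not reproduced here; the following is a minimal PyTorch sketch of that skip-connection idea, with layer counts, channel widths, and activation choices as illustrative assumptions rather than the authors' configuration.

```python
# Minimal sketch (not the authors' implementation): a convolutional
# autoencoder whose decoder reuses encoder feature maps through skip
# connections between mirrored layers. Depth and channel widths are
# illustrative assumptions.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=2, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.LeakyReLU(0.2, inplace=True),
    )

def deconv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.ConvTranspose2d(in_ch, out_ch, kernel_size=4, stride=2, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

class SkipAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc1 = conv_block(1, 64)              # single-channel MRI/CT slice in
        self.enc2 = conv_block(64, 128)
        self.enc3 = conv_block(128, 256)
        self.dec3 = deconv_block(256, 128)
        self.dec2 = deconv_block(128 + 128, 64)    # concatenated skip from enc2
        self.dec1 = deconv_block(64 + 64, 32)      # concatenated skip from enc1
        self.out = nn.Conv2d(32, 1, kernel_size=3, padding=1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(e1)
        e3 = self.enc3(e2)
        d3 = self.dec3(e3)
        d2 = self.dec2(torch.cat([d3, e2], dim=1))  # skip connection to mirrored layer
        d1 = self.dec1(torch.cat([d2, e1], dim=1))  # skip connection to mirrored layer
        return torch.tanh(self.out(d1))
```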
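
The abstract also states that training combines a standard pixel-wise reconstruction loss with two adversarial losses to obtain both pixel accuracy and local-global consistency. It does not specify which two adversarial terms are used; a common pairing in inpainting work is a global (whole-image) discriminator and a local (hole-region) discriminator, which is what this hedged sketch assumes. The weighting coefficients are placeholders, not values from the paper.

```python
# Hedged sketch of a combined inpainting objective: pixel-wise
# reconstruction plus two adversarial terms (assumed here to come from
# a global and a local discriminator; the paper does not specify them).
import torch
import torch.nn.functional as F

def inpainting_loss(output, target, d_global_fake, d_local_fake,
                    lambda_rec=1.0, lambda_adv_global=0.001, lambda_adv_local=0.001):
    """output/target: inpainted and ground-truth images;
    d_*_fake: discriminator logits for the inpainted result."""
    # Pixel-wise reconstruction loss (L1 here; L2 is an equally common choice).
    rec = F.l1_loss(output, target)
    # Non-saturating adversarial terms: push discriminator logits toward "real".
    adv_global = F.binary_cross_entropy_with_logits(
        d_global_fake, torch.ones_like(d_global_fake))
    adv_local = F.binary_cross_entropy_with_logits(
        d_local_fake, torch.ones_like(d_local_fake))
    return lambda_rec * rec + lambda_adv_global * adv_global + lambda_adv_local * adv_local
```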

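
The reported evaluation metrics are standard. For orientation only (this is not the authors' evaluation code), the sketch below implements MSE, PSNR, and a global form of the universal image quality index (UQI) in NumPy, assuming 8-bit grayscale slices; SSIM is available in scikit-image as skimage.metrics.structural_similarity.

```python
# Reference formulas for the reported metrics, assuming 8-bit grayscale
# inputs and a global (non-windowed) UQI.
import numpy as np

def mse(ref, test):
    ref = np.asarray(ref, dtype=np.float64)
    test = np.asarray(test, dtype=np.float64)
    return np.mean((ref - test) ** 2)

def psnr(ref, test, max_val=255.0):
    m = mse(ref, test)
    return float("inf") if m == 0 else 10.0 * np.log10(max_val ** 2 / m)

def uqi(ref, test):
    # Wang & Bovik universal image quality index: combines correlation,
    # luminance, and contrast distortion in a single term.
    x = np.asarray(ref, dtype=np.float64)
    y = np.asarray(test, dtype=np.float64)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return 4 * cov * mx * my / ((vx + vy) * (mx ** 2 + my ** 2))
```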

