Efficient guided inpainting of larger hole missing images based on hierarchical decoding network

Complex & Intelligent Systems · IF 5.0 · CAS Tier 2 (Computer Science) · JCR Q1 (Computer Science, Artificial Intelligence) · Pub Date: 2025-01-23 · DOI: 10.1007/s40747-024-01686-8
Xiucheng Dong, Yaling Ju, Dangcheng Zhang, Bing Hou, Jinqing He
Citations: 0

Abstract

When dealing with images containing large missing regions, deep learning-based image inpainting algorithms often suffer from local structural distortion and blurriness. In this paper, a novel hierarchical decoding network for image inpainting is proposed. First, structural priors extracted from the encoding layer are used to guide the first decoding layer, while residual blocks extract deep image features. Second, multiple hierarchical decoding layers progressively fill in the missing regions from top to bottom, with interlayer features and gradient priors guiding information transfer between layers. Furthermore, a Multi-dimensional Efficient Attention module is proposed for feature fusion, extracting image features across different dimensions more effectively than conventional methods. Finally, an Efficient Context Fusion module combines the reconstructed feature maps from the different decoding layers into image space, preserving the semantic integrity of the output image. Experiments validate the effectiveness of the proposed method, which shows superior performance in both subjective and objective evaluations. When inpainting images with missing regions covering 50% to 60% of the image, the proposed method improves PSNR by 0.02 dB (0.22 dB) and SSIM by 0.001 (0.003) on the CelebA-HQ (Places2) dataset, respectively.
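The reported gains are in PSNR, which is a logarithmic function of mean squared error, so a 0.22 dB gain corresponds to roughly a 5% reduction in MSE (10^(-0.022) ≈ 0.95). A minimal pure-Python sketch of the metric for context; the image values below are illustrative and not taken from the paper:

```python
import math

def psnr(reference, reconstruction, max_val=255.0):
    """Peak signal-to-noise ratio between two equally sized images (flat pixel lists)."""
    assert len(reference) == len(reconstruction)
    mse = sum((r - x) ** 2 for r, x in zip(reference, reconstruction)) / len(reference)
    if mse == 0:
        return float("inf")  # identical images: no noise at all
    return 10.0 * math.log10(max_val ** 2 / mse)

# Hypothetical baseline vs. improved reconstructions of the same reference:
ref   = [10, 20, 30, 40]
out_a = [12, 18, 31, 39]   # MSE = 2.5
out_b = [11, 19, 31, 40]   # MSE = 0.75 (smaller errors -> higher PSNR)
print(round(psnr(ref, out_a), 2))  # ≈ 44.15 dB
print(round(psnr(ref, out_b), 2))  # ≈ 49.38 dB
```

Because PSNR is logarithmic in MSE, small dB differences near 50%–60% hole ratios can still reflect meaningful pixel-level error reductions.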

Source journal

Complex & Intelligent Systems

CiteScore: 9.60
Self-citation rate: 10.30%
Articles published: 297
Journal description: Complex & Intelligent Systems aims to provide a forum for presenting and discussing novel approaches, tools and techniques meant for attaining a cross-fertilization between the broad fields of complex systems, computational simulation, and intelligent analytics and visualization. The transdisciplinary research that the journal focuses on will expand the boundaries of our understanding by investigating the principles and processes that underlie many of the most profound problems facing society today.