Image inpainting by bidirectional information flow on texture and structure

IF 3.4 · CAS Tier 2 (Engineering & Technology) · JCR Q2 (Engineering, Electrical & Electronic) · Signal Processing · Pub Date: 2024-08-23 · DOI: 10.1016/j.sigpro.2024.109672
Jing Lian, Jibao Zhang, Huaikun Zhang, Yuekai Chen, Jiajun Zhang, Jizhao Liu
{"title":"通过纹理和结构双向信息流绘制图像","authors":"Jing Lian ,&nbsp;Jibao Zhang ,&nbsp;Huaikun Zhang ,&nbsp;Yuekai Chen ,&nbsp;Jiajun Zhang ,&nbsp;Jizhao Liu","doi":"10.1016/j.sigpro.2024.109672","DOIUrl":null,"url":null,"abstract":"<div><p>Image inpainting aims to recover damaged regions of a corrupted image and maintain the integrity of the structure and texture within the filled regions. Previous popular approaches have restored images with both vivid textures and structures by introducing structure priors. However, the structure prior-based approaches meet the following main challenges: (1) the fine-grained textures suffer from adverse inpainting effects because they do not fully consider the interaction between structures and textures, (2) the features of the multi-scale objects in structural and textural information cannot be extracted correctly due to the limited receptive fields in convolution operation. In this paper, we propose a texture and structure bidirectional generation network (TSBGNet) to address the above issues. We first reconstruct the texture and structure of corrupted images; then, we design a texture-enhanced-FCMSPCNN (TE-FCMSPCNN) to optimize the generated textures. We also conjoin a bidirectional information flow (BIF) module and a detail enhancement (DE) module to integrate texture and structure features globally. Additionally, we derive a multi-scale attentional feature fusion (MAFF) module to fuse multi-scale features. Experimental results demonstrate that TSBGNet effectively reconstructs realistic contents and significantly outperforms other state-of-the-art approaches on three popular datasets. Moreover, the proposed approach yields promising results on the Dunhuang Mogao Grottoes Mural dataset.</p></div>","PeriodicalId":49523,"journal":{"name":"Signal Processing","volume":"226 ","pages":"Article 109672"},"PeriodicalIF":3.4000,"publicationDate":"2024-08-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S0165168424002925/pdfft?md5=c9cca6312a3bc5cd8f65a072b69a1004&pid=1-s2.0-S0165168424002925-main.pdf","citationCount":"0","resultStr":"{\"title\":\"Image inpainting by bidirectional information flow on texture and structure\",\"authors\":\"Jing Lian ,&nbsp;Jibao Zhang ,&nbsp;Huaikun Zhang ,&nbsp;Yuekai Chen ,&nbsp;Jiajun Zhang ,&nbsp;Jizhao Liu\",\"doi\":\"10.1016/j.sigpro.2024.109672\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>Image inpainting aims to recover damaged regions of a corrupted image and maintain the integrity of the structure and texture within the filled regions. Previous popular approaches have restored images with both vivid textures and structures by introducing structure priors. However, the structure prior-based approaches meet the following main challenges: (1) the fine-grained textures suffer from adverse inpainting effects because they do not fully consider the interaction between structures and textures, (2) the features of the multi-scale objects in structural and textural information cannot be extracted correctly due to the limited receptive fields in convolution operation. In this paper, we propose a texture and structure bidirectional generation network (TSBGNet) to address the above issues. We first reconstruct the texture and structure of corrupted images; then, we design a texture-enhanced-FCMSPCNN (TE-FCMSPCNN) to optimize the generated textures. 
We also conjoin a bidirectional information flow (BIF) module and a detail enhancement (DE) module to integrate texture and structure features globally. Additionally, we derive a multi-scale attentional feature fusion (MAFF) module to fuse multi-scale features. Experimental results demonstrate that TSBGNet effectively reconstructs realistic contents and significantly outperforms other state-of-the-art approaches on three popular datasets. Moreover, the proposed approach yields promising results on the Dunhuang Mogao Grottoes Mural dataset.</p></div>\",\"PeriodicalId\":49523,\"journal\":{\"name\":\"Signal Processing\",\"volume\":\"226 \",\"pages\":\"Article 109672\"},\"PeriodicalIF\":3.4000,\"publicationDate\":\"2024-08-23\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.sciencedirect.com/science/article/pii/S0165168424002925/pdfft?md5=c9cca6312a3bc5cd8f65a072b69a1004&pid=1-s2.0-S0165168424002925-main.pdf\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Signal Processing\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0165168424002925\",\"RegionNum\":2,\"RegionCategory\":\"工程技术\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"ENGINEERING, ELECTRICAL & ELECTRONIC\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Signal Processing","FirstCategoryId":"5","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0165168424002925","RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Citations: 0

Abstract

Image inpainting aims to recover the damaged regions of a corrupted image while maintaining the integrity of the structure and texture within the filled regions. Previous popular approaches restore images with vivid textures and structures by introducing structure priors. However, structure-prior-based approaches face two main challenges: (1) fine-grained textures suffer from adverse inpainting effects because these methods do not fully consider the interaction between structures and textures, and (2) the features of multi-scale objects in structural and textural information cannot be extracted correctly due to the limited receptive fields of convolution operations. In this paper, we propose a texture and structure bidirectional generation network (TSBGNet) to address the above issues. We first reconstruct the texture and structure of corrupted images; then, we design a texture-enhanced FCMSPCNN (TE-FCMSPCNN) to optimize the generated textures. We also conjoin a bidirectional information flow (BIF) module and a detail enhancement (DE) module to integrate texture and structure features globally. Additionally, we derive a multi-scale attentional feature fusion (MAFF) module to fuse multi-scale features. Experimental results demonstrate that TSBGNet effectively reconstructs realistic content and significantly outperforms other state-of-the-art approaches on three popular datasets. Moreover, the proposed approach yields promising results on the Dunhuang Mogao Grottoes mural dataset.
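The abstract describes the BIF, DE, and MAFF modules only at a high level and includes no code. As a rough, non-authoritative illustration of what a multi-scale attentional feature-fusion block of this kind could look like, the following is a minimal PyTorch sketch; the class name MultiScaleAttentionFusion, the dilation rates, and the squeeze-and-excitation-style gating are assumptions made for exposition, not the authors' published implementation.

```python
# Hypothetical illustration only: this is not the TSBGNet MAFF module from the paper.
import torch
import torch.nn as nn


class MultiScaleAttentionFusion(nn.Module):
    """Fuse features extracted at several receptive-field sizes with a channel-attention gate."""

    def __init__(self, channels: int = 64, dilations=(1, 2, 4)):
        super().__init__()
        # Parallel 3x3 convolutions with increasing dilation emulate multi-scale
        # receptive fields without changing the spatial resolution.
        self.branches = nn.ModuleList(
            [nn.Conv2d(channels, channels, kernel_size=3, padding=d, dilation=d)
             for d in dilations]
        )
        fused_ch = channels * len(dilations)
        # Squeeze-and-excitation-style gate: global pooling and two 1x1 convolutions
        # produce per-channel weights for the concatenated multi-scale features.
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(fused_ch, channels, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, fused_ch, kernel_size=1),
            nn.Sigmoid(),
        )
        self.project = nn.Conv2d(fused_ch, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        multi = torch.cat([branch(x) for branch in self.branches], dim=1)
        weighted = multi * self.gate(multi)   # re-weight each scale channel-wise
        return self.project(weighted) + x     # residual connection keeps the input signal


if __name__ == "__main__":
    feats = torch.randn(1, 64, 32, 32)        # e.g. encoder features of a masked image
    print(MultiScaleAttentionFusion(64)(feats).shape)  # torch.Size([1, 64, 32, 32])
```

In a full inpainting network, a block like this would typically sit between encoder and decoder stages, re-weighting features gathered at several receptive-field sizes before they are fused back into a single map; the specific placement and gating used in TSBGNet are described in the paper itself.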

Source journal: Signal Processing (Engineering, Electrical & Electronic)
CiteScore: 9.20
Self-citation rate: 9.10%
Articles published per year: 309
Review time: 41 days
Aims and scope: Signal Processing incorporates all aspects of the theory and practice of signal processing. It features original research work, tutorial and review articles, and accounts of practical developments. It is intended for the rapid dissemination of knowledge and experience to engineers and scientists working in the research, development or practical application of signal processing. Subject areas covered by the journal include: Signal Theory; Stochastic Processes; Detection and Estimation; Spectral Analysis; Filtering; Signal Processing Systems; Software Developments; Image Processing; Pattern Recognition; Optical Signal Processing; Digital Signal Processing; Multi-dimensional Signal Processing; Communication Signal Processing; Biomedical Signal Processing; Geophysical and Astrophysical Signal Processing; Earth Resources Signal Processing; Acoustic and Vibration Signal Processing; Data Processing; Remote Sensing; Signal Processing Technology; Radar Signal Processing; Sonar Signal Processing; Industrial Applications; New Applications.