Localization of Deep Inpainting Using High-Pass Fully Convolutional Network

Haodong Li, Jiwu Huang
{"title":"Localization of Deep Inpainting Using High-Pass Fully Convolutional Network","authors":"Haodong Li, Jiwu Huang","doi":"10.1109/ICCV.2019.00839","DOIUrl":null,"url":null,"abstract":"Image inpainting has been substantially improved with deep learning in the past years. Deep inpainting can fill image regions with plausible contents, which are not visually apparent. Although inpainting is originally designed to repair images, it can even be used for malicious manipulations, e.g., removal of specific objects. Therefore, it is necessary to identify the presence of inpainting in an image. This paper presents a method to locate the regions manipulated by deep inpainting. The proposed method employs a fully convolutional network that is based on high-pass filtered image residuals. Firstly, we analyze and observe that the inpainted regions are more distinguishable from the untouched ones in the residual domain. Hence, a high-pass pre-filtering module is designed to get image residuals for enhancing inpainting traces. Then, a feature extraction module, which learns discriminative features from image residuals, is built with four concatenated ResNet blocks. The learned feature maps are finally enlarged by an up-sampling module, so that a pixel-wise inpainting localization map is obtained. The whole network is trained end-to-end with a loss addressing the class imbalance. Extensive experimental results evaluated on both synthetic and realistic images subjected to deep inpainting have shown the effectiveness of the proposed method.","PeriodicalId":6728,"journal":{"name":"2019 IEEE/CVF International Conference on Computer Vision (ICCV)","volume":"2003 1","pages":"8300-8309"},"PeriodicalIF":0.0000,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"58","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 IEEE/CVF International Conference on Computer Vision (ICCV)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICCV.2019.00839","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 58

Abstract

Image inpainting has been substantially improved with deep learning in recent years. Deep inpainting can fill image regions with plausible content, leaving no visually apparent traces. Although inpainting was originally designed to repair images, it can also be used for malicious manipulations, e.g., the removal of specific objects. Therefore, it is necessary to identify the presence of inpainting in an image. This paper presents a method to locate the regions manipulated by deep inpainting. The proposed method employs a fully convolutional network based on high-pass filtered image residuals. We first analyze inpainted images and observe that inpainted regions are more distinguishable from untouched ones in the residual domain. Hence, a high-pass pre-filtering module is designed to obtain image residuals that enhance inpainting traces. Then, a feature extraction module, built from four concatenated ResNet blocks, learns discriminative features from the image residuals. The learned feature maps are finally enlarged by an up-sampling module to produce a pixel-wise inpainting localization map. The whole network is trained end-to-end with a loss that addresses the class imbalance between pristine and inpainted pixels. Extensive experiments on both synthetic and realistic images subjected to deep inpainting demonstrate the effectiveness of the proposed method.
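To make the described pipeline concrete, below is a minimal PyTorch sketch of the architecture and loss as outlined in this abstract. The fixed Laplacian high-pass kernel, the channel widths and strides of the four ResNet blocks, the bilinear up-sampling, and the inverse-frequency class weighting are all illustrative assumptions; the paper's exact filters and hyperparameters may differ.

```python
# Minimal sketch of a high-pass fully convolutional localizer, assuming
# a fixed Laplacian pre-filter and a weighted cross-entropy loss; the
# paper's actual configuration may differ.
import torch
import torch.nn as nn
import torch.nn.functional as F

class HighPassPreFilter(nn.Module):
    """Fixed high-pass convolution mapping the RGB input to its residual,
    suppressing image content so inpainting traces stand out.
    A 3x3 Laplacian kernel is an assumed stand-in for the paper's filters."""
    def __init__(self):
        super().__init__()
        k = torch.tensor([[-1., -1., -1.],
                          [-1.,  8., -1.],
                          [-1., -1., -1.]]) / 8.0
        # One filter per input channel, applied depthwise.
        self.register_buffer("kernel", k.expand(3, 1, 3, 3).clone())

    def forward(self, x):
        return F.conv2d(x, self.kernel, padding=1, groups=3)

class ResBlock(nn.Module):
    """Basic residual block: two Conv-BN layers with a skip connection."""
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.conv1 = nn.Conv2d(in_ch, out_ch, 3, stride, 1, bias=False)
        self.bn1 = nn.BatchNorm2d(out_ch)
        self.conv2 = nn.Conv2d(out_ch, out_ch, 3, 1, 1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_ch)
        self.skip = (nn.Identity() if stride == 1 and in_ch == out_ch
                     else nn.Sequential(
                         nn.Conv2d(in_ch, out_ch, 1, stride, bias=False),
                         nn.BatchNorm2d(out_ch)))

    def forward(self, x):
        out = F.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return F.relu(out + self.skip(x))

class InpaintingLocalizer(nn.Module):
    """High-pass pre-filtering -> four concatenated ResNet blocks ->
    up-sampling to a two-class, pixel-wise localization map."""
    def __init__(self):
        super().__init__()
        self.prefilter = HighPassPreFilter()
        self.features = nn.Sequential(       # channel widths are assumptions
            ResBlock(3, 64, stride=2),
            ResBlock(64, 128, stride=2),
            ResBlock(128, 256, stride=2),
            ResBlock(256, 256),
        )
        self.classifier = nn.Conv2d(256, 2, 1)  # pristine vs. inpainted

    def forward(self, x):
        h, w = x.shape[-2:]
        z = self.features(self.prefilter(x))
        logits = self.classifier(z)
        # Up-sampling module: restore the full input resolution.
        return F.interpolate(logits, size=(h, w), mode="bilinear",
                             align_corners=False)

def imbalance_aware_loss(logits, target):
    """Cross-entropy reweighted by inverse class frequency, one plausible
    way to address the pristine/inpainted pixel imbalance."""
    freq = torch.bincount(target.flatten(), minlength=2).float()
    weight = freq.sum() / (2.0 * freq.clamp(min=1.0))
    return F.cross_entropy(logits, target, weight=weight)

if __name__ == "__main__":
    model = InpaintingLocalizer()
    images = torch.rand(2, 3, 256, 256)          # batch of RGB images
    masks = torch.randint(0, 2, (2, 256, 256))   # ground-truth inpainting masks
    loss = imbalance_aware_loss(model(images), masks)
    loss.backward()
    print(loss.item())
```

Keeping the pre-filter as a fixed, non-learnable buffer reflects the abstract's emphasis on working in the residual domain; initializing a learnable layer with high-pass weights would be an equally plausible reading.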