AFTLNet: An efficient adaptive forgery traces learning network for deep image inpainting localization

IF 3.8 | CAS Region 2 (Computer Science) | JCR Q2 (Computer Science, Information Systems) | Journal of Information Security and Applications | Pub date: 2024-06-28 | DOI: 10.1016/j.jisa.2024.103825
Xiangling Ding, Yingqian Deng, Yulin Zhao, Wenyi Zhu
{"title":"AFTLNet: An efficient adaptive forgery traces learning network for deep image inpainting localization","authors":"Xiangling Ding,&nbsp;Yingqian Deng,&nbsp;Yulin Zhao,&nbsp;Wenyi Zhu","doi":"10.1016/j.jisa.2024.103825","DOIUrl":null,"url":null,"abstract":"<div><p>Deep-learning-based image inpainting repairs a region with visually believable content, leaving behind imperceptible traces. Since deep image inpainting approaches can malevolently remove key objects and erase visible copyright watermarks, the desire for an effective method to distinguish the inpainted regions has become urgent. In this work, we propose an adaptive forgery trace learning network (AFTLN), which consists of two subblocks: the adaptive block and the Densenet block. Specifically, the adaptive block exploits an adaptive difference convolution to maximize the forgery traces by iteratively updating its weights. Meanwhile, the Densenet block improves the feature weights and reduces the impact of noise on the forgery traces. An image-inpainting detector, namely AFTLNet, is designed by integrating AFTLN with neural architecture search, and global and local attention modules, which aims to find potential tampered regions, enhance feature consistency, and reduce intra-class differences, respectively. The experimental results present that our proposed AFTLNet exceeds existing inpainting detection approaches. Finally, an inpainting dataset of 26K image pairs is constructed for future research. The dataset is available at <span>https://pan.baidu.com/s/10SRJeQBNnTHJXvxl8xzHcg</span><svg><path></path></svg> with password: 1234.</p></div>","PeriodicalId":48638,"journal":{"name":"Journal of Information Security and Applications","volume":"84 ","pages":"Article 103825"},"PeriodicalIF":3.8000,"publicationDate":"2024-06-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Information Security and Applications","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2214212624001285","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Citations: 0

Abstract

Deep-learning-based image inpainting repairs a region with visually believable content while leaving behind only imperceptible traces. Because deep inpainting approaches can be used maliciously to remove key objects or erase visible copyright watermarks, an effective method for distinguishing inpainted regions is urgently needed. In this work, we propose an adaptive forgery trace learning network (AFTLN), which consists of two subblocks: the adaptive block and the Densenet block. Specifically, the adaptive block exploits an adaptive difference convolution to amplify the forgery traces by iteratively updating its weights, while the Densenet block refines the feature weights and reduces the impact of noise on the forgery traces. An image-inpainting detector, AFTLNet, is then designed by integrating AFTLN with neural architecture search and with global and local attention modules, which serve, respectively, to find potentially tampered regions, enhance feature consistency, and reduce intra-class differences. Experimental results show that the proposed AFTLNet outperforms existing inpainting detection approaches. Finally, an inpainting dataset of 26K image pairs is constructed for future research. The dataset is available at https://pan.baidu.com/s/10SRJeQBNnTHJXvxl8xzHcg with password: 1234.
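The central idea of the adaptive block, an adaptive difference convolution whose weighting is learned jointly with the rest of the network, can be illustrated with a short sketch. The code below is hypothetical (the paper does not publish its implementation): it assumes the layer blends a vanilla convolution with a central-difference term through a learnable mixing coefficient, which is one common way to realize difference convolutions; the class name, parameter names, and blending scheme are our own assumptions, not the authors' method.

```python
# Hypothetical sketch of an "adaptive difference convolution" layer.
# Assumption: the layer mixes a vanilla convolution with a central-difference
# response via a learnable scalar; this is NOT the paper's released code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AdaptiveDifferenceConv2d(nn.Module):
    """Blend a vanilla convolution with a central-difference convolution.

    The learnable scalar `theta` is updated by backpropagation, so the layer
    can adaptively emphasize local pixel differences, where inpainting traces
    tend to concentrate, over raw intensities.
    """

    def __init__(self, in_channels, out_channels, kernel_size=3, padding=1):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size,
                              padding=padding, bias=False)
        # Learnable mixing coefficient between vanilla and difference responses.
        self.theta = nn.Parameter(torch.tensor(0.5))

    def forward(self, x):
        vanilla = self.conv(x)
        # Central-difference term: conv(x) - sum(W) * x at each position,
        # implemented as a 1x1 convolution built from the summed kernel weights.
        kernel_sum = self.conv.weight.sum(dim=(2, 3), keepdim=True)
        center = F.conv2d(x, kernel_sum)
        theta = torch.sigmoid(self.theta)  # keep the mix in (0, 1)
        return (1 - theta) * vanilla + theta * (vanilla - center)


if __name__ == "__main__":
    layer = AdaptiveDifferenceConv2d(3, 16)
    out = layer(torch.randn(1, 3, 256, 256))
    print(out.shape)  # torch.Size([1, 16, 256, 256])
```

Constraining the mixing coefficient with a sigmoid keeps the layer stable early in training while still letting backpropagation shift emphasis toward pixel-difference responses, where inpainting artifacts are typically most pronounced.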

Source journal
Journal of Information Security and Applications (Computer Science: Computer Networks and Communications)
CiteScore: 10.90
Self-citation rate: 5.40%
Articles published: 206
Review time: 56 days
About the journal
Journal of Information Security and Applications (JISA) focuses on original research and practice-driven applications with relevance to information security and applications. JISA provides a common linkage between a vibrant scientific and research community and industry professionals by offering a clear view of modern problems and challenges in information security, as well as identifying promising scientific and "best-practice" solutions. JISA issues offer a balance between original research work and innovative industrial approaches by internationally renowned information security experts and researchers.
Latest articles from this journal
Multi-ciphertext equality test heterogeneous signcryption scheme based on location privacy
Towards an intelligent and automatic irrigation system based on internet of things with authentication feature in VANET
A novel blockchain-based anonymous roaming authentication scheme for VANET
Efficient quantum algorithms to break group ring cryptosystems
IDPriU: A two-party ID-private data union protocol for privacy-preserving machine learning