Fooling human detectors via robust and visually natural adversarial patches

IF 5.5 | JCR Q1 (COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE) | CAS Tier 2 (Computer Science) | Neurocomputing | Pub Date: 2024-11-17 | DOI: 10.1016/j.neucom.2024.128915
Dawei Zhou, Hongbin Qu, Nannan Wang, Chunlei Peng, Zhuoqi Ma, Xi Yang, Xinbo Gao
Abstract

DNNs are vulnerable to adversarial attacks. Physical attacks alter local regions of images either by physically attaching crafted objects or by synthesizing adversarial patches, a design applicable to real-world image-capture scenarios. Currently, adversarial patches are typically generated from random noise, so their textures differ from natural image textures; moreover, they are developed without attention to the relationship between human pose and adversarial robustness. The unnatural pose and texture make such patches noticeable in practice. In this work, we propose to synthesize adversarial patches that are visually natural in terms of both pose and texture. To adapt adversarial patches to human pose, we propose a patch adaptation network, PosePatch, which guides patch synthesis by a perspective transform driven by estimated human poses. Meanwhile, we develop a network, StylePatch, to generate harmonized textures for adversarial patches. The two networks are combined for end-to-end training. As a result, our method can synthesize adversarial patches for arbitrary human images without knowing poses or patch locations in advance. Experiments on benchmark datasets and in real-world scenarios show that our method is robust to human pose variations and that the synthesized adversarial patches are effective; a user study validates their naturalness.
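The abstract's geometric step, warping a patch onto the body with a perspective transform guided by estimated pose, can be illustrated in isolation. The sketch below (not the paper's PosePatch network; the "torso" quadrilateral is made up for the demo) estimates a 3×3 homography from four corner correspondences with the standard direct linear transform (DLT) and applies it to the patch corners:

```python
import numpy as np

def homography_from_points(src, dst):
    """Estimate the 3x3 perspective transform mapping src -> dst.

    src, dst: (4, 2) arrays of corner coordinates. Solves the
    DLT system A h = 0 via SVD (right singular vector of the
    smallest singular value), then normalizes so H[2, 2] == 1.
    """
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

def warp_points(H, pts):
    """Apply homography H to (N, 2) points via homogeneous coordinates."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])
    out = pts_h @ H.T
    return out[:, :2] / out[:, 2:3]

# Square patch corners mapped onto a quadrilateral that a pose estimator
# might place over the torso -- the target corners here are invented.
patch = np.array([[0, 0], [64, 0], [64, 64], [0, 64]], dtype=float)
torso = np.array([[10, 20], [70, 25], [75, 90], [5, 85]], dtype=float)
H = homography_from_points(patch, torso)
print(np.round(warp_points(H, patch), 2))  # recovers the torso corners
```

With four exact correspondences the fit is exact, so the warped patch corners coincide with the target quadrilateral; in the paper's pipeline the analogous transform is predicted from estimated human poses rather than hand-specified.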
Citations: 0

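Underlying any such patch attack is a generic objective: optimize the pixels of a masked region so that a detector's confidence on the patched image drops. The toy sketch below illustrates only that objective with a stand-in differentiable "detector" (a sigmoid of the image mean) and a signed-gradient update; it is not the paper's PosePatch/StylePatch method, and a real attack would backpropagate through an actual person detector:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def detector_score(image, w=20.0, b=-10.0):
    """Toy stand-in 'detector': objectness = sigmoid(w * mean(image) + b)."""
    return sigmoid(w * image.mean() + b)

def attack_patch(image, mask, w=20.0, b=-10.0, step=0.05, iters=50):
    """Signed-gradient descent on the masked patch region to drive the
    detector's objectness score down (the generic patch-attack objective)."""
    img = image.copy()
    for _ in range(iters):
        s = detector_score(img, w, b)
        grad = s * (1.0 - s) * w / img.size   # d score / d pixel (uniform here)
        img -= step * np.sign(grad) * mask    # update only the patch pixels
        np.clip(img, 0.0, 1.0, out=img)       # keep pixel values valid
    return img

rng = np.random.default_rng(0)
image = rng.uniform(0.4, 0.9, size=(32, 32))   # stand-in "person" image
mask = np.zeros_like(image)
mask[8:24, 8:24] = 1.0                         # hypothetical torso region
adv = attack_patch(image, mask)
print(detector_score(image), detector_score(adv))  # score drops sharply
```

Pixels outside the mask are untouched, which is what distinguishes a localized patch attack from a full-image perturbation; the paper's contribution is making the patch region and its texture look natural rather than conspicuous.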


Source journal
Neurocomputing (Engineering & Technology – Computer Science: Artificial Intelligence)
CiteScore: 13.10
Self-citation rate: 10.00%
Articles per year: 1382
Review time: 70 days
Journal description: Neurocomputing publishes articles describing recent fundamental contributions in the field of neurocomputing. Neurocomputing theory, practice and applications are the essential topics being covered.