No-Box Universal Adversarial Perturbations Against Image Classifiers via Artificial Textures

IF 6.3 | CAS Tier 1 (Computer Science) | JCR Q1 (Computer Science, Theory & Methods) | IEEE Transactions on Information Forensics and Security | Pub Date: 2024-10-11 | DOI: 10.1109/TIFS.2024.3478828
Ningping Mou;Binqing Guo;Lingchen Zhao;Cong Wang;Yue Zhao;Qian Wang
{"title":"通过人工纹理对图像分类器进行无箱通用对抗性干扰","authors":"Ningping Mou;Binqing Guo;Lingchen Zhao;Cong Wang;Yue Zhao;Qian Wang","doi":"10.1109/TIFS.2024.3478828","DOIUrl":null,"url":null,"abstract":"Recent advancements in adversarial attack research have seen a transition from white-box to black-box and even no-box threat models, greatly enhancing the practicality of these attacks. However, existing no-box attacks focus on instance-specific perturbations, leaving more powerful universal adversarial perturbations (UAPs) unexplored. This study addresses a crucial question: can UAPs be generated under a no-box threat model? Our findings provide an affirmative answer with a texture-based method. Artificially crafted textures can act as UAPs, termed Texture-Adv. With a modest density and a fixed budget for perturbations, it can achieve an attack success rate of 80% under the constraint of \n<inline-formula> <tex-math>$l_{\\infty }$ </tex-math></inline-formula>\n = 10/255. In addition, Texture-Adv can also take effect under traditional black-box threat models. Building upon a phenomenon associated with dominant labels, we utilize Texture-Adv to develop a highly efficient decision-based attack strategy, named Adv-Pool. This approach creates and traverses a set of Texture-Adv instances with diverse classification distributions, significantly reducing the average query budget to less than 1.3, which is near the 1-query lower bound for decision-based attacks. Moreover, we empirically demonstrate that Texture-Adv, when used as a starting point, can enhance the success rates of existing transfer attacks and the efficiency of decision-based attacks. The discovery suggests its potential as an effective starting point for various adversarial attacks while preserving the original constraints of their threat models.","PeriodicalId":13492,"journal":{"name":"IEEE Transactions on Information Forensics and Security","volume":"19 ","pages":"9803-9818"},"PeriodicalIF":6.3000,"publicationDate":"2024-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"No-Box Universal Adversarial Perturbations Against Image Classifiers via Artificial Textures\",\"authors\":\"Ningping Mou;Binqing Guo;Lingchen Zhao;Cong Wang;Yue Zhao;Qian Wang\",\"doi\":\"10.1109/TIFS.2024.3478828\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Recent advancements in adversarial attack research have seen a transition from white-box to black-box and even no-box threat models, greatly enhancing the practicality of these attacks. However, existing no-box attacks focus on instance-specific perturbations, leaving more powerful universal adversarial perturbations (UAPs) unexplored. This study addresses a crucial question: can UAPs be generated under a no-box threat model? Our findings provide an affirmative answer with a texture-based method. Artificially crafted textures can act as UAPs, termed Texture-Adv. With a modest density and a fixed budget for perturbations, it can achieve an attack success rate of 80% under the constraint of \\n<inline-formula> <tex-math>$l_{\\\\infty }$ </tex-math></inline-formula>\\n = 10/255. In addition, Texture-Adv can also take effect under traditional black-box threat models. Building upon a phenomenon associated with dominant labels, we utilize Texture-Adv to develop a highly efficient decision-based attack strategy, named Adv-Pool. 
This approach creates and traverses a set of Texture-Adv instances with diverse classification distributions, significantly reducing the average query budget to less than 1.3, which is near the 1-query lower bound for decision-based attacks. Moreover, we empirically demonstrate that Texture-Adv, when used as a starting point, can enhance the success rates of existing transfer attacks and the efficiency of decision-based attacks. The discovery suggests its potential as an effective starting point for various adversarial attacks while preserving the original constraints of their threat models.\",\"PeriodicalId\":13492,\"journal\":{\"name\":\"IEEE Transactions on Information Forensics and Security\",\"volume\":\"19 \",\"pages\":\"9803-9818\"},\"PeriodicalIF\":6.3000,\"publicationDate\":\"2024-10-11\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Transactions on Information Forensics and Security\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10714478/\",\"RegionNum\":1,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, THEORY & METHODS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Information Forensics and Security","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10714478/","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, THEORY & METHODS","Score":null,"Total":0}
Citations: 0

Abstract

Recent advancements in adversarial attack research have seen a transition from white-box to black-box and even no-box threat models, greatly enhancing the practicality of these attacks. However, existing no-box attacks focus on instance-specific perturbations, leaving more powerful universal adversarial perturbations (UAPs) unexplored. This study addresses a crucial question: can UAPs be generated under a no-box threat model? Our findings provide an affirmative answer with a texture-based method. Artificially crafted textures can act as UAPs, termed Texture-Adv. With a modest density and a fixed budget for perturbations, it can achieve an attack success rate of 80% under the constraint of $l_{\infty} = 10/255$. In addition, Texture-Adv can also take effect under traditional black-box threat models. Building upon a phenomenon associated with dominant labels, we utilize Texture-Adv to develop a highly efficient decision-based attack strategy, named Adv-Pool. This approach creates and traverses a set of Texture-Adv instances with diverse classification distributions, significantly reducing the average query budget to less than 1.3, which is near the 1-query lower bound for decision-based attacks. Moreover, we empirically demonstrate that Texture-Adv, when used as a starting point, can enhance the success rates of existing transfer attacks and the efficiency of decision-based attacks. The discovery suggests its potential as an effective starting point for various adversarial attacks while preserving the original constraints of their threat models.
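The abstract does not describe how the artificial texture is actually constructed, so the following is only a minimal sketch of the idea it conveys: a data-free texture, with density set by a period parameter, clipped to the stated $l_{\infty} = 10/255$ budget and added to arbitrary inputs. The checkerboard pattern and the names make_texture_uap, apply_uap, and period are illustrative assumptions, not the paper's method.

```python
import numpy as np

def make_texture_uap(height, width, channels=3, period=8, eps=10/255):
    """Build a hypothetical periodic 'artificial texture' and bound it by an
    l_inf budget of eps, so it can serve as a universal perturbation (UAP).
    The actual Texture-Adv construction in the paper may differ."""
    y, x = np.mgrid[0:height, 0:width]
    # Simple checkerboard-like pattern; its density is controlled by `period`.
    pattern = np.sign(np.sin(2 * np.pi * x / period) * np.sin(2 * np.pi * y / period))
    texture = np.repeat(pattern[..., None], channels, axis=-1).astype(np.float32)
    return np.clip(texture * eps, -eps, eps)  # enforce ||delta||_inf <= eps

def apply_uap(images, uap):
    """Apply the same perturbation to every image (universality) and keep
    pixel values in the valid [0, 1] range."""
    return np.clip(images + uap, 0.0, 1.0)

# Example: perturb a batch of 4 random 224x224 RGB images with one shared UAP.
uap = make_texture_uap(224, 224)
batch = np.random.rand(4, 224, 224, 3).astype(np.float32)
adv_batch = apply_uap(batch, uap)
```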
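The Adv-Pool strategy can likewise be pictured as a hard-label (decision-based) loop over a pre-built pool of texture UAPs that stops as soon as the prediction flips. The pool construction and traversal order that yield the reported sub-1.3 average query count are not given in the abstract; the sketch below is an assumption, with diversity approximated by varying the texture period.

```python
import numpy as np

def adv_pool_attack(image, true_label, classify, uap_pool):
    """Minimal sketch of an Adv-Pool-style decision-based attack: traverse a
    pre-built pool of texture UAPs and stop at the first one that flips the
    classifier's hard-label decision.

    classify : callable returning only the predicted label (no scores),
               matching the decision-based threat model.
    uap_pool : list of perturbations, each already within the l_inf budget.
    Returns (adversarial image or None, number of queries spent)."""
    for queries, uap in enumerate(uap_pool, start=1):
        candidate = np.clip(image + uap, 0.0, 1.0)
        if classify(candidate) != true_label:
            return candidate, queries
    return None, len(uap_pool)

# Hypothetical pool built with the make_texture_uap sketch above; varying the
# texture period stands in for the paper's "diverse classification distributions".
# uap_pool = [make_texture_uap(224, 224, period=p) for p in (4, 8, 16, 32)]
```

Intuitively, if most pool members push a wide range of inputs toward their own dominant labels, the loop would usually terminate on the first query, which is consistent with the reported average of fewer than 1.3 queries.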
Source Journal
IEEE Transactions on Information Forensics and Security (Engineering Technology - Engineering: Electrical & Electronic)
CiteScore: 14.40
Self-citation rate: 7.40%
Articles published: 234
Review time: 6.5 months
Journal description: The IEEE Transactions on Information Forensics and Security covers the sciences, technologies, and applications relating to information forensics, information security, biometrics, surveillance, and systems applications that incorporate these features.
Latest Articles from This Journal
Attackers Are Not the Same! Unveiling the Impact of Feature Distribution on Label Inference Attacks
Backdoor Online Tracing With Evolving Graphs
LHADRO: A Robust Control Framework for Autonomous Vehicles Under Cyber-Physical Attacks
Towards Mobile Palmprint Recognition via Multi-view Hierarchical Graph Learning
Succinct Hash-based Arbitrary-Range Proofs