No-Box Universal Adversarial Perturbations Against Image Classifiers via Artificial Textures
Ningping Mou; Binqing Guo; Lingchen Zhao; Cong Wang; Yue Zhao; Qian Wang
IEEE Transactions on Information Forensics and Security, vol. 19, pp. 9803-9818, published 2024-10-11. DOI: 10.1109/TIFS.2024.3478828. https://ieeexplore.ieee.org/document/10714478/
Abstract: Recent advancements in adversarial attack research have seen a transition from white-box to black-box and even no-box threat models, greatly enhancing the practicality of these attacks. However, existing no-box attacks focus on instance-specific perturbations, leaving more powerful universal adversarial perturbations (UAPs) unexplored. This study addresses a crucial question: can UAPs be generated under a no-box threat model? Our findings provide an affirmative answer with a texture-based method. Artificially crafted textures can act as UAPs, termed Texture-Adv. With a modest density and a fixed budget for perturbations, it can achieve an attack success rate of 80% under the constraint of $l_{\infty} = 10/255$. In addition, Texture-Adv can also take effect under traditional black-box threat models. Building upon a phenomenon associated with dominant labels, we utilize Texture-Adv to develop a highly efficient decision-based attack strategy, named Adv-Pool. This approach creates and traverses a set of Texture-Adv instances with diverse classification distributions, significantly reducing the average query budget to less than 1.3, which is near the 1-query lower bound for decision-based attacks. Moreover, we empirically demonstrate that Texture-Adv, when used as a starting point, can enhance the success rates of existing transfer attacks and the efficiency of decision-based attacks. The discovery suggests its potential as an effective starting point for various adversarial attacks while preserving the original constraints of their threat models.
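As a concrete illustration of the core idea described in the abstract (a fixed, image-agnostic texture added within the $l_{\infty} = 10/255$ budget), the Python sketch below applies a simple periodic stripe texture to an input image. This is a minimal sketch under assumed details: the helper names make_texture and apply_texture_uap, the stripe pattern, and the density parameter are hypothetical stand-ins and do not reproduce the authors' Texture-Adv construction.

```python
import numpy as np

EPS = 10.0 / 255.0  # l_inf perturbation budget stated in the abstract


def make_texture(shape, period=16, density=0.3, seed=0):
    """Hypothetical stand-in for an artificially crafted texture:
    a sparse periodic stripe pattern with values in {-1, 0, 1}."""
    h, w, c = shape
    rng = np.random.default_rng(seed)
    phase = rng.uniform(0.0, 2.0 * np.pi)
    xs = np.arange(w)
    stripes = np.sign(np.sin(2.0 * np.pi * xs / period + phase))      # shape (w,)
    texture = np.tile(stripes, (h, 1))[..., None].repeat(c, axis=-1)  # (h, w, c)
    mask = rng.random((h, w, 1)) < density  # keep only a modest density of pixels
    return texture * mask


def apply_texture_uap(image, texture, eps=EPS):
    """Add the fixed texture perturbation to any image, clipped to the
    l_inf ball of radius eps; `image` is a float array in [0, 1]."""
    delta = np.clip(eps * texture, -eps, eps)   # respect the perturbation budget
    return np.clip(image + delta, 0.0, 1.0)     # keep pixel values valid


if __name__ == "__main__":
    img = np.random.rand(224, 224, 3).astype(np.float32)  # placeholder image
    uap = make_texture(img.shape)
    adv = apply_texture_uap(img, uap)
    assert np.abs(adv - img).max() <= EPS + 1e-6  # budget is respected
```

Because the same perturbation is added to every input, it is universal in the sense used in the abstract; how effective it is depends on how the texture itself is crafted, which this sketch does not attempt to reproduce.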
Journal Description:
The IEEE Transactions on Information Forensics and Security covers the sciences, technologies, and applications relating to information forensics, information security, biometrics, surveillance, and systems applications that incorporate these features.