Xin Ma, Kai Yang, Chuanzhen Zhang, Hualing Li, Xin Zheng
{"title":"人工智能中的物理对抗攻击","authors":"Xin Ma, Kai Yang, Chuanzhen Zhang, Hualing Li, Xin Zheng","doi":"10.1049/cmu2.12714","DOIUrl":null,"url":null,"abstract":"<p>With the continuous development of wireless communication and artificial intelligence technology, Internet of Things (IoT) technology has made great progress. Deep learning methods are currently used in IoT technology, but deep neural networks (DNNs) are notoriously susceptible to adversarial examples, and subtle pixel changes to images can result in incorrect recognition results from DNNs. In the real-world application, the patches generated by the recent physical attack methods are larger or less realistic and easily detectable. To address this problem, a Generative Adversarial Network based on Visual attention model and Style transfer network (GAN-VS) is proposed, which reduces the patch area and makes the patch more natural and less noticeable. A visual attention model combined with generative adversarial network is introduced to detect the critical regions of image recognition, and only generate patches within the critical regions to reduce patch area and improve attack efficiency. For any type of seed patch, an adversarial patch can be generated with a high degree of stylistic and content similarity to the attacked image by generative adversarial network and style transfer network. Experimental evaluation shows that the proposed GAN-VS has good camouflage and outperforms state-of-the-art adversarial patch attack methods.</p>","PeriodicalId":55001,"journal":{"name":"IET Communications","volume":"18 6","pages":"375-385"},"PeriodicalIF":1.5000,"publicationDate":"2023-12-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/cmu2.12714","citationCount":"0","resultStr":"{\"title\":\"Physical adversarial attack in artificial intelligence of things\",\"authors\":\"Xin Ma, Kai Yang, Chuanzhen Zhang, Hualing Li, Xin Zheng\",\"doi\":\"10.1049/cmu2.12714\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>With the continuous development of wireless communication and artificial intelligence technology, Internet of Things (IoT) technology has made great progress. Deep learning methods are currently used in IoT technology, but deep neural networks (DNNs) are notoriously susceptible to adversarial examples, and subtle pixel changes to images can result in incorrect recognition results from DNNs. In the real-world application, the patches generated by the recent physical attack methods are larger or less realistic and easily detectable. To address this problem, a Generative Adversarial Network based on Visual attention model and Style transfer network (GAN-VS) is proposed, which reduces the patch area and makes the patch more natural and less noticeable. A visual attention model combined with generative adversarial network is introduced to detect the critical regions of image recognition, and only generate patches within the critical regions to reduce patch area and improve attack efficiency. For any type of seed patch, an adversarial patch can be generated with a high degree of stylistic and content similarity to the attacked image by generative adversarial network and style transfer network. 
Experimental evaluation shows that the proposed GAN-VS has good camouflage and outperforms state-of-the-art adversarial patch attack methods.</p>\",\"PeriodicalId\":55001,\"journal\":{\"name\":\"IET Communications\",\"volume\":\"18 6\",\"pages\":\"375-385\"},\"PeriodicalIF\":1.5000,\"publicationDate\":\"2023-12-22\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://onlinelibrary.wiley.com/doi/epdf/10.1049/cmu2.12714\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IET Communications\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://onlinelibrary.wiley.com/doi/10.1049/cmu2.12714\",\"RegionNum\":4,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"ENGINEERING, ELECTRICAL & ELECTRONIC\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IET Communications","FirstCategoryId":"94","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1049/cmu2.12714","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Physical adversarial attack in artificial intelligence of things
With the continuous development of wireless communication and artificial intelligence technology, Internet of Things (IoT) technology has made great progress. Deep learning methods are now widely used in IoT applications, but deep neural networks (DNNs) are notoriously susceptible to adversarial examples: subtle pixel changes to an image can cause a DNN to produce incorrect recognition results. In real-world applications, the patches generated by recent physical attack methods are either large or unrealistic, which makes them easy to detect. To address this problem, a Generative Adversarial Network based on a Visual attention model and a Style transfer network (GAN-VS) is proposed, which reduces the patch area and makes the patch more natural and less noticeable. A visual attention model combined with a generative adversarial network is introduced to detect the regions most critical to image recognition, and patches are generated only within these regions, reducing the patch area and improving attack efficiency. For any type of seed patch, the generative adversarial network and style transfer network can produce an adversarial patch whose style and content closely resemble the attacked image. Experimental evaluation shows that the proposed GAN-VS provides good camouflage and outperforms state-of-the-art adversarial patch attack methods.
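The abstract sketches the GAN-VS pipeline: an attention model restricts the patch to regions critical for recognition, while a style component keeps the patch visually consistent with the attacked image. The snippet below is only a minimal illustrative sketch of that idea, not the authors' implementation: it substitutes plain gradient saliency for the visual attention model and a Gram-matrix style loss for the style transfer network, and all names (attention_mask, attack, style_weight, the ResNet-18 target model) are assumptions introduced here for illustration.

```python
# Minimal sketch (not the paper's code) of an attention-masked adversarial patch
# with a style regulariser, assuming PyTorch and a torchvision ResNet-18 target.
import torch
import torch.nn.functional as F
from torchvision import models

device = "cuda" if torch.cuda.is_available() else "cpu"
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).to(device).eval()
for p in model.parameters():          # freeze the target; only the patch is optimised
    p.requires_grad_(False)

def attention_mask(img, top_frac=0.1):
    # Crude gradient-saliency mask standing in for the paper's visual attention
    # model: keep only the top `top_frac` most influential pixels.
    img = img.clone().requires_grad_(True)
    logits = model(img)
    logits[0, logits[0].argmax()].backward()
    sal = img.grad.abs().mean(dim=1, keepdim=True)             # (1, 1, H, W)
    thresh = torch.quantile(sal.flatten(), 1.0 - top_frac)
    return (sal >= thresh).float()                              # binary region mask

def gram(feat):
    # Gram matrix for a simple style loss on one feature map.
    b, c, h, w = feat.shape
    f = feat.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def attack(img, target_class, steps=200, lr=0.05, style_weight=1.0):
    # Optimise a perturbation that (a) lives only inside the attention mask and
    # (b) keeps the low-level feature statistics (style) of the original image.
    mask = attention_mask(img)
    delta = torch.zeros_like(img, requires_grad=True)
    feats = torch.nn.Sequential(*list(model.children())[:5])   # conv1 .. layer1
    style_ref = gram(feats(img)).detach()
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        adv = (img + delta * mask).clamp(0, 1)                  # patch confined to mask
        adv_loss = F.cross_entropy(model(adv),
                                   torch.tensor([target_class], device=img.device))
        style_loss = F.mse_loss(gram(feats(adv)), style_ref)
        loss = adv_loss + style_weight * style_loss             # fool model, stay natural
        opt.zero_grad(); loss.backward(); opt.step()
    return (img + delta.detach() * mask).clamp(0, 1)

# Hypothetical usage on a (1, 3, 224, 224) image tensor scaled to [0, 1]
# (ImageNet normalisation omitted for brevity):
# adv_img = attack(img.to(device), target_class=207)
```

The style_weight term trades attack strength against visual camouflage; the paper's GAN and style transfer network play the role of this regulariser in a learned, image-conditioned way.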
Journal introduction:
IET Communications covers fundamental and generic research aimed at a better understanding of communication technologies and at harnessing signals to build better-performing communication systems over wired and/or wireless media. The journal is particularly interested in research papers reporting novel solutions to the dominant problems of noise, interference, timing, and errors, and to the reduction of system deficiencies such as the waste of scarce resources, including spectrum, energy, and bandwidth.
Topics include, but are not limited to:
Coding and Communication Theory;
Modulation and Signal Design;
Wired, Wireless and Optical Communication;
Communication Systems.
Special Issues. Current Call for Papers:
Cognitive and AI-enabled Wireless and Mobile - https://digital-library.theiet.org/files/IET_COM_CFP_CAWM.pdf
UAV-Enabled Mobile Edge Computing - https://digital-library.theiet.org/files/IET_COM_CFP_UAV.pdf