Adversarial Neon Beam: Robust Physical-World Adversarial Attack to DNNs

Chen-Hao Hu, Kalibinuer Tiliwalidi

ArXiv, 2022-01-01. DOI: 10.48550/arXiv.2204.00853 (https://doi.org/10.48550/arXiv.2204.00853)
Citations: 1
Abstract
In the physical world, light affects the performance of deep neural networks. Many products based on deep neural networks are now in daily use, yet little research has examined how light affects the performance of these models. The adversarial perturbations generated by light, however, may have extremely dangerous effects on such systems. In this work, we propose an attack method called adversarial neon beam (AdvNB), which executes a physical attack by obtaining the physical parameters of adversarial neon beams with very few queries. Experiments show that our algorithm achieves state-of-the-art attack performance in both digital and physical tests: a 99.3% attack success rate in the digital environment and a 100% attack success rate in the physical environment. Compared with the most advanced physical attack methods, our method achieves better concealment of the physical perturbation. In addition, by analyzing the experimental data, we reveal several new phenomena produced by the adversarial neon beam attack. Visual comparison: the adversarial perturbations generated by RP2 [25] are captured well by the camera and achieve a good adversarial attack effect, but fail to achieve good concealment. The perturbations generated by AdvLB [29] are similarly difficult to conceal. In contrast, the perturbations generated by AdvNB achieve both a higher attack success rate and better concealment.
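The abstract describes a query-based search over the physical parameters of a simulated neon beam. The paper's actual optimization procedure is not given here, so the following is only a minimal sketch under stated assumptions: the beam is modeled as a bright straight stripe parameterized by position, angle, width, and color, and a simple random search queries a black-box classifier until the predicted label flips or the query budget is spent. The `render_beam` and `advnb_random_search` names, the alpha-blending model, and the toy search strategy are all hypothetical stand-ins, not the authors' method.

```python
import numpy as np

def render_beam(image, x0, y0, angle, width, color, alpha=0.7):
    """Overlay a straight neon-beam stripe on an (H, W, 3) image in [0, 1].

    Hypothetical beam model: alpha-blend a bright color onto all pixels
    within `width` of the line through (x0, y0) at the given angle.
    """
    h, w, _ = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Perpendicular distance of each pixel from the beam's center line.
    nx, ny = -np.sin(angle), np.cos(angle)
    dist = np.abs((xs - x0) * nx + (ys - y0) * ny)
    mask = (dist < width).astype(float)[..., None]
    beam = np.asarray(color, dtype=float)
    return np.clip(image * (1.0 - alpha * mask) + beam * alpha * mask, 0.0, 1.0)

def advnb_random_search(image, predict, true_label, n_queries=200, seed=0):
    """Query-limited random search over beam parameters (a stand-in for the
    paper's few-query optimization; the real AdvNB procedure may differ).

    `predict(img)` -> (label, confidence_in_true_label). Returns the first
    label-flipping image found, else the lowest-confidence candidate.
    """
    rng = np.random.default_rng(seed)
    h, w, _ = image.shape
    best = None
    for _ in range(n_queries):
        params = dict(
            x0=rng.uniform(0, w), y0=rng.uniform(0, h),
            angle=rng.uniform(0, np.pi), width=rng.uniform(2, 8),
            color=rng.uniform(0.5, 1.0, size=3),
        )
        adv = render_beam(image, **params)
        label, conf = predict(adv)
        if label != true_label:
            return adv, params  # attack succeeded within the query budget
        if best is None or conf < best[0]:
            best = (conf, adv, params)
    return best[1], best[2]
```

A usage sketch with a toy brightness-threshold "classifier" (any real target model would replace `predict`): start from a dark image whose true label is 0, then search for beam parameters that lower the model's confidence in that label.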