Stealthy Vehicle Adversarial Camouflage Texture Generation Based on Neural Style Transfer
Wei Cai, Xingyu Di, Xin Wang, Weijie Gao, Haoran Jia
Entropy, vol. 26, no. 11 (published 2024-10-24)
DOI: 10.3390/e26110903
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11592712/pdf/
Citations: 0
Abstract
Adversarial attacks that mislead deep neural networks (DNNs) into making incorrect predictions can also be carried out in the physical world. However, most existing adversarial camouflage textures that attack object detection models consider only the effectiveness of the attack and ignore its stealthiness, so the generated textures appear abrupt to human observers. To address this issue, we propose adding a style transfer module to an adversarial texture generation framework. By computing the style loss between the texture and a specified style image, the module guides the generated adversarial texture toward good stealthiness, making it hard to detect by both DNNs and human observers in specific scenes. Experiments show that, in both the digital and physical worlds, the full-coverage vehicle adversarial camouflage texture we create has good stealthiness and can effectively fool advanced DNN object detectors while evading human observers in specific scenes.
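The style loss mentioned in the abstract is the standard neural-style-transfer loss: the mean squared difference between Gram matrices of feature maps extracted from the texture and from the style image. The sketch below illustrates that computation with NumPy arrays standing in for DNN feature maps; in the paper's framework the features would presumably come from a pretrained convolutional network, and the function names here are illustrative, not the authors' actual API.

```python
import numpy as np

def gram_matrix(features: np.ndarray) -> np.ndarray:
    """Gram matrix of one feature map with shape (channels, height, width).

    Each entry measures the correlation between two channels, which is
    what style transfer uses to characterize texture/style.
    """
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return (f @ f.T) / (c * h * w)  # normalize by feature-map size

def style_loss(texture_feats, style_feats) -> float:
    """Sum over layers of the MSE between the two Gram matrices."""
    return sum(
        float(np.mean((gram_matrix(t) - gram_matrix(s)) ** 2))
        for t, s in zip(texture_feats, style_feats)
    )

# Toy example: random "feature maps" from two layers of a hypothetical network.
rng = np.random.default_rng(0)
texture_feats = [rng.normal(size=(8, 16, 16)), rng.normal(size=(16, 8, 8))]
style_feats = [rng.normal(size=(8, 16, 16)), rng.normal(size=(16, 8, 8))]

loss_same = style_loss(texture_feats, texture_feats)  # identical features -> 0
loss_diff = style_loss(texture_feats, style_feats)    # differing styles -> positive
```

Minimizing this loss (alongside the attack objective against the detector) pulls the texture's feature statistics toward those of the chosen style image, which is how the method trades off attack strength against visual stealthiness.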
Journal Information
Entropy (ISSN 1099-4300) is an international and interdisciplinary journal of entropy and information studies that publishes reviews, regular research papers, and short notes. Our aim is to encourage scientists to publish their theoretical and experimental work in as much detail as possible. There is no restriction on the length of papers. Where computations or experiments are involved, sufficient details must be provided so that the results can be reproduced.