An Adversarial Attack on Salient Regions of Traffic Sign
Jun Yan, Huilin Yin, Bin Ye, Wanchen Ge, Hao Zhang, Gerhard Rigoll
Automotive Innovation, vol. 6, no. 2, pp. 190–203, 10 April 2023. DOI: 10.1007/s42154-023-00220-9
Abstract
State-of-the-art deep neural networks are vulnerable to adversarial examples with small-magnitude perturbations. In the field of deep-learning-based automated driving, such adversarial threats expose the weakness of AI models. This limitation can lead to severe issues regarding the safety of the intended functionality (SOTIF) in automated driving. From the perspective of causality, adversarial attacks can be regarded as confounding effects that exploit spurious correlations established by non-causal features. However, few previous studies have examined the relationship between adversarial examples, causality, and SOTIF. This paper proposes a robust physical adversarial perturbation generation method that targets the salient image regions of the attacked class under the guidance of class activation mapping (CAM). Using CAM, the confounding effect is maximized through the intermediate variable of the front-door criterion between images and targeted attack labels. In the simulation experiment, the proposed method achieved a 94.6% targeted attack success rate (ASR) on the released dataset when speed-limit-60 km/h (speed-limit-60) signs were attacked so as to be misclassified as speed-limit-80 km/h (speed-limit-80) signs. In the real-world physical experiment, the targeted ASR is 75% and the untargeted ASR is 100%. Beyond these state-of-the-art attack results, detailed experiments evaluate the performance of the proposed method under low resolutions, diverse optimizers, and various defense methods. The code and data are released at: https://github.com/yebin999/rp2-with-cam.
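For intuition, here is a minimal, purely digital sketch of the core idea: a Grad-CAM saliency mask for the target class confines an iterative targeted perturbation to the salient regions of the sign. It assumes a generic PyTorch image classifier; the function names, hyperparameters (`eps`, `alpha`, `steps`, the 0.5 mask threshold), and the plain PGD-style update are illustrative assumptions, not the authors' released implementation (see the repository above for that).

```python
# Illustrative sketch (not the authors' code): restrict a targeted
# perturbation to the pixels that Grad-CAM marks as salient for the
# target class. Assumes x has shape [1, 3, H, W] with values in [0, 1].
import torch
import torch.nn.functional as F

def grad_cam_mask(model, feat_layer, x, target, thresh=0.5):
    """Binary saliency mask from Grad-CAM of the target class."""
    feats = {}
    def hook(_module, _inputs, output):
        feats["a"] = output
        output.retain_grad()                       # keep grad of feature map
    handle = feat_layer.register_forward_hook(hook)
    logits = model(x)
    handle.remove()
    logits[0, target].backward()                   # grads w.r.t. feature map
    a, g = feats["a"], feats["a"].grad
    w = g.mean(dim=(2, 3), keepdim=True)           # channel importance weights
    cam = F.relu((w * a).sum(dim=1, keepdim=True)) # [1, 1, h, w] heat map
    cam = F.interpolate(cam, x.shape[-2:], mode="bilinear", align_corners=False)
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
    return (cam >= thresh).float()                 # 1 on salient pixels

def cam_guided_attack(model, feat_layer, x, target,
                      eps=8 / 255, alpha=1 / 255, steps=100):
    """PGD-style targeted attack applied only inside the CAM mask."""
    mask = grad_cam_mask(model, feat_layer, x, target)
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta * mask),
                               torch.tensor([target], device=x.device))
        loss.backward()
        with torch.no_grad():
            delta -= alpha * delta.grad.sign()     # descend toward target class
            delta.clamp_(-eps, eps)                # L-infinity budget
        delta.grad.zero_()
    return (x + delta * mask).clamp(0, 1)
```

A physical attack such as the RP2-based method described here would additionally optimize over a distribution of transformations (viewing angle, lighting, printability constraints) so the perturbation survives in the real world; the sketch above shows only the CAM-guided masking step.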
About the journal:
Automotive Innovation is dedicated to publishing innovative findings in the automotive field and related disciplines, covering principles, methodologies, theoretical studies, experimental studies, product engineering, and engineering applications. The main topics include, but are not limited to: energy saving, electrification, intelligent and connected vehicles, new energy vehicles, safety, and lightweight technologies. The journal presents the latest trends and advances in automotive technology.