Light can be Dangerous: Stealthy and Effective Physical-world Adversarial Attack by Spot Light
Yufeng LI, Fengyu YANG, Qi LIU, Jiangtao LI, Chenhong CAO
Computers & Security, Volume 132, Article 103345, September 2023. DOI: 10.1016/j.cose.2023.103345
Citations: 0
Abstract
As machine learning models spread across industries, a growing body of research has demonstrated their vulnerability to adversarial examples (AE). Realizing physically robust AE that survive real-world environmental conditions faces challenges such as varied viewing distances and angles. Laser beam-based methods claim to avoid the drawbacks of adversarial patches, which are conspicuous, semi-permanent, and unchangeable once applied. However, laser beam-based AE cannot be captured by the camera in daylight, which limits their application scenarios. In this research, we introduce Adversarial Spot Light (AdvSL), a novel approach that enables adversaries to build physically robust real-world AE using spotlight flashlights. Since the flashlights can be switched on and off as required, AdvSL allows adversaries to perform more flexible attacks than adversarial patches. In particular, AdvSL is feasible under a variety of ambient light conditions. As a first step, we model a spot light with a set of parameters that can be physically controlled by the adversary. To determine the optimal parameters for the light, a heuristic optimization approach is adopted; we further use the k-random-restart technique to prevent the search from getting stuck in a local optimum. To demonstrate the effectiveness of the proposed approach, we conduct experiments under different physical conditions, including indoor and outdoor tests. In the digital test, AdvSL causes misclassifications on state-of-the-art neural networks with an attack success rate of up to 93.7%. In the outdoor test, AdvSL causes misclassifications on the traffic sign classification model with an attack success rate of up to 84%. In the physical setting, experiments show that AdvSL is robust in non-bright settings and feasible in bright settings.
Finally, we discuss defenses against AdvSL and evaluate an adaptive defender using adversarial learning, which reduces the attack success rate from 92.2% to 54.8% in the digital setting.
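The abstract describes searching for spot-light parameters with a heuristic optimizer and k random restarts to escape local optima. A minimal sketch of that search strategy is below; the parameter names (position, radius, intensity), ranges, the hill-climbing inner loop, and the toy objective are all illustrative assumptions, not the paper's actual formulation, which would score candidate lights against a target classifier's loss.

```python
import random

random.seed(0)  # for reproducibility of this sketch

# Hypothetical spot-light parameters an adversary could physically control;
# the paper's actual parameterization is not specified here.
PARAM_RANGES = {
    "x": (0.0, 1.0),          # normalized horizontal position of the spot
    "y": (0.0, 1.0),          # normalized vertical position of the spot
    "radius": (0.05, 0.5),    # spot radius relative to image size
    "intensity": (0.1, 1.0),  # relative brightness of the light
}

def random_params():
    """Sample a random starting point within the allowed ranges."""
    return {k: random.uniform(lo, hi) for k, (lo, hi) in PARAM_RANGES.items()}

def perturb(params, step=0.05):
    """Propose a neighbor by jittering each parameter, clipped to its range."""
    new = {}
    for k, (lo, hi) in PARAM_RANGES.items():
        v = params[k] + random.uniform(-step, step)
        new[k] = min(max(v, lo), hi)
    return new

def hill_climb(score_fn, iters=300):
    """Greedy local search: keep a candidate only if it improves the score."""
    best = random_params()
    best_score = score_fn(best)
    for _ in range(iters):
        cand = perturb(best)
        s = score_fn(cand)
        if s > best_score:
            best, best_score = cand, s
    return best, best_score

def k_random_restart(score_fn, k=5, iters=300):
    """Run the local search from k random starts and keep the overall best,
    reducing the chance of getting stuck in a single local optimum."""
    runs = [hill_climb(score_fn, iters) for _ in range(k)]
    return max(runs, key=lambda r: r[1])

# Toy stand-in for the adversarial objective (in the attack this would be
# the target model's misclassification loss for the lit image).
def toy_score(p):
    return -(p["x"] - 0.7) ** 2 - (p["y"] - 0.3) ** 2 + 0.1 * p["intensity"]

best, score = k_random_restart(toy_score)
```

In the real attack, `toy_score` would render the parameterized spot onto a captured image and query the victim classifier, so the restart count trades query budget against robustness to local optima.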
About the journal:
Computers & Security is the most respected technical journal in the IT security field. With its high-profile editorial board and informative regular features and columns, the journal is essential reading for IT security professionals around the world.
Computers & Security provides you with a unique blend of leading-edge research and sound practical management advice. It is aimed at professionals involved with computer security, audit, control and data integrity in all sectors - industry, commerce and academia. Recognized worldwide as THE primary source of reference for applied research and technical expertise, it is your first step to fully secure systems.