Light can be Dangerous: Stealthy and Effective Physical-world Adversarial Attack by Spot Light

Computers & Security, Vol. 132, Article 103345 · IF 5.4 · JCR Q1 (Computer Science, Information Systems) · CAS Zone 2 (Computer Science) · Published: 2023-09-01 · DOI: 10.1016/j.cose.2023.103345
Yufeng Li, Fengyu Yang, Qi Liu, Jiangtao Li, Chenhong Cao
Citations: 0

Abstract

With the deployment of machine learning models across industries, there has been a corresponding increase in research demonstrating their vulnerability to adversarial examples (AE). Realizing physically robust AE that survive real-world environmental conditions faces challenges such as varied viewing distances and angles. Laser-beam-based methods claim to overcome the obviousness, semi-permanence, and immutability drawbacks of adversarial patches. However, laser-beam-based AE cannot be captured by cameras in daylight, which limits their application scenarios. In this research, we introduce Adversarial Spot Light (AdvSL), a novel approach that enables adversaries to build physically robust real-world AE using spotlight flashlights. Since spotlight flashlights can be switched on and off as required, AdvSL allows adversaries to perform more flexible attacks than adversarial patches. Notably, AdvSL is feasible under a variety of ambient light conditions. As a first step, we model a spot light with a set of parameters that the adversary can physically control. To determine the optimal parameters for the light, a heuristic optimization approach is adopted. Further, we use the k-random-restart technique to prevent the search from becoming stuck in a local optimum. To demonstrate the effectiveness of the proposed approach, we conduct experiments under different physical conditions, including indoor and outdoor tests. In the digital test, AdvSL causes misclassifications on state-of-the-art neural networks with an attack success rate of up to 93.7%. In the outdoor test, AdvSL causes misclassifications on a traffic sign classification model with an attack success rate of up to 84%. In the physical setting, experiments show that AdvSL is robust in non-bright conditions and remains feasible in bright conditions.
Finally, we discuss defenses against AdvSL and evaluate an adaptive defender based on adversarial learning, which reduces the attack success rate from 92.2% to 54.8% in the digital setting.
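The abstract describes optimizing physically controllable spotlight parameters with a heuristic search plus k random restarts to escape local optima. The paper's exact parameterization and objective are not given here; the following is a minimal sketch of that general technique, with a hypothetical four-parameter light model (spot center, radius, intensity) and a toy stand-in objective in place of querying a real classifier.

```python
import random

def toy_attack_objective(params):
    """Hypothetical stand-in for the attack loss (higher = stronger attack).
    A real attacker would render the spot onto the image and query the model."""
    x, y, r, i = params
    # A smooth toy surface whose maximum (0.0) lies at (0.3, 0.7, 0.2, 0.8).
    return -((x - 0.3) ** 2 + (y - 0.7) ** 2 + (r - 0.2) ** 2 + (i - 0.8) ** 2)

def hill_climb(objective, start, step=0.05, iters=200, rng=random):
    """Simple local heuristic search: accept random perturbations that improve."""
    best, best_val = start, objective(start)
    for _ in range(iters):
        cand = tuple(min(1.0, max(0.0, p + rng.uniform(-step, step))) for p in best)
        val = objective(cand)
        if val > best_val:
            best, best_val = cand, val
    return best, best_val

def k_random_restart(objective, k=10, dim=4, seed=0):
    """Run the local search from k random starting points; keep the best run.
    Restarting from fresh random points is what lets the search escape a
    local optimum that any single run might get stuck in."""
    rng = random.Random(seed)
    best, best_val = None, float("-inf")
    for _ in range(k):
        start = tuple(rng.random() for _ in range(dim))
        cand, val = hill_climb(objective, start, rng=rng)
        if val > best_val:
            best, best_val = cand, val
    return best, best_val

params, score = k_random_restart(toy_attack_objective)
```

The names and the objective here are illustrative only; the key structural point is the outer restart loop wrapped around an ordinary local heuristic search.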

Source journal: Computers & Security (Engineering/Technology – Computer Science: Information Systems)
CiteScore: 12.40 · Self-citation rate: 7.10% · Articles per year: 365 · Review time: 10.7 months
About the journal: Computers & Security is the most respected technical journal in the IT security field. With its high-profile editorial board and informative regular features and columns, the journal is essential reading for IT security professionals around the world. Computers & Security provides a unique blend of leading-edge research and sound practical management advice. It is aimed at professionals involved with computer security, audit, control, and data integrity in all sectors: industry, commerce, and academia. Recognized worldwide as the primary source of reference for applied research and technical expertise, it is your first step to fully secure systems.