Traffic Sign Classifiers Under Physical World Realistic Sticker Occlusions: A Cross Analysis Study

Yasin Bayzidi, Alen Smajic, Fabian Hüger, Ruby L. V. Moritz, Serin Varghese, Peter Schlicht, Alois Knoll
{"title":"Traffic Sign Classifiers Under Physical World Realistic Sticker Occlusions: A Cross Analysis Study","authors":"Yasin Bayzidi, Alen Smajic, Fabian Hüger, Ruby L. V. Moritz, Serin Varghese, Peter Schlicht, Alois Knoll","doi":"10.1109/iv51971.2022.9827143","DOIUrl":null,"url":null,"abstract":"Recent adversarial attacks with real world applications are capable of deceiving deep neural networks (DNN), which often appear as printed stickers applied to objects in physical world. Though achieving high success rate in lab tests and limited field tests, such attacks have not been tested on multiple DNN architectures with a standard setup to unveil the common robustness and weakness points of both the DNNs and the attacks. Furthermore, realistic looking stickers applied by normal people as acts of vandalism are not studied to discover their potential risks as well the risk of optimizing the location of such realistic stickers to achieve the maximum performance drop. In this paper, (a) we study the case of realistic looking sticker application effects on traffic sign detectors performance; (b) we use traffic sign image classification as our use case and train and attack 11 of the modern architectures for our analysis; (c) by considering different factors like brightness, blurriness and contrast of the train images in our sticker application procedure, we show that simple image processing techniques can help realistic looking stickers fit into their background to mimic real world tests; (d) by performing structured synthetic and real-world evaluations, we study the difference of various traffic sign classes in terms of their crucial distinctive features among the tested DNNs.","PeriodicalId":184622,"journal":{"name":"2022 IEEE Intelligent Vehicles Symposium (IV)","volume":"19 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE Intelligent Vehicles Symposium (IV)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/iv51971.2022.9827143","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 4

Abstract

Recent adversarial attacks with real-world applications are capable of deceiving deep neural networks (DNNs); such attacks often take the form of printed stickers applied to objects in the physical world. Although they achieve high success rates in lab tests and limited field tests, these attacks have not been evaluated on multiple DNN architectures under a standardized setup to reveal the common robustness and weakness points of both the DNNs and the attacks. Furthermore, realistic-looking stickers applied by ordinary people as acts of vandalism have not been studied to uncover their potential risks, nor has the risk of optimizing the placement of such stickers to achieve the maximum performance drop. In this paper, (a) we study the effect of realistic-looking sticker application on traffic sign classifier performance; (b) we use traffic sign image classification as our use case and train and attack 11 modern architectures for our analysis; (c) by taking into account factors such as the brightness, blurriness, and contrast of the training images in our sticker application procedure, we show that simple image processing techniques can help realistic-looking stickers blend into their background and thereby mimic real-world tests; (d) by performing structured synthetic and real-world evaluations, we study how the various traffic sign classes differ in their crucial distinguishing features across the tested DNNs.
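As a concrete illustration of point (c), the sketch below shows one way such a sticker application procedure could look: the sticker's brightness and contrast are matched to the region of the sign it covers, a mild blur approximates the photograph's focus, and the result is alpha-composited onto the sign crop. This is a minimal sketch based on the abstract only; the function names (apply_sticker, worst_case_position), the mean/std matching, the Gaussian blur, the model_fn interface, and all parameters are assumptions for illustration, not the authors' published implementation. The second helper mirrors the placement-optimization idea mentioned above by grid-searching positions for the one that most reduces the classifier's score for the true class.

Python sketch (assuming OpenCV and NumPy):

import cv2
import numpy as np


def apply_sticker(sign_bgr, sticker_bgra, top_left, blur_ksize=3):
    # Composite a BGRA sticker onto a BGR sign crop after simple appearance matching.
    y, x = top_left
    h, w = sticker_bgra.shape[:2]
    out = sign_bgr.copy()
    roi = out[y:y + h, x:x + w].astype(np.float32)

    sticker = sticker_bgra[..., :3].astype(np.float32)
    alpha = sticker_bgra[..., 3:4].astype(np.float32) / 255.0

    # Brightness/contrast matching: align the sticker's mean and standard
    # deviation with those of the background region it will cover.
    roi_mean, roi_std = roi.mean(), roi.std() + 1e-6
    stk_mean, stk_std = sticker.mean(), sticker.std() + 1e-6
    sticker = np.clip((sticker - stk_mean) / stk_std * roi_std + roi_mean, 0, 255)

    # Mild blur so the sticker shares the blurriness of the photographed sign
    # (kernel size must be odd).
    sticker = cv2.GaussianBlur(sticker, (blur_ksize, blur_ksize), 0)

    # Standard alpha compositing.
    out[y:y + h, x:x + w] = (alpha * sticker + (1.0 - alpha) * roi).astype(np.uint8)
    return out


def worst_case_position(model_fn, sign_bgr, sticker_bgra, true_class, stride=8):
    # Grid-search sticker positions and return the one that most reduces the
    # score of the true class. model_fn is any callable mapping a BGR image
    # to a vector of class probabilities (a hypothetical interface, not tied
    # to the paper's models).
    h, w = sticker_bgra.shape[:2]
    H, W = sign_bgr.shape[:2]
    best_pos, best_score = (0, 0), float("inf")
    for y in range(0, H - h + 1, stride):
        for x in range(0, W - w + 1, stride):
            score = model_fn(apply_sticker(sign_bgr, sticker_bgra, (y, x)))[true_class]
            if score < best_score:
                best_pos, best_score = (y, x), score
    return best_pos, best_score

Per-channel statistics or histogram matching would be natural refinements of the appearance matching step; the paper's actual procedure may differ from this sketch.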