Research on a Method of Defense Adversarial Samples for Target Detection Model of Driverless Cars

Ruzhi Xu, Min Li, Xin Yang, Dexin Liu, Dawei Chen
{"title":"Research on a Method of Defense Adversarial Samples for Target Detection Model of Driverless Cars","authors":"Ruzhi Xu, Min Li, Xin Yang, Dexin Liu, Dawei Chen","doi":"10.34028/iajit/20/5/6","DOIUrl":null,"url":null,"abstract":"The adversarial examples make the object detection model make a wrong judgment, which threatens the security of driverless cars. In this paper, by improving the Momentum Iterative Fast Gradient Sign Method (MI-FGSM), based on ensemble learning, combined with L∞ perturbation and spatial transformation, a strong transferable black-box adversarial attack algorithm for object detection model of driverless cars is proposed. Through a large number of experiments on the nuScenes driverless dataset, it is proved that the adversarial attack algorithm proposed in this paper have strong transferability, and successfully make the mainstream object detection models such as FasterRcnn, SSD, YOLOv3 make wrong judgments. Based on the adversarial attack algorithm proposed in this paper, the parametric noise injection with adversarial training is performed to generate a defense model with strong robustness. The defense model proposed in this paper significantly improves the robustness of the object detection model. It can effectively alleviate various adversarial attacks against the object detection model of driverless cars, and does not affect the accuracy of clean samples. This is of great significance for studying the application of object detection model of driverless cars in the real physical world.","PeriodicalId":161392,"journal":{"name":"The International Arab Journal of Information Technology","volume":"44 4 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"The International Arab Journal of Information Technology","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.34028/iajit/20/5/6","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Adversarial examples can cause an object detection model to make wrong judgments, which threatens the safety of driverless cars. In this paper, a highly transferable black-box adversarial attack algorithm against the object detection models of driverless cars is proposed by improving the Momentum Iterative Fast Gradient Sign Method (MI-FGSM), building on ensemble learning, and combining L∞ perturbation with spatial transformation. Extensive experiments on the nuScenes driverless dataset show that the proposed attack algorithm has strong transferability and successfully causes mainstream object detection models such as Faster R-CNN, SSD, and YOLOv3 to make wrong judgments. Based on the proposed attack algorithm, parametric noise injection combined with adversarial training is then used to produce a defense model with strong robustness. The proposed defense model significantly improves the robustness of the object detection model: it effectively mitigates various adversarial attacks against the object detection models of driverless cars while leaving accuracy on clean samples unaffected. This is of great significance for studying the application of object detection models for driverless cars in the real physical world.
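As a concrete illustration of the gradient-based core of the attack, the following is a minimal sketch of MI-FGSM in PyTorch. The names `model` and `loss_fn`, and all hyperparameter values, are placeholders; the sketch does not reproduce the paper's specific improvements (spatial transformation, ensemble weighting).

```python
import torch

def mi_fgsm(model, x, y, loss_fn, eps=8/255, alpha=2/255, steps=10, mu=1.0):
    """Minimal MI-FGSM sketch: momentum-accumulated signed gradient ascent
    under an L-infinity budget eps. Hyperparameters are illustrative."""
    x_adv = x.clone().detach()
    g = torch.zeros_like(x)                      # momentum accumulator
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), y)          # detector loss on current sample
        grad, = torch.autograd.grad(loss, x_adv)
        g = mu * g + grad / grad.abs().mean()    # L1-normalize gradient, add momentum
        x_adv = x_adv.detach() + alpha * g.sign()
        # project back into the eps-ball around x and the valid pixel range
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0.0, 1.0)
    return x_adv.detach()
```

In the ensemble black-box setting described in the abstract, the loss could be a weighted sum over several surrogate detectors, so that the accumulated gradient transfers better to unseen models.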
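For the defense side, the abstract combines parametric noise injection with adversarial training. Below is a hedged sketch of one common formulation of parametric noise injection: a learnable coefficient scales Gaussian noise, matched to the weight magnitudes, that is injected at every forward pass. The class name and the initial value of the coefficient are assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PNIConv2d(nn.Conv2d):
    """Sketch of parametric noise injection for a convolution layer: Gaussian
    noise scaled by the weights' standard deviation and a learnable
    coefficient alpha is added to the weights on every forward pass."""
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.alpha = nn.Parameter(torch.tensor(0.25))   # learned noise scale

    def forward(self, x):
        # inject weight noise scaled by the (detached) weight std
        noise = torch.randn_like(self.weight) * self.weight.detach().std()
        return F.conv2d(x, self.weight + self.alpha * noise, self.bias,
                        self.stride, self.padding, self.dilation, self.groups)
```

Adversarial training would then minimize a mix of clean and adversarial losses (for example, on examples generated by the attack sketch above), so that the noisy layers and the rest of the detector learn robustness without degrading clean-sample accuracy.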