Adversarial robustness analysis of LiDAR-included models in autonomous driving

Bo Yang, Zizhi Jin, Yushi Cheng, Xiaoyu Ji, Wenyuan Xu

High-Confidence Computing, Volume 4, Issue 1, March 2024, Article 100203. DOI: 10.1016/j.hcc.2024.100203
In autonomous driving systems, perception is pivotal, relying chiefly on sensors such as LiDAR and cameras for environmental awareness. LiDAR, valued for its detailed depth perception, is being increasingly integrated into autonomous vehicles. In this article, we analyze the robustness of four LiDAR-included models against adversarial points under physical constraints. We first introduce an attack technique that, by adding only a limited number of physically constrained adversarial points above a vehicle, can make the vehicle undetectable by the LiDAR-included models. Experiments reveal that adversarial points degrade the detection capabilities of both LiDAR-only and LiDAR–camera fusion models, with attack success rates rising as more adversarial points are added. Notably, voxel-based models are more susceptible to deception by these adversarial points. We also investigate how the distance and angle of the added adversarial points affect the attack success rate: in general, the farther the victim object is from the LiDAR and the more directly it sits in front of the LiDAR, the higher the attack success rate. Additionally, we experimentally show that the generated adversarial points transfer well across models, and we validate the effectiveness of our proposed optimization method through ablation studies. Furthermore, we propose a new plug-and-play, model-agnostic defense method based on the concept of point smoothness. The ROC curve of this defense method yields an AUC of approximately 0.909, demonstrating its effectiveness.
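The abstract does not spell out the point-smoothness formulation, so the following is only a minimal sketch of what such a plug-and-play, model-agnostic detector could look like. It assumes (this is our assumption, not the authors' published method) that each point is scored by its residual from a local plane fitted through its k nearest neighbors, so that injected points floating above a vehicle violate local surface smoothness and score high. The function name `smoothness_scores`, the choice k = 10, and the toy data are all illustrative.

```python
# Hypothetical point-smoothness detector: scores each LiDAR point by its
# deviation from a local plane fit. This is a sketch under stated
# assumptions, not the paper's implementation.
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.metrics import roc_auc_score

def smoothness_scores(points: np.ndarray, k: int = 10) -> np.ndarray:
    """Score each point by its distance to the plane fitted through its
    k nearest neighbors; smooth real surfaces score low, floating
    injected points score high."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(points)
    _, idx = nn.kneighbors(points)          # idx[:, 0] is the point itself
    scores = np.empty(len(points))
    for i, neigh in enumerate(idx):
        nbrs = points[neigh[1:]]            # k neighbors, excluding self
        centroid = nbrs.mean(axis=0)
        # Smallest right-singular vector of the centered neighbors
        # approximates the local surface normal.
        _, _, vt = np.linalg.svd(nbrs - centroid, full_matrices=False)
        normal = vt[-1]
        # Residual of the query point from the neighbors' plane.
        scores[i] = abs(np.dot(points[i] - centroid, normal))
    return scores

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy scene: a near-planar patch (e.g., a car roof) ...
    real = np.column_stack([rng.uniform(0, 2, 500),
                            rng.uniform(0, 2, 500),
                            rng.normal(0.0, 0.01, 500)])
    # ... plus a sparse cluster of points injected above it.
    adv = np.column_stack([rng.uniform(0.5, 1.5, 30),
                           rng.uniform(0.5, 1.5, 30),
                           rng.uniform(0.3, 0.8, 30)])
    cloud = np.vstack([real, adv])
    labels = np.r_[np.zeros(len(real)), np.ones(len(adv))]
    auc = roc_auc_score(labels, smoothness_scores(cloud))
    print(f"toy ROC AUC: {auc:.3f}")  # paper reports ~0.909 on real data
```

In a real pipeline such a detector would threshold the per-point scores and drop or flag anomalous points before they reach the detection model, which is what makes the defense plug-and-play and model-agnostic; the AUC of about 0.909 reported in the paper refers to the authors' own formulation and benchmark, not this toy example.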