Attack End-to-End Autonomous Driving through Module-Wise Noise
Lu Wang, Tianyuan Zhang, Yikai Han, Muyang Fang, Ting Jin, Jiaqi Kang
arXiv - CS - Machine Learning, published 2024-09-12. DOI: arxiv-2409.07706
Abstract
With recent breakthroughs in deep neural networks, numerous tasks within autonomous driving have achieved remarkable performance. However, deep learning models are susceptible to adversarial attacks, posing significant security risks to autonomous driving systems. End-to-end architectures have become the predominant solution for autonomous driving owing to their collaborative nature across different tasks, yet the implications of adversarial attacks on such models remain largely unexplored. In this paper, we present the first comprehensive study of adversarial security for modular end-to-end autonomous driving models. We systematically examine the potential vulnerabilities in the model's inference process and design a universal attack scheme based on module-wise noise injection. Large-scale experiments on a full-stack autonomous driving model demonstrate that our attack outperforms previous attack methods. We hope that our research will offer fresh insights into ensuring the safety and reliability of autonomous driving systems.
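
The abstract does not detail the attack procedure, but the general idea of module-wise noise injection can be illustrated with a minimal sketch. The snippet below shows a hypothetical PGD-style optimization of additive perturbations applied to the intermediate outputs of an end-to-end driving model's modules; the module names (`perception`, `prediction`, `planning`), the loss function, and the hyperparameters are illustrative assumptions and are not taken from the paper.

```python
import torch

def module_wise_noise_attack(model, sensor_input, attack_loss_fn,
                             steps=10, alpha=0.01, eps=0.05):
    """Hypothetical sketch: optimize bounded additive noise injected at each
    module boundary of an end-to-end driving model to degrade its planning.
    `model` is assumed to expose perception/prediction/planning stages that
    return tensors; real architectures may differ."""
    # One learnable noise tensor per module output (shapes are assumptions).
    feat = model.perception(sensor_input)              # e.g. BEV features
    pred = model.prediction(feat)                      # e.g. agent forecasts
    noises = [torch.zeros_like(feat, requires_grad=True),
              torch.zeros_like(pred, requires_grad=True)]

    for _ in range(steps):
        # Forward pass with noise injected after each module.
        feat = model.perception(sensor_input) + noises[0]
        pred = model.prediction(feat) + noises[1]
        plan = model.planning(feat, pred)

        # Maximize the attack objective (e.g. deviation of the planned trajectory).
        loss = attack_loss_fn(plan)
        loss.backward()

        with torch.no_grad():
            for n in noises:
                n += alpha * n.grad.sign()             # PGD-style ascent step
                n.clamp_(-eps, eps)                    # keep the noise bounded
                n.grad.zero_()

    return [n.detach() for n in noises]
```

The per-module perturbations returned by this sketch would then be injected at the corresponding module interfaces during inference to evaluate the attack; the paper's actual scheme, losses, and constraints should be taken from the original text.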