Are Existing Road Design Guidelines Suitable for Autonomous Vehicles?

Yang Sun, Christopher M. Poskitt, Jun Sun
arXiv:2409.10562 · arXiv - CS - Software Engineering · Published 2024-09-13

Abstract

The emergence of Autonomous Vehicles (AVs) has spurred research into testing the resilience of their perception systems, i.e. ensuring they are not susceptible to making critical misjudgements. It is important that they are tested not only with respect to other vehicles on the road, but also with respect to objects placed on the roadside. Trash bins, billboards, and greenery are all examples of such objects, typically placed according to guidelines that were developed for the human visual system, and which may not align perfectly with the needs of AVs. Existing tests, however, usually focus on adversarial objects with conspicuous shapes/patches, which are ultimately unrealistic given their unnatural appearances and the need for white-box knowledge. In this work, we introduce a black-box attack on the perception systems of AVs, in which the objective is to create realistic adversarial scenarios (i.e. satisfying road design guidelines) by manipulating the positions of common roadside objects, and without resorting to 'unnatural' adversarial patches. In particular, we propose TrashFuzz, a fuzzing algorithm that searches for scenarios in which the placement of these objects leads to substantial misperceptions by the AV -- such as mistaking a traffic light's colour -- with the overall goal of causing it to violate traffic laws. To ensure the realism of these scenarios, they must satisfy several rules encoding regulatory guidelines about the placement of objects on public streets. We implemented and evaluated these attacks for the Apollo AV platform, finding that TrashFuzz induced it into violating 15 out of 24 different traffic laws.
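To make the style of search described above more concrete, the sketch below shows a minimal black-box fuzzing loop that mutates the placements of roadside objects, discards candidates that fail placement-guideline checks, and keeps scenarios that maximise a misperception score reported by a simulated run. It is an illustrative sketch only: the names (`Placement`, `satisfies_guidelines`, `run_scenario`, `fuzz`), the greedy search strategy, and the example guideline rule are assumptions, not the paper's actual TrashFuzz algorithm, constraint encoding, or Apollo/simulator interface.

```python
import random
from dataclasses import dataclass


@dataclass
class Placement:
    """Position of one roadside object (hypothetical representation)."""
    kind: str   # e.g. "trash_bin", "billboard", "greenery"
    x: float    # lateral offset from the kerb, in metres
    y: float    # longitudinal position along the road, in metres


def satisfies_guidelines(scenario):
    """Stand-in for the road-design-guideline checks: here, a single
    illustrative rule requiring a lateral offset between 0.5 m and 3.0 m."""
    return all(0.5 <= p.x <= 3.0 for p in scenario)


def run_scenario(scenario):
    """Stand-in for executing the scenario in a simulator with the AV stack.
    Returns a dummy (misperception_score, violated_laws) pair so that the
    sketch runs end to end without a simulator."""
    return random.random(), []


def mutate(scenario, step=0.5):
    """Randomly nudge one object's position to produce a new candidate."""
    candidate = [Placement(p.kind, p.x, p.y) for p in scenario]
    target = random.choice(candidate)
    target.x += random.uniform(-step, step)
    target.y += random.uniform(-step, step)
    return candidate


def fuzz(seed_scenario, budget=100):
    """Greedy black-box search: keep mutations that increase the
    misperception score while staying consistent with the guidelines."""
    best, best_score = seed_scenario, 0.0
    violations = set()
    for _ in range(budget):
        candidate = mutate(best)
        if not satisfies_guidelines(candidate):
            continue  # reject unrealistic placements
        score, laws = run_scenario(candidate)
        violations |= set(laws)
        if score > best_score:
            best, best_score = candidate, score
    return best, violations


if __name__ == "__main__":
    seed = [Placement("trash_bin", 1.0, 10.0), Placement("billboard", 2.0, 25.0)]
    scenario, laws = fuzz(seed)
    print(f"best scenario: {scenario}\nlaws violated (dummy run): {laws}")
```

In the paper's setting, the scoring and oracle steps would come from running the scenario against Apollo in simulation and checking the resulting behaviour against encoded traffic laws; the loop above only conveys the overall generate-check-evaluate structure.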