Occlusion Model—A Geometric Sensor Modeling Approach for Virtual Testing of ADAS/AD Functions

IEEE Open Journal of Intelligent Transportation Systems · IF 4.6 · Q2 (Computer Science, Artificial Intelligence)
Pub Date: 2023-06-07 · DOI: 10.1109/OJITS.2023.3283618 · Vol. 4, pp. 439-455
Simon Genser;Stefan Muckenhuber;Christoph Gaisberger;Sarah Haas;Timo Haid
{"title":"遮挡模型——用于ADAS/AD功能虚拟测试的几何传感器建模方法","authors":"Simon Genser;Stefan Muckenhuber;Christoph Gaisberger;Sarah Haas;Timo Haid","doi":"10.1109/OJITS.2023.3283618","DOIUrl":null,"url":null,"abstract":"New advanced driver assistance system/automated driving (ADAS/AD) functions have the potential to significantly enhance the safety of vehicle passengers and road users, while also enabling new transportation applications and potentially reducing CO2 emissions. To achieve the next level of driving automation, i.e., SAE Level-3, physical test drives need to be supplemented by simulations in virtual test environments. A major challenge for today’s virtual test environments is to provide a realistic representation of the vehicle’s perception system (camera, lidar, radar). Therefore, new and improved sensor models are required to perform representative virtual tests that can supplement physical test drives. In this article, we present a computationally efficient, mathematically complete, and geometrically exact generic sensor modeling approach that solves the FOV (field of view) and occlusion task. We also discuss potential extensions, such as bounding-box cropping and sensor-specific, weather-dependent FOV-reduction approaches for camera, lidar, and radar. The performance of the new modeling approach is demonstrated using camera measurements from a test campaign conducted in Hungary in 2020 plus three artificial scenarios (a multi-target scenario with an adjacent truck occluding other road users and two traffic jam situations in which the ego vehicle is either a car or a truck). These scenarios are benchmarked against existing sensor modeling approaches that only exclude objects that are outside the sensor’s maximum detection range or angle. The modeling approach presented can be used as is or provide the basis for a more complex sensor model, as it reduces the number of potentially detectable targets and therefore improves the performance of subsequent simulation steps.","PeriodicalId":100631,"journal":{"name":"IEEE Open Journal of Intelligent Transportation Systems","volume":"4 ","pages":"439-455"},"PeriodicalIF":4.6000,"publicationDate":"2023-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/iel7/8784355/9999144/10146003.pdf","citationCount":"0","resultStr":"{\"title\":\"Occlusion Model—A Geometric Sensor Modeling Approach for Virtual Testing of ADAS/AD Functions\",\"authors\":\"Simon Genser;Stefan Muckenhuber;Christoph Gaisberger;Sarah Haas;Timo Haid\",\"doi\":\"10.1109/OJITS.2023.3283618\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"New advanced driver assistance system/automated driving (ADAS/AD) functions have the potential to significantly enhance the safety of vehicle passengers and road users, while also enabling new transportation applications and potentially reducing CO2 emissions. To achieve the next level of driving automation, i.e., SAE Level-3, physical test drives need to be supplemented by simulations in virtual test environments. A major challenge for today’s virtual test environments is to provide a realistic representation of the vehicle’s perception system (camera, lidar, radar). Therefore, new and improved sensor models are required to perform representative virtual tests that can supplement physical test drives. 
In this article, we present a computationally efficient, mathematically complete, and geometrically exact generic sensor modeling approach that solves the FOV (field of view) and occlusion task. We also discuss potential extensions, such as bounding-box cropping and sensor-specific, weather-dependent FOV-reduction approaches for camera, lidar, and radar. The performance of the new modeling approach is demonstrated using camera measurements from a test campaign conducted in Hungary in 2020 plus three artificial scenarios (a multi-target scenario with an adjacent truck occluding other road users and two traffic jam situations in which the ego vehicle is either a car or a truck). These scenarios are benchmarked against existing sensor modeling approaches that only exclude objects that are outside the sensor’s maximum detection range or angle. The modeling approach presented can be used as is or provide the basis for a more complex sensor model, as it reduces the number of potentially detectable targets and therefore improves the performance of subsequent simulation steps.\",\"PeriodicalId\":100631,\"journal\":{\"name\":\"IEEE Open Journal of Intelligent Transportation Systems\",\"volume\":\"4 \",\"pages\":\"439-455\"},\"PeriodicalIF\":4.6000,\"publicationDate\":\"2023-06-07\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://ieeexplore.ieee.org/iel7/8784355/9999144/10146003.pdf\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Open Journal of Intelligent Transportation Systems\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10146003/\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Open Journal of Intelligent Transportation Systems","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10146003/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

New advanced driver assistance system/automated driving (ADAS/AD) functions have the potential to significantly enhance the safety of vehicle passengers and road users, while also enabling new transportation applications and potentially reducing CO2 emissions. To achieve the next level of driving automation, i.e., SAE Level-3, physical test drives need to be supplemented by simulations in virtual test environments. A major challenge for today’s virtual test environments is to provide a realistic representation of the vehicle’s perception system (camera, lidar, radar). Therefore, new and improved sensor models are required to perform representative virtual tests that can supplement physical test drives. In this article, we present a computationally efficient, mathematically complete, and geometrically exact generic sensor modeling approach that solves the FOV (field of view) and occlusion task. We also discuss potential extensions, such as bounding-box cropping and sensor-specific, weather-dependent FOV-reduction approaches for camera, lidar, and radar. The performance of the new modeling approach is demonstrated using camera measurements from a test campaign conducted in Hungary in 2020 plus three artificial scenarios (a multi-target scenario with an adjacent truck occluding other road users and two traffic jam situations in which the ego vehicle is either a car or a truck). These scenarios are benchmarked against existing sensor modeling approaches that only exclude objects that are outside the sensor’s maximum detection range or angle. The modeling approach presented can be used as is or provide the basis for a more complex sensor model, as it reduces the number of potentially detectable targets and therefore improves the performance of subsequent simulation steps.
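The abstract contrasts the proposed occlusion model with existing sensor models that only discard objects outside the sensor's maximum detection range or opening angle. The Python sketch below is not the paper's algorithm; it is a minimal 2-D illustration, under assumed names and parameters (`Target`, `max_range`, `max_half_angle`, `min_visible_fraction` are all hypothetical), of that baseline FOV filter plus one simple way an angular-interval occlusion check could remove targets shadowed by a closer object, such as the adjacent truck in the multi-target scenario.

```python
import math
from dataclasses import dataclass

# Hypothetical 2-D target description in the sensor coordinate frame
# (sensor at the origin, x-axis along the boresight).
@dataclass
class Target:
    name: str
    x: float      # m, longitudinal position
    y: float      # m, lateral position
    width: float  # m, lateral extent used for the angular-interval check

def in_fov(t: Target, max_range: float, max_half_angle: float) -> bool:
    """Baseline model from the abstract: keep a target only if it lies within
    the sensor's maximum detection range and opening angle."""
    r = math.hypot(t.x, t.y)
    phi = abs(math.atan2(t.y, t.x))
    return r <= max_range and phi <= max_half_angle

def angular_interval(t: Target) -> tuple[float, float]:
    """Azimuth interval subtended by the target, approximated from its width."""
    phi = math.atan2(t.y, t.x)
    half = math.atan2(t.width / 2.0, math.hypot(t.x, t.y))
    return (phi - half, phi + half)

def visible_targets(targets: list[Target], max_range: float, max_half_angle: float,
                    min_visible_fraction: float = 0.5) -> list[Target]:
    """Simplified geometric occlusion check: a closer target shadows the azimuth
    interval it covers; a farther target counts as detectable only if enough of
    its own interval remains unshadowed. Overlapping shadows are summed without
    merging, which is adequate for this sketch but not exact in general."""
    candidates = sorted((t for t in targets if in_fov(t, max_range, max_half_angle)),
                        key=lambda t: math.hypot(t.x, t.y))
    visible, shadows = [], []
    for t in candidates:
        lo, hi = angular_interval(t)
        covered = sum(max(0.0, min(hi, s_hi) - max(lo, s_lo)) for s_lo, s_hi in shadows)
        if (hi - lo) == 0 or 1.0 - covered / (hi - lo) >= min_visible_fraction:
            visible.append(t)
        shadows.append((lo, hi))  # even partly hidden objects cast a shadow
    return visible

if __name__ == "__main__":
    scene = [
        Target("truck", x=10.0, y=3.0, width=2.5),
        Target("cyclist", x=30.0, y=9.0, width=0.8),   # roughly behind the truck
        Target("car", x=40.0, y=-5.0, width=1.8),
    ]
    for t in visible_targets(scene, max_range=100.0, max_half_angle=math.radians(60)):
        print(t.name)   # the occluded cyclist is pruned; truck and car remain
```

The model described in the paper is mathematically complete and geometrically exact; this sketch only conveys why pruning occluded targets reduces the number of candidates handed to subsequent simulation steps.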