
Latest publications from the 2022 IEEE Intelligent Vehicles Symposium (IV)

Sharpness Continuous Path optimization and Sparsification for Automated Vehicles
Pub Date: 2022-06-05 DOI: 10.1109/iv51971.2022.9827011
Mohit Kumar, Peter Strauss, Sven Kraus, Ömer Sahin Tas, C. Stiller
We present a path optimization approach that ensures driveability while considering a vehicle’s lateral dynamics. The lateral dynamics are non-holonomic; therefore, a vehicle cannot follow a path with abrupt changes even with infinitely fast steering. The curvature and sharpness, i.e., the rate of change of curvature with respect to the traveled distance, must be continuous to track a defined reference path efficiently. Existing path optimization techniques typically include sharpness limitations but not sharpness continuity. Sharpness discontinuity is especially problematic for heavy-duty vehicles because their actuator dynamics are even slower than those of cars. We propose an algorithm that constructs a sparsified, sharpness-continuous path for a given reference path while respecting the limits on sharpness and its derivative, which in turn addresses the torque restrictions of the actuator. The sharpness-continuous path needs less steering effort and reduces mechanical stress and fatigue in the steering unit. We compare and present the outcomes for each of the three different types of optimized paths. Simulation results demonstrate that the computed sharpness-continuous path profiles reduce lateral jerk, enhancing comfort and driveability.
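The continuity requirement is easiest to see in a curvature profile built by integration: if the sharpness α(s) = dκ/ds is itself a continuous, bounded function of arc length, the resulting curvature is smooth enough for a finite-torque steering actuator to follow. The sketch below illustrates this constraint structure with a trapezoidal sharpness profile; it is a minimal illustration, not the paper's algorithm, and the limit values are assumed.

```python
import numpy as np

# Minimal sketch (not the paper's algorithm): build a curvature profile
# whose sharpness alpha = d(kappa)/ds is itself continuous by integrating
# a piecewise-linear (trapezoidal) sharpness profile, then verify the limits.

def sharpness_continuous_curvature(s, alpha_peak, ramp):
    """Trapezoidal sharpness profile -> continuous alpha and kappa.

    s          : arc-length samples (m)
    alpha_peak : assumed peak sharpness bound (1/m^2)
    ramp       : arc length over which sharpness ramps up/down (m)
    """
    alpha = np.clip(np.minimum(s / ramp, (s[-1] - s) / ramp), 0.0, 1.0) * alpha_peak
    kappa = np.cumsum(alpha) * (s[1] - s[0])   # kappa(s) = integral of alpha ds
    return kappa, alpha

s = np.linspace(0.0, 40.0, 2001)
kappa, alpha = sharpness_continuous_curvature(s, alpha_peak=0.01, ramp=10.0)

d_alpha = np.gradient(alpha, s)                # derivative of sharpness
assert np.all(np.abs(alpha) <= 0.01 + 1e-9)    # sharpness stays within its limit
print("max |d(alpha)/ds|:", np.abs(d_alpha).max())  # bounded -> finite steering torque
```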
Citations: 0
Virtual Test Scenarios for ADAS: Distance to Real Scenarios Matters!
Pub Date: 2022-06-05 DOI: 10.1109/iv51971.2022.9827170
Mohamed El Mostadi, H. Waeselynck, Jean-Marc Gabriel
Testing in virtual road environments is a widespread approach to validating advanced driver assistance systems (ADAS). A number of automated strategies have been proposed to explore dangerous scenarios, such as search-based strategies guided by fitness functions. However, such strategies are likely to produce many uninteresting scenarios, representing driving situations so extreme that fatal accidents are unavoidable regardless of the ADAS's actions. We propose leveraging datasets from real drives to better align the virtual scenarios with reasonable ones. The alignment is based on a simple distance metric that relates the virtual scenario parameters to the real data. We demonstrate the use of this metric for testing an autonomous emergency braking (AEB) system, taking the highD dataset as a reference for normal situations. We show how search-based testing quickly converges toward very distant scenarios that do not bring much insight into AEB performance. We then provide an example of a distance-aware strategy that searches for less extreme scenarios that the AEB cannot overcome.
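One simple way to realize such a metric is the normalized nearest-neighbor distance between a candidate scenario's parameter vector and a bank of real scenarios. The sketch below assumes this form; the paper's exact metric may differ, and the highD-style parameter names (gap, speeds, deceleration) are illustrative stand-ins.

```python
import numpy as np

# Minimal sketch (assumed form, not the paper's exact metric): score a virtual
# scenario by its normalized distance to the nearest real scenario.

def distance_to_real(virtual, real_bank):
    """virtual: (d,) parameter vector; real_bank: (n, d) real scenarios."""
    mu, sigma = real_bank.mean(axis=0), real_bank.std(axis=0) + 1e-12
    v = (virtual - mu) / sigma                 # normalize each parameter
    r = (real_bank - mu) / sigma
    return np.linalg.norm(r - v, axis=1).min() # nearest-neighbor distance

rng = np.random.default_rng(0)
# Toy "real" bank: gap (m), ego speed (m/s), lead speed (m/s), decel (m/s^2)
real_bank = rng.normal([30.0, 25.0, 20.0, 3.0], [8.0, 4.0, 4.0, 1.0], (5000, 4))
candidate = np.array([5.0, 35.0, 5.0, 9.0])    # extreme cut-in: far from the data
print(distance_to_real(candidate, real_bank))  # large value -> flag as unrealistic
```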
Citations: 5
Fusion Attention Network for Autonomous Cars Semantic Segmentation
Pub Date: 2022-06-05 DOI: 10.1109/iv51971.2022.9827377
Chuyao Wang, N. Aouf
Semantic segmentation is vital for autonomous car scene understanding. It provides more precise subject information than raw RGB images, and this, in turn, boosts the performance of autonomous driving. Recently, self-attention methods have shown great improvements in image semantic segmentation. Attention maps help scene parsing by capturing abundant relationships between every pair of pixels in an image. However, self-attention is computationally demanding. Besides, existing works focus either on channel attention, ignoring pixel position, or on spatial attention, disregarding the channels' influence on one another. To address these problems, we present the Fusion Attention Network, based on the self-attention mechanism, to harvest rich contextual dependencies. The model consists of two chains: pyramid fusion spatial attention and fusion channel attention. We apply pyramid sampling in the spatial attention module to reduce the computation of spatial attention maps. Channel attention has a structure similar to spatial attention. We also introduce a fusion technique to calculate contextual dependencies using features from both attention chains. We concatenate the results from the spatial and channel attention modules into the enhanced attention map, leading to better semantic segmentation results. We conduct extensive experiments on popular datasets with different settings, in addition to an ablation study, to prove the efficiency of our approach. Our model achieves better results on Cityscapes [7] compared to state-of-the-art methods, and also shows good generalization capability on PASCAL VOC 2012 [9].
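To make the cost reduction concrete, the sketch below shows a spatial self-attention block whose keys and values are pyramid-pooled to a few coarse grids, shrinking the attention map from HW x HW to HW x S with S much smaller than HW. This is an assumed reading of the pyramid-sampling idea, not the authors' code; the pool sizes and channel-reduction factor are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Minimal sketch (assumed reading, not the authors' code): spatial
# self-attention with pyramid-pooled keys/values to cut the attention cost.

class PyramidSpatialAttention(nn.Module):
    def __init__(self, channels, pool_sizes=(1, 3, 6, 8)):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // 8, 1)
        self.key = nn.Conv2d(channels, channels // 8, 1)
        self.value = nn.Conv2d(channels, channels, 1)
        self.pool_sizes = pool_sizes

    def _pyramid(self, x):
        # Pool to a few coarse grids and concatenate: S = sum(p*p) << H*W keys.
        pooled = [F.adaptive_avg_pool2d(x, p).flatten(2) for p in self.pool_sizes]
        return torch.cat(pooled, dim=2)                    # (B, C', S)

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)       # (B, HW, C')
        k = self._pyramid(self.key(x))                     # (B, C', S)
        v = self._pyramid(self.value(x))                   # (B, C,  S)
        attn = torch.softmax(q @ k, dim=-1)                # (B, HW, S), not HW x HW
        out = (attn @ v.transpose(1, 2)).transpose(1, 2)   # (B, C, HW)
        return x + out.reshape(b, c, h, w)                 # residual fusion

x = torch.randn(2, 64, 32, 32)
print(PyramidSpatialAttention(64)(x).shape)                # torch.Size([2, 64, 32, 32])
```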
Citations: 2
Non-local Evasive Overtaking of Downstream Incidents in Distributed Behavior Planning of Connected Vehicles
Pub Date: 2022-06-05 DOI: 10.48550/arXiv.2206.14391
Abdul Rahman Kreidieh, Y. Farid, K. Oguchi
The prevalence of high-speed vehicle-to-everything (V2X) communication will likely significantly influence the future of vehicle autonomy. In several autonomous driving applications, however, the role such systems will play is seldom understood. In this paper, we explore the role of communication signals in enhancing the performance of lane change assistance systems in situations where downstream bottlenecks restrict the mobility of a few lanes. Building on prior work on modeling lane change incentives, we design a controller that 1) encourages automated vehicles to subvert lanes in which distant downstream delays are likely to occur, while also 2) ignoring greedy local incentives when such delays are needed to maintain a specific route. Numerical results under different traffic conditions and penetration rates suggest that the model successfully subverts a significant portion of the delays brought about by downstream bottlenecks, both globally and from the perspective of the controlled vehicles.
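A lane-change incentive of this kind can be pictured as a classic egoistic (MOBIL-style) term corrected by a non-local penalty fed from V2X delay reports. The sketch below is a hypothetical shape of such an incentive, not the paper's controller; the weight and tolerance values are assumed.

```python
# Minimal sketch (hypothetical incentive shape, not the paper's controller):
# a MOBIL-style lane-change incentive extended with a non-local penalty for
# candidate lanes whose downstream segments report delays over V2X.

def lane_change_incentive(local_gain, downstream_delay, on_route,
                          w_nonlocal=1.0, route_tolerance=30.0):
    """local_gain       : classic egoistic incentive (m/s^2 equivalent)
    downstream_delay : V2X-reported delay in the candidate lane (s)
    on_route         : True if the candidate lane keeps the planned route
    """
    incentive = local_gain - w_nonlocal * downstream_delay
    if on_route and downstream_delay <= route_tolerance:
        # Tolerate bounded delays when the lane is required by the route,
        # i.e., suppress the greedy local term instead of leaving the route.
        incentive = max(incentive, 0.0)
    return incentive

# A blocked candidate lane loses its appeal despite a local speed gain:
print(lane_change_incentive(local_gain=0.8, downstream_delay=45.0, on_route=False))
```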
Citations: 2
DAROD: A Deep Automotive Radar Object Detector on Range-Doppler maps
Pub Date: 2022-06-05 DOI: 10.1109/iv51971.2022.9827281
Colin Decourt, R. V. Rullen, D. Salle, T. Oberlin
Due to the small number of automotive radar datasets containing raw data and the low resolution of such radar sensors, automotive radar object detection has been little explored with deep learning models in comparison to camera- and lidar-based approaches. However, radars are low-cost sensors able to accurately sense surrounding object characteristics (e.g., distance, radial velocity, direction of arrival, radar cross-section) regardless of weather conditions (e.g., rain, snow, fog). Recent open-source datasets such as CARRADA, RADDet, or CRUW have opened up research on several topics ranging from object classification to object detection and segmentation. In this paper, we present DAROD, an adaptation of the Faster R-CNN object detector to automotive radar range-Doppler spectra. We propose a light architecture for feature extraction, which shows increased performance compared to heavier vision-based backbone architectures. Our models reach an mAP@0.5 of 55.83 and 46.57 on the CARRADA and RADDet datasets, respectively, outperforming competing methods.
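For readers unfamiliar with the input representation, a range-Doppler spectrum is obtained from raw ADC samples by a 2D FFT over fast time (range) and slow time (Doppler). The sketch below shows this standard preprocessing step; it is background material rather than code from the paper, and the frame dimensions are illustrative.

```python
import numpy as np

# Minimal sketch of the standard preprocessing behind a range-Doppler map
# (background for detectors like DAROD, not code from the paper): a 2D FFT
# over fast time (range) and slow time (Doppler) of one FMCW radar frame.

def range_doppler_map(adc_frame, window=True):
    """adc_frame: complex ADC samples, shape (n_chirps, n_samples)."""
    n_chirps, n_samples = adc_frame.shape
    if window:  # taper both axes to reduce spectral leakage
        adc_frame = adc_frame * np.hanning(n_samples)[None, :]
        adc_frame = adc_frame * np.hanning(n_chirps)[:, None]
    rng = np.fft.fft(adc_frame, axis=1)               # fast time -> range bins
    rd = np.fft.fftshift(np.fft.fft(rng, axis=0), 0)  # slow time -> Doppler bins
    return 20.0 * np.log10(np.abs(rd) + 1e-12)        # magnitude in dB

frame = np.random.randn(128, 256) + 1j * np.random.randn(128, 256)
print(range_doppler_map(frame).shape)                 # (128, 256): Doppler x range
```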
Citations: 10
Fair Division meets Vehicle Routing: Fairness for Drivers with Monotone Profits
Pub Date: 2022-06-05 DOI: 10.1109/IV51971.2022.9827432
M. Aleksandrov
We propose a new model combining fair division and vehicle routing, in which drivers have monotone profit preferences over customer requests and their vehicles have feasibility constraints. For this model, we design two new axiomatic fairness notions for drivers: FEQ1 and FEF1. FEQ1 encodes driver pairwise bounded equitability. FEF1 encodes driver pairwise bounded envy-freeness. We compare FEQ1 and FEF1 with popular fair division notions such as EQ1 and EF1. We also give algorithms for guaranteeing FEQ1 and FEF1, respectively.
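For orientation, the sketch below checks the classic EQ1 baseline the paper compares against, under additive per-request profits: every driver's profit must be at least every other driver's profit once some single request is dropped from the latter's bundle. FEQ1 and FEF1 are the paper's new, feasibility-aware notions and are not reproduced here.

```python
from itertools import permutations

# Minimal sketch of the classic EQ1 notion (the comparison baseline, not the
# paper's new FEQ1/FEF1 definitions), assuming additive per-request profits.

def is_eq1(bundles, profit):
    """bundles: list of lists of requests; profit: per-request value."""
    def u(bundle):
        return sum(profit[r] for r in bundle)
    for i, j in permutations(range(len(bundles)), 2):
        if not bundles[j]:
            continue  # empty bundle: u_j = 0, so the condition holds trivially
        # Best single removal from j's bundle = dropping its most valuable request.
        cheapest_drop = min(u(bundles[j]) - profit[r] for r in bundles[j])
        if u(bundles[i]) < cheapest_drop:
            return False
    return True

profit = {"a": 5, "b": 4, "c": 3, "d": 1}
print(is_eq1([["a"], ["b", "c"]], profit))        # True: 5 >= min(4, 3)
print(is_eq1([["d"], ["a", "b", "c"]], profit))   # False: 1 < 12 - 5 = 7
```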
Citations: 0
Deep Learning-Based Radar Detector for Complex Automotive Scenarios
Pub Date: 2022-06-05 DOI: 10.1109/iv51971.2022.9827045
Roberto Franceschi, D. Rachkov
Recent research has explored the advantages of applying learning-based methods to the radar target detection problem, though mainly for the single point-target case. This work extends those studies to complex automotive scenarios. We propose a Convolutional Neural Network-based model able to detect and locate targets in the multi-dimensional space of range, velocity, azimuth, and elevation. Due to the lack of publicly available datasets containing raw radar data (after the analog-to-digital converter), we simulated a dataset comprising more than 17,000 frames of automotive scenarios and various road objects including (but not limited to) cars, pedestrians, cyclists, trees, and guardrails. The proposed model was trained exclusively on simulated data, and its performance was compared to that of a conventional radar detection and angle-estimation pipeline. In unseen simulated scenarios, our model outperformed the conventional CFAR-based methods, improving the Dice score in the range-Doppler domain by 14.5%. Our model was also qualitatively evaluated on unseen real-world radar recordings, achieving more detection points per object than conventional processing.
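The Dice score cited above compares predicted and ground-truth detection masks over range-Doppler bins; the sketch below computes it for two toy masks (the mask shapes and the shift are illustrative).

```python
import numpy as np

# Minimal sketch: the Dice score used to compare detection masks in the
# range-Doppler domain (the metric named in the abstract).

def dice_score(pred, target, eps=1e-8):
    """pred, target: boolean masks over range-Doppler bins."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

gt = np.zeros((128, 256), dtype=bool); gt[40:44, 100:104] = True   # true target cells
det = np.zeros_like(gt);               det[41:45, 100:104] = True  # shifted detection
print(round(dice_score(det, gt), 3))   # 0.75: 12 shared cells out of 16 + 16
```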
Citations: 4
Efficient Radar Deep Temporal Detection in Urban Traffic Scenes
Pub Date: 2022-06-05 DOI: 10.1109/iv51971.2022.9827053
Zuyuan Guo, Haoran Wang, Wei Yi, Jiahao Zhang
This paper explores object detection on radar range-Doppler maps. Most radar processing algorithms are designed to detect objects without classifying them. Meanwhile, these approaches neglect the useful information available in the temporal domain. To address these problems, we propose an online radar deep temporal detection framework based on frame-to-frame prediction and association with low computational cost. The core idea is that once an object is detected, its location and class can be predicted in future frames to improve detection results. The experimental results show that this method achieves better detection and classification performance and demonstrate the usability of radar data for traffic scenes.
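A generic version of the frame-to-frame step is a constant-velocity prediction of each track's range-Doppler cell followed by gated nearest-neighbor association of new detections. The sketch below assumes this formulation; it is not the paper's exact framework, and the gate size is illustrative.

```python
import numpy as np

# Minimal sketch (a generic formulation, not the paper's exact framework):
# predict each track's next range-Doppler cell with a constant-velocity
# model, then associate new detections to tracks by gated nearest neighbor.

def predict(track):
    """track: dict with position (range_bin, doppler_bin) and per-frame velocity."""
    return track["pos"] + track["vel"]

def associate(tracks, detections, gate=3.0):
    """Greedy nearest-neighbor association within a gating radius (in bins)."""
    matches, unmatched = [], list(range(len(detections)))
    for ti, track in enumerate(tracks):
        if not unmatched:
            break
        pred = predict(track)
        di = min(unmatched, key=lambda d: np.linalg.norm(detections[d] - pred))
        if np.linalg.norm(detections[di] - pred) <= gate:
            matches.append((ti, di))
            unmatched.remove(di)
    return matches, unmatched

tracks = [{"pos": np.array([50.0, 10.0]), "vel": np.array([1.0, 0.0])}]
detections = [np.array([51.2, 10.1]), np.array([90.0, 40.0])]
print(associate(tracks, detections))   # ([(0, 0)], [1]): second detection is new
```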
Citations: 0
A Monte Carlo particle filter formulation for mapless-based localization
Pub Date: 2022-06-05 DOI: 10.1109/iv51971.2022.9827064
André Przewodowski, F. Osório
In this paper, we extend the Monte Carlo Localization formulation for more efficient global localization using coarse digital maps (for instance, OpenStreetMap). The proposed formulation uses map constraints to reduce the state dimension, which is ideal for a Monte Carlo-based particle filter. We also propose including in the data association process the matching of traffic signals’ information to road properties, so that their exact positions do not need to be mapped beforehand to update the filter. The proposed approach requires neither low-level point-cloud mapping nor the use of LIDAR data. The experiments were conducted using a dataset collected by the CARINA II intelligent vehicle, and the results suggest that the method is adequate for a localization pipeline. The dataset is available online and the code is available on GitHub.
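The dimensionality reduction can be pictured by constraining particles to the road graph: each particle then carries only a segment id and an arc length instead of a full (x, y, yaw) pose. The sketch below is an assumed reading of that idea on a toy road chain, not the paper's implementation.

```python
import numpy as np

# Minimal sketch (assumed reading, not the paper's implementation):
# road-constrained particles with state (segment id, arc length) instead of
# (x, y, yaw), which shrinks the sampling space of the particle filter.

rng = np.random.default_rng(0)
segment_lengths = np.array([120.0, 80.0, 200.0])   # toy road graph: 3 segments

# Particles: column 0 = segment id, column 1 = arc length along the segment.
particles = np.column_stack([
    rng.integers(0, 3, 500).astype(float),
    rng.uniform(0.0, 1.0, 500),
])
particles[:, 1] *= segment_lengths[particles[:, 0].astype(int)]

def motion_update(particles, ds, noise=0.5):
    """Advance every particle along its segment by odometry distance ds."""
    particles[:, 1] += ds + rng.normal(0.0, noise, len(particles))
    # Hand off to the next segment when a particle runs off the end
    # (a real road graph would follow topology; this toy chain just cycles).
    seg = particles[:, 0].astype(int)
    over = particles[:, 1] > segment_lengths[seg]
    particles[over, 1] -= segment_lengths[seg[over]]
    particles[over, 0] = (seg[over] + 1) % len(segment_lengths)
    return particles

particles = motion_update(particles, ds=5.0)
print(particles[:3])   # each row: [segment id, arc length]
```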
Citations: 0
Object-Level Targeted Selection via Deep Template Matching
Pub Date: 2022-06-05 DOI: 10.48550/arXiv.2207.01778
S. Kothawade
Retrieving images with objects that are semantically similar to objects of interest (OOI) in a query image has many practical use cases. A few examples include fixing failures such as false negatives/positives of a learned model, or mitigating class imbalance in a dataset. The targeted selection task requires finding the relevant data in a large-scale pool of unlabeled data. Manual mining at this scale is infeasible. Further, the OOI are often small, occupy less than 1% of the image area, are occluded, and co-exist with many semantically different objects in cluttered scenes. Existing semantic image retrieval methods often focus on mining for larger-sized geographical landmarks, and/or require extra labeled data, such as images/image-pairs with similar objects, for mining images with generic objects. We propose a fast and robust template-matching algorithm in the DNN feature space that retrieves semantically similar images at the object level from a large unlabeled pool of data. We project the region(s) around the OOI in the query image into the DNN feature space for use as the template. This enables our method to focus on the semantics of the OOI without requiring extra labeled data. In the context of autonomous driving, we evaluate our system for targeted selection by using failure cases of object detectors as OOI. We demonstrate its efficacy on a large unlabeled dataset of 2.2M images and show high recall in mining for images with small-sized OOI. We compare our method against a well-known semantic image retrieval method, which also does not require extra labeled data. Lastly, we show that our method is flexible and seamlessly retrieves images with one or more semantically different co-occurring OOI.
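The retrieval step reduces to ranking pooled DNN features by similarity to the OOI template. The sketch below assumes cosine similarity over per-image feature vectors from a frozen backbone; it is a generic formulation, not the authors' system, and the feature vectors here are random stand-ins for real backbone outputs.

```python
import numpy as np

# Minimal sketch (a generic formulation, not the authors' system): use the
# pooled DNN feature of the query's OOI region as a template and rank an
# unlabeled pool by cosine similarity.

def cosine_top_k(template, pool_features, k=5):
    """template: (d,) OOI feature; pool_features: (n, d) per-image features."""
    t = template / (np.linalg.norm(template) + 1e-12)
    p = pool_features / (np.linalg.norm(pool_features, axis=1, keepdims=True) + 1e-12)
    scores = p @ t                              # cosine similarity per pool image
    top = np.argsort(-scores)[:k]               # indices of the k best matches
    return top, scores[top]

d = 512
rng = np.random.default_rng(1)
template = rng.normal(size=d)                   # feature of the cropped OOI region
pool = rng.normal(size=(10_000, d))             # stand-in for pooled backbone features
pool[42] = template + 0.1 * rng.normal(size=d)  # plant one near-duplicate object
indices, scores = cosine_top_k(template, pool)
print(indices[0], round(float(scores[0]), 3))   # 42 is retrieved first
```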
Citations: 1