
Latest publications in Autonomous Vehicles and Machines

End-to-End Multitask Learning for Driver Gaze and Head Pose Estimation
Pub Date : 2020-01-26 DOI: 10.2352/ISSN.2470-1173.2020.16.AVM-108
Mahmoud Ewaisha, Marwa El Shawarby, Hazem M. Abbas, Ibrahim Sobh
Most modern automobile accidents are caused by inattentive driver behavior, which is why driver gaze estimation is becoming a critical component in the automotive industry. Gaze estimation poses many challenges due to the nature of the surrounding environment, such as changes in illumination, driver head motion, partial face occlusion, or eyewear. Previous work in this field includes explicit extraction of hand-crafted features, such as eye corners and pupil centers, for estimating gaze, and appearance-based methods such as Convolutional Neural Networks, which implicitly extract features from an image and map them directly to the corresponding gaze angle. In this work, a multitask Convolutional Neural Network architecture is proposed to predict the subject's gaze yaw and pitch angles, along with head pose as an auxiliary task, making the model robust to head pose variations without requiring any complex preprocessing or hand-crafted feature extraction. The network's output is then clustered into nine gaze classes relevant to the driving scenario. The model achieves 95.8% accuracy on the test set and 78.2% accuracy in cross-subject testing, demonstrating the model's generalization capability and robustness to head pose variation.
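The final clustering step maps continuous yaw/pitch predictions to discrete gaze zones. A minimal sketch of such a mapping is below; the ±15°/±10° thresholds and zone names are illustrative assumptions, not the paper's actual cluster boundaries.

```python
def gaze_zone(yaw_deg, pitch_deg, yaw_t=15.0, pitch_t=10.0):
    """Map continuous gaze angles to one of nine coarse zones.

    The yaw/pitch thresholds and zone labels are illustrative
    assumptions, not the cluster boundaries used in the paper.
    """
    col = 0 if yaw_deg < -yaw_t else (2 if yaw_deg > yaw_t else 1)
    row = 0 if pitch_deg > pitch_t else (2 if pitch_deg < -pitch_t else 1)
    names = [["up-left", "up", "up-right"],
             ["left", "straight", "right"],
             ["down-left", "down", "down-right"]]
    return names[row][col]
```

For example, `gaze_zone(0.0, 0.0)` falls in the "straight" zone, roughly corresponding to a driver looking at the road ahead.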
Citations: 2
A tool for semi-automatic ground truth annotation of traffic videos
Pub Date : 2020-01-26 DOI: 10.2352/ISSN.2470-1173.2020.16.AVM-150
Florian Groh, Dominik Schörkhuber, M. Gelautz
We have developed a semi-automatic annotation tool – "CVL Annotator" – for bounding box ground truth generation in videos. Our research is particularly motivated by the need for reference annotations of challenging nighttime traffic scenes with highly dynamic lighting conditions due to reflections, headlights, and halos from oncoming traffic. Our tool incorporates a suite of different state-of-the-art tracking algorithms in order to minimize the amount of human input necessary to generate high-quality ground truth data. We designed our user interface around the premise of minimizing user interaction and visualizing all information relevant to the user at a glance. We perform a preliminary user study to measure the time and number of clicks necessary to produce ground truth annotations of video traffic scenes, and we evaluate the accuracy of the final annotation results.
Citations: 6
Multi-Sensor Fusion in Dynamic Environment using Evidential Grid Mapping
Pub Date : 2020-01-26 DOI: 10.2352/ISSN.2470-1173.2020.16.AVM-203
G. Godaliyadda, Vijay Pothukuchi, J. Roh
Grid mapping is widely used to represent the environment surrounding a car or a robot for autonomous navigation. This paper describes an algorithm for evidential occupancy grid (OG) mapping that fuses measurements from different sensors, based on Dempster-Shafer theory, and is intended for scenes with stationary and moving (dynamic) objects. Conventional OG mapping algorithms tend to struggle in the presence of moving objects because they do not explicitly distinguish between moving and stationary objects. In contrast, evidential OG mapping allows for dynamic and ambiguous states (e.g., a LIDAR measurement that cannot differentiate between moving and stationary objects) that are more closely aligned with the measurements the sensors actually make. In this paper, we present a framework for fusing measurements as they are received from disparate sensors (e.g., radar, camera, and LIDAR) using evidential grid mapping. With this approach, we can form a live map of the environment and also alleviate the problem of having to synchronize sensors in time. We also designed a new inverse sensor model for radar that allows us to extract more information from object-level measurements by incorporating knowledge of the sensor's characteristics. We have implemented our algorithm in the OpenVX framework to enable seamless integration into embedded platforms. Test results show compelling performance, especially in the presence of moving objects.
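The core fusion step in evidential OG mapping is Dempster's rule of combination. A minimal sketch over the two-element frame {free, occupied} is below; the focal-set encoding ('F', 'O', and 'FO' for unknown) is an illustrative choice, and the paper's actual per-sensor mass assignments are not reproduced here.

```python
def ds_combine(m1, m2):
    """Dempster's rule of combination over the frame {free, occupied}.

    Masses are dicts over the focal sets 'F' (free), 'O' (occupied),
    and 'FO' (unknown). Conflicting mass (empty intersection) is
    discarded and the remainder renormalized. Illustrative sketch of
    the fusion step only; not the paper's full pipeline.
    """
    def inter(a, b):
        s = set(a) & set(b)
        return "".join(sorted(s)) if s else None

    out = {"F": 0.0, "O": 0.0, "FO": 0.0}
    conflict = 0.0
    for a, wa in m1.items():
        for b, wb in m2.items():
            s = inter(a, b)
            if s is None:
                conflict += wa * wb   # e.g. one sensor says free, the other occupied
            else:
                out[s] += wa * wb
    norm = 1.0 - conflict
    return {k: v / norm for k, v in out.items()}
```

Fusing a LIDAR-like mass favoring "free" with a radar-like mass that agrees shifts belief further toward "free" while shrinking the unknown mass, which is the behavior that lets an evidential grid separate ambiguity from genuine conflict.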
Citations: 0
VisibilityNet: Camera visibility detection and image restoration for autonomous driving
Pub Date : 2020-01-26 DOI: 10.2352/J.IMAGINGSCI.TECHNOL.2019.63.6.060405
Michal Uřičář, Hazem Rashed, Adithya Ranga, Ashok Dahal, S. Yogamani
Citations: 1
Multiple pedestrian tracking using Siamese random forests and shallow Convolutional Neural Networks
Pub Date : 2020-01-26 DOI: 10.2352/ISSN.2470-1173.2020.16.AVM-088
Jimi Lee, J. Nam, ByoungChul Ko
In this study, we propose a new multi-pedestrian tracking (MPT) method that tracks pedestrians quickly and efficiently in real-time systems. The proposed method combines shallow convolutional neural networks (CNNs) with an ensemble learning method, Siamese random forests (SRFs). Unlike conventional methods, to promote the robustness of the ensemble method, a feature transformation is applied that exploits shallow networks on the appearance of still images to extract enriched features. We formulate the MPT problem in a structured learning framework based on SRFs. Each forest learns the differences of random feature pairs extracted in the preceding step, enhancing robustness to circumstances that frequently arise in a moving vehicle. Compared to conventional tracking algorithms, the proposed SRF-based approach has the advantages of being lightweight and efficient. The proposed lightweight multiple pedestrian tracker was successfully applied to benchmark datasets and yielded a performance level similar to or better than state-of-the-art methods.
Citations: 0
Object Detection Using an Ideal Observer Model
Pub Date : 2020-01-26 DOI: 10.2352/issn.2470-1173.2020.16.avm-041
O. Skorka, P. Kane
Many of the metrics developed for informational imaging are useful in automotive imaging, since many of the tasks – for example, object detection and identification – are similar. This work discusses sensor characterization parameters for the Ideal Observer SNR model and elaborates on the noise power spectrum. It presents cross-correlation analysis results for matched-filter detection of a tribar pattern in sets of resolution-target images captured with three image sensors over a range of illumination levels. Lastly, the work compares the cross-correlation data to predictions made by the Ideal Observer model and demonstrates good agreement between the two methods in the relative evaluation of detection capabilities.
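Matched-filter detection via cross-correlation reduces to sliding the known template over the signal and locating the peak response. A minimal 1D sketch is below; the paper operates on 2D images of a tribar target, and this 1D tribar-like template is only an illustrative stand-in.

```python
def cross_correlate(signal, template):
    """Valid-mode sliding dot product of template against signal."""
    n, m = len(signal), len(template)
    return [sum(signal[i + j] * template[j] for j in range(m))
            for i in range(n - m + 1)]

def detect(signal, template):
    """Return the offset where the matched-filter response peaks."""
    scores = cross_correlate(signal, template)
    return max(range(len(scores)), key=scores.__getitem__)
```

Embedding a tribar-like pattern such as `[1, 0, 1, 0, 1]` at a known offset in an otherwise flat signal, `detect` recovers that offset, which is the basic operation behind the matched-filter analysis described above (the real analysis additionally accounts for sensor noise).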
Citations: 0
Single image haze removal using multiple scattering model for road scenes
Pub Date : 2020-01-26 DOI: 10.2352/ISSN.2470-1173.2020.16.AVM-080
Minsub Kim, Soonyoung Hong, M. Kang
Haze is one source of image degradation. It affects the contrast and saturation not only of real-world images in general but also of road scenes. Most haze removal algorithms use an atmospheric scattering model to remove the effect of haze, and most are based on the single-scattering model, which does not account for blur in the hazy image. In this paper, a novel haze removal algorithm using a multiple-scattering model with deconvolution is proposed. The proposed algorithm accounts for the blurring effect in the hazy image. Downsampling of the hazy image is also used to estimate the atmospheric light efficiently. Synthetic road scenes with and without haze are used to evaluate the performance of the proposed method. Experimental results demonstrate that the proposed algorithm performs better at restoring haze-degraded images, both qualitatively and quantitatively.
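For context, the single-scattering baseline that the paper extends models a hazy pixel as I = J·t + A·(1 − t), with transmission t = exp(−β·d) for scattering coefficient β and depth d, and dehazing inverts this for the scene radiance J. The sketch below shows only that baseline inversion; the paper's multiple-scattering model and deconvolution step are not reproduced, and the t0 clamp is a common stabilization choice, not a value from the paper.

```python
import math

def dehaze_pixel(I, A, beta, d, t0=0.1):
    """Invert the single-scattering model I = J*t + A*(1 - t),
    with transmission t = exp(-beta * d).

    A is the atmospheric light, beta the scattering coefficient,
    d the scene depth; t is clamped below by t0 to avoid noise
    amplification at low transmission. Baseline model only - the
    paper's multiple-scattering deconvolution is not shown here.
    """
    t = max(math.exp(-beta * d), t0)
    return (I - A) / t + A
```

Forward-simulating a hazy pixel from a known radiance and then applying `dehaze_pixel` recovers the original value, which is the sanity check one would run before layering the multiple-scattering refinement on top.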
Citations: 2
Metrology Impact of Advanced Driver Assistance Systems
Pub Date : 2020-01-26 DOI: 10.2352/ISSN.2470-1173.2020.16.AVM-200
P. Iacomussi
Metrological applications to the road environment usually focus on the characterization of the road, treating as measurands several characteristics related to the road as a whole or to the performance of single components, such as the road surface, lighting systems, active and/or passive signaling, and, obviously, vehicle equipment. In the current standards approach, driving on the road means navigating "visually" (for a human driver), so characterizations are mostly oriented toward photometric performance under given reference conditions and for a reference observer (a photometric observer viewing the road from assigned points of view, with a given spectral sensitivity). But considering present and future technological trends and knowledge of visual performance, characterizations based only on photometric quantities under reference conditions, as described in the current standards, would not be fully suitable, even for the visual needs of a human driver. Research on components and systems for advanced driver assistance is evolving along different paths toward different solutions: it is neither possible nor useful to define strict constraints as was done previously for road application measurements. The paper presents the current situation of the metrological characterization of the road environment and its components, in the laboratory and on site using mobile high-efficiency laboratories, and suggests using ADAS (Advanced Driver Assistance Systems) for diffuse mapping of road characteristics for a better understanding of the road environment and its maintenance. This suggestion has the additional advantage of minimizing measurement costs, but for its full applicability, the reliability and metrological performance of the installed devices and of the measurements performed by ADAS are a priority.
Citations: 0
Object Tracking Continuity through Track and Trace Method
Pub Date : 2020-01-26 DOI: 10.2352/ISSN.2470-1173.2020.16.AVM-258
Haney W. Williams, S. Simske
The demand for object tracking (OT) applications has been increasing over the past few decades in many areas of interest: security, surveillance, intelligence gathering, and reconnaissance. Lately, newly defined requirements for unmanned vehicles have heightened interest in OT. Advancements in machine learning, data analytics, and deep learning have facilitated the recognition and tracking of objects of interest; however, continuous tracking remains a problem of interest to many research projects. This paper presents a system implementing a means of continuously tracking an object and predicting its trajectory based on its previous path, even when the object is partially or fully concealed for a period of time. The system is composed of six main subsystems: Image Processing, Detection Algorithm, Image Subtractor, Image Tracking, Tracking Predictor, and Feedback Analyzer. Combined, these subsystems allow for reasonable object continuity in the face of object concealment.
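Bridging a concealment gap requires extrapolating the track from its observed history. The abstract does not specify the Tracking Predictor's motion model, so the sketch below uses the simplest common choice, constant velocity estimated from the last two observations, purely as an illustrative stand-in.

```python
def predict(track, steps):
    """Extrapolate future (x, y) positions assuming constant velocity
    estimated from the last two observed points.

    A minimal, hypothetical stand-in for the paper's Tracking
    Predictor subsystem, whose actual motion model is not given.
    """
    (x0, y0), (x1, y1) = track[-2], track[-1]
    vx, vy = x1 - x0, y1 - y0
    return [(x1 + vx * k, y1 + vy * k) for k in range(1, steps + 1)]
```

While the object is concealed, the tracker can hold these predicted positions and re-associate the detection that reappears closest to the predicted path, restoring track continuity.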
Citations: 0
A Study on Training Data Selection for Object Detection in Nighttime Traffic Scenes
Pub Date : 2020-01-26 DOI: 10.2352/ISSN.2470-1173.2020.16.AVM-202
A. Unger, M. Gelautz, F. Seitner
With the growing demand for robust object detection algorithms in self-driving systems, it is important to consider the varying lighting and weather conditions in which cars operate all year round. The goal of our work is to gain a deeper understanding of meaningful strategies for selecting and merging training data from currently available databases and self-annotated videos in the context of automotive night scenes. We retrain an existing Convolutional Neural Network (YOLOv3) to study the influence of different training dataset combinations on final object detection results in nighttime and low-visibility traffic scenes. Our evaluation shows that a suitable selection of training data from the GTSRD, VIPER, and BDD databases, in conjunction with self-recorded night scenes, can achieve an mAP of 63.5% for ten object classes, an improvement of 16.7% compared to the performance of the original YOLOv3 network on the same test set.
{"title":"A Study on Training Data Selection for Object Detection in Nighttime Traffic Scenes","authors":"A. Unger, M. Gelautz, F. Seitner","doi":"10.2352/ISSN.2470-1173.2020.16.AVM-202","DOIUrl":"https://doi.org/10.2352/ISSN.2470-1173.2020.16.AVM-202","url":null,"abstract":"\u0000 With the growing demand for robust object detection algorithms in self-driving systems, it is important to consider the varying lighting and weather conditions in which cars operate all year round. The goal of our work is to gain a deeper understanding of meaningful strategies for\u0000 selecting and merging training data from currently available databases and self-annotated videos in the context of automotive night scenes. We retrain an existing Convolutional Neural Network (YOLOv3) to study the influence of different training dataset combinations on the final object detection\u0000 results in nighttime and low-visibility traffic scenes. Our evaluation shows that a suitable selection of training data from the GTSRD, VIPER, and BDD databases in conjunction with selfrecorded night scenes can achieve an mAP of 63,5% for ten object classes, which is an improvement of 16,7%\u0000 when compared to the performance of the original YOLOv3 network on the same test set.\u0000","PeriodicalId":177462,"journal":{"name":"Autonomous Vehicles and Machines","volume":"358 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-01-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115898584","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
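The abstract above reports a mean average precision (mAP) over ten object classes and a gain over the baseline network. As a small worked illustration (the per-class AP values are made up; only the 63.5% figure and the improvement come from the text, and the gain is read here as absolute percentage points, which is an assumption):

```python
# Illustrative sketch: mAP is the unweighted mean of per-class average
# precisions, and the baseline can be recovered by subtracting the gain.

def mean_ap(per_class_ap):
    """Return the mean of the per-class average precision values."""
    return sum(per_class_ap) / len(per_class_ap)

if __name__ == "__main__":
    # Hypothetical per-class APs just to show the averaging step.
    print(round(mean_ap([0.4, 0.6, 0.8]), 2))        # → 0.6

    retrained_map = 63.5   # reported mAP after retraining (%)
    improvement = 16.7     # reported gain over the original YOLOv3 (%)
    baseline_map = retrained_map - improvement
    print(f"baseline YOLOv3 mAP: {baseline_map:.1f}%")  # → baseline YOLOv3 mAP: 46.8%
```

How each per-class AP is computed (e.g., the interpolation scheme over the precision-recall curve) varies between benchmarks, so the averaging step above is the only part that is benchmark-independent.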