
Autonomous Vehicles and Machines: Latest Publications

Data Collection Through Translation Network Based on End-to-End Deep Learning for Autonomous Driving
Pub Date : 2021-01-18 DOI: 10.2352/issn.2470-1173.2021.17.avm-115
Zelin Zhang, J. Ohya
To avoid manual collection of the huge amount of labeled image data needed for training autonomous driving models, this paper proposes a novel automatic method for collecting annotated image data for autonomous driving through a translation network that can transform simulation CG images into real-world images. The translation network is designed as an end-to-end structure that contains two encoder-decoder networks. The front part of the translation network represents the structure of the original simulation CG image as a semantic segmentation. The rear part of the network then translates the segmentation into a real-world image by applying a cGAN. After training, the translation network can learn a mapping from simulation CG pixels to real-world image pixels. To confirm the validity of the proposed system, we conducted three experiments under different learning policies, evaluating the MSE of the steering angle and vehicle speed. The first experiment demonstrates that L1+cGAN performs best among all loss functions in the translation network. The second experiment, conducted under different learning policies, shows that the ResNet architecture works best. The third experiment demonstrates that a model trained with the real-world images generated by the translation network still performs well in the real world. All the experimental results demonstrate the validity of our proposed method.
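The evaluation above scores driving models by the MSE of predicted steering angle and vehicle speed. A minimal pure-Python sketch of that metric, with hypothetical prediction values (the numbers are illustrative, not from the paper), might look like:

```python
def mse(predictions, targets):
    """Mean squared error between predicted and ground-truth values."""
    assert len(predictions) == len(targets) and predictions
    return sum((p - t) ** 2 for p, t in zip(predictions, targets)) / len(predictions)

# Hypothetical steering-angle predictions (degrees) vs. ground truth.
pred_angles = [1.2, -0.5, 3.1, 0.0]
true_angles = [1.0, -0.4, 3.0, 0.2]
steering_mse = mse(pred_angles, true_angles)
```

The same function would be applied to the vehicle-speed channel; lower values indicate that a model trained on translated images drives closer to the reference behavior.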
Citations: 0
Evaluation of semi-frozen semi-fixed neural network for efficient computer vision inference
Pub Date : 2021-01-18 DOI: 10.2352/issn.2470-1173.2021.17.avm-213
Chyuan-Tyng Wu, P. V. Beek, Phillip Schmidt, Joao Peralta Moreira, T. Gardos
Deep neural networks have been utilized in an increasing number of computer vision tasks, demonstrating superior performance. Much research has focused on making deep networks more suitable for efficient hardware implementation in low-power and low-latency real-time applications. In [1], Isikdogan et al. introduced a deep neural network design that provides an effective trade-off between flexibility and hardware efficiency. The proposed solution consists of fixed-topology hardware blocks, with partially frozen/partially trainable weights, that can be configured into a full network. Initial results on a few computer vision tasks were presented in [1]. In this paper, we further evaluate this network design by applying it to several additional computer vision use cases and comparing it to other hardware-friendly networks. The experimental results presented here show that the proposed semi-fixed semi-frozen design achieves competitive performance on a variety of benchmarks while maintaining very high hardware efficiency.
Citations: 1
Quantitative study of vehicle-pedestrian interactions: Towards pedestrian-adapted lighting communication functions for autonomous vehicles
Pub Date : 2021-01-18 DOI: 10.2352/issn.2470-1173.2021.17.avm-172
Guoqin Zang, Shéhérazade Azouigui, S. Saudrais, Olivier Peyricot, M. Hébert
This paper reports the main conclusions of a field observation of vehicle-pedestrian interactions at urban crosswalks, describing the types, sequences, spatial distributions, and probabilities of occurrence of vehicle and pedestrian behaviors. This study was motivated by the fact that in the near future, with the introduction of autonomous vehicles (AVs), human drivers will become mere passengers, no longer able to participate in traffic interactions. To recreate the necessary interactions, AVs strongly need new communication abilities to express their status and intentions, especially to pedestrians, who constitute the most vulnerable road users. As pedestrians rely heavily on the actual behavioral mechanism to interact with vehicles, it seems preferable to take this mechanism into account in the design of new communication functions. In this study, through more than one hundred video-recorded vehicle-pedestrian interaction scenes at urban crosswalks, eight scenarios were classified with respect to the different behavioral sequences. Based on the measured position of pedestrians relative to the vehicle at the time of the significant behaviors, quantitative analysis shows that distinct patterns exist for the pedestrian gaze behavior and the vehicle slowing-down behavior as a function of Vehicle-to-Pedestrian (V2P) distance and angle.
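The quantitative analysis hinges on the Vehicle-to-Pedestrian (V2P) distance and angle. As a hedged illustration (the function name, coordinate convention, and sample positions are assumptions, not taken from the paper), these quantities can be derived from measured 2D positions as:

```python
import math

def v2p_distance_angle(vehicle_xy, pedestrian_xy, vehicle_heading_rad):
    """Distance and bearing angle from vehicle to pedestrian,
    relative to the vehicle's heading direction (hypothetical convention)."""
    dx = pedestrian_xy[0] - vehicle_xy[0]
    dy = pedestrian_xy[1] - vehicle_xy[1]
    distance = math.hypot(dx, dy)
    bearing = math.atan2(dy, dx) - vehicle_heading_rad
    # Normalize the angle to the interval [-pi, pi).
    bearing = (bearing + math.pi) % (2 * math.pi) - math.pi
    return distance, bearing

# Pedestrian 3 m ahead and 4 m to the side of a vehicle heading along +x.
d, a = v2p_distance_angle((0.0, 0.0), (3.0, 4.0), 0.0)
```

Behaviors such as gaze onset or vehicle deceleration could then be binned by (d, a) pairs to reveal the distance/angle patterns the study reports.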
Citations: 3
An analytic-numerical image flicker study to test novel flicker metrics
Pub Date : 2021-01-18 DOI: 10.2352/issn.2470-1173.2021.17.avm-183
Christian Wittpahl, B. Deegan, Bob Black, Alexander Braun
The IEEE P2020 Automotive Image Quality working group is proposing new metrics and test protocols to measure image flicker. A comprehensive validation activity is therefore required. Light source flicker (often LED flicker), as captured in a camera output, is a product of camera exposure time, sensitivity, full well capacity, readout timing, and HDR scheme, together with the light source frequency, duty cycle, intensity, waveform, and spectrum. The proposed LED flicker metrics have to be tested and validated for a sufficient number of combinations of these camera and lighting configurations. The test space of camera and lighting parameter combinations is unfeasibly large to cover with physical cameras and lighting setups. A numerical simulation study to validate the proposed metrics has therefore been performed. To model flicker, a representative pixel model has been implemented in code. The pixel model incorporates exposure time, sensitivity, full well capacity, and representative readout timings. The implemented light source model comprises a hybrid analytic-numerical approach that allows for efficient generation of complex temporal lighting profiles. It simulates full- and half-wave rectified sinusoidal waveforms, representative of AC lighting, as well as pulse width modulated lighting with variable frequency, duty cycle, intensity, and complex edge rise/fall time behaviour. In this article, both initial results from the flicker simulation model and an evaluation of the proposed IEEE metrics are presented.
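As a rough illustration of the pixel/light-source interaction such a simulation models (the sampling scheme and parameter values here are assumptions, not the working group's actual code), the fraction of an exposure during which an ideal PWM source is on can be integrated numerically. Sweeping the exposure start time across the PWM period exposes the flicker:

```python
def pwm_exposure_fraction(t_start, t_exp, freq_hz, duty):
    """Fraction of a camera exposure during which an ideal PWM light
    source is on, via midpoint numerical integration of the square wave."""
    n = 10000
    dt = t_exp / n
    on = 0
    for i in range(n):
        t = t_start + (i + 0.5) * dt
        phase = (t * freq_hz) % 1.0  # position within the PWM period, 0..1
        if phase < duty:
            on += 1
    return on / n

# 90 Hz PWM at 30% duty cycle, 1 ms exposure: the captured signal depends
# on where the exposure window falls in the PWM cycle, i.e. it flickers.
samples = [pwm_exposure_fraction(s * 0.001, 0.001, 90.0, 0.3) for s in range(10)]
```

A short exposure relative to the PWM period swings between fully-on and fully-off captures; lengthening `t_exp` toward a whole period averages the flicker out, which is exactly the trade space the proposed metrics quantify.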
Citations: 0
DRAM Bandwidth Optimal Perspective Transform Engine
Pub Date : 2021-01-18 DOI: 10.2352/issn.2470-1173.2021.17.avm-114
Mihir Mody, Rajasekhar Allu, Gang Hua, Brijesh Jadav, Niraj Nandan, Ankur Ankur, Mayank Mangla
Perspective transform (or homography) is a commonly used algorithm in ADAS and automated driving systems. It appears in multiple use-cases, e.g. viewpoint change, fisheye lens distortion correction, chromatic aberration correction, and stereo image pair rectification. Due to its inherent scaling, this algorithm needs high external DRAM memory bandwidth, which results in non-aligned two-dimensional memory burst accesses and a large degradation in system performance and latency. In this paper, we propose a novel perspective transform engine that reduces external DRAM memory bandwidth to alleviate this problem. The proposed solution slices the input video frame into multiple regions, with the block size tuned for each region. The paper also gives an algorithm for finding optimal region boundaries with a corresponding block size tuned for each region. The proposed solution enables an average bandwidth reduction of 67% compared to a traditional implementation and achieves clock speeds up to 720 MHz with an output pixel throughput of 1 cycle/pixel in a 16nm FinFET process node.
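For reference, the per-pixel mapping behind a perspective transform is a 3x3 homography applied in homogeneous coordinates; it is this divide-by-w warping that makes source-memory accesses wander off alignment. A minimal sketch (with a toy translation-only matrix, chosen for illustration) is:

```python
def apply_homography(H, x, y):
    """Map point (x, y) through a 3x3 homography H (row-major nested lists)
    using homogeneous coordinates: [xs, ys, w] = H * [x, y, 1]."""
    xs = H[0][0] * x + H[0][1] * y + H[0][2]
    ys = H[1][0] * x + H[1][1] * y + H[1][2]
    w  = H[2][0] * x + H[2][1] * y + H[2][2]
    return xs / w, ys / w

# Toy example: identity rotation/scale with a (5, -2) pixel translation.
H = [[1.0, 0.0,  5.0],
     [0.0, 1.0, -2.0],
     [0.0, 0.0,  1.0]]
u, v = apply_homography(H, 10.0, 10.0)
```

A general H has a non-trivial bottom row, so equally spaced output pixels map to unevenly spaced source pixels, which is the access pattern the proposed region-sliced engine is tuned around.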
Citations: 0
Design, Implementation, and Evaluation of a Semi-Autonomous, Vision-based, Modular Unmanned Ground Vehicle Prototype
Pub Date : 2021-01-18 DOI: 10.2352/issn.2470-1173.2021.17.avm-214
Doncey Albin, S. Simske
In some traditional development processes, engineering teams communicate their subsystem interfaces without much overlap of their respective disciplines and processes. For a systems engineering-driven design, however, a holistic, multidisciplinary approach is implemented from the ground up, with considerable overlap between the teams in every phase of the project. Approaching a system from a holistic perspective, rather than an isolated subsystem perspective, is a fundamental component of rapid prototype development and successful system integration. It is also required for full project-level concerns such as data, security, safety, and sustainability operations. This paper presents the development of a prototype modular unmanned ground vehicle (UGV) used for fire detection and elimination. Taking a systems engineering approach, the mechatronics and control system designs are performed first, then the system and the important subsystems are built and tested, and finally the evaluation results are fed back into the next prototype iteration. The goal of this paper is to give engineering students and professionals an example of the process behind the holistic development of a semi-autonomous UGV and to provide an inexpensive, readily modified platform for engineers to build upon.
Citations: 0
Contrast Signal to Noise Ratio
Pub Date : 2021-01-18 DOI: 10.2352/issn.2470-1173.2021.17.avm-186
R. Jenkin
The detection and recognition of objects is essential for the operation of autonomous vehicles and robots. Designing and predicting the performance of camera systems intended to supply information to neural networks and vision algorithms is nontrivial. Optimization has to occur across many parameters, such as focal length, f-number, pixel and sensor size, exposure regime, and transmission schemes. As such, numerous metrics are being explored to assist with these design choices. Detectability index (SNRI) is derived from signal detection theory as applied to imaging systems and is used to estimate the ability of a system to statistically distinguish objects [1], most notably in the medical imaging and defense fields [2]. A new metric is proposed, Contrast Signal to Noise Ratio (CSNR), which is calculated simply as the mean contrast divided by the standard deviation of the contrast. This is distinct from contrast-to-noise ratio, which uses the noise of the image as the denominator [3,4]. It is shown mathematically that the metric is proportional to the idealized observer for a cobblestone target, and a constant may be calculated to estimate SNRI from CSNR, accounting for target size. Results are further compared to Contrast Detection Probability (CDP), a relatively new objective image quality metric proposed within IEEE P2020 to rank the performance of camera systems intended for use in autonomous vehicles [5]. CSNR is shown to generate information in illumination and contrast conditions where CDP saturates, and can further be modified to provide CDP-like results.
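Following the abstract's definition, CSNR reduces to a one-line computation over a set of contrast measurements; a minimal sketch with hypothetical sample values (how the contrast samples themselves are measured is not specified here):

```python
import statistics

def csnr(contrast_samples):
    """Contrast Signal to Noise Ratio: mean contrast divided by the
    standard deviation of the contrast, per the abstract's definition."""
    mean_c = statistics.fmean(contrast_samples)
    std_c = statistics.stdev(contrast_samples)  # sample standard deviation
    return mean_c / std_c

# Hypothetical repeated contrast measurements of the same target patch.
samples = [0.48, 0.52, 0.50, 0.49, 0.51]
value = csnr(samples)
```

Note the denominator is the variability of the contrast estimate itself, not the image noise, which is what distinguishes CSNR from the conventional contrast-to-noise ratio.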
Citations: 1
Data driven degradation of automotive sensors and effect analysis
Pub Date : 2021-01-18 DOI: 10.2352/issn.2470-1173.2021.17.avm-180
S. Fleck, B. May, Gwen Daniel, C. Davies
Autonomous driving plays a crucial role in preventing accidents, and modern vehicles are equipped with multimodal sensor systems and AI-driven perception and sensor fusion. These features are, however, not stable during a vehicle's lifetime due to various means of degradation. This introduces an inherent, yet unaddressed risk: once vehicles are in the field, their individual exposure to environmental effects leads to unpredictable behavior. The goal of this paper is to raise awareness of automotive sensor degradation. Various effects exist which, in combination, may have a severe impact on AI-based processing and ultimately on the customer domain. Failure mode and effects analysis (FMEA) type approaches are used to structure a complete coverage of relevant automotive degradation effects. Sensors include cameras, RADARs, LiDARs, and other modalities, both outside and in-cabin. Sensor robustness alone is a well-known topic addressed by DV/PV. However, this is not sufficient, and various degradations that go significantly beyond currently tested environmental stress scenarios will be examined. In addition, the combination of sensor degradation and its impact on AI processing is identified as a validation gap. An outlook on future analysis and ways to detect relevant sensor degradations is also presented.
引用次数: 1
Radiometry and Photometry for Autonomous Vehicles and Machines - Fundamental Performance Limits 自动驾驶车辆和机器的辐射测量和光度测定-基本性能限制
Pub Date : 2021-01-18 DOI: 10.2352/issn.2470-1173.2021.17.avm-211
R. Jenkin, Cheng Zhao
As autonomous vehicles and machines, such as self-driving cars, agricultural drones and industrial robots, become ubiquitous, there is an increasing need to understand the objective performance of cameras to support these functions. Images go beyond aesthetic and subjective roles as they assume increasing aspects of control, safety, and diagnostic capabilities. Radiometry and photometry are fundamental to describing the behavior of light and modeling the signal chain for imaging systems, and as such, are crucial for establishing objective behavior. As an engineer or scientist, having an intuitive feel for the magnitude of units and the physical behavior of components or systems in any field improves development capabilities and guards against rudimentary errors. Back-of-the-envelope estimations provide comparisons against which detailed calculations may be tested, and will urge a developer to “try again” if, for example, the order of magnitude is off. They also provide a quick check for the feasibility of ideas, a “giggle” or “straight-face” test as it is sometimes known. This paper is a response to the authors’ observation that, amongst newcomers to the imaging field and existing image scientists alike, there is a general deficit of intuition around the units and orders of magnitude of signals in typical cameras for autonomous vehicles and the conditions within which they operate. Further, a number of misconceptions persist regarding general radiometric and photometric behavior. Confusion between the inverse square law as applied to illumination and the consistency of image luminance versus distance is a common example. The authors detail a radiometric and photometric model for an imaging system, using it to clarify vocabulary, units and behaviors.
The model is then used to estimate the number of quanta expected in pixels for typical imaging systems for each of the patches of a MacBeth color checker under a wide variety of illumination conditions. These results form the basis for establishing the fundamental performance limits of passive camera systems, based both solely on camera geometry and additionally considering typical quantum efficiencies available presently. Further, a mental model is given that allows the user to quickly estimate the number of photoelectrons in a pixel.
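The paper’s full signal-chain model is not reproduced in the abstract, but the kind of back-of-the-envelope photoelectron estimate it advocates can be sketched with the standard camera equation (sensor-plane illuminance E = πLT/4N²) plus a monochromatic-at-555 nm simplification to convert lux into photon flux. The function and all parameter values below are illustrative assumptions, not numbers from the paper.

```python
import math

# Physical constants
H = 6.626e-34   # Planck constant, J*s
C = 2.998e8     # speed of light, m/s
KM = 683.0      # luminous efficacy of 555 nm light, lm/W

def photoelectrons_per_pixel(luminance_cd_m2, f_number, exposure_s,
                             pixel_pitch_m, qe, wavelength_m=555e-9,
                             transmission=0.9):
    """Back-of-the-envelope photoelectron count for one pixel.

    Treats the scene as a monochromatic Lambertian source at
    `wavelength_m` so lux convert cleanly to photon flux; ignores
    cos^4 falloff, vignetting and magnification effects.
    """
    # Camera equation: illuminance at the sensor plane, in lux
    illuminance = math.pi * luminance_cd_m2 * transmission / (4 * f_number**2)
    # lux -> W/m^2 (monochromatic) -> photons / m^2 / s
    photon_energy = H * C / wavelength_m          # ~3.6e-19 J at 555 nm
    photon_flux = illuminance / KM / photon_energy
    # photons landing on one pixel during the exposure, scaled by QE
    return photon_flux * pixel_pitch_m**2 * exposure_s * qe

# Example: bright road scene (~3000 cd/m^2), f/2 lens, 1 ms exposure,
# 3 um pixel, 60% QE -- values chosen for illustration only.
n_e = photoelectrons_per_pixel(3000, 2.0, 1e-3, 3e-6, 0.6)
print(f"~{n_e:.0f} photoelectrons")  # on the order of 10^4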
随着自动驾驶汽车、农业无人机和工业机器人等自动驾驶车辆和机器变得无处不在,人们越来越需要了解相机的客观性能,以支持这些功能。图像超越了审美和主观的角色,因为它们承担了越来越多的控制、安全和诊断能力。辐射测量学和光度学是描述光的行为和成像系统信号链建模的基础,因此,对于建立客观行为至关重要。作为一名工程师或科学家,在任何领域对单元的大小和组件或系统的物理行为有一种直观的感觉,可以提高开发能力,防止基本错误。粗略的估计提供了可以测试的详细计算的比较,如果数量级不正确,将促使开发人员“再试一次”。它们还可以快速检查想法的可行性,有时被称为“傻笑”或“板着脸”测试。这篇论文是对作者的观察的回应,在新依赖成像领域的参与者和现有的图像科学家之间,对于自动驾驶汽车的典型相机中的信号的单位和数量级以及它们运行的条件,普遍存在直觉缺陷。此外,关于一般的辐射和光度行为,仍然存在一些误解。将平方反比定律应用于照明和图像亮度与距离的一致性之间的混淆是一个常见的例子。作者详细介绍了成像系统的辐射和光度模型,并用它来澄清词汇、单位和行为。然后,该模型用于估计在各种照明条件下麦克白颜色检查器的每个补丁的典型成像系统的像素期望量子数。这些结果构成了建立被动相机系统性能基本限制的基础,该系统仅基于相机几何形状,并考虑到目前可用的典型量子效率。此外,给出了一个心智模型,该模型将允许用户快速估计像素中的光电子数。
{"title":"Radiometry and Photometry for Autonomous Vehicles and Machines - Fundamental Performance Limits","authors":"R. Jenkin, Cheng Zhao","doi":"10.2352/issn.2470-1173.2021.17.avm-211","DOIUrl":"https://doi.org/10.2352/issn.2470-1173.2021.17.avm-211","url":null,"abstract":"\u0000 As autonomous vehicles and machines, such as self-driving cars, agricultural drones and industrial robots, become ubiquitous, there is an increasing need to understand the objective performance of cameras to support these functions. Images go beyond aesthetic and subjective roles\u0000 as they assume increasing aspects of control, safety, and diagnostic capabilities. Radiometry and photometry are fundamental to describing the behavior of light and modeling the signal chain for imaging systems, and as such, are crucial for establishing objective behavior.\u0000 \u0000 As an\u0000 engineer or scientist, having an intuitive feel for the magnitude of units and the physical behavior of components or systems in any field improves development capabilities and guards against rudimentary errors. Back-of-the-envelope estimations provide comparisons against which detailed calculations\u0000 may be tested and will urge a developer to “try again” if the order of magnitude is off for example. They also provide a quick check for the feasibility of ideas, a “giggle” or “straight-face” test as it is sometimes known.\u0000 \u0000 This paper is a response\u0000 to the observation of the authors that, amongst participants that are newly relying on the imaging field and existing image scientists alike, there is a general deficit of intuition around the units and order of magnitude of signals in typical cameras for autonomous vehicles and the conditions\u0000 within which they operate. Further, there persists a number of misconceptions regarding general radiometric and photometric behavior. 
Confusion between the inverse square law as applied to illumination and consistency of image luminance versus distance is a common example.\u0000 \u0000 The authors\u0000 detail radiometric and photometric model for an imaging system, using it to clarify vocabulary, units and behaviors. The model is then used to estimate the number of quanta expected in pixels for typical imaging systems for each of the patches of a MacBeth color checker under a wide variety\u0000 of illumination conditions. These results form the basis to establish the fundamental limits of performance for passive camera systems based both solely on camera geometry and additionally considering typical quantum efficiencies available presently. Further a mental model is given which will\u0000 quickly allow user to estimate numbers of photoelectrons in pixel.\u0000","PeriodicalId":177462,"journal":{"name":"Autonomous Vehicles and Machines","volume":"254 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-01-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134266443","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
RoadEdgeNet: Road Edge Detection System Using Surround View Camera Images roadadgenet:使用环绕视图相机图像的道路边缘检测系统
Pub Date : 2021-01-18 DOI: 10.2352/issn.2470-1173.2021.17.avm-210
Ashok Dahal, Eric Golab, Rajender Garlapati, Varun Ravi Kumar, S. Yogamani
Road Edge is defined as the borderline where there is a change from the road surface to the non-road surface. Most of the currently existing solutions for Road Edge Detection use only a single front camera to capture the input image; hence, the system’s performance and robustness suffer. Our efficient CNN trained on a very diverse dataset yields more than 98% semantic segmentation for the road surface, which is then used to obtain road edge segments for individual camera images. Afterward, the multi-cameras raw road edges are transformed into world coordinates, and RANSAC curve fitting is used to get the final road edges on both sides of the vehicle for driving assistance. The process of road edge extraction is also very computationally efficient as we can use the same generic road segmentation output, which is computed along with other semantic segmentation for driving assistance and autonomous driving. RoadEdgeNet algorithm is designed for automated driving in series production, and we discuss the various challenges and limitations of the current algorithm.
道路边缘被定义为从道路表面到非道路表面变化的边界。目前大多数现有的道路边缘检测解决方案仅使用单个前置摄像头来捕获输入图像;因此,系统的性能和健壮性受到影响。我们在非常多样化的数据集上训练的高效CNN对路面产生了超过98%的语义分割,然后用于获取单个相机图像的道路边缘段。然后,将多摄像机原始道路边缘转换为世界坐标,利用RANSAC曲线拟合得到车辆两侧的最终道路边缘,用于辅助驾驶。道路边缘提取过程的计算效率也非常高,因为我们可以使用相同的通用道路分割输出,它与驾驶辅助和自动驾驶的其他语义分割一起计算。RoadEdgeNet算法是为自动驾驶量产而设计的,我们讨论了当前算法的各种挑战和局限性。
{"title":"RoadEdgeNet: Road Edge Detection System Using Surround View Camera Images","authors":"Ashok Dahal, Eric Golab, Rajender Garlapati, Varun Ravi Kumar, S. Yogamani","doi":"10.2352/issn.2470-1173.2021.17.avm-210","DOIUrl":"https://doi.org/10.2352/issn.2470-1173.2021.17.avm-210","url":null,"abstract":"\u0000 Road Edge is defined as the borderline where there is a change from the road surface to the non-road surface. Most of the currently existing solutions for Road Edge Detection use only a single front camera to capture the input image; hence, the system’s performance and robustness\u0000 suffer. Our efficient CNN trained on a very diverse dataset yields more than 98% semantic segmentation for the road surface, which is then used to obtain road edge segments for individual camera images. Afterward, the multi-cameras raw road edges are transformed into world coordinates, and\u0000 RANSAC curve fitting is used to get the final road edges on both sides of the vehicle for driving assistance. The process of road edge extraction is also very computationally efficient as we can use the same generic road segmentation output, which is computed along with other semantic segmentation\u0000 for driving assistance and autonomous driving. 
RoadEdgeNet algorithm is designed for automated driving in series production, and we discuss the various challenges and limitations of the current algorithm.\u0000","PeriodicalId":177462,"journal":{"name":"Autonomous Vehicles and Machines","volume":"60 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-01-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123474379","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 9
期刊
Autonomous Vehicles and Machines
全部 Acc. Chem. Res. ACS Applied Bio Materials ACS Appl. Electron. Mater. ACS Appl. Energy Mater. ACS Appl. Mater. Interfaces ACS Appl. Nano Mater. ACS Appl. Polym. Mater. ACS BIOMATER-SCI ENG ACS Catal. ACS Cent. Sci. ACS Chem. Biol. ACS Chemical Health & Safety ACS Chem. Neurosci. ACS Comb. Sci. ACS Earth Space Chem. ACS Energy Lett. ACS Infect. Dis. ACS Macro Lett. ACS Mater. Lett. ACS Med. Chem. Lett. ACS Nano ACS Omega ACS Photonics ACS Sens. ACS Sustainable Chem. Eng. ACS Synth. Biol. Anal. Chem. BIOCHEMISTRY-US Bioconjugate Chem. BIOMACROMOLECULES Chem. Res. Toxicol. Chem. Rev. Chem. Mater. CRYST GROWTH DES ENERG FUEL Environ. Sci. Technol. Environ. Sci. Technol. Lett. Eur. J. Inorg. Chem. IND ENG CHEM RES Inorg. Chem. J. Agric. Food. Chem. J. Chem. Eng. Data J. Chem. Educ. J. Chem. Inf. Model. J. Chem. Theory Comput. J. Med. Chem. J. Nat. Prod. J PROTEOME RES J. Am. Chem. Soc. LANGMUIR MACROMOLECULES Mol. Pharmaceutics Nano Lett. Org. Lett. ORG PROCESS RES DEV ORGANOMETALLICS J. Org. Chem. J. Phys. Chem. J. Phys. Chem. A J. Phys. Chem. B J. Phys. Chem. C J. Phys. Chem. Lett. Analyst Anal. Methods Biomater. Sci. Catal. Sci. Technol. Chem. Commun. Chem. Soc. Rev. CHEM EDUC RES PRACT CRYSTENGCOMM Dalton Trans. Energy Environ. Sci. ENVIRON SCI-NANO ENVIRON SCI-PROC IMP ENVIRON SCI-WAT RES Faraday Discuss. Food Funct. Green Chem. Inorg. Chem. Front. Integr. Biol. J. Anal. At. Spectrom. J. Mater. Chem. A J. Mater. Chem. B J. Mater. Chem. C Lab Chip Mater. Chem. Front. Mater. Horiz. MEDCHEMCOMM Metallomics Mol. Biosyst. Mol. Syst. Des. Eng. Nanoscale Nanoscale Horiz. Nat. Prod. Rep. New J. Chem. Org. Biomol. Chem. Org. Chem. Front. PHOTOCH PHOTOBIO SCI PCCP Polym. Chem.
×
引用
GB/T 7714-2015
复制
MLA
复制
APA
复制
导出至
BibTeX EndNote RefMan NoteFirst NoteExpress
×
0
微信
客服QQ
Book学术公众号 扫码关注我们
反馈
×
意见反馈
请填写您的意见或建议
请填写您的手机或邮箱
×
提示
您的信息不完整,为了账户安全,请先补充。
现在去补充
×
提示
您因"违规操作"
具体请查看互助需知
我知道了
×
提示
现在去查看 取消
×
提示
确定
Book学术官方微信
Book学术文献互助
Book学术文献互助群
群 号:604180095
Book学术
文献互助 智能选刊 最新文献 互助须知 联系我们:info@booksci.cn
Book学术提供免费学术资源搜索服务,方便国内外学者检索中英文文献。致力于提供最便捷和优质的服务体验。
Copyright © 2023 Book学术 All rights reserved.
ghs 京公网安备 11010802042870号 京ICP备2023020795号-1