
Autonomous Vehicles and Machines: Latest Publications

Unify The View of Camera Mesh Network to a Common Coordinate System
Pub Date: 2021-01-18 DOI: 10.2352/issn.2470-1173.2021.17.avm-175
Haney W. Williams, S. Simske, Fr. Gregory Bishay
The demand for object tracking (OT) applications has been increasing for the past few decades in many areas of interest, including security, surveillance, intelligence gathering, and reconnaissance. Lately, newly defined requirements for unmanned vehicles have heightened interest in OT. Advancements in machine learning, data analytics, and AI/deep learning have facilitated improved recognition and tracking of objects of interest; however, continuous tracking is currently a problem of interest in many research projects [1]. In our past research, we proposed a system that implements the means to continuously track an object and predict its trajectory based on its previous pathway, even when the object is partially or fully concealed for a period of time. The second phase of this system proposed developing a common knowledge among a mesh of fixed cameras, akin to a real-time panorama. This paper discusses the method for coordinating the cameras' views to a common frame of reference, so that the object location is known by all participants in the network.
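The paper's implementation is not reproduced here. As a minimal numpy sketch of the underlying idea (expressing each camera's local observation in one shared world frame via known extrinsics), the following might serve; the poses, the target point, and all names are illustrative assumptions:

```python
import numpy as np

def pose(yaw_deg, t):
    """Camera-to-world homogeneous transform: yaw about the vertical axis plus translation."""
    th = np.deg2rad(yaw_deg)
    T = np.eye(4)
    T[:3, :3] = [[np.cos(th), -np.sin(th), 0],
                 [np.sin(th),  np.cos(th), 0],
                 [0,           0,          1]]
    T[:3, 3] = t
    return T

def to_world(T_cam2world, p_cam):
    """Express a point from a camera's local frame in the common world frame."""
    return (T_cam2world @ np.append(p_cam, 1.0))[:3]

def to_cam(T_cam2world, p_world):
    """Inverse mapping, used here only to synthesize per-camera observations."""
    return (np.linalg.inv(T_cam2world) @ np.append(p_world, 1.0))[:3]

# Two fixed cameras with different (assumed) mounting poses observe one object.
T_a, T_b = pose(0.0, [0.0, 0.0, 3.0]), pose(90.0, [10.0, 0.0, 3.0])
target = np.array([4.0, 2.0, 0.0])                   # true object location
obs_a, obs_b = to_cam(T_a, target), to_cam(T_b, target)

# Each camera maps its own observation into the shared frame; both agree.
print(np.round(to_world(T_a, obs_a), 6))   # [4. 2. 0.]
print(np.round(to_world(T_b, obs_b), 6))   # [4. 2. 0.]
```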
Citations: 0
End-to-End Imaging System Optimization for Computer Vision in Driving Automation
Pub Date: 2021-01-18 DOI: 10.2352/issn.2470-1173.2021.17.avm-173
Korbinian Weikl, Damien Schroeder, Daniel Blau, Zhenyi Liu, W. Stechele
Full driving automation imposes performance requirements on camera and computer vision systems that are unmet to date, in order to replace the visual system of a human driver in any conditions. So far, the individual components of an automotive camera have mostly been optimized independently, or without taking into account the effect on the computer vision applications. We propose an end-to-end optimization of the imaging system in software, from the generation of radiometric input data, through physically based camera component models, to the output of a computer vision system. Specifically, we present an optimization framework which extends the ISETCam and ISET3d toolboxes to create synthetic spectral data of high dynamic range, and which models a state-of-the-art automotive camera in more detail. It includes a state-of-the-art object detection system as a benchmark application. We highlight in which way the framework approximates the physical image formation process. As a result, we provide guidelines for optimization experiments involving modification of the model parameters, and show how these apply to a first experiment on high dynamic range imaging.
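ISETCam and ISET3d are MATLAB toolboxes, and the paper's framework is not shown here. Purely as a loose sketch of the end-to-end idea, in which synthetic scene radiance is pushed through a parameterized camera model and a downstream task metric (rather than perceptual quality) selects the camera parameters, one might write the following; the toy camera model, task score, and all numbers are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def camera_model(radiance, exposure, read_noise_sd, full_well=1e4):
    """Toy camera: scale by exposure, add Poisson shot noise and Gaussian read noise, clip."""
    electrons = rng.poisson(np.clip(radiance * exposure, 0, full_well))
    electrons = electrons + rng.normal(0, read_noise_sd, radiance.shape)
    return np.clip(electrons / full_well, 0.0, 1.0)

def task_score(image, target_mask):
    """Stand-in for a detector benchmark: contrast between object and background."""
    return image[target_mask].mean() - image[~target_mask].mean()

# Synthetic HDR scene: a dim object patch on a bright background.
scene = np.full((64, 64), 5e3)
mask = np.zeros_like(scene, bool)
mask[20:40, 20:40] = True
scene[mask] = 50.0

# Sweep one camera parameter and keep the setting the downstream task prefers.
exposures = [0.05, 0.2, 0.8, 3.0]
scores = [abs(task_score(camera_model(scene, e, read_noise_sd=20), mask)) for e in exposures]
print("best exposure:", exposures[int(np.argmax(scores))])
```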
Citations: 1
Boosting computer vision performance by enhancing camera ISP
Pub Date: 2021-01-18 DOI: 10.2352/issn.2470-1173.2021.17.avm-174
P. V. Beek, Chyuan-Tyng Wu, B. Chaudhury, T. Gardos
Traditional image signal processors (ISPs) are primarily designed and optimized to improve the image quality perceived by humans. However, optimal perceptual image quality does not always translate into optimal performance for computer vision applications. In [1], Wu et al. proposed a set of methods, termed VisionISP, to enhance and optimize the ISP for computer vision purposes. The blocks in VisionISP are simple, content-aware, and trainable using existing machine learning methods. VisionISP significantly reduces the data transmission and power consumption requirements by reducing image bit-depth and resolution, while mitigating the loss of relevant information. In this paper, we show that VisionISP boosts the performance of subsequent computer vision algorithms in the context of multiple tasks, including object detection, face recognition, and stereo disparity estimation. The results demonstrate the benefits of VisionISP for a variety of computer vision applications, CNN model sizes, and benchmark datasets.
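As a rough sketch of the data-reduction idea described here (apply a non-linear tone curve before quantizing to a lower bit-depth, then downsample), the following might serve; the fixed gamma curve stands in for VisionISP's trainable blocks and is an assumption, not the paper's method:

```python
import numpy as np

def reduce_for_vision(raw12, out_bits=8, scale=2, gamma=0.45):
    """Compress a 12-bit raw frame: tone-map, quantize to fewer bits, downsample.

    A non-linear curve applied before quantization spends the remaining
    code values where scene contrast matters, instead of crushing shadows.
    """
    x = raw12.astype(np.float64) / 4095.0           # normalize 12-bit input
    x = x ** gamma                                   # simple tone curve (assumed, not learned)
    levels = 2 ** out_bits - 1
    q = np.round(x * levels)                         # bit-depth reduction
    h, w = q.shape
    q = q[:h - h % scale, :w - w % scale]
    q = q.reshape(h // scale, scale, w // scale, scale).mean(axis=(1, 3))  # 2x2 average
    return q.astype(np.uint8)

frame = np.random.default_rng(1).integers(0, 4096, (480, 640), dtype=np.uint16)
small = reduce_for_vision(frame)
print(frame.nbytes, "->", small.nbytes, "bytes")     # 8x less data to transmit
```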
Citations: 2
GG-Net: Gaze Guided Network for Self-driving Cars
Pub Date: 2021-01-18 DOI: 10.2352/issn.2470-1173.2021.17.avm-171
M. Abdelkarim, M. Abbas, Alaa Osama, Dalia Anwar, Mostafa Azzam, M. Abdelalim, H. Mostafa, Samah El-Tantawy, Ibrahim Sobh
Imitation learning is used extensively in autonomous driving to train networks that predict steering commands from frames, using annotated data collected by an expert driver. The assumption that frames taken from a front-facing camera completely mimic the driver's eyes raises the question of how the eyes, and the attention mechanisms of the complex human vision system, perceive the scene. This paper proposes incorporating eye-gaze information together with the frames into an end-to-end deep neural network for the lane-following task. The proposed novel architecture, GG-Net, is composed of a spatial transformer network (STN) and a multitask network that predicts the steering angle as well as the gaze map for the input frame. Experimental results for this architecture show a 36% improvement in steering angle prediction accuracy over the baseline, with an inference time of 0.015 seconds per frame (66 fps) on an NVIDIA K80 GPU, enabling the proposed model to operate in real time. We argue that incorporating gaze maps enhances the model's generalization capability to unseen environments. Additionally, a novel course-to-steering-angle conversion algorithm with an accompanying mathematical proof is proposed.
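GG-Net itself is not released with this abstract, so the PyTorch sketch below only illustrates the multitask shape of the idea: one shared encoder with a steering-regression head and a gaze-map head, trained under a weighted joint loss. The STN is omitted, and the layer sizes, loss weight, and names are all assumptions:

```python
import torch
import torch.nn as nn

class TinyGazeGuidedNet(nn.Module):
    """Shared encoder with two heads: steering angle (regression) and gaze map (logits)."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
        )
        self.steer_head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 1))
        self.gaze_head = nn.Conv2d(32, 1, 1)   # per-location gaze logits

    def forward(self, x):
        f = self.encoder(x)
        return self.steer_head(f).squeeze(1), self.gaze_head(f)

net = TinyGazeGuidedNet()
frames = torch.randn(4, 3, 64, 64)       # a batch of camera frames
steer_gt = torch.randn(4)                 # expert steering angles
gaze_gt = torch.rand(4, 1, 16, 16)        # downsampled gaze maps in [0, 1]

steer, gaze = net(frames)
# Weighted multitask objective; the 0.5 weighting is an arbitrary assumption.
loss = nn.functional.mse_loss(steer, steer_gt) \
     + 0.5 * nn.functional.binary_cross_entropy_with_logits(gaze, gaze_gt)
loss.backward()
print(float(loss))
```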
Citations: 1
Simulating tests to test simulation
Pub Date: 2020-01-26 DOI: 10.2352/ISSN.2470-1173.2020.16.AVM-148
Patrick Mueller, M. Lehmann, Alexander Braun
Simulation is an established tool for developing and validating camera systems. The goal of autonomous driving is pushing simulation into a more important and fundamental role for safety, validation, and coverage of billions of miles. Realistic camera models are moving more and more into focus: simulations need to be more than photo-realistic, they need to be physically realistic, representing the actual camera system onboard the self-driving vehicle in all relevant physical aspects; this is true not only for cameras, but also for radar and lidar. But as camera simulations become more and more realistic, how is this realism tested? Actual, physical camera samples are tested in laboratories following norms like ISO 12233, EMVA 1288, or the developing IEEE P2020, with test charts like dead leaves, slanted-edge, or OECF charts. In this article we propose to validate the realism of camera simulations by simulating the physical test bench setup, and then comparing the synthetic simulation result with physical results from the real-world test bench using the established normative metrics and KPIs. While this procedure is used sporadically in industrial settings, we are not aware of a rigorous presentation of these ideas in the context of realistic camera models for autonomous driving. After describing the process, we give concrete examples for several different measurement setups using MTF and SFR, and show how these can be used to characterize the quality of different camera models.
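As a hedged illustration of measuring a simulated camera with a bench metric, the sketch below blurs a synthetic edge with a Gaussian "optic" and derives an MTF from the line spread function. This is a bare-bones edge-gradient MTF, not the full ISO 12233 slanted-edge SFR procedure, and all parameters are assumptions:

```python
import numpy as np

def mtf_from_edge(edge_profile):
    """MTF = |FFT| of the line spread function (derivative of the edge spread function)."""
    lsf = np.diff(edge_profile)
    lsf = lsf / lsf.sum()                      # normalize so MTF(0) = 1
    mtf = np.abs(np.fft.rfft(lsf, n=256))
    freqs = np.fft.rfftfreq(256, d=1.0)        # cycles per pixel
    return freqs, mtf

# Ideal step edge, then the same edge through a simulated optic (Gaussian blur).
x = np.arange(-64, 64)
ideal = (x >= 0).astype(float)
psf = np.exp(-0.5 * (x / 1.5) ** 2)
psf /= psf.sum()
blurred = np.convolve(ideal, psf, mode="same")

f, mtf_sim = mtf_from_edge(blurred)
# MTF50: spatial frequency where contrast drops to half; a bench-comparable KPI
# that can be checked against the physical test-bench measurement.
mtf50 = f[np.argmax(mtf_sim < 0.5)]
print(f"simulated MTF50 = {mtf50:.3f} cycles/pixel")
```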
Citations: 0
Let The Sunshine in: Sun Glare Detection on Automotive Surround-view Cameras
Pub Date: 2020-01-26 DOI: 10.2352/ISSN.2470-1173.2020.16.AVM-079
Lucie Yahiaoui, Michal Uřičář, Arindam Das, S. Yogamani
Sun glare is a commonly encountered problem in both manual and automated driving. It causes over-exposure in the image and significantly impacts visual perception algorithms. For higher levels of automated driving, it is essential for the system to understand that there is sun glare, which can cause system degradation. The literature on detecting sun glare for automated driving is very limited, and is primarily based on finding saturated brightness areas and extracting regions via image-processing heuristics. From the perspective of a safety system, a highly robust algorithm is necessary. Thus we designed two complementary algorithms: one using classical image processing techniques, and one using a CNN which can learn global context. We also discuss how a sun glare detection algorithm can efficiently fit into a typical automated driving system. As there is no public dataset, we created our own and will release it publicly via the WoodScape project [1] to encourage further research in this area.
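The paper's two algorithms are not reproduced in the abstract. The classical branch it describes (find saturated-brightness regions and extract candidate blobs with image-processing heuristics) can be sketched with OpenCV roughly as follows; the threshold and minimum area are illustrative assumptions:

```python
import cv2
import numpy as np

def glare_candidates(bgr, sat_thresh=250, min_area=500):
    """Heuristic sun-glare detector: saturated-brightness regions, cleaned up morphologically."""
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, sat_thresh, 255, cv2.THRESH_BINARY)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (9, 9))
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)   # merge fragmented saturation
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    # Keep only large connected components; tiny specular highlights are ignored.
    return [stats[i, :4] for i in range(1, n) if stats[i, cv2.CC_STAT_AREA] >= min_area]

# Synthetic frame with one blown-out disc standing in for the sun.
frame = np.full((480, 640, 3), 60, np.uint8)
cv2.circle(frame, (320, 120), 40, (255, 255, 255), -1)
print(glare_candidates(frame))   # one (x, y, w, h) box around the saturated disc
```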
Citations: 15
Describing and Sampling the LED Flicker Signal
Pub Date: 2020-01-26 DOI: 10.2352/issn.2470-1173.2020.16.avm-038
Robert C. Sumner
High-frequency flickering light sources such as pulse-width-modulated LEDs can cause image sensors to record incorrect levels. We describe a model with a loose set of assumptions (encompassing multi-exposure HDR schemes) which can be used to define the Flicker Signal, a continuous function of time based on the phase relationship between the light source and the exposure window. Analysis of the shape of this signal yields a characterization of the camera's response to a flickering light source (typically seen as an undesirable susceptibility) under a given set of parameters. Flicker Signal calculations are made on discrete samplings measured from image data. Sampling the signal is difficult, however, because it is a function of many parameters, including properties of the light source (frequency, duty cycle, intensity) and properties of the imaging system (exposure scheme, frame rate, row readout time). Moreover, there are degenerate scenarios where sufficient sampling is difficult to obtain. We present a computational approach for determining the evidence (region of interest, duration of test video) necessary to get coverage of this signal sufficient for characterization from a practical test lab setup.
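As a numerical illustration of the quantity defined here (the recorded level as a function of the phase between a PWM source and the exposure window), consider the following sketch. The waveform parameters are arbitrary assumptions, and this is a stand-in rather than the paper's model:

```python
import numpy as np

def flicker_signal(freq_hz, duty, exposure_s, phases, n=100_000):
    """Fraction of mean brightness a pixel records, per phase offset.

    The sensor integrates the PWM waveform over the exposure window, so the
    result depends on where the window lands relative to the LED period.
    """
    period = 1.0 / freq_hz
    t = np.linspace(0.0, exposure_s, n, endpoint=False)
    out = []
    for phi in phases:
        on = ((t + phi * period) % period) < duty * period   # LED on/off over the window
        out.append(on.mean() / duty)                         # normalized recorded level
    return np.array(out)

# 90 Hz LED at 10% duty cycle with a 1 ms exposure: the recorded level swings
# between 0 (pulse missed entirely) and far above the mean (pulse captured whole),
# which is exactly the "incorrect levels" failure mode described above.
sig = flicker_signal(90.0, 0.10, 1e-3, phases=np.linspace(0, 1, 32))
print(sig.min(), sig.max())
```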
Citations: 0
Fast Prediction of Contrast Detection Probability
Pub Date: 2020-01-26 DOI: 10.2352/issn.2470-1173.2020.16.avm-040
R. Jenkin
Contrast detection probability (CDP) is proposed as an IEEE P2020 metric to predict camera performance intended for computer vision tasks for autonomous vehicles. Its calculation involves comparing combinations of pixel values between imaged patches. Computation of CDP for all meaningful combinations of m patches involves approximately (3/2)(m²-m)·n⁴ operations, where n is the length of one side of the patch in pixels. This work presents a method to estimate Weber-contrast-based CDP from individual patch statistics, reducing the computation to approximately 4n²m calculations. For 180 patches of 10×10 pixels this is a reduction of approximately 6500 times, and for 180 patches of 25×25 pixels, approximately 41000 times. The absolute error in the estimated CDP is less than 0.04 (or 5%) where the noise is well described by Gaussian statistics. Results for simulated patches are compared between the full calculation and the fast estimate. Basing the estimate of CDP on individual patch statistics, rather than on a pixel-to-pixel comparison, facilitates the prediction of CDP values from a physical model of exposure and camera conditions. This allows Weber CDP behavior to be investigated for a wide variety of conditions, and leads to the discovery that, for the case where contrast is increased by decreasing the tone value of one patch (thereby increasing noise as contrast increases), there exists a maximum which yields identical Weber CDP values for patches of different nominal contrast. This means Weber CDP predicts the same detection performance for patches of different contrast.
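The paper's analytic estimator is not reproduced here. As a loose stand-in for the idea (deriving Weber-contrast CDP from the four patch statistics alone, assuming Gaussian noise, instead of comparing every pixel pair), a Monte Carlo sketch might look like this; the tolerance band and all numbers are assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

def weber_cdp_estimate(mu_t, sd_t, mu_b, sd_b, tol=0.25, n=200_000):
    """Rough CDP stand-in from patch statistics alone (not the paper's estimator).

    Draw target/background pixel values from the fitted Gaussians and count how
    often the observed Weber contrast lands within +/-tol of the nominal value,
    avoiding any pixel-to-pixel comparison of the real patches.
    """
    nominal = (mu_t - mu_b) / mu_b
    t = rng.normal(mu_t, sd_t, n)
    b = rng.normal(mu_b, sd_b, n)
    observed = (t - b) / b
    return np.mean(np.abs(observed - nominal) <= tol * abs(nominal))

# Two 10x10 patches are summarized by four numbers; the estimate follows from those.
print(weber_cdp_estimate(mu_t=120.0, sd_t=8.0, mu_b=80.0, sd_b=8.0))
```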
Citations: 0
Automotive Image Quality Concepts for the next SAE levels: Color Separation and Contrast Detection Probability
Pub Date: 2020-01-26 DOI: 10.2352/issn.2470-1173.2020.16.avm-019
M. Geese
In this paper, we present an overview of automotive image quality challenges and link them to the physical properties of image acquisition. This process shows that detection-probability-based KPIs are a helpful tool for linking image quality to SAE-classified supported and automated driving tasks. We develop questions around the challenges of automotive image quality and show that color separation probability (CSP) and contrast detection probability (CDP) in particular are key enablers for improving the know-how and overview of the image quality optimization problem. Next we introduce a proposal for color separation probability as a new KPI, based on the random effects of photon shot noise and the properties of light spectra that cause color metamerism. This allows us to demonstrate the image quality influences related to color at different stages of the image generation pipeline. In the second part, we investigate the previously presented KPI, contrast detection probability, and show how it links to different metrics of automotive imaging such as HDR, low-light performance, and the detectivity of an object. In conclusion, this paper summarizes the standardization status within IEEE P2020 of these detection-probability-based KPIs and outlines the next steps for these work packages.
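CSP is newly proposed in this paper, so the sketch below is only a generic stand-in for the stated ingredients: Poisson photon shot noise on two colors' channel responses, and the probability that noisy samples are still assigned to the correct color. The channel means and the nearest-mean classifier are assumptions, not the paper's definition:

```python
import numpy as np

rng = np.random.default_rng(3)

def color_separation_probability(mean_a, mean_b, n=100_000):
    """Monte Carlo stand-in for a color separation probability under shot noise.

    Each trial draws Poisson (shot-noise) RGB photoelectron counts for both
    colors and classifies each sample to the nearer nominal color; the result
    is the fraction classified correctly.
    """
    a = rng.poisson(mean_a, (n, 3)).astype(float)
    b = rng.poisson(mean_b, (n, 3)).astype(float)
    ma, mb = np.asarray(mean_a, float), np.asarray(mean_b, float)

    def correct(samples, own, other):
        d_own = np.linalg.norm(samples - own, axis=1)
        d_other = np.linalg.norm(samples - other, axis=1)
        return (d_own < d_other).mean()

    return 0.5 * (correct(a, ma, mb) + correct(b, mb, ma))

# Two colors with similar mean photoelectron counts per RGB channel:
# shot noise alone limits how reliably they can be told apart.
print(color_separation_probability([900, 500, 200], [850, 520, 230]))
```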
Citations: 0
LiDAR-Camera Fusion for 3D Object Detection
Pub Date: 2020-01-26 DOI: 10.2352/ISSN.2470-1173.2020.16.AVM-255
D. Bhanushali, R. Relyea, Karan Manghi, Abhishek Vashist, C. Hochgraf, A. Ganguly, Andres Kwasinski, M. Kuhl, R. Ptucha
The performance of autonomous agents in both commercial and consumer applications increases along with their situational awareness. Tasks such as obstacle avoidance, agent-to-agent interaction, and path planning depend directly on their ability to convert sensor readings into scene understanding. Central to this is the ability to detect and recognize objects. Many object detection methodologies operate on a single modality, such as vision or LiDAR. Camera-based object detection models benefit from an abundance of feature-rich information for classifying different types of objects. LiDAR-based object detection models use sparse point clouds, where each point contains an accurate 3D position on an object surface. Camera-based methods lack accurate object-to-lens distance measurements, while LiDAR-based methods lack dense, feature-rich details. By utilizing information from both camera and LiDAR sensors, advanced object detection and identification is possible. In this work, we introduce a deep learning framework for fusing these modalities to produce a robust real-time 3D bounding-box object detection network. We demonstrate qualitative and quantitative analysis of the proposed fusion model on the popular KITTI dataset.
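The fusion network itself is not detailed in the abstract. The standard precursor step any camera-LiDAR fusion relies on (projecting LiDAR points through the extrinsics and intrinsics into pixel coordinates so both modalities share a reference) is sketched below; the calibration values are made-up placeholders, not KITTI's:

```python
import numpy as np

def project_lidar_to_image(points, T_cam_from_lidar, K, img_w, img_h):
    """Project 3D LiDAR points into pixel coordinates; drop points behind/off-frame."""
    pts_h = np.hstack([points, np.ones((len(points), 1))])      # homogeneous coordinates
    cam = (T_cam_from_lidar @ pts_h.T).T[:, :3]                 # LiDAR frame -> camera frame
    in_front = cam[:, 2] > 0.1                                  # keep points ahead of the lens
    uv = (K @ cam[in_front].T).T
    uv = uv[:, :2] / uv[:, 2:3]                                 # perspective divide
    on_img = (uv[:, 0] >= 0) & (uv[:, 0] < img_w) & (uv[:, 1] >= 0) & (uv[:, 1] < img_h)
    return uv[on_img], cam[in_front][on_img][:, 2]              # pixels and their depths

# Placeholder calibration: identity rotation, LiDAR mounted 1.5 m above the camera.
K = np.array([[700.0, 0.0, 320.0], [0.0, 700.0, 240.0], [0.0, 0.0, 1.0]])
T = np.eye(4)
T[1, 3] = 1.5
cloud = np.array([[2.0, 0.0, 10.0], [-1.0, 0.2, 5.0], [0.0, 0.0, -3.0]])
px, depth = project_lidar_to_image(cloud, T, K, 640, 480)
print(px, depth)   # the point behind the camera is discarded
```

Once each surviving pixel carries a depth, image features and LiDAR geometry can be associated per location, which is the input a fusion detector consumes.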
Citations: 4