Autonomous Vehicles and Machines: Latest Publications

Regaining Sight of Humanity on The Roadway towards Automation
Pub Date: 2020-01-26. DOI: 10.2352/ISSN.2470-1173.2020.16.AVM-081
M. López-González
A primary goal of the auto industry is to revolutionize transportation with autonomous vehicles. Given the mammoth nature of such a target, success depends on a clearly defined balance between technological advances, machine learning algorithms, physical and network infrastructure, safety, standards and regulations, and end-user education. Unfortunately, technological advancement is outpacing the regulatory space, and competition is driving deployment. Moreover, hope is being built around algorithms that are far from reaching human-like capacities on the road. Since human behaviors, idiosyncrasies, and natural phenomena are not going anywhere anytime soon, and so-called edge cases are the roadway norm, the industry stands at a historic crossroads. Why? Because human factors, such as cognitive and behavioral insights into how we think, feel, act, plan, make decisions, and solve problems, have been ignored. Human cognitive intelligence is foundational to driving the industry's ambition forward. In this paper I discuss the role of the human in bridging the gaps between autonomous vehicle technology, design, implementation, and beyond.
{"title":"Regaining Sight of Humanity on The Roadway towards Automation","authors":"M. López-González","doi":"10.2352/ISSN.2470-1173.2020.16.AVM-081","DOIUrl":"https://doi.org/10.2352/ISSN.2470-1173.2020.16.AVM-081","url":null,"abstract":"\u0000 A primary goal of the auto industry is to revolutionize transportation with autonomous vehicles. Given the mammoth nature of such a target, success depends on a clearly defined balance between technological advances, machine learning algorithms, physical and network infrastructure,\u0000 safety, standards and regulations, and end-user education. Unfortunately, technological advancement is outpacing the regulatory space and competition is driving deployment. Moreover, hope is being built around algorithms that are far from reaching human-like capacities on the road. Since human\u0000 behaviors and idiosyncrasies and natural phenomena are not going anywhere anytime soon and so-called edge cases are the roadway norm, the industry stands at a historic crossroads. Why? Because human factors such as cognitive and behavioral insights into how we think, feel, act, plan, make\u0000 decisions, and problem-solve have been ignored. Human cognitive intelligence is foundational to driving the industry’s ambition forward. In this paper I discuss the role of the human in bridging the gaps between autonomous vehicle technology, design, implementation, and beyond.\u0000","PeriodicalId":177462,"journal":{"name":"Autonomous Vehicles and Machines","volume":"48 5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-01-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131755466","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Active Stereo Vision for Precise Autonomous Vehicle Control
Pub Date: 2020-01-26. DOI: 10.2352/ISSN.2470-1173.2020.16.AVM-257
Michael Feller, Jae-Sang Hyun, Song Zhang
This paper describes the development of a low-cost, low-power, accurate sensor designed for precise feedback control of an autonomous vehicle to a hitch. The solution uses an active stereo vision system, combining classical stereo vision with a low-cost, low-power laser speckle projection system, which solves the correspondence problem experienced by classical stereo vision sensors. A third camera is added to the sensor for texture mapping. A model test of the hitching problem was developed using an RC car and a target representing a hitch. A control system is implemented to precisely control the vehicle to the hitch. The system can successfully control the vehicle, starting from within 35° of perpendicular to the hitch, to a final position with an overall standard deviation of 3.0 mm of lateral error and 1.5° of angular error.
{"title":"Active Stereo Vision for Precise Autonomous Vehicle Control","authors":"Michael Feller, Jae-Sang Hyun, Song Zhang","doi":"10.2352/ISSN.2470-1173.2020.16.AVM-257","DOIUrl":"https://doi.org/10.2352/ISSN.2470-1173.2020.16.AVM-257","url":null,"abstract":"\u0000 This paper describes the development of a low-cost, lowpower, accurate sensor designed for precise, feedback control of an autonomous vehicle to a hitch. The solution that has been developed uses an active stereo vision system, combining classical stereo vision with a low cost, low\u0000 power laser speckle projection system, which solves the correspondence problem experienced by classic stereo vision sensors. A third camera is added to the sensor for texture mapping. A model test of the hitching problem was developed using an RC car and a target to represent a hitch. A control\u0000 system is implemented to precisely control the vehicle to the hitch. The system can successfully control the vehicle from within 35° of perpendicular to the hitch, to a final position with an overall standard deviation of 3.0 m m of lateral error and 1.5° of angular error.\u0000","PeriodicalId":177462,"journal":{"name":"Autonomous Vehicles and Machines","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-01-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121749866","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
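To make the approach above concrete, here is a minimal Python sketch of the sensing-to-control chain the abstract describes: block matching on a speckle-projected stereo pair, depth from disparity, and a toy proportional steering law toward the hitch target. The OpenCV calls are standard, but the file names, calibration values, target location, and controller gain are hypothetical stand-ins, not the authors' implementation.

```python
# Minimal sketch of active-stereo depth plus a toy hitch controller.
# Assumptions: OpenCV and NumPy installed; 'left.png'/'right.png' are a
# rectified stereo pair of a speckle-projected scene (hypothetical files).
import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# The projected laser speckle gives every surface dense texture, so an
# off-the-shelf block matcher can find correspondences even on
# texture-poor targets -- the point of active stereo.
matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # SGBM is fixed-point x16

# Depth from disparity: Z = f * B / d (f: focal length in pixels,
# B: stereo baseline in meters). Calibration values are assumed.
f_px, baseline_m = 700.0, 0.10
valid = disparity > 0
depth = np.where(valid, f_px * baseline_m / np.maximum(disparity, 1e-6), 0.0)

def steering_command(target_col, target_depth_m, image_width, k_p=0.8):
    """Toy P-control: steer proportionally to the lateral offset (in meters)
    of the detected hitch target; target detection is assumed done elsewhere."""
    x_err_m = (target_col - image_width / 2.0) * target_depth_m / f_px
    return -k_p * x_err_m
```

The sketch only shows why speckle projection makes plain block matching viable and how a lateral error feeds a feedback law; the paper's system adds a third texture-mapping camera and a full vehicle controller.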
Integration of advanced stereo obstacle detection with perspectively correct surround views
Pub Date: 2019-01-13. DOI: 10.2352/issn.2470-1173.2019.15.avm-032
Christian Fuchs, D. Paulus
{"title":"Integration of advanced stereo obstacle detection with perspectively correct surround views","authors":"Christian Fuchs, D. Paulus","doi":"10.2352/issn.2470-1173.2019.15.avm-032","DOIUrl":"https://doi.org/10.2352/issn.2470-1173.2019.15.avm-032","url":null,"abstract":"","PeriodicalId":177462,"journal":{"name":"Autonomous Vehicles and Machines","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-01-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114438902","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Today is to see and know: An argument and proposal for integrating human cognitive intelligence into autonomous vehicle perception
Pub Date: 2019-01-13. DOI: 10.2352/issn.2470-1173.2019.15.avm-054
M. López-González
The race to commercialize self-driving vehicles is in high gear. As carmakers and tech companies focus on creating cameras and sensors with more nuanced capabilities to achieve maximal effectiveness, efficiency, and safety, an interesting paradox has arisen: the human factor has been dismissed. If fleets of autonomous vehicles are to enter our roadways, they must overcome the challenges of scene perception and cognition and be able to understand and interact with us humans. This entails a capacity to deal with the spontaneous, rule-breaking, emotional, and improvisatory characteristics of our behaviors. Essentially, machine intelligence must integrate content identification with context understanding. Bridging the gap between engineering and cognitive science, I argue for the importance of translating insights from human perception and cognition to autonomous vehicle perception R&D.

Introduction

We are at a veritable turning point with autonomous vehicle perception technology. Machine intelligence is able to process enormous amounts of complex data simultaneously from cameras, lidar, and radar more accurately than ever before. From a bird's-eye view, autonomous self-driving vehicles have sufficient technical components to be deployed on our roads. Taking stock of major auto companies' predictions regarding the expected year of self-driving vehicle deployment, we can see in Figure 1 that deployment is around the corner, with the year 2020 showing significant promise [1], [2]. Furthermore, IEEE community members estimate that 75% of all vehicles on the road will be autonomous by 2040 [3].

Figure 1. Top ten major global auto companies' predictions for the year of deployment of their autonomous self-driving vehicles on public roads. The data are by no means exhaustive, and the word 'predictions' should be read with caution given the rapidly changing state of technologies and the complexity of the process from announcement to actuality.

The autonomy automakers forecast to produce as early as 2019 corresponds to SAE International levels 4 (high automation, whereby the vehicle can drive itself within a limited area and under certain conditions with minimal human input) and 5 (full automation, whereby the vehicle can drive itself in all roadway and environmental conditions without any human input) [4]. From a cognitive science perspective, these levels entail a sophistication in higher-level reasoning not yet possible with machine intelligence: a capacity to perceive an ever-changing environment with meaning and purpose, to make spontaneous new predictions based on learned and hypothesized expectations, and to take consequential actions for a future outcome. Despite the importance of such capacities for an autonomous machine to successfully maneuver from point A to point B amidst the dynamic and unpredictable environments common to roadways, Uber, for example, is back on Pittsburgh, Pennsylvania's roadways to continue its on-road testing after halting operations in the wake of a fatality [5]. Moreover, in California, autonomous vehicle accidents are a very real concern. Figure 2 shows that from 2014 through the end of 2018, each company listed reported at least one accident between an autonomous vehicle and a conventional vehicle, some considerably more than others.

Figure 2. Companies reporting collisions between autonomous vehicles and conventional vehicles on California public roads, per year. Data publicly available from the State of California Department of Motor Vehicles. As of December 21, 2018, a total of 129 reports had been received, dated October 14, 2014 through December 11, 2018 [6].

While these reported accidents (129) are few over several years compared with the number of deaths caused by conventional motor vehicle traffic accidents on U.S. highways in 2017 (37,133) [7], they demand attention in the ensuing new era of human-machine interaction. The conventional vehicles and other unpredictable animate and inanimate elements common to the real world are far from leaving our roadways anytime soon.
{"title":"Today is to see and know: An argument and proposal for integrating human cognitive intelligence into autonomous vehicle perception","authors":"M. López-González","doi":"10.2352/issn.2470-1173.2019.15.avm-054","DOIUrl":"https://doi.org/10.2352/issn.2470-1173.2019.15.avm-054","url":null,"abstract":"The race to commercialize self-driving vehicles is in high gear. As carmakers and tech companies focus on creating cameras and sensors with more nuanced capabilities to achieve maximal effectiveness, efficiency, and safety, an interesting paradox has arisen: the human factor has been dismissed. If fleets of autonomous vehicles are to enter our roadways they must overcome the challenges of scene perception and cognition and be able to understand and interact with us humans. This entails a capacity to deal with the spontaneous, rule breaking, emotional, and improvisatory characteristics of our behaviors. Essentially, machine intelligence must integrate content identification with context understanding. Bridging the gap between engineering and cognitive science, I argue for the importance of translating insights from human perception and cognition to autonomous vehicle perception R&D. Introduction We are at a veritable turning point with autonomous vehicle perception technology. Machine intelligence is able to process enormous amounts of complex data simultaneously from cameras, lidar, and radar in a more accurate way than ever before. From a bird’s-eye view, autonomous self-driving vehicles have sufficient technical components to be deployed on our roads. Taking stock of major auto companies’ predictions regarding the expected year of self-driving vehicle deployment, we can see in Figure 1 that deployment is around the corner with year 2020 seeing significant promise [1], [2]. Furthermore, IEEE community members estimate 75% of all vehicles on the road will be autonomous by 2040 [3]. Figure 1. Top ten major global auto companies’ predictions for the year of deployment of their autonomous self-driving vehicles on public roads. Data are by no means exhaustive. The word ‘predictions’ should be read with caution given the rapidly changing state of technologies and the complexity of process from announcement to actuality. The autonomy automakers forecast to produce as early as 2019 are SAE International levels 4 (high automation whereby the vehicle can drive itself within a limited area and under certain conditions with minimal human input) and 5 (full automation whereby the vehicle can drive itself in all roadway and environmental conditions without any human input) [4]. From a cognitive science perspective, these levels entail a sophistication in higher level reasoning not yet possible by machine intelligence: a capacity to perceive an ever-changing environment with meaning and purpose, to make spontaneous, new predictions based on learned and hypothesized expectations, and to take consequential actions for a future outcome. Despite such vital capacities for an autonomous machine to successfully maneuver from point A to point B amidst dynamic and unpredictable environments common to roadways, Uber, for example, is back on Pittsburgh, Pennsylvania’s roadways to continue its on-road testing after halting operations in the wake of a fatality [5]. 
Moreove","PeriodicalId":177462,"journal":{"name":"Autonomous Vehicles and Machines","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-01-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122174267","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
Color correction for RGB sensors with dual-band filters for in-cabin imaging applications
Pub Date: 2019-01-13. DOI: 10.2352/issn.2470-1173.2019.15.avm-046
O. Skorka, P. Kane, R. Ispasoiu
{"title":"Color correction for RGB sensors with dual-band filters for in-cabin imaging applications","authors":"O. Skorka, P. Kane, R. Ispasoiu","doi":"10.2352/issn.2470-1173.2019.15.avm-046","DOIUrl":"https://doi.org/10.2352/issn.2470-1173.2019.15.avm-046","url":null,"abstract":"","PeriodicalId":177462,"journal":{"name":"Autonomous Vehicles and Machines","volume":"172 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-01-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133836909","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6
Contrast detection probability - Implementation and use cases
Pub Date: 2019-01-13. DOI: 10.2352/issn.2470-1173.2019.15.avm-030
Uwe Artmann, M. Geese, Max Gäde
The automotive industry formed the initiative IEEE-P2020 to jointly work on key performance indicators (KPIs) that can be used to predict how well a camera system suits its use cases. A very fundamental application of cameras is to detect object contrasts for object recognition or stereo vision object matching. The most important KPI the group is working on is the contrast detection probability (CDP), a metric that describes the performance of components and systems and is independent of any assumptions about the camera model or other properties. While the theory behind CDP is already well established, we present actual measurement results and the implementation for camera tests. We also show how CDP can be used to improve low-light sensitivity and dynamic range measurements.

Introduction

The idea of Contrast Detection Probability (CDP) was first presented by Geese et al. [1] in 2018. It was derived from the need for a KPI that is independent of the system under test and independent of which components are tested, so that the same KPI can be applied to describe the performance of a windshield or of a lens. As shown in the examples in Figure 1, the causes of contrast loss are manifold and are not related only to the camera system itself. CDP was designed to describe the performance of a camera system in reproducing contrasts, the core functionality needed for advanced algorithms in machine vision.

Figure 1. Examples of low object contrast: different aspects of imaging can lead to a contrast loss on the input side of a camera, here fog in the scene or pollen dust on a windshield.

Another important new aspect in automotive imaging is high dynamic range (HDR) and its impact on system performance. As described in the IEEE-P2020 white paper [2] and shown in Figure 2, the HDR rendering process can lead to so-called SNR drops. This is an effect of combining, for example, the dark part of one image with the bright part of another; the resulting SNR curve shows drops somewhere between the maximum and minimum light intensity. An example is shown in Figure 3, where the drop can be observed in the SNR curve, a plot of SNR versus light intensity. The open question is how much impact this has on system performance. Even though SNR is a well-established metric, it is very hard to derive precise system performance predictions from an SNR value. The CDP value offers this possibility, as it is directly related to system performance.
{"title":"Contrast detection probability - Implementation and use cases","authors":"Uwe Artmann, M. Geese, Max Gäde","doi":"10.2352/issn.2470-1173.2019.15.avm-030","DOIUrl":"https://doi.org/10.2352/issn.2470-1173.2019.15.avm-030","url":null,"abstract":"The automotive industry formed the initiative IEEE-P2020 to jointly work on key performance indicators (KPIs) that can be used to predict how well a camera system suits the use cases. A very fundamental application of cameras is to detect object contrasts for object recognition or stereo vision object matching. The most important KPI the group is working on is the contrast detection probability (CDP), a metric that describes the performance of components and systems and is independent from any assumptions about the camera model or other properties. While the theory behind CDP is already well established, we present actual measurement results and the implementation for camera tests. We also show how CDP can be used to improve low light sensitivity and dynamic range measurements. Introduction The idea of Contrast Detection Probability (CDP) was first presented by Geese et.al.[1] in 2018. It was derived from the need to have a KPI that is independent from the system under test and also independent from which components are tested. So the same KPI shall be applied to describe the performance of a windshield or a lens. As shown in the examples in Figure 1, the cause for loss of contrast can be manifold and is not only related to the camera system itself. CDP was designed to describe the performance of a camera system to reproduce contrasts, the core functionality needed for advanced algorithms in machine vision. EXAMPLES FOR LOW OBJECT CONTRAST Contrast reduction on the input – fog in the scene or dust on the windshield Figu e 1. Different aspects in imaging that can lead to a contrast loss on the input side of a camera. In these examples this is fog or pollen dust on a windshield. Another important new aspect in automotive imaging is High Dynamic Range and the impact on system performance. As described in the IEEE-P2020 white paper [2] and shown in Figure 2, the HDR rendering process can lead to so called SNR drops. This is an effect from combining e.g. the dark part of one image with the bright part of another. The resulting SNR curve will show drops somewhere between the maximum and minimum light intensity. An example is shown in figure 3. The SNR drop can be observed in the SNR curve, a plot of the SNR vs. the light intensity. The open question is, how much impact does this have on the system performance. Even though the SNR value is a well established metric, it is very hard to derive precise system performance predictions from the SNR value. The CDP value has this possibility, as it is directly related to the system performance.","PeriodicalId":177462,"journal":{"name":"Autonomous Vehicles and Machines","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-01-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116012775","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
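Because CDP is defined without reference to a particular camera model, a CDP-style figure can be estimated by straightforward Monte Carlo simulation. The Python sketch below uses one plausible formalization: the probability that a noisy Weber-like contrast measurement stays within a relative tolerance of the true contrast. The tolerance eps, the Gaussian noise model, and the contrast definition are illustrative assumptions, not the exact IEEE-P2020 definition.

```python
# Hedged Monte Carlo sketch of a CDP-style metric; eps, the noise model,
# and the Weber-like contrast definition are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)

def cdp(signal_a, signal_b, noise_sigma, eps=0.5, n_trials=100_000):
    """Fraction of noisy trials whose measured contrast stays within a
    relative tolerance eps of the true (noise-free) contrast."""
    true_c = abs(signal_a - signal_b) / max(signal_a, signal_b)
    a = signal_a + rng.normal(0.0, noise_sigma, n_trials)
    b = signal_b + rng.normal(0.0, noise_sigma, n_trials)
    measured_c = np.abs(a - b) / np.maximum(a, b)
    return float(np.mean(np.abs(measured_c - true_c) < eps * true_c))

# An SNR drop in an HDR transition region shows up directly as a lower CDP
# for the same object contrast:
print(cdp(100.0, 120.0, noise_sigma=2.0))   # high SNR: CDP near 1
print(cdp(100.0, 120.0, noise_sigma=10.0))  # inside an SNR drop: CDP falls
```

This also illustrates the point made in the abstract: two operating points with the same object contrast but different noise levels yield clearly different detection probabilities, information an SNR curve alone does not express as directly.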
Real-time traffic sign recognition using deep network for embedded platforms
Pub Date: 2019-01-13. DOI: 10.2352/issn.2470-1173.2019.15.avm-033
Raghav Nagpal, Chaitanya Krishna Paturu, V. Ragavan, Navinprashath R R, R. Bhat, Dipanjan Ghosh
{"title":"Real-time traffic sign recognition using deep network for embedded platforms","authors":"Raghav Nagpal, Chaitanya Krishna Paturu, V. Ragavan, Navinprashath R R, R. Bhat, Dipanjan Ghosh","doi":"10.2352/issn.2470-1173.2019.15.avm-033","DOIUrl":"https://doi.org/10.2352/issn.2470-1173.2019.15.avm-033","url":null,"abstract":"","PeriodicalId":177462,"journal":{"name":"Autonomous Vehicles and Machines","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-01-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132643012","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
Deep dimension reduction for spatial-spectral road scene classification
Pub Date: 2019-01-13. DOI: 10.2352/issn.2470-1173.2019.15.avm-049
Christian Winkens, Florian Sattler, D. Paulus
{"title":"Deep dimension reduction for spatial-spectral road scene classification","authors":"Christian Winkens, Florian Sattler, D. Paulus","doi":"10.2352/issn.2470-1173.2019.15.avm-049","DOIUrl":"https://doi.org/10.2352/issn.2470-1173.2019.15.avm-049","url":null,"abstract":"","PeriodicalId":177462,"journal":{"name":"Autonomous Vehicles and Machines","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-01-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128692678","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Image-based compression of LiDAR sensor data
Pub Date: 2019-01-13. DOI: 10.2352/issn.2470-1173.2019.15.avm-043
P. Beek
{"title":"Image-based compression of LiDAR sensor data","authors":"P. Beek","doi":"10.2352/issn.2470-1173.2019.15.avm-043","DOIUrl":"https://doi.org/10.2352/issn.2470-1173.2019.15.avm-043","url":null,"abstract":"","PeriodicalId":177462,"journal":{"name":"Autonomous Vehicles and Machines","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-01-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127032356","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
Signal detection theory and automotive imaging
Pub Date: 2019-01-13. DOI: 10.2352/issn.2470-1173.2019.15.avm-027
P. Kane
{"title":"Signal detection theory and automotive imaging","authors":"P. Kane","doi":"10.2352/issn.2470-1173.2019.15.avm-027","DOIUrl":"https://doi.org/10.2352/issn.2470-1173.2019.15.avm-027","url":null,"abstract":"","PeriodicalId":177462,"journal":{"name":"Autonomous Vehicles and Machines","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-01-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132995800","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2