Pub Date: 2020-01-26. DOI: 10.2352/ISSN.2470-1173.2020.16.AVM-081
M. López-González
A primary goal of the auto industry is to revolutionize transportation with autonomous vehicles. Given the mammoth nature of such a target, success depends on a clearly defined balance between technological advances, machine learning algorithms, physical and network infrastructure, safety, standards and regulations, and end-user education. Unfortunately, technological advancement is outpacing the regulatory space, and competition is driving deployment. Moreover, hope is being built around algorithms that are far from reaching human-like capacities on the road. Since human behaviors, idiosyncrasies, and natural phenomena are not going anywhere anytime soon, and so-called edge cases are the roadway norm, the industry stands at a historic crossroads. Why? Because human factors such as cognitive and behavioral insights into how we think, feel, act, plan, make decisions, and problem-solve have been ignored. Human cognitive intelligence is foundational to driving the industry’s ambition forward. In this paper I discuss the role of the human in bridging the gaps between autonomous vehicle technology, design, implementation, and beyond.
Title: Regaining Sight of Humanity on The Roadway towards Automation (Autonomous Vehicles and Machines)
Pub Date: 2020-01-26. DOI: 10.2352/ISSN.2470-1173.2020.16.AVM-257
Michael Feller, Jae-Sang Hyun, Song Zhang
This paper describes the development of a low-cost, low-power, accurate sensor designed for precise feedback control of an autonomous vehicle to a hitch. The solution that has been developed uses an active stereo vision system, combining classical stereo vision with a low-cost, low-power laser speckle projection system, which solves the correspondence problem experienced by classic stereo vision sensors. A third camera is added to the sensor for texture mapping. A model test of the hitching problem was developed using an RC car and a target to represent a hitch. A control system is implemented to precisely control the vehicle to the hitch. The system can successfully control the vehicle from within 35° of perpendicular to the hitch, to a final position with an overall standard deviation of 3.0 mm of lateral error and 1.5° of angular error.
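The correspondence problem the abstract refers to can be illustrated with a minimal sum-of-squared-differences block-matching sketch (a hypothetical illustration, not the authors' implementation): for each patch in the left image, search along the same row of the right image for the best-matching patch. Projected laser speckle gives every patch distinctive texture, which is what makes this search unambiguous on otherwise featureless surfaces.

```python
import numpy as np

def block_match_disparity(left, right, patch=7, max_disp=16):
    """Estimate per-pixel disparity by SSD block matching along rows.

    Projected speckle texture (as in active stereo) makes patches
    distinctive, so the SSD minimum is unambiguous even on surfaces
    that would otherwise be textureless.
    """
    h, w = left.shape
    half = patch // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            ref = left[y - half:y + half + 1, x - half:x + half + 1]
            best_ssd, best_d = np.inf, 0
            for d in range(max_disp):
                cand = right[y - half:y + half + 1,
                             x - d - half:x - d + half + 1]
                ssd = np.sum((ref - cand) ** 2)
                if ssd < best_ssd:
                    best_ssd, best_d = ssd, d
            disp[y, x] = best_d
    return disp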
Title: Active Stereo Vision for Precise Autonomous Vehicle Control
Pub Date: 2019-01-13. DOI: 10.2352/issn.2470-1173.2019.15.avm-032
Christian Fuchs, D. Paulus
Title: Integration of advanced stereo obstacle detection with perspectively correct surround views
Pub Date: 2019-01-13. DOI: 10.2352/issn.2470-1173.2019.15.avm-054
M. López-González
The race to commercialize self-driving vehicles is in high gear. As carmakers and tech companies focus on creating cameras and sensors with more nuanced capabilities to achieve maximal effectiveness, efficiency, and safety, an interesting paradox has arisen: the human factor has been dismissed. If fleets of autonomous vehicles are to enter our roadways, they must overcome the challenges of scene perception and cognition and be able to understand and interact with us humans. This entails a capacity to deal with the spontaneous, rule-breaking, emotional, and improvisatory characteristics of our behaviors. Essentially, machine intelligence must integrate content identification with context understanding. Bridging the gap between engineering and cognitive science, I argue for the importance of translating insights from human perception and cognition to autonomous vehicle perception R&D.

Introduction

We are at a veritable turning point with autonomous vehicle perception technology. Machine intelligence is able to process enormous amounts of complex data simultaneously from cameras, lidar, and radar more accurately than ever before. From a bird’s-eye view, autonomous self-driving vehicles have sufficient technical components to be deployed on our roads. Taking stock of major auto companies’ predictions regarding the expected year of self-driving vehicle deployment, we can see in Figure 1 that deployment is around the corner, with the year 2020 showing significant promise [1], [2]. Furthermore, IEEE community members estimate that 75% of all vehicles on the road will be autonomous by 2040 [3].

Figure 1. Top ten major global auto companies’ predictions for the year of deployment of their autonomous self-driving vehicles on public roads. Data are by no means exhaustive. The word ‘predictions’ should be read with caution given the rapidly changing state of technologies and the complexity of the process from announcement to actuality.

The autonomy automakers forecast to produce as early as 2019 corresponds to SAE International levels 4 (high automation, whereby the vehicle can drive itself within a limited area and under certain conditions with minimal human input) and 5 (full automation, whereby the vehicle can drive itself in all roadway and environmental conditions without any human input) [4]. From a cognitive science perspective, these levels entail a sophistication in higher-level reasoning not yet possible for machine intelligence: a capacity to perceive an ever-changing environment with meaning and purpose, to make spontaneous, new predictions based on learned and hypothesized expectations, and to take consequential actions for a future outcome. Despite such capacities being vital for an autonomous machine to successfully maneuver from point A to point B amidst the dynamic and unpredictable environments common to roadways, Uber, for example, is back on Pittsburgh, Pennsylvania’s roadways to continue its on-road testing after halting operations in the wake of a fatality [5]. Moreover ...
Title: Today is to see and know: An argument and proposal for integrating human cognitive intelligence into autonomous vehicle perception
Pub Date: 2019-01-13. DOI: 10.2352/issn.2470-1173.2019.15.avm-046
O. Skorka, P. Kane, R. Ispasoiu
Title: Color correction for RGB sensors with dual-band filters for in-cabin imaging applications
Pub Date: 2019-01-13. DOI: 10.2352/issn.2470-1173.2019.15.avm-030
Uwe Artmann, M. Geese, Max Gäde
The automotive industry formed the initiative IEEE-P2020 to jointly work on key performance indicators (KPIs) that can be used to predict how well a camera system suits its use cases. A very fundamental application of cameras is to detect object contrasts for object recognition or stereo vision object matching. The most important KPI the group is working on is the contrast detection probability (CDP), a metric that describes the performance of components and systems and is independent from any assumptions about the camera model or other properties. While the theory behind CDP is already well established, we present actual measurement results and the implementation for camera tests. We also show how CDP can be used to improve low-light sensitivity and dynamic range measurements.

Introduction

The idea of Contrast Detection Probability (CDP) was first presented by Geese et al. [1] in 2018. It was derived from the need for a KPI that is independent of the system under test and of which components are tested, so that the same KPI can be applied to describe the performance of a windshield or a lens. As shown in the examples in Figure 1, the causes of contrast loss are manifold and not related only to the camera system itself. CDP was designed to describe the ability of a camera system to reproduce contrasts, the core functionality needed for advanced algorithms in machine vision.

Figure 1. Different aspects of imaging that can lead to contrast loss on the input side of a camera, e.g. fog in the scene or pollen dust on the windshield.

Another important new aspect in automotive imaging is high dynamic range (HDR) and its impact on system performance. As described in the IEEE-P2020 white paper [2] and shown in Figure 2, the HDR rendering process can lead to so-called SNR drops, an effect of combining e.g. the dark part of one image with the bright part of another. The resulting SNR curve, a plot of SNR vs. light intensity, shows drops somewhere between the minimum and maximum light intensity; an example is shown in Figure 3. The open question is how much impact this has on system performance. Even though SNR is a well-established metric, it is very hard to derive precise system-performance predictions from an SNR value. CDP offers this possibility, as it is directly related to system performance.
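The SNR drop described above can be reproduced with a toy shot-noise model (a sketch under assumed Poisson statistics and made-up exposure parameters, not the IEEE-P2020 measurement procedure): merging a long and a short exposure at the long exposure's saturation point makes the shot-noise-limited SNR fall by the square root of the exposure ratio.

```python
import numpy as np

def snr_curve(intensity, t_long=16.0, t_short=1.0, full_well=10_000.0):
    """SNR of a two-exposure HDR merge under a pure shot-noise model.

    The merge uses the long exposure until it saturates, then switches
    to the short one. Shot-noise SNR is sqrt(collected electrons), so at
    the switch point the SNR falls by sqrt(t_long / t_short) -- the
    "SNR drop" visible in HDR SNR-vs-intensity plots.
    """
    intensity = np.asarray(intensity, dtype=float)
    e_long = intensity * t_long      # electrons in the long exposure
    e_short = intensity * t_short    # electrons in the short exposure
    use_long = e_long < full_well    # long exposure not yet saturated
    electrons = np.where(use_long, e_long, e_short)
    return np.sqrt(electrons)        # shot-noise-limited SNR
```

With these assumed parameters the switch happens near an intensity of 625; just below it the merged SNR is sqrt(16) = 4 times higher than just above it, which is exactly the kind of drop an SNR-vs-intensity plot makes visible.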
Title: Contrast detection probability - Implementation and use cases
Pub Date: 2019-01-13. DOI: 10.2352/issn.2470-1173.2019.15.avm-033
Raghav Nagpal, Chaitanya Krishna Paturu, V. Ragavan, Navinprashath R R, R. Bhat, Dipanjan Ghosh
Title: Real-time traffic sign recognition using deep network for embedded platforms
Pub Date: 2019-01-13. DOI: 10.2352/issn.2470-1173.2019.15.avm-049
Christian Winkens, Florian Sattler, D. Paulus
Title: Deep dimension reduction for spatial-spectral road scene classification
Pub Date: 2019-01-13. DOI: 10.2352/issn.2470-1173.2019.15.avm-027
P. Kane
Title: Signal detection theory and automotive imaging