Pub Date: 2019-01-13 | DOI: 10.2352/issn.2470-1173.2019.15.avm-034
From stixels to asteroids: Towards a collision warning system using stereo vision
Willem P. Sanberg, Gijs Dubbelman, P. D. With
This paper explores the use of stixels in a probabilistic stereo vision-based collision-warning system that can be part of an ADAS for intelligent vehicles. In most current systems, collision warnings are based on radar or on monocular vision using pattern recognition (with ultrasound for park assist). Since detecting collisions is such a core functionality of intelligent vehicles, redundancy is key; we therefore explore the use of stereo vision for reliable collision prediction. Our algorithm consists of a Bayesian histogram filter that provides the probability of collision for multiple interception regions and angles towards the vehicle, and its output can additionally be fused with other sources of information in larger systems. The algorithm builds upon the disparity Stixel World that was developed for efficient automotive vision applications. Combined with image flow and uncertainty modeling, our system samples and propagates asteroids: dynamic particles that can be utilized for collision prediction. At its best setting, our stand-alone system detects all 31 simulated collisions while issuing 2 false warnings; the same setting generates 12 false warnings on the real-world data.
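To make the filtering step concrete, below is a minimal sketch of a Bayesian histogram filter over discretized interception-angle bins, assuming a per-frame predict/update cycle. The bin count, diffusion kernel, particle-based likelihood values, and warning threshold are all illustrative assumptions, not the authors' implementation.

```python
import numpy as np

N_BINS = 16  # discretized interception angles around the ego-vehicle (assumed)

def predict(belief, diffusion=0.1):
    """Prediction step: blur the belief slightly to model motion
    uncertainty accumulated between frames."""
    kernel = np.array([diffusion, 1.0 - 2.0 * diffusion, diffusion])
    padded = np.concatenate(([belief[0]], belief, [belief[-1]]))
    return np.convolve(padded, kernel, mode="valid")

def update(belief, likelihood):
    """Measurement step: weight each bin by the fraction of propagated
    'asteroid' particles predicted to intercept it, then renormalize."""
    posterior = belief * likelihood
    total = posterior.sum()
    return posterior / total if total > 0 else np.full(N_BINS, 1.0 / N_BINS)

# One frame: suppose particle propagation flags bins 6-8 as threatened.
belief = np.full(N_BINS, 1.0 / N_BINS)   # uniform prior over angle bins
likelihood = np.full(N_BINS, 0.05)       # background hit rate (made up)
likelihood[6:9] = [0.4, 0.8, 0.5]        # particle interception fractions (made up)
belief = update(predict(belief), likelihood)
if belief.max() > 0.3:                   # warning threshold (illustrative)
    print(f"collision warning: bin {belief.argmax()}, p = {belief.max():.2f}")
```

Running this over several frames with persistent likelihood mass in the same bins would sharpen the posterior there, which is the behavior a warning threshold exploits.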
{"title":"From stixels to asteroids: Towards a collision warning system using stereo vision","authors":"Willem P. Sanberg, Gijs Dubbelman, P. D. With","doi":"10.2352/issn.2470-1173.2019.15.avm-034","DOIUrl":"https://doi.org/10.2352/issn.2470-1173.2019.15.avm-034","url":null,"abstract":"This paper explores the use of stixels in a probabilistic stereo vision-based collision-warning system that can be part of an ADAS for intelligent vehicles. In most current systems, collision warnings are based on radar or on monocular vision using pattern recognition (and ultra-sound for park assist). Since detecting collisions is such a core functionality of intelligent vehicles, redundancy is key. Therefore, we explore the use of stereo vision for reliable collision prediction. Our algorithm consists of a Bayesian histogram filter that provides the probability of collision for multiple interception regions and angles towards the vehicle. This could additionally be fused with other sources of information in larger systems. Our algorithm builds upon the disparity Stixel World that has been developed for efficient automotive vision applications. Combined with image flow and uncertainty modeling, our system samples and propagates asteroids, which are dynamic particles that can be utilized for collision prediction. At best, our independent system detects all 31 simulated collisions (2 false warnings), while this setting generates 12 false warnings on the real-world data.","PeriodicalId":177462,"journal":{"name":"Autonomous Vehicles and Machines","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-01-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134604530","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2019-01-13 | DOI: 10.2352/issn.2470-1173.2019.15.avm-041
Autonomous highway pilot using Bayesian networks and hidden Markov models
K. Pichler, S. Haindl, Daniel Reischl, M. Trinkl
{"title":"Autonomous highway pilot using Bayesian networks and hidden Markov models","authors":"K. Pichler, S. Haindl, Daniel Reischl, M. Trinkl","doi":"10.2352/issn.2470-1173.2019.15.avm-041","DOIUrl":"https://doi.org/10.2352/issn.2470-1173.2019.15.avm-041","url":null,"abstract":"","PeriodicalId":177462,"journal":{"name":"Autonomous Vehicles and Machines","volume":"60 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-01-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128026480","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2019-01-13 | DOI: 10.2352/issn.2470-1173.2019.15.avm-039
Pattern and frontier-based, efficient and effective exploration of autonomous mobile robots in unknown environments
H. Fujimoto, J. Morimoto, Takuya Hayashi, Junji Yamato, H. Ishii, J. Ohya, A. Takanishi
{"title":"Pattern and frontier-based, efficient and effective exploration of autonomous mobile robots in unknown environments","authors":"H. Fujimoto, J. Morimoto, Takuya Hayashi, Junji Yamato, H. Ishii, J. Ohya, A. Takanishi","doi":"10.2352/issn.2470-1173.2019.15.avm-039","DOIUrl":"https://doi.org/10.2352/issn.2470-1173.2019.15.avm-039","url":null,"abstract":"","PeriodicalId":177462,"journal":{"name":"Autonomous Vehicles and Machines","volume":"107 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-01-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123835287","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2019-01-13 | DOI: 10.2352/issn.2470-1173.2019.15.avm-045
Learning based demosaicing and color correction for RGB-IR patterned image sensors
Navinprashath R R, R. Bhat
{"title":"Learning based demosaicing and color correction for RGB-IR patterned image sensors","authors":"Navinprashath R R, R. Bhat","doi":"10.2352/issn.2470-1173.2019.15.avm-045","DOIUrl":"https://doi.org/10.2352/issn.2470-1173.2019.15.avm-045","url":null,"abstract":"","PeriodicalId":177462,"journal":{"name":"Autonomous Vehicles and Machines","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-01-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123802993","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2019-01-13 | DOI: 10.2352/issn.2470-1173.2019.15.avm-053
A system for generating complex physically accurate sensor images for automotive applications
Zhenyi Liu, Minghao Shen, Jiaqi Zhang, Shuangting Liu, H. Blasinski, Trisha Lian, B. Wandell
We describe an open-source simulator that creates sensor irradiance and sensor images of typical automotive scenes in urban settings. The purpose of the system is to support camera design and testing for automotive applications. The user can specify scene parameters (e.g., scene type, road type, traffic density, time of day) to assemble a large number of random scenes from graphics assets stored in a database. The sensor irradiance is generated using quantitative computer graphics methods, and the sensor images are created using image systems sensor simulation. The synthetic sensor images have pixel-level annotations; hence, they can be used to train and evaluate neural networks for imaging tasks, such as object detection and classification. The end-to-end simulation system supports quantitative assessment, from scene to camera to network accuracy, for automotive applications.
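As an illustration of the scene-specification idea, the sketch below samples random scene descriptions from a small parameter space. The parameter names and values are assumptions mirroring the abstract's examples; the simulator's actual interface may differ.

```python
import random

# Illustrative scene-parameter space mirroring the examples in the abstract
# (scene type, road type, traffic density, time of day); values are assumed.
SCENE_SPACE = {
    "scene_type":      ["city", "suburb"],
    "road_type":       ["straight", "curve", "intersection"],
    "traffic_density": ["low", "medium", "high"],
    "time_of_day":     [9, 12, 16, 19],  # hour of day, drives the sky model
}

def sample_scene_specs(n, seed=0):
    """Draw n random scene specifications. Each spec would be handed to the
    renderer to produce sensor irradiance, then to the sensor simulation to
    produce an annotated sensor image."""
    rng = random.Random(seed)
    return [{k: rng.choice(v) for k, v in SCENE_SPACE.items()} for _ in range(n)]

for spec in sample_scene_specs(3):
    print(spec)
```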
{"title":"A system for generating complex physically accurate sensor images for automotive applications","authors":"Zhenyi Liu, Minghao Shen, Jiaqi Zhang, Shuangting Liu, H. Blasinski, Trisha Lian, B. Wandell","doi":"10.2352/issn.2470-1173.2019.15.avm-053","DOIUrl":"https://doi.org/10.2352/issn.2470-1173.2019.15.avm-053","url":null,"abstract":"We describe an open-source simulator that creates sensor irradiance and sensor images of typical automotive scenes in urban settings. The purpose of the system is to support camera design and testing for automotive applications. The user can specify scene parameters (e.g., scene type, road type, traffic density, time of day) to assemble a large number of random scenes from graphics assets stored in a database. The sensor irradiance is generated using quantitative computer graphics methods, and the sensor images are created using image systems sensor simulation. The synthetic sensor images have pixel level annotations; hence, they can be used to train and evaluate neural networks for imaging tasks, such as object detection and classification. The end-to-end simulation system supports quantitative assessment, from scene to camera to network accuracy, for automotive applications.","PeriodicalId":177462,"journal":{"name":"Autonomous Vehicles and Machines","volume":"19 5","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-01-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114007766","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2019-01-13 | DOI: 10.2352/issn.2470-1173.2019.15.avm-044
Optimization of ISP parameters for object detection algorithms
Lucie Yahiaoui, Ciarán Hughes, J. Horgan, B. Deegan, Patrick Denny, S. Yogamani
{"title":"Optimization of ISP parameters for object detection algorithms","authors":"Lucie Yahiaoui, Ciarán Hughes, J. Horgan, B. Deegan, Patrick Denny, S. Yogamani","doi":"10.2352/issn.2470-1173.2019.15.avm-044","DOIUrl":"https://doi.org/10.2352/issn.2470-1173.2019.15.avm-044","url":null,"abstract":"","PeriodicalId":177462,"journal":{"name":"Autonomous Vehicles and Machines","volume":"51 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-01-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125228766","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2019-01-13 | DOI: 10.2352/issn.2470-1173.2019.15.avm-042
DriveSpace: Towards context-aware drivable area detection
Ciarán Hughes, Sunil Chandra, Ganesh Sistu, J. Horgan, B. Deegan, Sumanth Chennupati, S. Yogamani
{"title":"DriveSpace: Towards context-aware drivable area detection","authors":"Ciarán Hughes, Sunil Chandra, Ganesh Sistu, J. Horgan, B. Deegan, Sumanth Chennupati, S. Yogamani","doi":"10.2352/issn.2470-1173.2019.15.avm-042","DOIUrl":"https://doi.org/10.2352/issn.2470-1173.2019.15.avm-042","url":null,"abstract":"","PeriodicalId":177462,"journal":{"name":"Autonomous Vehicles and Machines","volume":"13 1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-01-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124186748","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2019-01-13 | DOI: 10.2352/issn.2470-1173.2019.15.avm-031
Automatic shadow detection using hyperspectral data for terrain classification
Christian Winkens, Veronika Adams, D. Paulus
{"title":"Automatic shadow detection using hyperspectral data for terrain classification","authors":"Christian Winkens, Veronika Adams, D. Paulus","doi":"10.2352/issn.2470-1173.2019.15.avm-031","DOIUrl":"https://doi.org/10.2352/issn.2470-1173.2019.15.avm-031","url":null,"abstract":"","PeriodicalId":177462,"journal":{"name":"Autonomous Vehicles and Machines","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-01-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127981510","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2019-01-13 | DOI: 10.2352/issn.2470-1173.2019.15.avm-035
An autonomous drone surveillance and tracking architecture
Eren Unlu, Emmanuel Zenou, N. Rivière, P. Dupouy
In this work, we present a computer vision and machine learning-backed autonomous drone surveillance system for protecting critical locations. The system is composed of a wide-angle, high-resolution daylight camera and a relatively narrow-angle thermal camera mounted on a rotating turret. The wide-angle daylight camera allows the detection of flying intruders as small as 20 pixels with a very low false-alarm rate. The primary detection is based on the YOLO convolutional neural network (CNN) rather than conventional background-subtraction algorithms, due to its lower false-alarm rate. Detected flying objects are then tracked by the rotating turret and classified by the narrow-angle, zoomed thermal camera, whose classification algorithm is also based on CNNs. The training of the algorithms is performed on artificial and augmented datasets, owing to the scarcity of infrared videos of drones.
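The detect, aim, classify loop described above can be summarized by the skeleton below. Every helper is a placeholder standing in for the YOLO detector, the turret controller, and the thermal CNN; none of it is the authors' code, and the linear pixel-to-angle mapping is an assumed simplification.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    cx: float  # bounding-box center, normalized image coordinates [0, 1]
    cy: float

def yolo_detect(frame):
    """Placeholder for the wide-angle YOLO detector."""
    return [Detection(0.7, 0.4)]  # pretend one small flying object was found

def aim_turret(det, fov_h=90.0, fov_v=60.0):
    """Map a normalized image position to pan/tilt angles (rough linear
    approximation; the real mapping depends on the camera geometry)."""
    pan = (det.cx - 0.5) * fov_h
    tilt = (0.5 - det.cy) * fov_v
    return pan, tilt

def thermal_classify(pan, tilt):
    """Placeholder for the narrow-angle, zoomed thermal CNN classifier."""
    return "drone", 0.91

frame = None  # a daylight-camera frame would go here
for det in yolo_detect(frame):
    pan, tilt = aim_turret(det)
    label, conf = thermal_classify(pan, tilt)
    print(f"target at pan={pan:.1f} deg, tilt={tilt:.1f} deg: {label} ({conf:.2f})")
```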
{"title":"An autonomous drone surveillance and tracking architecture","authors":"Eren Unlu, Emmaneul Zenou, N. Rivière, P. Dupouy","doi":"10.2352/issn.2470-1173.2019.15.avm-035","DOIUrl":"https://doi.org/10.2352/issn.2470-1173.2019.15.avm-035","url":null,"abstract":"In this work, we present a computer vision and machine learning backed autonomous drone surveillance system, in order to protect critical locations. The system is composed of a wide angle, high resolution daylight camera and a relatively narrow angle thermal camera mounted on a rotating turret. The wide angle daylight camera allows the detection of flying intruders, as small as 20 pixels with a very low false alarm rate. The primary detection is based on YOLO convolutional neural network (CNN) rather than conventional background subtraction algorithms due its low false alarm rate performance. At the same time, the tracked flying objects are tracked by the rotating turret and classified by the narrow angle, zoomed thermal camera, where classification algorithm is also based on CNNs. The train-ing of the algorithms is performed by artificial and augmented datasets due to scarcity of infrared videos of drones.","PeriodicalId":177462,"journal":{"name":"Autonomous Vehicles and Machines","volume":"197 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-01-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133274982","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2019-01-13 | DOI: 10.2352/issn.2470-1173.2019.15.avm-040
Autonomous navigation using localization priors, sensor fusion, and terrain classification
Zachariah Carmichael, Benjamin Glasstone, Frank Cwitkowitz, Kenneth Alexopoulos, R. Relyea, R. Ptucha
{"title":"Autonomous navigation using localization priors, sensor fusion, and terrain classification","authors":"Zachariah Carmichael, Benjamin Glasstone, Frank Cwitkowitz, Kenneth Alexopoulos, R. Relyea, R. Ptucha","doi":"10.2352/issn.2470-1173.2019.15.avm-040","DOIUrl":"https://doi.org/10.2352/issn.2470-1173.2019.15.avm-040","url":null,"abstract":"","PeriodicalId":177462,"journal":{"name":"Autonomous Vehicles and Machines","volume":"78 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-01-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116221967","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}