
Autonomous Vehicles and Machines: Latest Publications

From stixels to asteroids: Towards a collision warning system using stereo vision
Pub Date: 2019-01-13 DOI: 10.2352/issn.2470-1173.2019.15.avm-034
Willem P. Sanberg, Gijs Dubbelman, P. D. With
This paper explores the use of stixels in a probabilistic stereo vision-based collision-warning system that can be part of an ADAS for intelligent vehicles. In most current systems, collision warnings are based on radar or on monocular vision using pattern recognition (and ultra-sound for park assist). Since detecting collisions is such a core functionality of intelligent vehicles, redundancy is key. Therefore, we explore the use of stereo vision for reliable collision prediction. Our algorithm consists of a Bayesian histogram filter that provides the probability of collision for multiple interception regions and angles towards the vehicle. This could additionally be fused with other sources of information in larger systems. Our algorithm builds upon the disparity Stixel World that has been developed for efficient automotive vision applications. Combined with image flow and uncertainty modeling, our system samples and propagates asteroids, which are dynamic particles that can be utilized for collision prediction. At best, our independent system detects all 31 simulated collisions (2 false warnings), while this setting generates 12 false warnings on the real-world data.
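To make the asteroid mechanism concrete, the following Python sketch propagates particle samples of an obstacle (position plus velocity with noise) and accumulates a per-angle-bin collision histogram. It is a minimal illustration only: the bin count, noise levels, collision radius, and the simple recursive blend standing in for the paper's Bayesian histogram update are all assumptions, not the authors' implementation.

```python
# Illustrative sketch (assumed parameters, not the authors' code): propagate
# "asteroid" particles and build a per-angle-bin collision probability histogram.
import numpy as np

N_BINS = 16        # interception-angle bins around the ego vehicle (assumed)
HORIZON = 2.0      # prediction horizon in seconds
DT = 0.1           # propagation time step
EGO_RADIUS = 1.5   # radius of the ego collision zone in metres (assumed)

rng = np.random.default_rng(0)

def propagate(particles, velocities):
    """Roll each particle forward and return the angle bin of its first entry
    into the ego collision zone within the horizon (particles that never
    enter are dropped)."""
    pos = particles.copy()
    first_bin = np.full(len(pos), -1, dtype=int)
    for _ in range(int(HORIZON / DT)):
        pos = pos + velocities * DT + rng.normal(0.0, 0.05, pos.shape)  # flow noise
        new_hits = (np.linalg.norm(pos, axis=1) < EGO_RADIUS) & (first_bin < 0)
        angles = np.arctan2(pos[new_hits, 1], pos[new_hits, 0])
        first_bin[new_hits] = np.floor((angles + np.pi) / (2 * np.pi) * N_BINS).astype(int) % N_BINS
    return first_bin[first_bin >= 0]

def update_histogram(prior, hit_bins, n_particles, decay=0.7):
    """Recursive blend of the previous histogram with this frame's per-bin
    collision evidence (a stand-in for the paper's Bayesian update)."""
    likelihood = np.bincount(hit_bins, minlength=N_BINS) / max(n_particles, 1)
    return decay * prior + (1.0 - decay) * likelihood

# One frame: stixel-derived obstacle samples approaching the ego vehicle at the origin.
particles = rng.normal([5.0, 0.0], [0.5, 0.5], size=(200, 2))    # positions in metres
velocities = rng.normal([-3.0, 0.0], [0.3, 0.3], size=(200, 2))  # velocities in m/s

prior = np.full(N_BINS, 1.0 / N_BINS)
posterior = update_histogram(prior, propagate(particles, velocities), len(particles))
print("max collision probability %.2f in angle bin %d" % (posterior.max(), posterior.argmax()))
```

A warning would be raised whenever any bin of the histogram exceeds a chosen probability threshold over several consecutive frames.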
Citations: 2
Autonomous highway pilot using Bayesian networks and hidden Markov models
Pub Date: 2019-01-13 DOI: 10.2352/issn.2470-1173.2019.15.avm-041
K. Pichler, S. Haindl, Daniel Reischl, M. Trinkl
{"title":"Autonomous highway pilot using Bayesian networks and hidden Markov models","authors":"K. Pichler, S. Haindl, Daniel Reischl, M. Trinkl","doi":"10.2352/issn.2470-1173.2019.15.avm-041","DOIUrl":"https://doi.org/10.2352/issn.2470-1173.2019.15.avm-041","url":null,"abstract":"","PeriodicalId":177462,"journal":{"name":"Autonomous Vehicles and Machines","volume":"60 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-01-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128026480","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Pattern and frontier-based, efficient and effective exploration of autonomous mobile robots in unknown environments
Pub Date: 2019-01-13 DOI: 10.2352/issn.2470-1173.2019.15.avm-039
H. Fujimoto, J. Morimoto, Takuya Hayashi, Junji Yamato, H. Ishii, J. Ohya, A. Takanishi
{"title":"Pattern and frontier-based, efficient and effective exploration of autonomous mobile robots in unknown environments","authors":"H. Fujimoto, J. Morimoto, Takuya Hayashi, Junji Yamato, H. Ishii, J. Ohya, A. Takanishi","doi":"10.2352/issn.2470-1173.2019.15.avm-039","DOIUrl":"https://doi.org/10.2352/issn.2470-1173.2019.15.avm-039","url":null,"abstract":"","PeriodicalId":177462,"journal":{"name":"Autonomous Vehicles and Machines","volume":"107 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-01-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123835287","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Learning based demosaicing and color correction for RGB-IR patterned image sensors
Pub Date: 2019-01-13 DOI: 10.2352/issn.2470-1173.2019.15.avm-045
Navinprashath R R, R. Bhat
{"title":"Learning based demosaicing and color correction for RGB-IR patterned image sensors","authors":"Navinprashath R R, R. Bhat","doi":"10.2352/issn.2470-1173.2019.15.avm-045","DOIUrl":"https://doi.org/10.2352/issn.2470-1173.2019.15.avm-045","url":null,"abstract":"","PeriodicalId":177462,"journal":{"name":"Autonomous Vehicles and Machines","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-01-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123802993","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
A system for generating complex physically accurate sensor images for automotive applications
Pub Date: 2019-01-13 DOI: 10.2352/issn.2470-1173.2019.15.avm-053
Zhenyi Liu, Minghao Shen, Jiaqi Zhang, Shuangting Liu, H. Blasinski, Trisha Lian, B. Wandell
We describe an open-source simulator that creates sensor irradiance and sensor images of typical automotive scenes in urban settings. The purpose of the system is to support camera design and testing for automotive applications. The user can specify scene parameters (e.g., scene type, road type, traffic density, time of day) to assemble a large number of random scenes from graphics assets stored in a database. The sensor irradiance is generated using quantitative computer graphics methods, and the sensor images are created using image systems sensor simulation. The synthetic sensor images have pixel level annotations; hence, they can be used to train and evaluate neural networks for imaging tasks, such as object detection and classification. The end-to-end simulation system supports quantitative assessment, from scene to camera to network accuracy, for automotive applications.
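As a rough illustration of the parameter-driven scene assembly described above, the sketch below builds a scene specification from high-level parameters and samples graphics assets from a small made-up table. The configuration fields, asset names, and density mapping are hypothetical; the actual open-source simulator has its own API, asset database, and rendering pipeline.

```python
# Hypothetical illustration of parameter-driven scene assembly; none of these
# names correspond to the simulator's real API.
import random
from dataclasses import dataclass

@dataclass
class SceneSpec:
    scene_type: str       # e.g. "city", "suburban"
    road_type: str        # e.g. "crossroad", "straight"
    traffic_density: str  # "low", "medium", or "high"
    time_of_day: float    # hour of day, drives the illumination model

# Tiny stand-in for the asset database.
ASSETS = {
    "car": ["sedan_01", "suv_03", "truck_02"],
    "pedestrian": ["adult_01", "child_02"],
}
DENSITY_TO_COUNTS = {"low": (3, 1), "medium": (8, 4), "high": (15, 10)}  # (cars, pedestrians)

def assemble_scene(spec: SceneSpec, rng: random.Random) -> dict:
    """Sample random assets consistent with the spec; the result would then go
    to the physically based renderer (sensor irradiance) and the sensor model
    (annotated sensor image)."""
    n_cars, n_peds = DENSITY_TO_COUNTS[spec.traffic_density]
    objects = [{"asset": rng.choice(ASSETS["car"]), "class": "car"} for _ in range(n_cars)]
    objects += [{"asset": rng.choice(ASSETS["pedestrian"]), "class": "pedestrian"} for _ in range(n_peds)]
    return {
        "road": f"{spec.scene_type}_{spec.road_type}",
        "illumination_hour": spec.time_of_day,
        "objects": objects,
    }

scene = assemble_scene(SceneSpec("city", "crossroad", "medium", 14.0), random.Random(42))
print(len(scene["objects"]), "objects placed on", scene["road"])
```

Because every object is placed programmatically, the same scene description also yields the pixel-level annotations used for network training and evaluation.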
Citations: 14
Optimization of ISP parameters for object detection algorithms
Pub Date: 2019-01-13 DOI: 10.2352/issn.2470-1173.2019.15.avm-044
Lucie Yahiaoui, Ciarán Hughes, J. Horgan, B. Deegan, Patrick Denny, S. Yogamani
{"title":"Optimization of ISP parameters for object detection algorithms","authors":"Lucie Yahiaoui, Ciarán Hughes, J. Horgan, B. Deegan, Patrick Denny, S. Yogamani","doi":"10.2352/issn.2470-1173.2019.15.avm-044","DOIUrl":"https://doi.org/10.2352/issn.2470-1173.2019.15.avm-044","url":null,"abstract":"","PeriodicalId":177462,"journal":{"name":"Autonomous Vehicles and Machines","volume":"51 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-01-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125228766","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 11
DriveSpace: Towards context-aware drivable area detection
Pub Date: 2019-01-13 DOI: 10.2352/issn.2470-1173.2019.15.avm-042
Ciarán Hughes, Sunil Chandra, Ganesh Sistu, J. Horgan, B. Deegan, Sumanth Chennupati, S. Yogamani
{"title":"DriveSpace: Towards context-aware drivable area detection","authors":"Ciarán Hughes, Sunil Chandra, Ganesh Sistu, J. Horgan, B. Deegan, Sumanth Chennupati, S. Yogamani","doi":"10.2352/issn.2470-1173.2019.15.avm-042","DOIUrl":"https://doi.org/10.2352/issn.2470-1173.2019.15.avm-042","url":null,"abstract":"","PeriodicalId":177462,"journal":{"name":"Autonomous Vehicles and Machines","volume":"13 1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-01-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124186748","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
Automatic shadow detection using hyperspectral data for terrain classification
Pub Date: 2019-01-13 DOI: 10.2352/issn.2470-1173.2019.15.avm-031
Christian Winkens, Veronika Adams, D. Paulus
{"title":"Automatic shadow detection using hyperspectral data for terrain classification","authors":"Christian Winkens, Veronika Adams, D. Paulus","doi":"10.2352/issn.2470-1173.2019.15.avm-031","DOIUrl":"https://doi.org/10.2352/issn.2470-1173.2019.15.avm-031","url":null,"abstract":"","PeriodicalId":177462,"journal":{"name":"Autonomous Vehicles and Machines","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-01-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127981510","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
An autonomous drone surveillance and tracking architecture
Pub Date: 2019-01-13 DOI: 10.2352/issn.2470-1173.2019.15.avm-035
Eren Unlu, Emmanuel Zenou, N. Rivière, P. Dupouy
In this work, we present a computer vision and machine learning backed autonomous drone surveillance system, in order to protect critical locations. The system is composed of a wide-angle, high-resolution daylight camera and a relatively narrow-angle thermal camera mounted on a rotating turret. The wide-angle daylight camera allows the detection of flying intruders as small as 20 pixels with a very low false alarm rate. The primary detection is based on the YOLO convolutional neural network (CNN) rather than conventional background subtraction algorithms, due to its low false alarm rate. At the same time, the detected flying objects are tracked by the rotating turret and classified by the narrow-angle, zoomed thermal camera, where the classification algorithm is also based on CNNs. The training of the algorithms is performed with artificial and augmented datasets, due to the scarcity of infrared videos of drones.
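A minimal sketch of this two-stage detect-track-classify loop is shown below, with stub functions standing in for the wide-angle YOLO detector and the thermal CNN classifier. The stubs, the pan/tilt conversion, and the field-of-view values are assumptions for illustration, not the authors' code.

```python
# Illustrative two-stage pipeline: wide-angle detection -> turret slewing ->
# thermal classification. Detector and classifier are placeholder stubs.
import numpy as np

def detect_wide_angle(frame):
    """Stand-in for the YOLO CNN: returns (x, y, w, h, confidence) boxes.
    Here it simply proposes a small box around the brightest pixel."""
    y, x = np.unravel_index(np.argmax(frame), frame.shape)
    return [(int(x) - 10, int(y) - 10, 20, 20, 0.9)]

def classify_thermal(crop):
    """Stand-in for the thermal CNN classifier (drone / bird / other)."""
    return "drone" if crop.mean() > 0.5 else "bird"

def pixel_to_pan_tilt(px, py, frame_shape, hfov_deg=90.0, vfov_deg=60.0):
    """Convert a detection centre to turret pan/tilt angles (assumed FOVs)."""
    height, width = frame_shape
    pan = (px / width - 0.5) * hfov_deg
    tilt = (0.5 - py / height) * vfov_deg
    return pan, tilt

# One surveillance step on synthetic frames.
rng = np.random.default_rng(1)
wide_frame = rng.random((480, 640)) * 0.2
wide_frame[120:125, 500:505] = 1.0                  # a small, bright intruder
thermal_frame = rng.random((240, 320)) * 0.5 + 0.4  # zoomed thermal view after slewing

x, y, w, h, conf = detect_wide_angle(wide_frame)[0]
pan, tilt = pixel_to_pan_tilt(x + w // 2, y + h // 2, wide_frame.shape)
label = classify_thermal(thermal_frame)
print(f"detection conf={conf:.2f}, slew turret to pan={pan:.1f} deg, tilt={tilt:.1f} deg, class={label}")
```

In the described system the detector and classifier would be trained CNNs, and the turret would track the target continuously rather than slewing once per frame.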
Citations: 8
Autonomous navigation using localization priors, sensor fusion, and terrain classification
Pub Date: 2019-01-13 DOI: 10.2352/issn.2470-1173.2019.15.avm-040
Zachariah Carmichael, Benjamin Glasstone, Frank Cwitkowitz, Kenneth Alexopoulos, R. Relyea, R. Ptucha
{"title":"Autonomous navigation using localization priors, sensor fusion, and terrain classification","authors":"Zachariah Carmichael, Benjamin Glasstone, Frank Cwitkowitz, Kenneth Alexopoulos, R. Relyea, R. Ptucha","doi":"10.2352/issn.2470-1173.2019.15.avm-040","DOIUrl":"https://doi.org/10.2352/issn.2470-1173.2019.15.avm-040","url":null,"abstract":"","PeriodicalId":177462,"journal":{"name":"Autonomous Vehicles and Machines","volume":"78 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-01-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116221967","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0