Leveraging Deep Learning Based Object Detection for Localising Autonomous Personal Mobility Devices in Sparse Maps
Maleen Jayasuriya, G. Dissanayake, Ravindra Ranasinghe, N. Gandhi
2019 IEEE Intelligent Transportation Systems Conference (ITSC), pp. 4081-4086, October 2019
DOI: 10.1109/ITSC.2019.8917454
Citations: 3
Abstract
This paper presents a low-cost, resource-efficient localisation approach for autonomous driving in GPS-denied environments. One of the most challenging aspects of traditional landmark-based localisation in the context of autonomous driving is the necessity to accurately and frequently detect landmarks. We leverage the state-of-the-art deep learning framework YOLO (You Only Look Once) to carry out this important perceptual task using data obtained from monocular cameras. Bearing-only information extracted from the YOLO framework, and vehicle odometry, are fused using an Extended Kalman Filter (EKF) to generate an estimate of the location of the autonomous vehicle, together with its associated uncertainty. This approach enables us to achieve real-time sub-metre localisation accuracy using only a sparse map of an outdoor urban environment. The broader motivation of this research is to improve the safety and reliability of Personal Mobility Devices (PMDs) through autonomous technology. Thus, all the ideas presented here are demonstrated using an instrumented mobility scooter platform.
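The abstract describes fusing bearing-only landmark observations with vehicle odometry in an EKF. As a minimal sketch of that filter structure (not the authors' implementation: the state layout, motion model, and noise parameters below are assumptions), a planar pose EKF with a unicycle prediction step and a single bearing-only update to a known map landmark might look like:

```python
import numpy as np

def wrap_angle(a):
    """Wrap an angle to [-pi, pi)."""
    return (a + np.pi) % (2 * np.pi) - np.pi

def ekf_predict(x, P, v, w, dt, Q):
    """Propagate pose x = (px, py, theta) with unicycle odometry (v, w)."""
    px, py, th = x
    x_pred = np.array([px + v * dt * np.cos(th),
                       py + v * dt * np.sin(th),
                       wrap_angle(th + w * dt)])
    # Jacobian of the motion model w.r.t. the state
    F = np.array([[1.0, 0.0, -v * dt * np.sin(th)],
                  [0.0, 1.0,  v * dt * np.cos(th)],
                  [0.0, 0.0,  1.0]])
    return x_pred, F @ P @ F.T + Q

def ekf_bearing_update(x, P, z, landmark, R):
    """Fuse one bearing-only observation z of a known map landmark (lx, ly)."""
    px, py, th = x
    lx, ly = landmark
    dx, dy = lx - px, ly - py
    z_hat = wrap_angle(np.arctan2(dy, dx) - th)  # predicted bearing
    q = dx * dx + dy * dy
    H = np.array([[dy / q, -dx / q, -1.0]])      # measurement Jacobian
    S = H @ P @ H.T + R                          # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)               # Kalman gain
    x_new = x + (K @ np.array([wrap_angle(z - z_hat)])).ravel()
    x_new[2] = wrap_angle(x_new[2])
    return x_new, (np.eye(3) - K @ H) @ P
```

A single bearing constrains the pose only along one direction, which is why the paper relies on frequent YOLO detections and a sparse map of landmark positions: repeated updates from different landmarks and vantage points make the pose observable.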