Vehicle Geolocalization from Drone Imagery

David Novikov, Paul Sotirelis, Alper Yilmaz
ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences
DOI: 10.5194/isprs-annals-x-2-2024-171-2024
Published: 2024-06-10

Abstract

We have developed a robust, novel, and cost-effective method for determining the geolocation of vehicles observed in drone camera footage. Previous studies in this area have relied on platform GPS and camera geometry to estimate the position of objects in drone footage, which we refer to as object-to-drone location (ODL). The performance of these techniques degrades with decreasing GPS measurement accuracy and with camera orientation problems. Our method overcomes these shortcomings and reliably geolocates objects on the ground. We refer to our approach as object-to-map localization (OML). The proposed technique determines a transformation between drone camera footage and georectified aerial images, for example from Google Maps. This transformation is then used to calculate the positions of objects captured in the drone camera footage. We provide an ablation study of our method's configuration parameters, namely: feature extraction methods, keypoint filtering schemes, and types of transformations. We also conduct experiments with a simulated faulty GPS to demonstrate our method's robustness to poor estimation of the drone's position. Our approach requires only a drone with a camera and a low-accuracy estimate of its geoposition; we do not rely on markers or ground control points. As a result, our method can determine the geolocation of vehicles on the ground in an easy-to-set-up and cost-effective manner, making object geolocalization more accessible to users by decreasing the hardware and software requirements. Our GitHub repository with code can be found at https://github.com/OSUPCVLab/VehicleGeopositioning
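The core idea of OML — fitting a projective transformation between matched points in the drone frame and a georectified map image, then mapping a detected vehicle's pixel through it — can be sketched as follows. This is a minimal illustration in plain NumPy using a direct linear transform (DLT) on hand-picked correspondences; the point coordinates are invented, and the paper's full pipeline would obtain correspondences from one of the evaluated feature extractors with keypoint filtering and a robust estimator rather than an exact four-point fit.

```python
import numpy as np

def fit_homography(src, dst):
    """Direct Linear Transform: solve for the 3x3 H mapping src points to dst points."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1.0, 0.0, 0.0, 0.0, u * x, u * y, u])
        A.append([0.0, 0.0, 0.0, -x, -y, -1.0, v * x, v * y, v])
    # H is the right singular vector of A with the smallest singular value.
    _, _, vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]  # normalize so H[2, 2] == 1

def pixel_to_map(H, pt):
    """Project a drone-frame pixel through H into map-image coordinates."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return x / w, y / w

# Matched keypoints (illustrative values): drone-frame pixels -> map-image pixels.
drone_pts = [(0, 0), (640, 0), (0, 480), (640, 480)]
map_pts = [(120, 80), (760, 95), (105, 560), (755, 585)]
H = fit_homography(drone_pts, map_pts)

# A vehicle detected at pixel (320, 240) in the drone frame lands here on the map:
vehicle_on_map = pixel_to_map(H, (320, 240))
```

Once the map image's pixel grid is tied to geographic coordinates (as with a georectified tile), the same transformation converts the mapped pixel into a latitude/longitude estimate, with no dependence on accurate platform GPS or camera orientation.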