Data Fusion for Sparse Semantic Localization Based on Object Detection

Pub Date: 2024-04-20 | DOI: 10.20965/jrm.2024.p0375
Irem Uygur, Renato Miyagusuku, Sarthak Pathak, Hajime Asama, Atsushi Yamashita
{"title":"Data Fusion for Sparse Semantic Localization Based on Object Detection","authors":"Irem Uygur, Renato Miyagusuku, Sarthak Pathak, Hajime Asama, Atsushi Yamashita","doi":"10.20965/jrm.2024.p0375","DOIUrl":null,"url":null,"abstract":"Semantic information has started to be used in localization methods to introduce a non-geometric distinction in the environment. However, efficient ways to integrate this information remain a question. We propose an approach for fusing data from different object classes by analyzing the posterior for each object class to improve robustness and accuracy for self-localization. Our system uses the bearing angle to the objects’ center and objects’ class names as sensor model input to localize the user on a 2D annotated map consisting of objects’ class names and center coordinates. Sensor model input is obtained by an object detector on equirectangular images of a 360° field of view camera. As object detection performance varies based on location and object class, different object classes generate different likelihoods. We account for this by using appropriate weights generated by a Gaussian process model trained by using our posterior analysis. Our approach follows a systematic way to fuse data from different object classes and use them as a likelihood function of a Monte Carlo localization (MCL) algorithm.","PeriodicalId":0,"journal":{"name":"","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-04-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.20965/jrm.2024.p0375","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Semantic information has begun to be used in localization methods to introduce a non-geometric distinction in the environment. However, how to integrate this information efficiently remains an open question. We propose an approach for fusing data from different object classes by analyzing the posterior for each object class, improving the robustness and accuracy of self-localization. Our system uses the bearing angles to object centers and the objects' class names as sensor model inputs to localize the user on a 2D annotated map consisting of object class names and center coordinates. The sensor model input is obtained by an object detector applied to equirectangular images from a 360° field-of-view camera. As object detection performance varies with location and object class, different object classes generate different likelihoods. We account for this by using appropriate weights generated by a Gaussian process model trained using our posterior analysis. Our approach follows a systematic way to fuse data from different object classes and uses them as the likelihood function of a Monte Carlo localization (MCL) algorithm.
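The abstract does not spell out how the per-class bearing likelihoods are assembled, so the sketch below is only a rough illustration of the kind of sensor model it describes: bearings extracted from an equirectangular image and a particle weight built from per-class, reliability-weighted bearing likelihoods. The von Mises noise model, the nearest-bearing data association, and the names bearing_from_equirectangular, particle_likelihood, and class_weights are assumptions made here for illustration, not the authors' implementation (in the paper the class weights come from a trained Gaussian process; here they are just given numbers).

```python
import numpy as np

def bearing_from_equirectangular(u_center, image_width):
    """Horizontal bearing (rad) of a detection's bounding-box center.

    In an equirectangular image, the horizontal pixel coordinate maps
    linearly to azimuth over the full 360° field of view.
    """
    return (u_center / image_width) * 2.0 * np.pi - np.pi

def particle_likelihood(particle_pose, detections, landmark_map,
                        class_weights, kappa=4.0):
    """Illustrative per-class fusion of bearing observations for MCL.

    particle_pose : (x, y, theta) pose hypothesis on the 2D map.
    detections    : list of (class_name, measured_bearing) pairs.
    landmark_map  : dict class_name -> list of (x, y) object centers.
    class_weights : dict class_name -> reliability weight in [0, 1]
                    (stand-in for the GP-generated weights).
    kappa         : concentration of an assumed von Mises bearing noise.
    """
    x, y, theta = particle_pose
    log_w = 0.0
    for class_name, measured_bearing in detections:
        landmarks = landmark_map.get(class_name, [])
        if not landmarks:
            continue
        # Expected bearing to each mapped object of this class; associate
        # the detection with the landmark whose bearing error is smallest.
        expected = [np.arctan2(ly - y, lx - x) - theta for lx, ly in landmarks]
        errors = [np.arctan2(np.sin(measured_bearing - e),
                             np.cos(measured_bearing - e)) for e in expected]
        err = min(errors, key=abs)
        # Unnormalized von Mises likelihood of the bearing error.
        p = np.exp(kappa * np.cos(err))
        # Down-weight classes the detector handles less reliably
        # (hypothetical blending rule).
        log_w += class_weights.get(class_name, 1.0) * np.log(p + 1e-12)
    return np.exp(log_w)
```

In an MCL loop, a value like this would be computed for every particle after each image and used to resample the particle set; the per-class weights let poorly detected classes contribute less to that weight than reliably detected ones.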