BFAR: improving radar odometry estimation using a bounded false alarm rate detector

IF 3.7 | CAS Tier 3 (Computer Science) | JCR Q2 (Computer Science, Artificial Intelligence) | Autonomous Robots | Pub Date: 2024-11-19 | DOI: 10.1007/s10514-024-10176-2
Anas Alhashimi, Daniel Adolfsson, Henrik Andreasson, Achim Lilienthal, Martin Magnusson
Citations: 0

Abstract

This work introduces a novel detector, bounded false-alarm rate (BFAR), for distinguishing true detections from noise in radar data, leading to improved accuracy in radar odometry estimation. Scanning frequency-modulated continuous wave (FMCW) radars can serve as valuable tools for localization and mapping under low-visibility conditions. However, they tend to yield a higher level of noise than the more commonly employed lidars, thereby introducing additional challenges to the detection process. We propose a new radar target detector called BFAR which, compared to the classical constant false-alarm rate (CFAR) detector, applies an affine transformation to the estimated noise level. This transformation employs learned parameters that minimize the error in odometry estimation. Conceptually, BFAR can be viewed as an optimized blend of CFAR and fixed-level thresholding designed to minimize odometry estimation error. The strength of this approach lies in its simplicity. Only a single parameter needs to be learned from a training dataset when the scale parameter of the affine transformation is kept fixed. Compared to ad-hoc detectors, BFAR has the advantage of a specified upper bound on the false-alarm probability, and better noise handling than CFAR. Repeatability tests show that BFAR yields highly repeatable detections with minimal redundancy. We have conducted simulations to compare the detection and false-alarm probabilities of BFAR with those of three baselines under non-homogeneous noise and varying target sizes. The results show that BFAR outperforms the other detectors. Moreover, we apply BFAR to the use case of radar odometry, adapting a recent odometry pipeline by replacing its original conservative filtering with BFAR.
In this way, we reduce the translation/rotation odometry errors per 100 m from 1.3%/0.4\(^\circ \) to 1.12%/0.38\(^\circ \), and from 1.62%/0.57\(^\circ \) to 1.21%/0.32\(^\circ \), improving translation error by 14.2% and 25% on the Oxford and MulRan public data sets, respectively.
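The affine-thresholding idea described in the abstract can be sketched in a few lines: a cell-averaging (CA) noise estimate Z is computed from training cells around each cell under test (excluding guard cells), and a detection is declared when the return exceeds a*Z + b. CA-CFAR is the special case b = 0, and fixed-level thresholding is a = 0. The window sizes, parameter values, and function names below are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def ca_noise_estimate(signal, idx, guard=2, train=8):
    """Cell-averaging noise estimate Z at cell idx: mean of `train`
    cells on each side, skipping `guard` cells around the cell under test."""
    left = signal[max(0, idx - guard - train): max(0, idx - guard)]
    right = signal[idx + guard + 1: idx + guard + 1 + train]
    window = np.concatenate([left, right])
    return window.mean() if window.size else np.inf

def bfar_detect(signal, a, b, guard=2, train=8):
    """Declare a detection where signal[i] > a * Z + b.
    b = 0 recovers CA-CFAR; a = 0 recovers fixed-level thresholding."""
    hits = []
    for i in range(len(signal)):
        z = ca_noise_estimate(signal, i, guard, train)
        if signal[i] > a * z + b:
            hits.append(i)
    return hits

# Toy example: flat noise floor with a single strong return at bin 50.
sig = np.ones(100)
sig[50] = 10.0
print(bfar_detect(sig, a=3.0, b=1.0))  # the spike at bin 50 is detected
```

In the paper, a and b are learned so as to minimize odometry error on a training set (with one of them optionally held fixed); the values above are placeholders for illustration only.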

Source journal: Autonomous Robots (Engineering & Technology - Robotics)
CiteScore: 7.90
Self-citation rate: 5.70%
Annual articles: 46
Review time: 3 months
Journal description: Autonomous Robots reports on the theory and applications of robotic systems capable of some degree of self-sufficiency. It features papers that include performance data on actual robots in the real world. Coverage includes: control of autonomous robots · real-time vision · autonomous wheeled and tracked vehicles · legged vehicles · computational architectures for autonomous systems · distributed architectures for learning, control and adaptation · studies of autonomous robot systems · sensor fusion · theory of autonomous systems · terrain mapping and recognition · self-calibration and self-repair for robots · self-reproducing intelligent structures · genetic algorithms as models for robot development. The focus is on the ability to move and be self-sufficient, not on whether the system is an imitation of biology. Of course, biological models for robotic systems are of major interest to the journal since living systems are prototypes for autonomous behavior.