Fusion of Time-of-Flight Based Sensors with Monocular Cameras for a Robotic Person Follower

Impact Factor 3.1 | CAS Tier 4 (Computer Science) | JCR Q2 (Computer Science, Artificial Intelligence) | Journal of Intelligent & Robotic Systems | Published: 2024-02-03 | DOI: 10.1007/s10846-023-02037-4
José Sarmento, Filipe Neves dos Santos, André Silva Aguiar, Vítor Filipe, António Valente
Citations: 0

Abstract

Human-robot collaboration (HRC) is becoming increasingly important in advanced production systems, such as those used in industry and agriculture. This type of collaboration can increase productivity by reducing physical strain on humans, which can lead to fewer injuries and improved morale. One crucial aspect of HRC is the ability of the robot to follow a specific human operator safely. To address this challenge, a novel methodology is proposed that employs monocular vision and ultra-wideband (UWB) transceivers to determine the relative position of a human target with respect to the robot. UWB transceivers can track a human carrying a UWB tag but exhibit significant angular error. To reduce this error, monocular cameras with deep-learning object detection are used to detect humans. The reduction in angular error is achieved through sensor fusion, combining the outputs of both sensors using a histogram-based filter that projects and intersects the measurements from both sources onto a 2D grid. By combining UWB and monocular vision, a 66.67% reduction in angular error compared to UWB localization alone is achieved. The approach demonstrates an average processing time of 0.0183 s and an average localization error of 0.14 m when tracking a person walking at an average speed of 0.21 m/s. This algorithm holds promise for enabling efficient and safe human-robot collaboration, providing a valuable contribution to the field of robotics.
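The fusion idea described in the abstract, projecting a range-accurate but angle-noisy UWB measurement and an angle-accurate but range-blind camera bearing onto a shared 2D grid and intersecting them, can be sketched as a minimal histogram filter. The sketch below is illustrative only: the grid resolution, Gaussian noise models, and sigma values are assumptions, not the paper's actual implementation, and angle wrap-around is ignored for simplicity.

```python
import numpy as np

def fuse_uwb_camera(uwb_range, uwb_bearing, cam_bearing,
                    range_sigma=0.1, uwb_bearing_sigma=0.35,
                    cam_bearing_sigma=0.05,
                    max_range=10.0, n_range=100, n_bearing=180):
    """Fuse a UWB (range, bearing) fix with a camera bearing on a 2D
    polar grid by multiplying independent likelihood histograms.
    Angles in radians; all noise parameters are illustrative."""
    ranges = np.linspace(0.0, max_range, n_range)
    bearings = np.linspace(-np.pi, np.pi, n_bearing)
    R, B = np.meshgrid(ranges, bearings, indexing="ij")

    # UWB likelihood: precise in range, broad (noisy) in bearing.
    uwb_lik = (np.exp(-0.5 * ((R - uwb_range) / range_sigma) ** 2)
               * np.exp(-0.5 * ((B - uwb_bearing) / uwb_bearing_sigma) ** 2))

    # Camera likelihood: bearing only -- a monocular detection
    # constrains direction but carries no range information.
    cam_lik = np.exp(-0.5 * ((B - cam_bearing) / cam_bearing_sigma) ** 2)

    # Intersect the two sources: element-wise product over the grid,
    # then normalize to a posterior histogram.
    post = uwb_lik * cam_lik
    post /= post.sum()

    # Fused estimate: the grid cell with the highest posterior mass.
    i, j = np.unravel_index(np.argmax(post), post.shape)
    return ranges[i], bearings[j]
```

Because the camera's bearing uncertainty is much smaller than the UWB's, the product pulls the fused bearing strongly toward the camera measurement while the range estimate remains anchored by the UWB, which is the mechanism behind the angular-error reduction the abstract reports.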

Source Journal

Journal of Intelligent & Robotic Systems (Engineering — Robotics)

CiteScore: 7.00
Self-citation rate: 9.10%
Articles per year: 219
Review time: 6 months
Journal description: The Journal of Intelligent and Robotic Systems bridges the gap between theory and practice in all areas of intelligent systems and robotics. It publishes original, peer-reviewed contributions from initial concept and theory to prototyping to final product development and commercialization. On the theoretical side, the journal features papers focusing on intelligent systems engineering, distributed intelligence systems, multi-level systems, intelligent control, multi-robot systems, cooperation and coordination of unmanned vehicle systems, etc. On the application side, the journal emphasizes autonomous systems, industrial robotic systems, multi-robot systems, aerial vehicles, mobile robot platforms, underwater robots, sensors, sensor fusion, and sensor-based control. Readers will also find papers on real applications of intelligent and robotic systems (e.g., mechatronics, manufacturing, biomedical, underwater, humanoid, mobile/legged robot and space applications, etc.).
Latest articles from this journal:
- UAV Routing for Enhancing the Performance of a Classifier-in-the-loop
- DFT-VSLAM: A Dynamic Optical Flow Tracking VSLAM Method
- Design and Development of a Robust Control Platform for a 3-Finger Robotic Gripper Using EMG-Derived Hand Muscle Signals in NI LabVIEW
- Neural Network-based Adaptive Finite-time Control for 2-DOF Helicopter Systems with Prescribed Performance and Input Saturation
- Six-Degree-of-Freedom Pose Estimation Method for Multi-Source Feature Points Based on Fully Convolutional Neural Network