
Latest Publications in IEEE Robotics and Automation Letters

A Robust and Efficient Visual-Inertial SLAM Using Hybrid Point-Line Features
IF 5.3 CAS Zone 2 (Computer Science) Q2 ROBOTICS Pub Date: 2025-12-26 DOI: 10.1109/LRA.2025.3648610
Shuhuan Wen;Songhao Tan;Xin Liu;Mengyu Li;Huaping Liu
Visual simultaneous localization and mapping (VSLAM) is a foundational technology in robotics, providing an optimal balance of cost and accuracy. However, existing systems often lack robustness in environments with fast motion, dynamic lighting, or low texture. This letter introduces ML-SLAM, a hybrid visual-inertial SLAM system that combines point-line features with learning-based techniques to improve performance in these challenging conditions. Built on the ORB-SLAM3 framework, ML-SLAM incorporates SuperPoint for adaptive keypoint detection and LightGlue for robust feature matching, along with a novel endpoint-based point-line association strategy to enhance tracking reliability in complex scenes. The system also features hybrid feature-based loop-closure detection and tightly coupled bundle adjustment (BA) incorporating inertial measurements, adapted as standard modules in the ORB-SLAM3 backend to seamlessly integrate the hybrid point-line frontend with the established backend. Extensive evaluations on the EuRoC, TartanAir, UMA-VI, and real-world indoor datasets show that ML-SLAM significantly outperforms state-of-the-art (SOTA) methods, with over 20% improvement in localization accuracy compared to ORB-SLAM3.
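As an illustration of how an endpoint-based point-line association might work, the sketch below ties each detected line segment to the keypoints nearest its two endpoints and then inherits line correspondences from LightGlue's point matches. The function names, the pixel threshold, and the inheritance rule are assumptions for illustration; the abstract does not spell out ML-SLAM's actual association strategy.

```python
import numpy as np

def associate_lines_to_keypoints(lines, keypoints, max_px=8.0):
    """Tie each 2D line segment to the keypoint nearest each of its endpoints.

    lines:     (L, 4) array of segments [x1, y1, x2, y2]
    keypoints: (K, 2) array of keypoint locations (e.g., from SuperPoint)
    Returns a list of (line_idx, kp_idx_a, kp_idx_b); an endpoint with no
    keypoint within max_px pixels gets None for that slot.
    """
    keypoints = np.asarray(keypoints, dtype=float)
    assoc = []
    for i, (x1, y1, x2, y2) in enumerate(np.asarray(lines, dtype=float)):
        slots = []
        for endpoint in (np.array([x1, y1]), np.array([x2, y2])):
            d = np.linalg.norm(keypoints - endpoint, axis=1)
            j = int(np.argmin(d))
            slots.append(j if d[j] <= max_px else None)
        assoc.append((i, slots[0], slots[1]))
    return assoc

def inherit_line_matches(assoc_a, assoc_b, point_matches):
    """Derive line correspondences from point matches: two lines are matched
    when both endpoint keypoints of line A map (via point_matches, e.g. a dict
    produced from LightGlue output) to the endpoint keypoints of line B."""
    lookup_b = {frozenset((p, q)): i for i, p, q in assoc_b
                if p is not None and q is not None}
    matches = []
    for i, p, q in assoc_a:
        if p is None or q is None:
            continue
        pb, qb = point_matches.get(p), point_matches.get(q)
        if pb is None or qb is None:
            continue
        j = lookup_b.get(frozenset((pb, qb)))
        if j is not None:
            matches.append((i, j))
    return matches
```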
Citations: 0
STATE-NAV: Stability-Aware Traversability Estimation for Bipedal Navigation on Rough Terrain
IF 5.3 CAS Zone 2 (Computer Science) Q2 ROBOTICS Pub Date: 2025-12-26 DOI: 10.1109/LRA.2025.3648502
Ziwon Yoon;Lawrence Y. Zhu;Jingxi Lu;Lu Gan;Ye Zhao
Bipedal robots have advantages in maneuvering human-centered environments, but face greater failure risk compared to other stable mobile platforms, such as wheeled or quadrupedal robots. While learning-based traversability has been widely studied for these platforms, bipedal traversability has instead relied on manually designed rules with limited consideration of locomotion stability on rough terrain. In this work, we present the first learning-based traversability estimation and risk-sensitive navigation framework for bipedal robots operating in diverse, uneven environments. TravFormer, a transformer-based neural network, is trained to predict bipedal instability with uncertainty, enabling risk-aware and adaptive planning. Based on the network, we define traversability as stability-aware command velocity—the fastest command velocity that keeps instability below a user-defined limit. This velocity-based traversability is integrated into a hierarchical planner that combines traversability-informed Rapid Random Tree Star (TravRRT*) for time-efficient path planning and Model Predictive Control (MPC) for safe execution. We validate our method in MuJoCo simulator and the real world, demonstrating improved stability, time efficiency, and robustness across diverse terrains compared with existing methods.
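Taking the definition literally, the stability-aware command velocity is the largest command speed whose predicted instability, inflated by its uncertainty, stays under the user-defined limit. A minimal sketch of that selection rule follows; predict_instability is a hypothetical stand-in for TravFormer's mean-and-uncertainty output, and the candidate grid and risk weight kappa are illustrative.

```python
import numpy as np

def stability_aware_velocity(predict_instability, terrain_patch, heading,
                             v_candidates=np.linspace(0.1, 1.0, 10),
                             instability_limit=0.5, kappa=1.0):
    """Return the fastest candidate command velocity whose risk-adjusted
    instability stays below the user-defined limit.

    predict_instability(terrain_patch, heading, v) -> (mean, std) is assumed
    to wrap a TravFormer-style network that predicts instability with
    uncertainty. kappa weights the uncertainty term: larger values are more
    risk-averse, smaller values more risk-seeking.
    """
    best_v = 0.0  # fall back to "do not traverse" if no candidate is safe
    for v in sorted(v_candidates):
        mean, std = predict_instability(terrain_patch, heading, v)
        if mean + kappa * std < instability_limit:
            best_v = v  # keep the fastest velocity that still satisfies the limit
    return best_v
```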
Citations: 0
Learning-Based Safety-Aware Task Scheduling for Efficient Human-Robot Collaboration
IF 5.3 CAS Zone 2 (Computer Science) Q2 ROBOTICS Pub Date: 2025-12-25 DOI: 10.1109/LRA.2025.3648605
M. Faroni;A. Spanò;A. M. Zanchettin;P. Rocco
Ensuring human safety in collaborative robotics can compromise efficiency because traditional safety measures increase robot cycle time when human interaction is frequent. This letter proposes a safety-aware approach to mitigate efficiency losses without assuming prior knowledge of safety logic. Using a deep-learning model, the robot learns the relationship between system state and safety-induced speed reductions based on execution data. Our framework does not explicitly predict human motions but directly models the interaction effects on robot speed, simplifying implementation and enhancing generalizability to different safety logics. At runtime, the learned model optimizes task selection to minimize cycle time while adhering to safety requirements. Experiments on a pick-and-packaging scenario demonstrated significant reductions in cycle times.
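To make the runtime step concrete, the sketch below assumes a learned model that maps the current system state and a candidate task to an expected safety-induced speed scaling, then greedily picks the task with the lowest predicted execution time. The interface and the greedy rule are illustrative assumptions rather than the paper's exact optimization.

```python
def select_next_task(tasks, system_state, nominal_duration, predict_slowdown):
    """Greedy safety-aware task selection.

    tasks:            iterable of task identifiers still to be executed
    nominal_duration: dict task -> duration at full robot speed (seconds)
    predict_slowdown: learned model, (system_state, task) -> expected speed
                      scaling factor in (0, 1], 1.0 meaning no safety slowdown.
    Returns the task with the lowest predicted execution time.
    """
    def predicted_time(task):
        scale = predict_slowdown(system_state, task)
        return nominal_duration[task] / max(scale, 1e-3)  # guard against zero

    return min(tasks, key=predicted_time)
```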
Citations: 0
Data-Efficient Constrained Robot Learning With Probabilistic Lagrangian Control
IF 5.3 CAS Zone 2 (Computer Science) Q2 ROBOTICS Pub Date: 2025-12-25 DOI: 10.1109/LRA.2025.3648503
Shiming He;Yuzhe Ding
We propose a novel framework for data-efficient black-box robot learning under constraints. Our approach integrates probabilistic inference with Lagrangian optimization. Guided by a learned Gaussian process model, the Lagrange multiplier is controlled by the probability that the constraints will be satisfied. This reduces the typical oscillations seen in primal-dual updates and therefore improves both data efficiency and safety during learning. Both synthetic results and robot experiments demonstrate that our method is a scalable and effective solution for constrained robot learning problems.
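A minimal sketch of the idea, under the assumption that the Gaussian process gives a Gaussian predictive distribution for a constraint of the form g(x) <= 0: the violation probability is a normal tail, and the multiplier is nudged up or down according to how far that probability sits from a tolerated level. The specific update rule and gains are illustrative, not the paper's.

```python
import math

def constraint_violation_probability(gp_mean, gp_std):
    """P(g(x) > 0) when the GP predicts g(x) ~ N(gp_mean, gp_std**2)."""
    if gp_std <= 0.0:
        return 1.0 if gp_mean > 0.0 else 0.0
    z = gp_mean / gp_std
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def update_multiplier(lam, gp_mean, gp_std, lam_max=10.0, target=0.05):
    """Probability-controlled Lagrange multiplier: grow lambda when the
    predicted violation probability exceeds a tolerated level, shrink it
    otherwise. Driving lambda with a bounded probability rather than the raw
    constraint value is what damps the oscillations of plain primal-dual
    updates; the proportional step below is an illustrative choice."""
    p_viol = constraint_violation_probability(gp_mean, gp_std)
    lam = lam + lam_max * (p_viol - target)
    return min(max(lam, 0.0), lam_max)
```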
Citations: 0
Soft 3D-Printed Endoskeleton for Precise Tendon Routing in Soft Robotics
IF 5.3 CAS Zone 2 (Computer Science) Q2 ROBOTICS Pub Date: 2025-12-25 DOI: 10.1109/LRA.2025.3648604
Emanuele Solfiti;Alessio Mondini;Emanuela Del Dottore;Barbara Mazzolai;Alberto Parmiggiani
This paper presents the design, development, and testing of a soft 3D-printed endoskeleton for arbitrary cable routing in tendon-driven soft actuators. The endoskeleton is embedded in a silicone body, and it is fixed to the mold prior to the casting process. It enables tendons to be placed through predefined eyelets, ensuring accurate positioning within the soft body. To minimize its impact on the overall stiffness of the soft body, the endoskeleton was designed with a slim profile, flexible connections, and 3D-printed with elastic material (Shore A hardness 50), selected to roughly match the mechanical properties of the surrounding silicone matrix (typically with Shore 00 hardness 20–30). Key features of the proposed solution include a 3D-printable guide for tendon routing that is (1) fully soft, (2) easy to place, (3) rapidly reconfigurable for arbitrary tendon paths, (4) adaptable to variable soft body geometries, and (5) easy to fabricate with single-step casting. The current work describes the design, manufacturing, simulation, and testing of a simplified case study in which the endoskeleton is employed to reproduce a target pose predicted by FE analysis with good matching, demonstrating the effectiveness of the approach.
Citations: 0
IEEE Robotics and Automation Society Information
IF 5.3 CAS Zone 2 (Computer Science) Q2 ROBOTICS Pub Date: 2025-12-19 DOI: 10.1109/LRA.2025.3642593
{"title":"IEEE Robotics and Automation Society Information","authors":"","doi":"10.1109/LRA.2025.3642593","DOIUrl":"https://doi.org/10.1109/LRA.2025.3642593","url":null,"abstract":"","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"11 1","pages":"C3-C3"},"PeriodicalIF":5.3,"publicationDate":"2025-12-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11306189","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145778450","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
IEEE Robotics and Automation Letters Information for Authors
IF 5.3 CAS Zone 2 (Computer Science) Q2 ROBOTICS Pub Date: 2025-12-19 DOI: 10.1109/LRA.2025.3642595
{"title":"IEEE Robotics and Automation Letters Information for Authors","authors":"","doi":"10.1109/LRA.2025.3642595","DOIUrl":"https://doi.org/10.1109/LRA.2025.3642595","url":null,"abstract":"","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"11 1","pages":"C4-C4"},"PeriodicalIF":5.3,"publicationDate":"2025-12-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11306223","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145778452","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
BEASST: Behavioral Entropic Gradient Based Adaptive Source Seeking for Mobile Robots
IF 5.3 CAS Zone 2 (Computer Science) Q2 ROBOTICS Pub Date: 2025-12-18 DOI: 10.1109/LRA.2025.3645685
Donipolo Ghimire;Aamodh Suresh;Carlos Nieto-Granda;Solmaz S. Kia
This letter presents BEASST (Behavioral Entropic Gradient-based Adaptive Source Seeking for Mobile Robots), a novel framework for robotic source seeking in complex, unknown environments. Our approach enables mobile robots to efficiently balance exploration and exploitation by modeling normalized signal strength as a surrogate probability of source location. Building on Behavioral Entropy (BE) with Prelec's probability weighting function, we define an objective function that adapts robot behavior from risk-averse to risk-seeking based on signal reliability and mission urgency. The framework provides theoretical convergence guarantees under unimodal signal assumptions and practical stability under bounded disturbances. Experimental validation across DARPA SubT and multi-room scenarios demonstrates that BEASST consistently outperforms state-of-the-art methods and exhibits strong robustness to noisy gradient estimates while maintaining convergence. BEASST achieved 15% reduction in path length and 20% faster source localization through intelligent uncertainty-driven navigation that dynamically transitions between aggressive pursuit and cautious exploration.
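For reference, Prelec's weighting function is w(p) = exp(-beta * (-ln p)^alpha) with alpha, beta > 0; alpha < 1 overweights low-probability events and alpha > 1 underweights them, which is the knob the abstract describes for moving between risk-averse and risk-seeking behavior. The sketch below computes the weighting and a Prelec-weighted, entropy-style measure; the exact objective BEASST builds from it and from the normalized signal-strength surrogate is not given in the abstract, so that part is an assumption.

```python
import numpy as np

def prelec_weight(p, alpha=0.7, beta=1.0):
    """Prelec probability weighting w(p) = exp(-beta * (-ln p)^alpha).
    alpha < 1 overweights unlikely events; alpha > 1 underweights them."""
    p = np.clip(np.asarray(p, dtype=float), 1e-12, 1.0)
    return np.exp(-beta * (-np.log(p)) ** alpha)

def behavioral_entropy(probs, alpha=0.7, beta=1.0):
    """Entropy-style uncertainty measure over Prelec-weighted probabilities.
    This particular form is an illustrative assumption; it only shows how the
    weighting reshapes a Shannon-like objective as alpha and beta change."""
    w = np.clip(prelec_weight(probs, alpha, beta), 1e-12, 1.0)
    return float(-np.sum(w * np.log(w)))

# Example: the same belief over candidate source cells scored under a more
# risk-averse (alpha=0.5) and a more risk-seeking (alpha=1.5) perception.
belief = [0.6, 0.25, 0.1, 0.05]
print(behavioral_entropy(belief, alpha=0.5), behavioral_entropy(belief, alpha=1.5))
```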
Citations: 0
Multi-Robot Collaborative SLAM (Multi-SLAM) With Distributed Lightweight Predictive Frontier Exploration (LPFE)
IF 5.3 CAS Zone 2 (Computer Science) Q2 ROBOTICS Pub Date: 2025-12-18 DOI: 10.1109/LRA.2025.3645990
Achala Athukorala;Billy Pik Lik Lau;Khattiya Pongsirijinda;Chau Yuen;U-Xuan Tan
Autonomous mobile robot systems have been extremely useful in exploration tasks for inspection and surveying of unknown environments, where map quality and exploration speed are often important factors. To effectively increase the exploration speed, multi-robot systems and collaborative exploration have been gaining attention in recent years. However, multi-robot exploration introduces two main challenges: 1) shared mapping between the robots; and 2) efficient coordination between the robots. Towards efficient and practical multi-robot exploration, this work proposes a new Distributed Multi-Robot Collaborative SLAM (Multi-SLAM) framework and a Lightweight Predictive Frontier Exploration (LPFE) to enable ground robot fleets to explore unknown environments faster and more efficiently. Our Multi-SLAM approach generates a graph-based, globally optimized map using information from all robots in the environment in a network-bandwidth-efficient manner, while our LPFE coordinates the exploration of the robots using a deterministic, inference-based heuristic, allowing robots to anticipate one another's actions without explicit communication. The experimental results demonstrate that our pipeline outperforms traditional frontier exploration approaches, as well as state-of-the-art planners for ground robots, with up to 70% reduction in exploration times, up to 13× less CPU usage, and up to 50× less network bandwidth usage. We also present our Multi-SLAM and LPFE code base, which we have extensively tested on real-world robot fleets in different environments.
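As a rough picture of a deterministic, inference-based coordination heuristic, the sketch below scores frontiers by a robot's own travel cost plus a penalty on frontiers a peer is closer to, so every robot reaches the same assignment from the shared map without negotiating. The distance-based score and the penalty weight are assumptions for illustration, not the published LPFE heuristic.

```python
import numpy as np

def score_frontiers(frontiers, my_pose, peer_poses, peer_penalty=2.0):
    """Deterministic frontier scoring: lower is better.

    frontiers:  (F, 2) array of frontier centroids in the shared map frame
    my_pose:    (2,) position of this robot
    peer_poses: list of (2,) last-known positions of the other robots
    A frontier that a peer is closer to than we are gets a penalty, a
    conclusion every robot reaches independently from the shared map."""
    frontiers = np.asarray(frontiers, dtype=float)
    my_dist = np.linalg.norm(frontiers - np.asarray(my_pose, dtype=float), axis=1)
    score = my_dist.copy()
    for peer in peer_poses:
        peer_dist = np.linalg.norm(frontiers - np.asarray(peer, dtype=float), axis=1)
        score += peer_penalty * (peer_dist < my_dist)  # expected to be claimed by the peer
    return score

def pick_frontier(frontiers, my_pose, peer_poses):
    """Select the best frontier under the deterministic score above."""
    return int(np.argmin(score_frontiers(frontiers, my_pose, peer_poses)))
```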
Citations: 0
SIGN: Safety-Aware Image-Goal Navigation for Autonomous Drones via Reinforcement Learning
IF 5.3 CAS Zone 2 (Computer Science) Q2 ROBOTICS Pub Date: 2025-12-18 DOI: 10.1109/LRA.2025.3645668
Zichen Yan;Rui Huang;Lei He;Shao Guo;Lin Zhao
Image-goal navigation (ImageNav) tasks a robot with autonomously exploring an unknown environment and reaching a location that visually matches a given target image. While prior works primarily study ImageNav for ground robots, enabling this capability for autonomous drones is substantially more challenging due to their need for high-frequency feedback control and global localization for stable flight. In this letter, we propose a novel sim-to-real framework that leverages reinforcement learning (RL) to achieve ImageNav for drones. To enhance visual representation ability, our approach trains the vision backbone with auxiliary tasks, including image perturbations and future transition prediction, which results in more effective policy training. The proposed algorithm enables end-to-end ImageNav with direct velocity control, eliminating the need for external localization. Furthermore, we integrate a depth-based safety module for real-time obstacle avoidance, allowing the drone to safely navigate in cluttered environments. Unlike most existing drone navigation methods that focus solely on reference tracking or obstacle avoidance, our framework supports comprehensive navigation behaviors, including autonomous exploration, obstacle avoidance, and image-goal seeking, without requiring explicit global mapping.
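One common way to realize a depth-based safety module is as a velocity filter that scales the policy's forward command down as the nearest depth return in the flight corridor approaches a stopping distance. The thresholds and the linear ramp below are assumptions for illustration; the letter does not detail its exact safety law.

```python
import numpy as np

def filter_velocity(cmd_vel, depth_image, fov_mask, stop_dist=0.5, slow_dist=2.0):
    """Scale the policy's forward velocity command based on the nearest
    obstacle seen in the depth image within the flight corridor.

    cmd_vel:     (3,) commanded body velocity [vx, vy, vz] from the RL policy
    depth_image: (H, W) metric depth in meters
    fov_mask:    (H, W) boolean mask selecting the forward flight corridor
    Returns the (possibly reduced) velocity command."""
    d_min = float(np.min(depth_image[fov_mask])) if np.any(fov_mask) else np.inf
    if d_min <= stop_dist:
        scale = 0.0                       # inside the stopping distance: halt forward motion
    elif d_min >= slow_dist:
        scale = 1.0                       # free space ahead: pass the command through
    else:
        scale = (d_min - stop_dist) / (slow_dist - stop_dist)  # linear ramp
    out = np.asarray(cmd_vel, dtype=float).copy()
    out[0] *= scale                       # only the forward component is scaled here
    return out
```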
Citations: 0