
Latest publications in Robotics and Autonomous Systems

MOVRO2: Loosely coupled monocular visual radar odometry using factor graph optimization
IF 4.3 · CAS Zone 2 (Computer Science) · Q1 AUTOMATION & CONTROL SYSTEMS · Pub Date: 2024-11-25 · DOI: 10.1016/j.robot.2024.104860
Vlaho-Josip Štironja , Juraj Peršić , Luka Petrović , Ivan Marković , Ivan Petrović
Ego-motion estimation is an indispensable part of any autonomous system, especially in scenarios where wheel odometry or global pose measurement is unreliable or unavailable. In an environment where a global navigation satellite system is not available, conventional solutions for ego-motion estimation rely on the fusion of a LiDAR, a monocular camera and an inertial measurement unit (IMU), which is often plagued by drift. Therefore, complementary sensor solutions are being explored instead of relying on expensive and powerful IMUs. In this paper, we propose a method for estimating ego-motion, which we call MOVRO2, that utilizes the complementarity of radar and camera data. It is based on a loosely coupled monocular visual radar odometry approach within a factor graph optimization framework. The adoption of a loosely coupled approach is motivated by its scalability and the possibility to develop sensor models independently. To estimate the motion within the proposed framework, we fuse ego-velocity of the radar and scan-to-scan matches with the rotation obtained from consecutive camera frames and the unscaled velocity of the monocular odometry. We evaluate the performance of the proposed method on two open-source datasets and compare it to various mono-, dual- and three-sensor solutions, where our cost-effective method demonstrates performance comparable to state-of-the-art visual-inertial radar and LiDAR odometry solutions using high-performance 64-line LiDARs.
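The core fusion idea, combining the camera's rotation and unscaled translation direction with the metric scale recovered from radar ego-velocity, can be illustrated with a minimal 2-D sketch. This is not the authors' implementation (which operates on SE(3) factors inside a factor graph); all names are illustrative.

```python
import numpy as np

def compose(pose, dtheta, t_body):
    """Compose an SE(2) pose (x, y, theta) with a body-frame increment."""
    x, y, th = pose
    c, s = np.cos(th), np.sin(th)
    R = np.array([[c, -s], [s, c]])
    dx, dy = R @ t_body
    return np.array([x + dx, y + dy, th + dtheta])

def fuse_increment(cam_dtheta, cam_dir, radar_speed, dt):
    """Loosely coupled fusion: the camera supplies rotation and the
    (unscaled) translation direction; radar ego-velocity supplies the
    metric scale of the translation."""
    cam_dir = cam_dir / np.linalg.norm(cam_dir)    # monocular direction, no scale
    return cam_dtheta, radar_speed * dt * cam_dir  # metric body-frame translation

pose = np.zeros(3)
for _ in range(10):  # straight run at 1 m/s, 0.1 s steps
    dth, t = fuse_increment(0.0, np.array([1.0, 0.0]), radar_speed=1.0, dt=0.1)
    pose = compose(pose, dth, t)
```

In the paper's factor-graph formulation, such fused increments would enter the optimization as between-factors alongside the radar scan-to-scan matches, rather than being integrated directly as above.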
Robotics and Autonomous Systems, vol. 184, Article 104860.
Citations: 0
CUAHN-VIO: Content-and-uncertainty-aware homography network for visual-inertial odometry
IF 4.3 · CAS Zone 2 (Computer Science) · Q1 AUTOMATION & CONTROL SYSTEMS · Pub Date: 2024-11-22 · DOI: 10.1016/j.robot.2024.104866
Yingfu Xu, Guido C.H.E. de Croon
Learning-based visual ego-motion estimation is promising yet not ready for navigating agile mobile robots in the real world. In this article, we propose CUAHN-VIO, a robust and efficient monocular visual-inertial odometry (VIO) designed for micro aerial vehicles (MAVs) equipped with a downward-facing camera. The vision frontend is a content-and-uncertainty-aware homography network (CUAHN). Content awareness measures the robustness of the network toward non-homography image content, e.g. 3-dimensional objects lying on a planar surface. Uncertainty awareness means that the network not only predicts the homography transformation but also estimates the prediction uncertainty. Training requires no ground truth, which is often difficult to obtain. The network generalizes well, enabling “plug-and-play” deployment in new environments without fine-tuning. A lightweight extended Kalman filter (EKF) serves as the VIO backend and uses the mean prediction and variance estimation from the network for visual measurement updates. CUAHN-VIO is evaluated on a high-speed public dataset and achieves accuracy rivaling state-of-the-art (SOTA) VIO approaches. Thanks to its robustness to motion blur, low network inference time (∼23 ms), and stable processing latency (∼26 ms), CUAHN-VIO successfully runs onboard an Nvidia Jetson TX2 embedded processor to navigate a fast autonomous MAV.
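The backend step described above, feeding the network's mean prediction and variance estimate into an EKF measurement update, follows the standard Kalman update equations. A minimal 1-D sketch (with an identity measurement model; not the paper's full filter):

```python
import numpy as np

def ekf_update(x, P, z, R):
    """Kalman measurement update with identity measurement model H = I.
    Here z plays the role of the network's mean prediction and R its
    estimated variance (the 'uncertainty awareness' output)."""
    n = len(x)
    H = np.eye(n)
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x + K @ (z - H @ x)
    P_new = (np.eye(n) - K @ H) @ P
    return x_new, P_new

x = np.array([0.0]); P = np.array([[1.0]])   # prior
z = np.array([1.0]); R = np.array([[1.0]])   # equally confident measurement
x, P = ekf_update(x, P, z, R)                # posterior lands at the midpoint
```

When the network reports a large variance (e.g. for non-homography content), the gain K shrinks and the visual measurement is correspondingly down-weighted.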
Robotics and Autonomous Systems, vol. 185, Article 104866.
Citations: 0
Towards zero-shot cross-agent transfer learning via latent-space universal notice network
IF 4.3 · CAS Zone 2 (Computer Science) · Q1 AUTOMATION & CONTROL SYSTEMS · Pub Date: 2024-11-22 · DOI: 10.1016/j.robot.2024.104862
Samuel Beaussant , Sebastien Lengagne , Benoit Thuilot , Olivier Stasse
Despite numerous improvements in the sample-efficiency of Reinforcement Learning (RL) methods, learning from scratch still requires millions (even tens of millions) of interactions with the environment to converge to a high-reward policy. This is usually because the agent has no prior information about the task and its own physical embodiment. One way to mitigate this data hunger is Transfer Learning (TL). In this paper, we explore TL in the context of RL with the specific purpose of transferring policies from one agent to another, even in the presence of morphology discrepancies or different state–action spaces. We propose a process to leverage past knowledge from one agent (source) to speed up or even bypass the learning phase for a different agent (target) tackling the same task. Our proposed method first leverages Variational Auto-Encoders (VAE) to learn an agent-agnostic latent space from paired, time-aligned trajectories collected on a set of agents. Then, we train a policy embedded inside the created agent-invariant latent space to solve a given task, yielding a task module reusable by any of the agents sharing this common feature space. Through several robotic tasks and heterogeneous hardware platforms, both in simulation and on physical robots, we show the benefits of our approach in terms of improved sample-efficiency. More specifically, we report zero-shot generalization in some instances, where performance after transfer is recovered instantly. In the worst case, performance is recovered after fine-tuning on the target robot for a fraction of the training cost required to train a policy with similar performance from scratch.
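The transfer mechanism above, per-agent encoders mapping different state spaces into one shared latent space on which a single policy operates, can be sketched with linear stand-ins. This is a toy illustration under assumed dimensions, not the paper's VAE training procedure:

```python
import numpy as np

rng = np.random.default_rng(0)
LATENT = 4

# Hypothetical per-agent encoders into a shared latent space (the paper
# learns these as VAEs from paired, time-aligned trajectories).
W_src = rng.normal(size=(LATENT, 6))   # source agent: 6-D state
W_tgt = rng.normal(size=(LATENT, 9))   # target agent: 9-D state

def encode(W, state):
    """Map an agent-specific state into the agent-agnostic latent space."""
    return W @ state

# A policy defined purely on the latent space is reusable across agents.
policy = rng.normal(size=(2, LATENT))  # latent -> 2-D action

def act(W, state):
    return policy @ encode(W, state)

a_src = act(W_src, rng.normal(size=6))
a_tgt = act(W_tgt, rng.normal(size=9))  # same policy, different embodiment
```

The zero-shot case corresponds to reusing `policy` unchanged with a new agent's encoder; the fine-tuning fallback corresponds to briefly adapting the encoder (or policy) on the target robot.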
Robotics and Autonomous Systems, vol. 184, Article 104862.
Citations: 0
Learning temporal maps of dynamics for mobile robots
IF 4.3 · CAS Zone 2 (Computer Science) · Q1 AUTOMATION & CONTROL SYSTEMS · Pub Date: 2024-11-22 · DOI: 10.1016/j.robot.2024.104853
Junyi Shi , Tomasz Piotr Kucner
Building a map representation of the surrounding environment is crucial for the successful operation of autonomous robots. While extensive research has concentrated on mapping geometric structures and static objects, the environment is also influenced by the movement of dynamic objects. Integrating information about spatial motion patterns in an environment can be beneficial for planning socially compliant trajectories, avoiding congested areas, and aligning with the general flow of people. In this paper, we introduce a deep state-space model designed to learn map representations of spatial motion patterns and their temporal changes at specific locations. This enables human-compliant robot operation and improved trajectory forecasting in environments with evolving motion patterns. Validation of the proposed method is conducted using two datasets: one comprising generated motion patterns and the other featuring real-world pedestrian data. The model’s performance is assessed in terms of learning capability, mapping quality, and its applicability to downstream robotics tasks. For comparative assessment of mapping quality, we employ CLiFF-Map as a baseline, and CLiFF-LHMP serves as another baseline for evaluating performance in downstream motion prediction tasks. The results demonstrate that our model can effectively learn corresponding motion patterns and holds promising potential for application in robotic tasks.
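A map of dynamics with a temporal dimension can be pictured as a grid where each cell stores, per time bin, a distribution over observed motion directions. The sketch below is a drastically simplified stand-in for the learned deep state-space representation; the class and binning scheme are invented for illustration:

```python
import numpy as np

class TemporalDynamicsMap:
    """Toy temporal map of dynamics: per grid cell and hour-of-day bin,
    a histogram over 8 discretized motion directions."""
    def __init__(self, w, h, time_bins=24, n_dirs=8):
        self.hist = np.zeros((w, h, time_bins, n_dirs))

    def add(self, x, y, t_bin, heading):
        """Record one observed motion with heading in [-pi, pi)."""
        d = int(((heading + np.pi) / (2 * np.pi)) * 8) % 8
        self.hist[x, y, t_bin, d] += 1

    def dominant_direction(self, x, y, t_bin):
        """Most frequent direction bin at this cell and time."""
        return int(np.argmax(self.hist[x, y, t_bin]))

m = TemporalDynamicsMap(10, 10)
for _ in range(5):                       # morning flow, all heading +x
    m.add(2, 3, t_bin=8, heading=0.0)
```

A planner could query `dominant_direction` for the current time bin to bias trajectories along, rather than against, the prevailing flow of people.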
Robotics and Autonomous Systems, vol. 184, Article 104853.
Citations: 0
Automation of polymer pressing by robotic handling with in-process parameter optimization
IF 4.3 · CAS Zone 2 (Computer Science) · Q1 AUTOMATION & CONTROL SYSTEMS · Pub Date: 2024-11-21 · DOI: 10.1016/j.robot.2024.104868
Yuki Asano , Kei Okada , Shintaro Nakagawa , Naoko Yoshie , Junichiro Shiomi
In this study, we introduce an autonomous system for polymer pressing that integrates robotic manipulation, specialized equipment, and machine learning optimization. This system aims to significantly reduce lead time and human labor in polymer-materials development. Our approach utilizes an arm-type robot to handle polymer beads and operate a press machine, with process parameters autonomously determined by Bayesian optimization. The keys to this automation are custom-designed press tools suitable for robotic handling, such as press plates or a fork; a gripper-tool interface with tapered convex and concave parts that enables a single robot gripper to handle multiple tools; and an integrated control system that synchronizes the robot with the press machine. Additionally, we implement a closed-loop process that incorporates image processing for pressed-polymer recognition and Bayesian optimization for continuous parameter refinement, with an evaluation function that considers polymer-film thickness and press times. Verification experiments demonstrate the capability of the system to autonomously execute pressing operations and effectively propose optimized press parameters.
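The closed loop above (propose parameters, press, measure, score, refine) can be sketched as follows. For brevity this uses random search as a stand-in for Bayesian optimization, and `press` is a made-up surrogate for the real press-plus-vision measurement; the objective weights are illustrative, not the paper's:

```python
import random

def press(temperature, pressure):
    """Hypothetical surrogate for the real press + image-based measurement:
    returns the measured film thickness (mm) for given process parameters."""
    return 1.0 + 0.01 * abs(temperature - 180) + 0.005 * abs(pressure - 50)

def objective(thickness, press_time, target=1.0):
    """Evaluation function penalizing thickness error and press time,
    mirroring the paper's criteria (weights are illustrative)."""
    return abs(thickness - target) + 0.001 * press_time

random.seed(0)
best = None
for trial in range(50):                    # closed loop: propose -> press -> score
    T = random.uniform(150, 210)           # temperature candidate (degC)
    p = random.uniform(30, 70)             # pressure candidate (MPa)
    score = objective(press(T, p), press_time=60)
    if best is None or score < best[0]:
        best = (score, T, p)
```

In the actual system, the proposal step would come from a Bayesian optimizer's acquisition function rather than uniform sampling, so far fewer physical press cycles are needed.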
Robotics and Autonomous Systems, vol. 185, Article 104868.
Citations: 0
Delta- and Kalman-filter designs for multi-sensor pose estimation on spherical mobile mapping systems
IF 4.3 · CAS Zone 2 (Computer Science) · Q1 AUTOMATION & CONTROL SYSTEMS · Pub Date: 2024-11-21 · DOI: 10.1016/j.robot.2024.104852
Fabian Arzberger , Tim Schubert , Fabian Wiecha , Jasper Zevering , Julian Rothe , Dorit Borrmann , Sergio Montenegro , Andreas Nüchter
Spherical mobile mapping systems are not thoroughly studied in terms of inertial pose-estimation filtering. Their inherent rolling motion introduces high angular velocities and aggressive system dynamics around all principal axes. This motion profile also requires different modeling than state-of-the-art competitors, which focus heavily on more rotationally restricted systems such as UAVs, handheld devices, or cars. In this work we compare our previously proposed “Delta-filter”, which was largely motivated by the sensors’ inability to provide covariance estimates, with a Kalman-filter design using a covariance model. Both filters fuse two 6-DoF pose estimators with a motion model in real time; the designs are, however, theoretically suitable for an arbitrary number of estimators. We evaluate the trajectories against ground-truth pose measurements from an OptiTrack™ motion capture system. Furthermore, as our spherical systems are equipped with laser scanners, we evaluate the resulting point clouds against ground-truth maps from a Riegl VZ400 terrestrial laser scanner (TLS). Our source code and datasets can be found on GitHub (Arzberger, 2023).
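When both pose estimators do provide covariances, a Kalman-style design can combine them with the standard information-form fusion. A minimal sketch on a 2-D position (the real system fuses full 6-DoF poses, where rotations need special handling):

```python
import numpy as np

def fuse(x1, P1, x2, P2):
    """Covariance-weighted fusion of two estimates: the information-form
    combination a Kalman filter performs when both arrive as measurements."""
    I1, I2 = np.linalg.inv(P1), np.linalg.inv(P2)
    P = np.linalg.inv(I1 + I2)          # fused covariance
    x = P @ (I1 @ x1 + I2 @ x2)         # precision-weighted mean
    return x, P

x1 = np.array([0.0, 0.0]); P1 = np.eye(2) * 0.04   # confident estimator
x2 = np.array([1.0, 1.0]); P2 = np.eye(2) * 0.36   # noisier estimator
x, P = fuse(x1, P1, x2, P2)   # fused estimate sits close to the confident one
```

The "Delta-filter" alternative exists precisely for the case where `P1`/`P2` are unavailable from the sensors, so this weighting cannot be computed.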
Robotics and Autonomous Systems, vol. 184, Article 104852.
Citations: 0
Safe tracking control for free-flying space robots via control barrier functions
IF 4.3 · CAS Zone 2 (Computer Science) · Q1 AUTOMATION & CONTROL SYSTEMS · Pub Date: 2024-11-21 · DOI: 10.1016/j.robot.2024.104865
Chengrui Shi , Tao Meng , Kun Wang , Jiakun Lei , Weijia Wang , Renhao Mao
Safety is a critical problem for space robots in future complex autonomous On-Orbit Services. In this paper, we propose a real-time and guaranteed method for whole-body safe tracking control of free-flying space robots using High Order Control Barrier Functions (HOCBFs).
We start by utilizing capsule-shaped safety envelopes for an accurate approximation of space robots. This is followed by the development of HOCBF-based safety filters to ensure simultaneous collision avoidance and compliance with specified joint limits. To mitigate feasibility issues, we incorporate the optimal decay method into our safety filter design. Furthermore, we introduce a data-driven re-planning mechanism to avoid local minima of control barrier functions. Such a mechanism primarily operates through anomaly detection of tracking behavior using One-Class Support Vector Machines.
Numerical experiments demonstrate that our method effectively ensures safety of space robots under complicated circumstances without compromising the system’s ability to achieve its intended goals.
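The safety-filter idea can be shown in its simplest form: a first-order CBF on a scalar system, where the QP with one constraint reduces to a closed-form clamp of the nominal control. The paper's HOCBFs generalize this to higher relative degree and full robot dynamics; this sketch is only the one-dimensional special case:

```python
def cbf_filter(x, u_nom, alpha=1.0, x_max=1.0):
    """1-D control barrier function safety filter for dynamics x' = u with
    safe set h(x) = x_max - x >= 0. The CBF condition h' >= -alpha * h gives
    -u >= -alpha * h, i.e. u <= alpha * h, so the single-constraint QP
    reduces to clamping the nominal control."""
    h = x_max - x
    return min(u_nom, alpha * h)

# Nominal controller drives hard toward the boundary; the filter intervenes
# only near it, leaving the nominal command untouched deep inside the safe set.
u_near = cbf_filter(x=0.9, u_nom=5.0)   # clamped
u_far = cbf_filter(x=0.0, u_nom=0.05)   # passed through unchanged
```

For the whole-body case, one such constraint is generated per capsule pair and per joint limit, and the minimally invasive control is found by a quadratic program rather than a scalar `min`.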
Robotics and Autonomous Systems, vol. 184, Article 104865.
Citations: 0
A hierarchical simulation-based push planner for autonomous recovery in navigation blocked scenarios of mobile robots
IF 4.3 · CAS Zone 2 (Computer Science) · Q1 AUTOMATION & CONTROL SYSTEMS · Pub Date: 2024-11-20 · DOI: 10.1016/j.robot.2024.104867
Alessio De Luca , Luca Muratore , Nikos Tsagarakis
Mobile robotic platforms expected to operate in application domains characterized by unstructured terrains and environments will unavoidably face mobility constraints that classical navigation planning and obstacle avoidance/negotiation tools may not overcome. Endowing these robots with additional skills that let them interact with and manipulate obstacles blocking their pathway will significantly enhance their ability to deal with such conditions, permitting them to perform their mission more robustly in unstructured and cluttered scenes. This paper proposes a novel hierarchical simulation-based push planner framework that searches for a sequence of pushing actions to move obstacles toward a planned goal position. This addresses obstacles that block the robot’s navigation toward a target location and could otherwise cause the failure of the navigation plan and the overall mission of the robot. The planned pushing actions enable the robot to relocate objects in the scene while avoiding obstacles and respecting environmental constraints identified by an elevation or occupancy map. The online simulations of the pushing actions are carried out by exploiting the MuJoCo physics engine. The framework was validated in the Gazebo simulation environment and on real platforms such as the hybrid wheeled-legged robot CENTAURO and the mobile cobot RELAX.
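The simulation-based search over pushing actions can be sketched as a greedy rollout loop: at each step, simulate each candidate push, score the resulting obstacle position against the goal, and commit to the best. This toy planner replaces the MuJoCo rollout with a trivial kinematic stand-in and ignores the hierarchical structure and environmental constraints of the real framework:

```python
import math

def simulate_push(obstacle, direction, step=0.2):
    """Stand-in for a physics rollout (the paper uses MuJoCo): displace
    the obstacle one step along the push direction."""
    return (obstacle[0] + step * direction[0], obstacle[1] + step * direction[1])

def plan_pushes(obstacle, goal, n_steps=20):
    """Greedy simulation-based planner: simulate the four candidate push
    directions at each step and keep the one ending closest to the goal."""
    dirs = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    plan = []
    for _ in range(n_steps):
        best = min(dirs, key=lambda d: math.dist(simulate_push(obstacle, d), goal))
        obstacle = simulate_push(obstacle, best)
        plan.append(best)
        if math.dist(obstacle, goal) < 0.1:
            break
    return plan, obstacle

plan, final = plan_pushes(obstacle=(0.0, 0.0), goal=(1.0, 0.0))
```

In the full framework, each candidate would be a physically simulated push with contact dynamics, and infeasible candidates (collisions, terrain constraints from the elevation or occupancy map) would be pruned before scoring.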
Robotics and Autonomous Systems, Volume 184, Article 104867. Published 2024-11-20.
Citations: 0
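The push planner above searches over candidate pushing actions evaluated in simulation and commits to the rollout that best advances the obstacle toward its goal. A toy sketch of that search structure follows; the real framework rolls candidate pushes out in the MuJoCo physics engine under elevation/occupancy-map constraints, whereas the quasi-static push model and all parameters here are invented stand-ins:

```python
import math

def simulate_push(obstacle, direction, step=0.2):
    """Stand-in for a physics rollout (the paper uses MuJoCo): a quasi-static
    push moves the obstacle by `step` along the unit push direction."""
    dx, dy = direction
    norm = math.hypot(dx, dy)
    return (obstacle[0] + step * dx / norm, obstacle[1] + step * dy / norm)

def plan_pushes(start, goal, n_candidates=16, max_pushes=50, tol=0.15):
    """Greedy search: at each step, simulate all candidate push directions
    and commit to the one whose rollout ends closest to the goal."""
    obstacle, plan = start, []
    for _ in range(max_pushes):
        if math.dist(obstacle, goal) < tol:
            break
        candidates = [(math.cos(2 * math.pi * k / n_candidates),
                       math.sin(2 * math.pi * k / n_candidates))
                      for k in range(n_candidates)]
        best = min(candidates,
                   key=lambda d: math.dist(simulate_push(obstacle, d), goal))
        obstacle = simulate_push(obstacle, best)
        plan.append(best)
    return plan, obstacle

plan, final = plan_pushes(start=(0.0, 0.0), goal=(1.0, 1.0))
print(len(plan), final)
```

Replacing `simulate_push` with calls into a physics engine is what turns this greedy loop into a simulation-based planner: candidate actions are scored by their simulated outcome rather than by an analytic model.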
GPC-LIVO: Point-wise LiDAR-inertial-visual odometry with geometric and photometric composite measurement model
IF 4.3 · CAS Zone 2 (Computer Science) · Q1 AUTOMATION & CONTROL SYSTEMS · Pub Date: 2024-11-20 · DOI: 10.1016/j.robot.2024.104864
Chenxi Ye, Bingfei Nan
In the pursuit of precision within Simultaneous Localization and Mapping (SLAM), multi-sensor fusion has emerged as a validated strategy with vast potential in robotics applications. This work presents GPC-LIVO, an accurate and robust LiDAR-Inertial-Visual Odometry system that integrates geometric and photometric information into one composite measurement model with a point-wise updating architecture. GPC-LIVO constructs a belief factor model to assign different weights to the geometric and photometric observations in the measurement model, and adopts an adaptive error-state Kalman filter back-end to dynamically estimate the covariance of the two observations. Since LiDAR points have larger measurement errors at endpoints and edges, photometric information is fused only for LiDAR planar features, with a corresponding validation method based on the associated image plane. Comprehensive experiments cover both publicly available data sequences and data collected from a bespoke hardware setup. The results establish the better performance of the proposed system compared to other state-of-the-art odometry frameworks, and demonstrate its ability to operate effectively in various challenging environmental conditions. GPC-LIVO outputs state estimates at a high frequency (1-5 kHz, varying with the number of LiDAR points processed per frame) at a computational cost compatible with real-time operation.
Robotics and Autonomous Systems, Volume 185, Article 104864. Published 2024-11-20.
Citations: 0
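The belief-factor weighting described above can be illustrated, in a heavily simplified 1-D form, by inverse-variance fusion of one geometric and one photometric observation. This is not the paper's actual error-state Kalman filter, and all numbers below are invented; it only shows how a lower-variance (more trusted) observation pulls the estimate harder:

```python
import numpy as np

def fuse(prior, prior_var, z_geo, var_geo, z_photo, var_photo):
    """One scalar Kalman-style update with two independent observations.
    Inverse variances play the role of belief factors: the observation
    with smaller variance receives the larger weight."""
    weights = np.array([1.0 / prior_var, 1.0 / var_geo, 1.0 / var_photo])
    values = np.array([prior, z_geo, z_photo])
    post_var = 1.0 / weights.sum()          # fused (posterior) variance
    post = post_var * (weights * values).sum()  # weighted mean
    return post, post_var

# Geometric observation trusted more than photometric one:
post, post_var = fuse(prior=0.0, prior_var=1.0,
                      z_geo=1.0, var_geo=0.1,
                      z_photo=2.0, var_photo=1.0)
print(post, post_var)
```

In the full system such weights would be set adaptively per point; here they are fixed, which is the main simplification.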
Ontological framework for high-level task replanning for autonomous robotic systems
IF 4.3 · CAS Zone 2 (Computer Science) · Q1 AUTOMATION & CONTROL SYSTEMS · Pub Date: 2024-11-19 · DOI: 10.1016/j.robot.2024.104861
Rodrigo Bernardo, João M.C. Sousa, Paulo J.S. Gonçalves
Several frameworks for robot control platforms have been developed in recent years. However, strategies that incorporate automatic replanning remain to be explored, and such replanning is a requirement for Autonomous Robotic Systems (ARS) to be widely adopted. Ontologies can play an essential role by providing a structured representation of knowledge. This paper proposes a new framework capable of replanning high-level tasks in failure situations for ARSs. The framework utilizes an ontology-based reasoning engine to overcome constraints and executes tasks through Behavior Trees (BTs). It was implemented and validated in a real experimental environment using an Autonomous Mobile Robot (AMR) that shares a plan with a human operator. By applying semantic reasoning in the planning system, the framework offers a promising solution for improving the adaptability and efficiency of ARSs.
Robotics and Autonomous Systems, Volume 184, Article 104861. Published 2024-11-19.
Citations: 0
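The execution side of the framework above, Behavior Trees with a replanning branch taken on failure, can be sketched with a minimal hand-rolled BT. The `Sequence`/`Fallback` tick semantics follow standard BT conventions; the world model and replanning stub are invented placeholders, not the paper's ontology reasoner:

```python
SUCCESS, FAILURE = "SUCCESS", "FAILURE"

class Action:
    """Leaf node: wraps a callable returning True (success) or False."""
    def __init__(self, name, fn):
        self.name, self.fn = name, fn
    def tick(self):
        return SUCCESS if self.fn() else FAILURE

class Sequence:
    """Ticks children in order; fails on the first failing child."""
    def __init__(self, children):
        self.children = children
    def tick(self):
        for child in self.children:
            if child.tick() == FAILURE:
                return FAILURE
        return SUCCESS

class Fallback:
    """Ticks children until one succeeds; used here for failure recovery."""
    def __init__(self, children):
        self.children = children
    def tick(self):
        for child in self.children:
            if child.tick() == SUCCESS:
                return SUCCESS
        return FAILURE

# Toy world state: the nominal route is blocked, so the nominal plan fails
# and the replanning branch (standing in for the ontology reasoner) runs.
world = {"route_clear": False}

nominal_plan = Sequence([Action("drive_route", lambda: world["route_clear"])])

def replan():
    world["route_clear"] = True   # e.g. the reasoner selects an alternate route
    return nominal_plan.tick() == SUCCESS

tree = Fallback([nominal_plan, Action("replan_task", replan)])
result = tree.tick()
print(result)  # SUCCESS after the replanning branch repairs the plan
```

The Fallback node is what makes the recovery automatic: the replanning subtree is only ticked when the nominal plan has already returned FAILURE.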