
Latest publications in IEEE Robotics and Automation Letters

Design, Modeling, and Experimental Verification of Passively Adaptable Roller Gripper for Separating Stacked Fabric
IF 4.6 | CAS Tier 2 (Computer Science) | Q2 ROBOTICS | Pub Date: 2024-09-13 | DOI: 10.1109/LRA.2024.3461550
Jayant Unde;Jacinto Colan;Yasuhisa Hasegawa
This letter presents a novel approach to fabric manipulation through the development and optimization of a single-actuator-driven roller gripper. Focused on addressing the challenges inherent in handling fabrics with diverse thicknesses and materials, our gripper employs a passively adaptable, spring-driven mechanism, enabling effective manipulation of fabrics ranging from 0.1 mm to 2.25 mm in thickness. We analyze gripper-fabric interaction forces to identify the parameters that influence successful grasping. We then optimize the gripper's normal forces and the roller's tangential force using the proposed model. Systematic evaluations demonstrated the gripper's capability to separate individual layers from fabric stacks, achieving a 94.9% success rate across multiple fabric types. Overall, this research offers a compact, cost-effective solution with broad applicability in diverse industrial automation contexts, providing valuable insights for advancing robotic fabric handling systems.
Citations: 0
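At its simplest, the grasping analysis summarized above comes down to whether the roller's tangential friction on the top layer exceeds the friction binding that layer to the rest of the stack. A minimal sketch under a plain Coulomb-friction assumption (the single-contact model and the coefficients below are illustrative, not the paper's actual interaction model):

```python
def can_separate_top_layer(mu_roller, mu_fabric, normal_force):
    """Coulomb-friction sketch: the roller's tangential force on the top
    layer (mu_roller * N) must exceed the friction holding the top layer
    to the layer beneath it (mu_fabric * N)."""
    tangential = mu_roller * normal_force
    resisting = mu_fabric * normal_force
    return tangential > resisting

# A high-friction roller against lower-friction fabric-on-fabric contact:
print(can_separate_top_layer(mu_roller=0.9, mu_fabric=0.4, normal_force=2.0))  # True
```

In the paper itself, the normal and tangential forces are optimized jointly via the proposed interaction model; the inequality above only illustrates why a high-friction roller can peel a single layer.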
On the Calibration, Fault Detection and Recovery of a Force Sensing Device
IF 4.6 | CAS Tier 2 (Computer Science) | Q2 ROBOTICS | Pub Date: 2024-09-11 | DOI: 10.1109/LRA.2024.3458807
Yifang Zhang;Arash Ajoudani;Nikos G. Tsagarakis
Ground reaction force information, which includes the location of the center of pressure (COP) and vertical ground reaction force (vGRF), has various applications, such as in the gait assessment of patients post-injury or in the control of robot prostheses and exoskeleton devices. We first introduce a newly developed force-sensing device for measuring the COP and vGRF. Then, a model-free calibration method is proposed, leveraging Gaussian process regression (GPR) to extract COP and vGRF from raw sensor data. This approach yields remarkably low normalized root mean squared errors (NRMSEs) of 0.029 and 0.020 for COP in the mediolateral and anteroposterior directions, respectively, and 0.024 for vGRF. However, in general, learning-based calibration methods are sensitive to abnormal readings from sensing elements. To improve the robustness of the measurement, a GPR-based fault detection network is outlined for evaluating the sensing state when faults occur in individual sensing elements of the force-sensing device. Moreover, a GPR-based recovery method is proposed to restore the sensing device's function under fault conditions. In the validation experiments, the effect of the scale factor of the threshold in the fault detection network is analyzed experimentally. The fault detection network achieves an over 90% success rate in detecting faults, with an average delay below 5 seconds, when the scale factor is between 1.68 and 1.90. Engaging the GPR-based recovery models under fault conditions yields a substantial enhancement in COP (up to 85.0% improvement) and vGRF (up to 84.8% improvement) estimation accuracy.
Citations: 0
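Two quantities from the abstract above can be made concrete: the NRMSE used to report calibration accuracy, and the scale-factor threshold used by the fault detection network. A sketch assuming range normalization for the NRMSE and a residual-versus-predictive-standard-deviation fault test (both are plausible conventions, not confirmed details of the paper; the scale factor 1.8 mirrors the reported 1.68-1.90 range):

```python
import math

def nrmse(predicted, actual):
    """Normalized RMSE: RMSE divided by the range of the ground truth
    (one common normalization; the paper may normalize differently)."""
    rmse = math.sqrt(sum((p - a) ** 2 for p, a in zip(predicted, actual)) / len(actual))
    return rmse / (max(actual) - min(actual))

def is_faulty(residual, predictive_std, scale_factor=1.8):
    """Flag a sensing element when its residual against the GPR prediction
    exceeds scale_factor times the GPR predictive standard deviation."""
    return abs(residual) > scale_factor * predictive_std

actual = [0.0, 1.0, 2.0, 3.0]
predicted = [0.1, 1.1, 1.9, 3.0]
print(round(nrmse(predicted, actual), 4))  # → 0.0289
```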
G-Loc: Tightly-Coupled Graph Localization With Prior Topo-Metric Information
IF 4.6 | CAS Tier 2 (Computer Science) | Q2 ROBOTICS | Pub Date: 2024-09-10 | DOI: 10.1109/LRA.2024.3457383
Lorenzo Montano-Oliván;Julio A. Placed;Luis Montano;María T. Lázaro
Localization in already mapped environments is a critical component in many robotics and automotive applications, where previously acquired information can be exploited along with sensor fusion to provide robust and accurate localization estimates. In this letter, we offer a new perspective on map-based localization by reusing prior topological and metric information. Thus, we reformulate this long-studied problem to go beyond the mere use of metric maps. Our framework seamlessly integrates LiDAR, inertial and GNSS measurements, and cloud-to-map registrations in a sliding-window graph fashion, which makes it possible to accommodate the uncertainty of each observation. The modularity of our framework allows it to work with different sensor configurations (e.g., LiDAR resolutions, GNSS denial) and environmental conditions (e.g., mapless regions, large environments). We have conducted several validation experiments, including the deployment in a real-world automotive application, demonstrating the accuracy, efficiency, and versatility of our system in online localization.
Citations: 0
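The way a sliding-window graph can accommodate the uncertainty of each observation can be illustrated with a toy 1-D fusion window: each observation carries its own variance, old observations drop out of the window, and the estimate is the inverse-variance weighted mean. This is a stand-in, not G-Loc's actual factor-graph formulation:

```python
from collections import deque

class SlidingWindowFusion:
    """Toy 1-D stand-in for a sliding-window graph: each observation carries
    its own variance, and the estimate is the inverse-variance weighted mean
    of the observations still inside the window."""
    def __init__(self, window_size=3):
        self.window = deque(maxlen=window_size)  # oldest observations are dropped

    def add(self, value, variance):
        self.window.append((value, variance))

    def estimate(self):
        weights = [1.0 / var for _, var in self.window]
        return sum(w * v for (v, _), w in zip(self.window, weights)) / sum(weights)

f = SlidingWindowFusion(window_size=3)
f.add(10.0, 1.0)   # e.g. a GNSS fix, less certain
f.add(10.4, 0.25)  # e.g. a cloud-to-map registration, more certain
print(round(f.estimate(), 2))  # → 10.32, pulled toward the more certain observation
```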
Safe and Efficient Path Planning Under Uncertainty via Deep Collision Probability Fields
IF 4.6 | CAS Tier 2 (Computer Science) | Q2 ROBOTICS | Pub Date: 2024-09-10 | DOI: 10.1109/LRA.2024.3457208
Felix Herrmann;Sebastian Zach;Jacopo Banfi;Jan Peters;Georgia Chalvatzaki;Davide Tateo
Estimating collision probabilities between robots and environmental obstacles or other moving agents is crucial to ensure safety during path planning. This is an important building block of modern planning algorithms in many application scenarios such as autonomous driving, where noisy sensors perceive obstacles. While many approaches exist, they either provide overly conservative estimates of the collision probabilities or are computationally intensive due to their sampling-based nature. To deal with these issues, we introduce Deep Collision Probability Fields, a neural-based approach for computing collision probabilities of arbitrary objects with arbitrary unimodal uncertainty distributions. Our approach relegates the computationally intensive, sampling-based estimation of collision probabilities to the training step, allowing for fast neural network inference of the constraints during planning. In extensive experiments, we show that Deep Collision Probability Fields can produce reasonably accurate collision probabilities (up to $10^{-3}$) for planning and that our approach can be easily plugged into standard path planning approaches to plan safe paths on 2-D maps containing uncertain static and dynamic obstacles.
Citations: 0
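The computationally intensive, sampling-based estimation that the abstract contrasts with can be sketched directly: draw robot positions from the (here Gaussian) uncertainty distribution and count overlaps with a circular obstacle. All geometry and parameters below are illustrative:

```python
import random

def mc_collision_probability(robot_radius, obstacle_center, obstacle_radius,
                             mean, std, samples=20000, seed=0):
    """Monte Carlo collision probability of a disk robot with Gaussian
    position uncertainty against a circular obstacle."""
    rng = random.Random(seed)
    threshold = robot_radius + obstacle_radius  # center distance implying overlap
    hits = 0
    for _ in range(samples):
        x = rng.gauss(mean[0], std)
        y = rng.gauss(mean[1], std)
        if ((x - obstacle_center[0]) ** 2 + (y - obstacle_center[1]) ** 2) ** 0.5 < threshold:
            hits += 1
    return hits / samples

p_near = mc_collision_probability(0.3, (1.0, 0.0), 0.5, mean=(0.0, 0.0), std=0.4)
p_far = mc_collision_probability(0.3, (5.0, 0.0), 0.5, mean=(0.0, 0.0), std=0.4)
print(p_far < p_near)  # a distant obstacle is far less likely to collide
```

This per-query sampling cost is exactly what the proposed neural fields move into the training step, so that only a network forward pass is needed at planning time.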
Fast and Robust 6-DoF LiDAR-Based Localization of an Autonomous Vehicle Against Sensor Inaccuracy
IF 4.6 | CAS Tier 2 (Computer Science) | Q2 ROBOTICS | Pub Date: 2024-09-10 | DOI: 10.1109/LRA.2024.3457370
Gyu-Min Oh;Seung-Woo Seo
Precise and real-time localization is crucial for autonomous vehicles. State-of-the-art methods utilize 3D light detection and ranging (LiDAR), inertial measurement unit (IMU), and global positioning system (GPS). However, to meet real-time constraints, these methods often limit the search space to only three degrees of freedom (DoF; $x$, $y$, and $heading$) and rely on prior maps and IMU for estimating the $roll$, $pitch$, and $z$ coordinates. This reliance on maps and sensors can introduce inaccuracies if they contain errors. To achieve precise localization in scenarios where IMU or map errors are present, the $roll$, $pitch$, and $z$ coordinates must be estimated. However, incorporating these additional dimensions into the localization process may increase the processing time, rendering it unsuitable for real-time applications. Herein, we propose a precise and robust 6-DoF LiDAR localization algorithm. Instead of directly generating all 6-DoF, the proposed algorithm generates particles based on the $x$, $y$, and $heading$ coordinates. Subsequently, it optimizes the estimation of $roll$, $pitch$, and $z$ coordinates of each particle while maintaining a fixed number of particles. By expanding the dimensionality in this manner, we mitigate the accuracy degradation that may occur with 3-DoF positioning when dealing with faulty sensors or maps. Experimental results demonstrate that the proposed algorithm achieves satisfactory performance even in scenarios where sensor accuracy is compromised.
Citations: 0
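The core idea of sampling particles only over (x, y, heading) and then optimizing each particle's (z, roll, pitch) can be sketched as follows. The terrain query and the snap-to-ground "optimization" are illustrative stand-ins for the paper's per-particle refinement against the map:

```python
import random

def make_particles(n, x0, y0, spread, rng):
    """Sample particles only over (x, y, heading), keeping the count fixed."""
    return [{"x": x0 + rng.uniform(-spread, spread),
             "y": y0 + rng.uniform(-spread, spread),
             "heading": rng.uniform(-3.14159, 3.14159),
             "z": 0.0, "roll": 0.0, "pitch": 0.0} for _ in range(n)]

def refine_vertical_state(particle, ground_height):
    """Stand-in for the per-particle optimization of (z, roll, pitch):
    snap z to a queried ground height and level the attitude."""
    particle["z"] = ground_height(particle["x"], particle["y"])
    particle["roll"] = 0.0
    particle["pitch"] = 0.0
    return particle

rng = random.Random(42)
flat_ramp = lambda x, y: 0.1 * x  # toy terrain model (illustrative)
particles = [refine_vertical_state(p, flat_ramp)
             for p in make_particles(100, 2.0, 0.0, 0.5, rng)]
print(len(particles))  # the particle count stays fixed while dimensionality expands
```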
CTS: Concurrent Teacher-Student Reinforcement Learning for Legged Locomotion
IF 4.6 | CAS Tier 2 (Computer Science) | Q2 ROBOTICS | Pub Date: 2024-09-10 | DOI: 10.1109/LRA.2024.3457379
Hongxi Wang;Haoxiang Luo;Wei Zhang;Hua Chen
Thanks to recent explosive developments in data-driven learning methodologies, reinforcement learning (RL) emerges as a promising solution to address the legged locomotion problem in robotics. In this letter, we propose CTS, a novel Concurrent Teacher-Student reinforcement learning architecture for legged locomotion over uneven terrains. Different from the conventional teacher-student architecture, which trains the teacher policy via RL first and then transfers the knowledge to the student policy through supervised learning, our proposed architecture trains the teacher and student policy networks concurrently under the reinforcement learning paradigm. To this end, we develop a new training scheme based on a modified proximal policy gradient (PPO) method that exploits data samples collected from the interactions of both the teacher and the student policies with the environment. The effectiveness of the proposed architecture and the new training scheme is demonstrated through substantial quantitative simulation comparisons with state-of-the-art approaches and extensive indoor and outdoor experiments with quadrupedal and point-foot bipedal robot platforms, showcasing robust and agile locomotion capability. Quantitative simulation comparisons show that our approach reduces the average velocity tracking error by up to 20% compared to the two-stage teacher-student baseline, demonstrating significant superiority in addressing blind locomotion tasks.
Citations: 0
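The concurrent scheme (teacher and student updated in the same iteration from shared interaction data, rather than in two sequential stages) can be caricatured with scalar linear policies. Plain regression-style updates stand in for the modified PPO, and the environment, targets, and gains are all illustrative:

```python
import random

def rollout(env_state, policy, obs_noise, rng):
    """One (observation, action) pair; the teacher sees the privileged state
    (no noise), the student a noisy partial observation."""
    obs = env_state + rng.gauss(0.0, obs_noise)
    return obs, policy * obs

def concurrent_update(teacher_w, student_w, states, lr, rng):
    """Both policies are updated in the same iteration from shared data
    (regression-style updates stand in for the modified PPO)."""
    for s in states:
        t_obs, t_act = rollout(s, teacher_w, obs_noise=0.0, rng=rng)  # privileged
        s_obs, _ = rollout(s, student_w, obs_noise=0.1, rng=rng)      # partial/noisy
        target = 2.0 * s  # toy optimal action (illustrative)
        teacher_w += lr * (target - teacher_w * t_obs) * t_obs
        student_w += lr * (t_act - student_w * s_obs) * s_obs  # student tracks teacher online
    return teacher_w, student_w

rng = random.Random(0)
tw, sw = 0.0, 0.0
for _ in range(200):
    tw, sw = concurrent_update(tw, sw, [1.0, -0.5, 2.0], lr=0.05, rng=rng)
print(round(tw, 1))  # teacher converges toward the optimal gain of 2.0
```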
LiDAR-BIND: Multi-Modal Sensor Fusion Through Shared Latent Embeddings
IF 4.6 | CAS Tier 2 (Computer Science) | Q2 ROBOTICS | Pub Date: 2024-09-10 | DOI: 10.1109/LRA.2024.3457384
Niels Balemans;Ali Anwar;Jan Steckel;Siegfried Mercelis
This letter presents LiDAR-BIND, a novel sensor fusion framework aimed at enhancing the reliability and safety of autonomous vehicles (AVs) through a shared latent embedding space. With this method, the addition of different modalities, such as sonar and radar, into existing navigation setups becomes possible. These modalities offer robust performance even in challenging scenarios where optical sensors fail. Leveraging a shared latent representation space, LiDAR-BIND enables accurate modality prediction, allowing for the translation of one sensor's observations into another, thereby overcoming the limitations of depending solely on LiDAR for dense point-cloud generation. Through this, the framework facilitates the alignment of multiple sensor modalities without the need for large synchronized datasets across all sensors. We demonstrate its usability in SLAM applications, outperforming traditional LiDAR-based approaches under degraded optical conditions.
Citations: 0
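Retrieval in a shared latent space, the mechanism that lets one modality's observation be related to another's, can be sketched with cosine similarity over toy embeddings. The vectors and their dimensionality are illustrative; LiDAR-BIND's actual encoders and translation heads are learned:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def nearest_modality_match(query, candidates):
    """Index of the candidate embedding closest to the query in the shared
    latent space -- the basis for cross-modal prediction."""
    return max(range(len(candidates)), key=lambda i: cosine_similarity(query, candidates[i]))

radar_embedding = [0.9, 0.1, 0.0]  # toy 3-D latent (illustrative)
lidar_embeddings = [[0.0, 1.0, 0.0], [1.0, 0.2, 0.0], [0.0, 0.0, 1.0]]
print(nearest_modality_match(radar_embedding, lidar_embeddings))  # → 1
```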
Joint Intrinsic and Extrinsic Calibration of Perception Systems Utilizing a Calibration Environment
IF 4.6 | CAS Tier 2 (Computer Science) | Q2 ROBOTICS | Pub Date: 2024-09-10 | DOI: 10.1109/LRA.2024.3457385
Louis Wiesmann;Thomas Läbe;Lucas Nunes;Jens Behley;Cyrill Stachniss
Virtually all multi-sensor systems must calibrate their sensors to exploit their full potential for state estimation tasks such as mapping and localization. In this letter, we investigate the problem of extrinsic and intrinsic calibration of perception systems. Traditionally, targets in the form of checkerboards or uniquely identifiable tags are used to calibrate those systems. We propose to use a whole calibration environment as a target that supports the intrinsic and extrinsic calibration of different types of sensors. By doing so, we are able to calibrate multiple perception systems with different configurations, sensor types, and sensor modalities. Our approach does not rely on overlaps between sensors, which are otherwise often required when using classical targets. The main idea is to relate the measurements of each sensor to a precise model of the calibration environment. For this, we can choose for each sensor a specific method that best suits its calibration. Then, we estimate all intrinsics and extrinsics jointly using least squares adjustment. For the final evaluation of a LiDAR-to-camera calibration of our system, we propose an evaluation method that is independent of the calibration. This allows for quantitative evaluation between different calibration methods. The experiments show that our proposed method is able to provide reliable calibration.
引用次数: 0
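The joint least-squares adjustment described above can be illustrated with a toy problem: a 2D range-bearing sensor observing known landmarks of an environment model, where an extrinsic pose (x, y, θ) and a single intrinsic (a range-scale factor) are estimated together by Gauss-Newton. Everything here — the parameterization, `simulate_measurements`, `calibrate` — is an illustrative sketch under simplified assumptions, not the authors' implementation:

```python
import numpy as np

def simulate_measurements(landmarks, pose, scale):
    """Range-bearing measurements of known environment landmarks.

    pose = (x, y, theta) is the sensor extrinsic in the environment frame;
    scale is a toy intrinsic (range-scale factor)."""
    x, y, th = pose
    d = landmarks - np.array([x, y])
    ranges = np.hypot(d[:, 0], d[:, 1]) * scale
    bearings = np.arctan2(d[:, 1], d[:, 0]) - th
    return np.column_stack([ranges, bearings])

def calibrate(landmarks, meas, init, iters=20):
    """Jointly estimate extrinsic (x, y, theta) and intrinsic scale by
    Gauss-Newton least squares with a numeric forward-difference Jacobian."""
    p = np.array(init, dtype=float)

    def residuals(p):
        pred = simulate_measurements(landmarks, p[:3], p[3])
        return (pred - meas).ravel()

    for _ in range(iters):
        r = residuals(p)
        J = np.zeros((r.size, p.size))
        eps = 1e-6
        for j in range(p.size):
            dp = np.zeros_like(p)
            dp[j] = eps
            J[:, j] = (residuals(p + dp) - r) / eps
        # Gauss-Newton step via the normal equations
        p -= np.linalg.solve(J.T @ J, J.T @ r)
    return p
```

In the actual system, each sensor would contribute its own residual model against the precise environment map, and all intrinsic and extrinsic parameters would enter one such joint adjustment.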
Center Direction Network for Grasping Point Localization on Cloths
IF 4.6 2区 计算机科学 Q2 ROBOTICS Pub Date : 2024-09-09 DOI: 10.1109/LRA.2024.3455802
Domen Tabernik;Jon Muhovič;Matej Urbas;Danijel Skočaj
Object grasping is a fundamental challenge in robotics and computer vision, critical for advancing robotic manipulation capabilities. Deformable objects, like fabrics and cloths, pose additional challenges due to their non-rigid nature. In this work, we introduce CeDiRNet-3DoF, a deep-learning model for grasp point detection, with a particular focus on cloth objects. CeDiRNet-3DoF employs center direction regression alongside a localization network, attaining first place in the perception task of ICRA 2023's Cloth Manipulation Challenge. Recognizing that the lack of standardized benchmarks in the literature hinders effective method comparison, we present the ViCoS Towel Dataset. This extensive benchmark dataset comprises 8,000 real and 12,000 synthetic images, serving as a robust resource for training and evaluating contemporary data-driven deep-learning approaches. Extensive evaluation revealed CeDiRNet-3DoF's robustness in real-world performance, outperforming state-of-the-art methods, including the latest transformer-based models. Our work bridges a crucial gap, offering a robust solution and benchmark for cloth grasping in computer vision and robotics.
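The center direction idea can be sketched as follows: the network regresses, for every pixel, a unit vector pointing at the grasp point, and the point is then localized by letting each pixel vote along its predicted ray. In this sketch, the direction field is computed analytically from a known center as a stand-in for the network output, and the voting accumulator is an illustrative simplification of the localization network — neither is the CeDiRNet-3DoF code itself:

```python
import numpy as np

def direction_field(h, w, center):
    """Per-pixel unit vectors pointing at `center` — standing in for the
    dense direction output a network would regress."""
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    dy, dx = center[0] - ys, center[1] - xs
    n = np.hypot(dy, dx)
    n[n == 0] = 1.0  # avoid division by zero at the center pixel
    return dy / n, dx / n

def localize_center(dy, dx, steps=40):
    """Accumulate votes along each pixel's predicted direction and return
    the accumulator argmax as the grasp-point estimate."""
    h, w = dy.shape
    acc = np.zeros((h, w))
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    for s in range(1, steps + 1):
        vy = np.round(ys + s * dy).astype(int)
        vx = np.round(xs + s * dx).astype(int)
        ok = (vy >= 0) & (vy < h) & (vx >= 0) & (vx < w)
        np.add.at(acc, (vy[ok], vx[ok]), 1.0)  # unbuffered accumulation
    return np.unravel_index(acc.argmax(), acc.shape)
```

Since every pixel's ray passes through the true center, the votes concentrate there, and the argmax recovers the grasp point even from a noisy field.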
MAD-ICP: It is All About Matching Data – Robust and Informed LiDAR Odometry
IF 4.6 2区 计算机科学 Q2 ROBOTICS Pub Date : 2024-09-09 DOI: 10.1109/LRA.2024.3456509
Simone Ferrari;Luca Di Giammarino;Leonardo Brizi;Giorgio Grisetti
LiDAR odometry is the task of estimating the ego-motion of the sensor from sequential laser scans. This problem has been addressed by the community for more than two decades, and many effective solutions are available nowadays. Most of these systems implicitly rely on assumptions about the operating environment, the sensor used, and the motion pattern. When these assumptions are violated, several well-known systems tend to perform poorly. This letter presents a LiDAR odometry system that can overcome these limitations and operate well under different operating conditions while achieving performance comparable with domain-specific methods. Our algorithm follows the well-known ICP paradigm, leveraging a PCA-based kd-tree implementation that extracts structural information about the clouds being registered and computes the minimization metric for the alignment. The drift is bounded by managing the local map based on the estimated uncertainty of the tracked pose.
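The PCA ingredient named in the abstract can be illustrated in isolation: the covariance of a local point neighborhood is eigendecomposed, the eigenvector of the smallest eigenvalue is taken as the surface normal, and that normal then enters a point-to-plane residual of the kind ICP minimizes. This is a generic sketch of those two ingredients, not the MAD-ICP code itself:

```python
import numpy as np

def pca_normal(points):
    """Surface normal of a local neighborhood: the eigenvector of the
    covariance matrix associated with the smallest eigenvalue, i.e. the
    direction of least variance of the local structure."""
    c = points - points.mean(axis=0)
    cov = c.T @ c / len(points)
    w, v = np.linalg.eigh(cov)  # eigenvalues in ascending order
    return v[:, 0]

def point_to_plane_error(src, dst, normals):
    """Point-to-plane metric: signed distance of each source point from the
    tangent plane (dst point, normal) it is matched to."""
    return np.einsum('ij,ij->i', src - dst, normals)
```

In a full pipeline, normals like these would be stored with the kd-tree nodes, and the point-to-plane residuals would be minimized over the rigid-body transform between consecutive scans.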
IEEE Robotics and Automation Letters