Latest Articles in IEEE Robotics and Automation Letters
3D Guidance Law for Flexible Target Enclosing With Inherent Safety
IF 4.6 | CAS Tier 2, Computer Science | Q2, Robotics | Pub Date: 2025-01-13 | DOI: 10.1109/LRA.2025.3528225
Praveen Kumar Ranjan;Abhinav Sinha;Yongcan Cao
In this paper, we address the problem of enclosing an arbitrarily moving target in three dimensions by a single pursuer while ensuring the pursuer's safety by preventing collisions with the target. The proposed guidance strategy steers the pursuer to a safe region of space surrounding and excluding the target, allowing it to maintain a certain distance from the latter while offering greater flexibility in positioning and converging to any orbit within this safe zone. We leverage the concept of the Lyapunov Barrier Function as a powerful tool to constrain the distance between the pursuer and the target within asymmetric bounds, thereby ensuring the pursuer's safety within the predefined region. Further, we demonstrate the effectiveness of the proposed guidance law in managing arbitrarily maneuvering targets and other uncertainties (such as vehicle/autopilot dynamics and external disturbances) by enabling the pursuer to consistently achieve stable global enclosing behaviors by switching between stable enclosing trajectories within the safe region whenever necessary, even in response to aggressive target maneuvers. To attest to the merits of our work, we conduct experimental tests with various plant models, including a high-fidelity quadrotor model within Software-in-the-loop (SITL) simulations, encompassing various challenging target maneuver scenarios and requiring only relative information for successful execution.
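The abstract does not spell out the barrier construction. As an illustrative sketch (the symbols r_min, r_max, and r_d are assumptions, not taken from the paper), a Lyapunov barrier candidate enforcing asymmetric bounds on the pursuer-target range r is

$$V(r) = \frac{(r - r_d)^2}{(r - r_{\min})(r_{\max} - r)}, \qquad r_{\min} < r < r_{\max},$$

which is nonnegative, vanishes only at the desired range r_d, and grows without bound as r approaches either limit; any guidance law that renders \(\dot{V} \le 0\) therefore keeps the range inside the safe region, which is the safety property the abstract claims.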
{"title":"3D Guidance Law for Flexible Target Enclosing With Inherent Safety","authors":"Praveen Kumar Ranjan;Abhinav Sinha;Yongcan Cao","doi":"10.1109/LRA.2025.3528225","DOIUrl":"https://doi.org/10.1109/LRA.2025.3528225","url":null,"abstract":"In this paper, we address the problem of enclosing an arbitrarily moving target in three dimensions by a single pursuer while ensuring the pursuer's safety by preventing collisions with the target. The proposed guidance strategy steers the pursuer to a safe region of space surrounding and excluding the target, allowing it to maintain a certain distance from the latter while offering greater flexibility in positioning and converging to any orbit within this safe zone. We leverage the concept of the Lyapunov Barrier Function as a powerful tool to constrain the distance between the pursuer and the target within asymmetric bounds, thereby ensuring the pursuer's safety within the predefined region. Further, we demonstrate the effectiveness of the proposed guidance law in managing arbitrarily maneuvering targets and other uncertainties (such as vehicle/autopilot dynamics and external disturbances) by enabling the pursuer to consistently achieve stable global enclosing behaviors by switching between stable enclosing trajectories within the safe region whenever necessary, even in response to aggressive target maneuvers. To attest to the merits of our work, we conduct experimental tests with various plant models, including a high-fidelity quadrotor model within Software-in-the-loop (SITL) simulations, encompassing various challenging target maneuver scenarios and requiring only relative information for successful execution.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 2","pages":"2088-2095"},"PeriodicalIF":4.6,"publicationDate":"2025-01-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142992991","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Multi-View Spatial Context and State Constraints for Object-Goal Navigation
IF 4.6 | CAS Tier 2, Computer Science | Q2, Robotics | Pub Date: 2025-01-13 | DOI: 10.1109/LRA.2025.3529324
Chong Lu;Meiqin Liu;Zhirong Luan;Yan He;Badong Chen
Object-goal navigation is a highly challenging task where an agent must navigate to a target solely based on visual observations. Current reinforcement learning-based methods for object-goal navigation face two major challenges: first, the agent lacks sufficient perception of environmental context information, resulting in an absence of rich visual representations; second, in complex environments or confined spaces, the agent tends to stop exploring novel states, becoming trapped in a deadlock from which it cannot escape. To address these issues, we propose a novel Multi-View Visual Transformer (MVVT) navigation model, which consists of two components: a multi-view visual observation representation module and an episode state constraint-based policy learning module. In the visual observation representation module, we expand the input image perspective to five views to enable the agent to learn rich spatial context relationships of the environment, which provides content-rich feature information for subsequent policy learning. In the policy learning module, we help the agent escape deadlock by constraining the correlation of highly related states within an episode, which promotes the exploration of novel states and achieves efficient navigation. We validate our method in the AI2-Thor environment, and experimental results show that our approach outperforms current state-of-the-art methods across all metrics, with a particularly notable improvement in success rate by 2.66% and SPL metric by 16.5%.
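The episode state constraint is described only at a high level. A minimal sketch of the idea in Python, penalizing new states that are highly correlated with states already visited in the episode so that the agent is pushed out of deadlocks (the function name, threshold tau, and weight beta are hypothetical, not the paper's), might look like:

```python
import numpy as np

def state_constraint_penalty(state_emb, episode_memory, tau=0.9, beta=0.1):
    """Penalty added to the reward when the current state embedding is
    nearly a duplicate of an earlier state in the same episode.
    Illustrative only; the paper's exact constraint is not reproduced."""
    if not episode_memory:
        return 0.0
    mem = np.stack(episode_memory)                        # (t, d)
    sims = mem @ state_emb / (
        np.linalg.norm(mem, axis=1) * np.linalg.norm(state_emb) + 1e-8)
    # zero until the agent starts looping over near-duplicate states
    return -beta * max(0.0, float(sims.max()) - tau)
```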
{"title":"Multi-View Spatial Context and State Constraints for Object-Goal Navigation","authors":"Chong Lu;Meiqin Liu;Zhirong Luan;Yan He;Badong Chen","doi":"10.1109/LRA.2025.3529324","DOIUrl":"https://doi.org/10.1109/LRA.2025.3529324","url":null,"abstract":"Object-goal navigation is a highly challenging task where an agent must navigate to a target solely based on visual observations. Current reinforcement learning-based methods for object-goal navigation face two major challenges: first, the agent lacks sufficient perception of environmental context information, resulting in an absence of rich visual representations; second, in complex environments or confined spaces, the agent tends to stop exploring novel states, becoming trapped in a deadlock from which it cannot escape. To address these issues, we propose a novel Multi-View Visual Transformer (MVVT) navigation model, which consists of two components: a multi-view visual observation representation module and an episode state constraint-based policy learning module. In the visual observation representation module, we expand the input image perspective to five views to enable the agent to learn rich spatial context relationships of the environment, which provides content-rich feature information for subsequent policy learning. In the policy learning module, we help the agent escape deadlock by constraining the correlation of highly related states within an episode, which promotes the exploration of novel states and achieves efficient navigation. We validate our method in the AI2-Thor environment, and experimental results show that our approach outperforms current state-of-the-art methods across all metrics, with a particularly notable improvement in success rate by 2.66% and SPL metric by 16.5%.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 3","pages":"2207-2214"},"PeriodicalIF":4.6,"publicationDate":"2025-01-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143106593","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
NeuroVE: Brain-Inspired Linear-Angular Velocity Estimation With Spiking Neural Networks
IF 4.6 | CAS Tier 2, Computer Science | Q2, Robotics | Pub Date: 2025-01-13 | DOI: 10.1109/LRA.2025.3529319
Xiao Li;Xieyuanli Chen;Ruibin Guo;Yujie Wu;Zongtan Zhou;Fangwen Yu;Huimin Lu
Vision-based ego-velocity estimation is a fundamental problem in robot state estimation. However, the constraints of frame-based cameras, including motion blur and insufficient frame rates in dynamic settings, readily lead to the failure of conventional velocity estimation techniques. Mammals exhibit a remarkable ability to accurately estimate their ego-velocity during aggressive movement. Hence, integrating this capability into robots shows great promise for addressing these challenges. In this letter, we propose a brain-inspired framework for linear-angular velocity estimation, dubbed NeuroVE. The NeuroVE framework employs an event camera to capture the motion information and implements spiking neural networks (SNNs) to simulate the brain's spatial cells' function for velocity estimation. We formulate the velocity estimation as a time-series forecasting problem. To this end, we design an Astrocyte Leaky Integrate-and-Fire (ALIF) neuron model to encode continuous values. Additionally, we have developed an Astrocyte Spiking Long Short-term Memory (ASLSTM) structure, which significantly improves the time-series forecasting capabilities, enabling an accurate estimate of ego-velocity. Results from both simulation and real-world experiments indicate that NeuroVE has achieved an approximate 60% increase in accuracy compared to other SNN-based approaches.
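The ALIF dynamics are not given in the abstract. For reference, a standard discrete-time leaky integrate-and-fire update, which the paper's astrocyte-augmented neuron builds on, can be sketched as follows (all parameter values illustrative):

```python
import numpy as np

def lif_step(v, i_in, decay=0.9, v_th=1.0, v_reset=0.0):
    """One leaky integrate-and-fire step: leak, integrate input current,
    spike on threshold crossing, hard-reset. The paper's ALIF neuron adds
    an astrocyte-inspired term (not specified in the abstract) on top of
    dynamics like these to encode continuous values."""
    v = decay * v + i_in
    spikes = (v >= v_th).astype(v.dtype)
    v = np.where(spikes > 0, v_reset, v)
    return v, spikes
```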
{"title":"NeuroVE: Brain-Inspired Linear-Angular Velocity Estimation With Spiking Neural Networks","authors":"Xiao Li;Xieyuanli Chen;Ruibin Guo;Yujie Wu;Zongtan Zhou;Fangwen Yu;Huimin Lu","doi":"10.1109/LRA.2025.3529319","DOIUrl":"https://doi.org/10.1109/LRA.2025.3529319","url":null,"abstract":"Vision-based ego-velocity estimation is a fundamental problem in robot state estimation. However, the constraints of frame-based cameras, including motion blur and insufficient frame rates in dynamic settings, readily lead to the failure of conventional velocity estimation techniques. Mammals exhibit a remarkable ability to accurately estimate their ego-velocity during aggressive movement. Hence, integrating this capability into robots shows great promise for addressing these challenges. In this letter, we propose a brain-inspired framework for linear-angular velocity estimation, dubbed NeuroVE. The NeuroVE framework employs an event camera to capture the motion information and implements spiking neural networks (SNNs) to simulate the brain's spatial cells' function for velocity estimation. We formulate the velocity estimation as a time-series forecasting problem. To this end, we design an Astrocyte Leaky Integrate-and-Fire (ALIF) neuron model to encode continuous values. Additionally, we have developed an Astrocyte Spiking Long Short-term Memory (ASLSTM) structure, which significantly improves the time-series forecasting capabilities, enabling an accurate estimate of ego-velocity. Results from both simulation and real-world experiments indicate that NeuroVE has achieved an approximate 60% increase in accuracy compared to other SNN-based approaches.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 3","pages":"2375-2382"},"PeriodicalIF":4.6,"publicationDate":"2025-01-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143106609","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
DINOv2-Based UAV Visual Self-Localization in Low-Altitude Urban Environments
IF 4.6 | CAS Tier 2, Computer Science | Q2, Robotics | Pub Date: 2025-01-09 | DOI: 10.1109/LRA.2025.3527762
Jiaqiang Yang;Danyang Qin;Huapeng Tang;Sili Tao;Haoze Bie;Lin Ma
Visual self-localization technology is essential for unmanned aerial vehicles (UAVs) to achieve autonomous navigation and mission execution in environments where global navigation satellite system (GNSS) signals are unavailable. This technology estimates the UAV's geographic location by performing cross-view matching between UAV and satellite images. However, significant viewpoint differences between UAV and satellite images result in poor accuracy for existing cross-view matching methods. To address this, we integrate the DINOv2 model with UAV visual localization tasks and propose a DINOv2-based UAV visual self-localization method. Considering the inherent differences between pre-trained models and cross-view matching tasks, we propose a global-local feature adaptive enhancement method (GLFA). This method leverages Transformer and multi-scale convolutions to capture long-range dependencies and local spatial information in visual images, improving the model's ability to recognize key discriminative landmarks. In addition, we propose a cross-enhancement method based on a spatial pyramid (CESP), which constructs a multi-scale spatial pyramid to cross-enhance features, effectively improving the ability of the features to perceive multi-scale spatial information. Experimental results demonstrate that the proposed method achieves impressive scores of 86.27% in R@1 and 88.87% in SDM@1 on the DenseUAV public benchmark dataset, providing a novel solution for UAV visual self-localization.
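The GLFA and CESP modules are only named in the abstract; the retrieval backbone they enhance can be sketched with the publicly released DINOv2 weights, matching one UAV view against candidate satellite tiles by cosine similarity (the torch.hub model name is from the public DINOv2 release; the pipeline below is a simplification, not the paper's method):

```python
import torch

# Public DINOv2 ViT-S/14 backbone; the paper's GLFA/CESP modules
# operate on features like these.
model = torch.hub.load("facebookresearch/dinov2", "dinov2_vits14")
model.eval()

@torch.no_grad()
def embed(imgs):
    """imgs: (N, 3, 224, 224), ImageNet-normalized. Returns unit-norm
    global embeddings, (N, 384) for ViT-S/14."""
    return torch.nn.functional.normalize(model(imgs), dim=1)

@torch.no_grad()
def localize(uav_img, sat_tiles):
    """Index of the satellite tile most similar to the UAV view."""
    q = embed(uav_img.unsqueeze(0))       # (1, 384)
    db = embed(sat_tiles)                 # (M, 384)
    return int((db @ q.T).squeeze(1).argmax())
```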
{"title":"DINOv2-Based UAV Visual Self-Localization in Low-Altitude Urban Environments","authors":"Jiaqiang Yang;Danyang Qin;Huapeng Tang;Sili Tao;Haoze Bie;Lin Ma","doi":"10.1109/LRA.2025.3527762","DOIUrl":"https://doi.org/10.1109/LRA.2025.3527762","url":null,"abstract":"Visual self-localization technology is essential for unmanned aerial vehicles (UAVs) to achieve autonomous navigation and mission execution in environments where global navigation satellite system (GNSS) signals are unavailable. This technology estimates the UAV's geographic location by performing cross-view matching between UAV and satellite images. However, significant viewpoint differences between UAV and satellite images result in poor accuracy for existing cross-view matching methods. To address this, we integrate the DINOv2 model with UAV visual localization tasks and propose a DINOv2-based UAV visual self-localization method. Considering the inherent differences between pre-trained models and cross-view matching tasks, we propose a global-local feature adaptive enhancement method (GLFA). This method leverages Transformer and multi-scale convolutions to capture long-range dependencies and local spatial information in visual images, improving the model's ability to recognize key discriminative landmarks. In addition, we propose a cross-enhancement method based on a spatial pyramid (CESP), which constructs a multi-scale spatial pyramid to cross-enhance features, effectively improving the ability of the features to perceive multi-scale spatial information. Experimental results demonstrate that the proposed method achieves impressive scores of 86.27% in R@1 and 88.87% in SDM@1 on the DenseUAV public benchmark dataset, providing a novel solution for UAV visual self-localization.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 2","pages":"2080-2087"},"PeriodicalIF":4.6,"publicationDate":"2025-01-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142992990","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Learning Agile Swimming: An End-to-End Approach Without CPGs
IF 4.6 | CAS Tier 2, Computer Science | Q2, Robotics | Pub Date: 2025-01-09 | DOI: 10.1109/LRA.2025.3527757
Xiaozhu Lin;Xiaopei Liu;Yang Wang
The pursuit of agile and efficient underwater robots, especially bio-mimetic robotic fish, has been impeded by challenges in creating motion controllers that are able to fully exploit their hydrodynamic capabilities. This letter addresses these challenges by introducing a novel, model-free, end-to-end control framework that leverages Deep Reinforcement Learning (DRL) to enable agile and energy-efficient swimming of robotic fish. Unlike existing methods that rely on predefined trigonometric swimming patterns like Central Pattern Generators (CPG), our approach directly outputs low-level actuator commands without strong constraints, enabling the robotic fish to learn agile swimming behaviors. In addition, by integrating a high-performance Computational Fluid Dynamics (CFD) simulator with innovative sim-to-real strategies, such as normalized density calibration and servo response calibration, the proposed framework significantly mitigates the sim-to-real gap, facilitating direct transfer of control policies to real-world environments without fine-tuning. Comparative experiments demonstrate that our method achieves faster swimming speeds, smaller turn-around radii, and reduced energy consumption compared to the state-of-the-art swimming controllers. Furthermore, the proposed framework shows promise in addressing complex tasks, paving the way for more effective deployment of robotic fish in real aquatic environments.
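To make the CPG-free contrast concrete, the sketch below compares a conventional CPG-style command, a fixed sinusoid with a few tunable knobs, against a learned policy that maps observations straight to low-level actuator commands as the framework proposes (both functions are illustrative stand-ins, not the paper's code):

```python
import numpy as np

def cpg_command(t, amp=0.6, freq=1.5, phase=0.0):
    """Conventional CPG-style tail command: a predefined sinusoid whose
    amplitude, frequency, and phase are the only degrees of freedom."""
    return amp * np.sin(2 * np.pi * freq * t + phase)

def policy_command(policy, obs):
    """End-to-end alternative: a trained DRL actor emits the normalized
    servo command directly, with no trigonometric prior imposed."""
    return np.clip(policy(obs), -1.0, 1.0)
```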
{"title":"Learning Agile Swimming: An End-to-End Approach Without CPGs","authors":"Xiaozhu Lin;Xiaopei Liu;Yang Wang","doi":"10.1109/LRA.2025.3527757","DOIUrl":"https://doi.org/10.1109/LRA.2025.3527757","url":null,"abstract":"The pursuit of agile and efficient underwater robots, especially bio-mimetic robotic fish, has been impeded by challenges in creating motion controllers that are able to fully exploit their hydrodynamic capabilities. This letter addresses these challenges by introducing a novel, model-free, end-to-end control framework that leverages Deep Reinforcement Learning (DRL) to enable agile and energy-efficient swimming of robotic fish. Unlike existing methods that rely on predefined trigonometric swimming patterns like Central Pattern Generators (CPG), our approach directly outputs low-level actuator commands without strong constraints, enabling the robotic fish to learn agile swimming behaviors. In addition, by integrating a high-performance Computational Fluid Dynamics (CFD) simulator with innovative sim-to-real strategies, such as normalized density calibration and servo response calibration, the proposed framework significantly mitigates the sim-to-real gap, facilitating direct transfer of control policies to real-world environments without fine-tuning. Comparative experiments demonstrate that our method achieves faster swimming speeds, smaller turn-around radii, and reduced energy consumption compared to the state-of-the-art swimming controllers. Furthermore, the proposed framework shows promise in addressing complex tasks, paving the way for more effective deployment of robotic fish in real aquatic environments.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 2","pages":"1992-1999"},"PeriodicalIF":4.6,"publicationDate":"2025-01-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142992983","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Human-Robot Collaborative Tele-Grasping in Clutter With Five-Fingered Robotic Hands
IF 4.6 | CAS Tier 2, Computer Science | Q2, Robotics | Pub Date: 2025-01-08 | DOI: 10.1109/LRA.2025.3527278
Yayu Huang;Dongxuan Fan;Dashun Yan;Wen Qi;Guoqiang Deng;Zhihao Shao;Yongkang Luo;Daheng Li;Zhenghan Wang;Qian Liu;Peng Wang
Teleoperation offers the possibility of enabling robots to replace humans in operating within hazardous environments. While it provides greater adaptability to unstructured settings than full autonomy, it also imposes significant burdens on human operators, leading to operational errors. To address this challenge, shared control, a key aspect of human-robot collaboration methods, has emerged as a promising alternative. By integrating direct teleoperation with autonomous control, shared control ensures both efficiency and stability. In this letter, we introduce a shared control framework for human-robot collaborative tele-grasping in clutter with five-fingered robotic hands. During teleoperation, the operator's intent to reach the target object is detected in real-time. Upon successful detection, continuous and smooth grasping plans are generated, allowing the robot to seamlessly take over control and achieve natural, collision-free grasping. We validate the proposed framework through fundamental component analysis and experiments on real-world platforms, demonstrating the superior performance of this framework in reducing operator workload and enabling effective grasping in clutter.
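The arbitration policy itself is not detailed in the abstract. A minimal sketch of the shared-control idea, shifting authority from teleoperation toward the autonomous grasp planner as intent confidence rises (a textbook blending rule, not necessarily the paper's), is:

```python
import numpy as np

def blend_command(u_tele, u_auto, intent_conf):
    """Blend operator and autonomous commands: with intent_conf in
    [0, 1], control authority moves smoothly from the human (0) to the
    autonomous grasping controller (1)."""
    alpha = float(np.clip(intent_conf, 0.0, 1.0))
    return (1.0 - alpha) * np.asarray(u_tele) + alpha * np.asarray(u_auto)
```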
{"title":"Human-Robot Collaborative Tele-Grasping in Clutter With Five-Fingered Robotic Hands","authors":"Yayu Huang;Dongxuan Fan;Dashun Yan;Wen Qi;Guoqiang Deng;Zhihao Shao;Yongkang Luo;Daheng Li;Zhenghan Wang;Qian Liu;Peng Wang","doi":"10.1109/LRA.2025.3527278","DOIUrl":"https://doi.org/10.1109/LRA.2025.3527278","url":null,"abstract":"Teleoperation offers the possibility of enabling robots to replace humans in operating within hazardous environments. While it provides greater adaptability to unstructured settings than full autonomy, it also imposes significant burdens on human operators, leading to operational errors. To address this challenge, shared control, a key aspect of human-robot collaboration methods, has emerged as a promising alternative. By integrating direct teleoperation with autonomous control, shared control ensures both efficiency and stability. In this letter, we introduce a shared control framework for human-robot collaborative tele-grasping in clutter with five-fingered robotic hands. During teleoperation, the operator's intent to reach the target object is detected in real-time. Upon successful detection, continuous and smooth grasping plans are generated, allowing the robot to seamlessly take over control and achieve natural, collision-free grasping. We validate the proposed framework through fundamental component analysis and experiments on real-world platforms, demonstrating the superior performance of this framework in reducing operator workload and enabling effective grasping in clutter.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 3","pages":"2215-2222"},"PeriodicalIF":4.6,"publicationDate":"2025-01-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143106594","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Learning-Based On-Track System Identification for Scaled Autonomous Racing in Under a Minute
IF 4.6 | CAS Tier 2, Computer Science | Q2, Robotics | Pub Date: 2025-01-08 | DOI: 10.1109/LRA.2025.3527336
Onur Dikici;Edoardo Ghignone;Cheng Hu;Nicolas Baumann;Lei Xie;Andrea Carron;Michele Magno;Matteo Corno
Accurate tire modeling is crucial for optimizing autonomous racing vehicles, as State-of-the-Art (SotA) model-based techniques rely on precise knowledge of the vehicle's parameters, yet system identification in dynamic racing conditions is challenging due to varying track and tire conditions. Traditional methods require extensive operational ranges, often impractical in racing scenarios. Machine Learning (ML)-based methods, while improving performance, struggle with generalization and depend on accurate initialization. This paper introduces a novel on-track system identification algorithm, incorporating a Neural Network (NN) for error correction, which is then employed for traditional system identification with virtually generated data. Crucially, the process is iteratively reapplied, with tire parameters updated at each cycle, leading to notable improvements in accuracy in tests on a scaled vehicle. Experiments show that it is possible to learn a tire model without prior knowledge with only 30 seconds of driving data, and 3 seconds of training time. This method demonstrates greater one-step prediction accuracy than the baseline Nonlinear Least Squares (NLS) method under noisy conditions, achieving a 3.3x lower Root Mean Square Error (RMSE), and yields tire models with comparable accuracy to traditional steady-state system identification. Furthermore, unlike steady-state methods requiring large spaces and specific experimental setups, the proposed approach identifies tire parameters directly on a race track in dynamic racing environments.
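A compressed sketch of the iterative loop the abstract describes: a learned residual model corrects the measured data, and a magic-formula tire model is refit by least squares on the corrected (virtually generated) data each cycle. The `residual_nn` interface, the simplified Pacejka form, and the initial guess are assumptions:

```python
import numpy as np
from scipy.optimize import curve_fit

def pacejka(alpha, B, C, D):
    """Simplified magic-formula lateral tire force (no curvature term)."""
    return D * np.sin(C * np.arctan(B * alpha))

def identify(slip, force, residual_nn=None, n_iters=3):
    """Iteratively refit (B, C, D): correct measurements with the learned
    error model, then re-run classical least-squares identification."""
    params = (10.0, 1.5, 1.0)                      # rough initial guess
    for _ in range(n_iters):
        corrected = force - (residual_nn(slip) if residual_nn else 0.0)
        params, _ = curve_fit(pacejka, slip, corrected, p0=params)
    return params
```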
{"title":"Learning-Based On-Track System Identification for Scaled Autonomous Racing in Under a Minute","authors":"Onur Dikici;Edoardo Ghignone;Cheng Hu;Nicolas Baumann;Lei Xie;Andrea Carron;Michele Magno;Matteo Corno","doi":"10.1109/LRA.2025.3527336","DOIUrl":"https://doi.org/10.1109/LRA.2025.3527336","url":null,"abstract":"Accurate tire modeling is crucial for optimizing autonomous racing vehicles, as State-of-the-Art (SotA) model-based techniques rely on precise knowledge of the vehicle's parameters, yet system identification in dynamic racing conditions is challenging due to varying track and tire conditions. Traditional methods require extensive operational ranges, often impractical in racing scenarios. Machine Learning (ML)-based methods, while improving performance, struggle with generalization and depend on accurate initialization. This paper introduces a novel on-track system identification algorithm, incorporating a Neural Network (NN) for error correction, which is then employed for traditional system identification with virtually generated data. Crucially, the process is iteratively reapplied, with tire parameters updated at each cycle, leading to notable improvements in accuracy in tests on a scaled vehicle. Experiments show that it is possible to learn a tire model without prior knowledge with only 30 seconds of driving data, and 3 seconds of training time. This method demonstrates greater one-step prediction accuracy than the baseline Nonlinear Least Squares (NLS) method under noisy conditions, achieving a 3.3x lower Root Mean Square Error (RMSE), and yields tire models with comparable accuracy to traditional steady-state system identification. Furthermore, unlike steady-state methods requiring large spaces and specific experimental setups, the proposed approach identifies tire parameters directly on a race track in dynamic racing environments.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 2","pages":"1984-1991"},"PeriodicalIF":4.6,"publicationDate":"2025-01-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142992982","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Symbolic Manipulation Planning With Discovered Object and Relational Predicates
IF 4.6 | CAS Tier 2, Computer Science | Q2, Robotics | Pub Date: 2025-01-08 | DOI: 10.1109/LRA.2025.3527338
Alper Ahmetoglu;Erhan Oztop;Emre Ugur
Discovering the symbols and rules that can be used in long-horizon planning from a robot's unsupervised exploration of its environment and continuous sensorimotor experience is a challenging task. The previous studies proposed learning symbols from single or paired object interactions and planning with these symbols. In this work, we propose a system that learns rules with discovered object and relational symbols that encode an arbitrary number of objects and the relations between them, converts those rules to Planning Domain Description Language (PDDL), and generates plans that involve affordances of the arbitrary number of objects to achieve tasks. We validated our system with box-shaped objects in different sizes and showed that the system can develop a symbolic knowledge of pick-up, carry, and place operations, taking into account object compounds in different configurations, such as boxes would be carried together with a larger box that they are placed on. We also compared our method with other symbol learning methods and showed that planning with the operators defined over relational symbols gives better planning performance compared to the baselines.
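The abstract names PDDL as the planning representation. A minimal sketch of the rules-to-PDDL conversion step, rendering a learned rule as an action schema (the predicate and action names here are invented for illustration):

```python
def rule_to_pddl(name, params, preconds, effects):
    """Render one learned symbolic rule as a PDDL action string."""
    fmt = lambda preds: " ".join(f"({p})" for p in preds)
    return (f"(:action {name}\n"
            f"  :parameters ({' '.join(params)})\n"
            f"  :precondition (and {fmt(preconds)})\n"
            f"  :effect (and {fmt(effects)}))")

print(rule_to_pddl(
    "pick-up", ["?o - object"],
    ["graspable ?o", "handempty"],
    ["holding ?o", "not (handempty)"]))
```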
{"title":"Symbolic Manipulation Planning With Discovered Object and Relational Predicates","authors":"Alper Ahmetoglu;Erhan Oztop;Emre Ugur","doi":"10.1109/LRA.2025.3527338","DOIUrl":"https://doi.org/10.1109/LRA.2025.3527338","url":null,"abstract":"Discovering the symbols and rules that can be used in long-horizon planning from a robot's unsupervised exploration of its environment and continuous sensorimotor experience is a challenging task. The previous studies proposed learning symbols from single or paired object interactions and planning with these symbols. In this work, we propose a system that learns rules with discovered object and relational symbols that encode an arbitrary number of objects and the relations between them, converts those rules to Planning Domain Description Language (PDDL), and generates plans that involve affordances of the arbitrary number of objects to achieve tasks. We validated our system with box-shaped objects in different sizes and showed that the system can develop a symbolic knowledge of pick-up, carry, and place operations, taking into account object compounds in different configurations, such as boxes would be carried together with a larger box that they are placed on. We also compared our method with other symbol learning methods and showed that planning with the operators defined over relational symbols gives better planning performance compared to the baselines.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 2","pages":"1968-1975"},"PeriodicalIF":4.6,"publicationDate":"2025-01-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142992980","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Design and Analysis of a Hybrid Actuator With Resilient Origami-Inspired Hinges
IF 4.6 | CAS Tier 2, Computer Science | Q2, Robotics | Pub Date: 2025-01-08 | DOI: 10.1109/LRA.2025.3527282
Seunghoon Yoo;Hyunjun Park;Youngsu Cha
This letter presents a novel cable-driven hybrid origami-inspired actuator with load-bearing capability. In contrast to conventional origami, the hybrid origami layer of the actuator is characterized by resilient hinges and rigid facets. The layers are bonded and assembled with the motors that apply tension via wires to generate a motion. The actuator exhibits high blocking force performance while preserving the large deformability of the conventional origami. To analyze the structure, a mathematical model is built using origami kinematics and elastic analysis. A hybrid origami tower with multiple layers is also suggested to show feasibility as a robot manipulator.
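The kinematic and elastic model itself is not given in the abstract. As a generic illustration (all symbols are assumptions), a quasi-static moment balance at a single resilient hinge reads

$$T\, r(\theta) = k\,(\theta - \theta_0) + M_{\mathrm{ext}}(\theta),$$

with cable tension T acting through moment arm r(θ) against the elastic hinge's restoring moment of stiffness k about rest angle θ_0 plus the external load moment M_ext; the blocking force corresponds to the largest external moment the tensioned structure can hold in balance.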
{"title":"Design and Analysis of a Hybrid Actuator With Resilient Origami-Inspired Hinges","authors":"Seunghoon Yoo;Hyunjun Park;Youngsu Cha","doi":"10.1109/LRA.2025.3527282","DOIUrl":"https://doi.org/10.1109/LRA.2025.3527282","url":null,"abstract":"This letter presents a novel cable-driven hybrid origami-inspired actuator with load-bearing capability. In contrast to conventional origami, the hybrid origami layer of the actuator is characterized by resilient hinges and rigid facets. The layers are bonded and assembled with the motors that apply tension via wires to generate a motion. The actuator exhibits high blocking force performance while preserving the large deformability of the conventional origami. To analyze the structure, a mathematical model is built using origami kinematics and elastic analysis. A hybrid origami tower with multiple layers is also suggested to show feasibility as a robot manipulator.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 3","pages":"2128-2135"},"PeriodicalIF":4.6,"publicationDate":"2025-01-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142993207","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Incorporating Point Uncertainty in Radar SLAM
IF 4.6 | CAS Tier 2, Computer Science | Q2, Robotics | Pub Date: 2025-01-08 | DOI: 10.1109/LRA.2025.3527344
Yang Xu;Qiucan Huang;Shaojie Shen;Huan Yin
Radar SLAM is robust in challenging conditions, such as fog, dust, and smoke, but suffers from the sparsity and noisiness of radar sensing, including speckle noise and multipath effects. This study provides a performance-enhanced radar SLAM system by incorporating point uncertainty. The basic system is a radar-inertial odometry system that leverages velocity-aided radar points and high-frequency inertial measurements. We first propose to model the uncertainty of radar points in polar coordinates by considering the nature of radar sensing. Then, the proposed uncertainty model is integrated into the data association module and incorporated for back-end state estimation. Real-world experiments on both public and self-collected datasets validate the effectiveness of the proposed models and approaches. The findings highlight the potential of incorporating point uncertainty to improve the radar SLAM system.
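A first-order version of the polar-coordinate noise model is standard and easy to sketch: per-point standard deviations in range, azimuth, and elevation are propagated to a Cartesian covariance through the measurement Jacobian (the paper's exact model may differ):

```python
import numpy as np

def polar_cov_to_cartesian(r, az, el, sig_r, sig_az, sig_el):
    """First-order propagation C = J S J^T of radar point noise from
    (range, azimuth, elevation) to Cartesian x, y, z."""
    ca, sa, ce, se = np.cos(az), np.sin(az), np.cos(el), np.sin(el)
    # Jacobian of (x, y, z) = (r*ce*ca, r*ce*sa, r*se) w.r.t. (r, az, el)
    J = np.array([[ce * ca, -r * ce * sa, -r * se * ca],
                  [ce * sa,  r * ce * ca, -r * se * sa],
                  [se,       0.0,          r * ce]])
    S = np.diag([sig_r**2, sig_az**2, sig_el**2])
    return J @ S @ J.T
```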
{"title":"Incorporating Point Uncertainty in Radar SLAM","authors":"Yang Xu;Qiucan Huang;Shaojie Shen;Huan Yin","doi":"10.1109/LRA.2025.3527344","DOIUrl":"https://doi.org/10.1109/LRA.2025.3527344","url":null,"abstract":"Radar SLAM is robust in challenging conditions, such as fog, dust, and smoke, but suffers from the sparsity and noisiness of radar sensing, including speckle noise and multipath effects. This study provides a performance-enhanced radar SLAM system by incorporating point uncertainty. The basic system is a radar-inertial odometry system that leverages velocity-aided radar points and high-frequency inertial measurements. We first propose to model the uncertainty of radar points in polar coordinates by considering the nature of radar sensing. Then, the proposed uncertainty model is integrated into the data association module and incorporated for back-end state estimation. Real-world experiments on both public and self-collected datasets validate the effectiveness of the proposed models and approaches. The findings highlight the potential of incorporating point uncertainty to improve the radar SLAM system.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 3","pages":"2168-2175"},"PeriodicalIF":4.6,"publicationDate":"2025-01-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142993205","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0