Robotics and Autonomous Systems: Latest Publications

LBH gripper: Linkage-belt based hybrid adaptive gripper design for dish collecting robots
IF 4.3 | CAS Zone 2 (Computer Science) | Q1 AUTOMATION & CONTROL SYSTEMS | Pub Date: 2024-12-12 | DOI: 10.1016/j.robot.2024.104886
YoungHwan Kim, JeongPil Shin, Jeeho Won, Wonhyoung Lee, TaeWon Seo
Recent developments in robot technology have led to the expansion of robot automation into various service fields. In this study, we developed a gripper for a plate-retrieval robot for restaurant environments. Plate-retrieval grippers are challenged by the variety of types and sizes of the plates to be collected, so a gripper is needed that can adapt to a variety of plate shapes and sizes. To address these issues, we propose a Linkage-Belt Hybrid (LBH) gripper that combines a linkage structure and a soft gripper. The design of the LBH gripper was verified through simulation-based structural analysis, and its performance was evaluated experimentally. The results show that the LBH gripper outperforms traditional bar-type, linkage-adaptive, and soft grippers. In an experiment involving the random grasping of 23 objects, the LBH gripper grasped the highest number of objects, with only two objects exhibiting a 30% failure rate. Additionally, the gripper's payload was verified to be 10 kg for bowls and 6 kg for plates. In the safe-release test, the LBH gripper produced a significantly lower impact than the bar-type gripper. In addition, by operating the manipulator after gripping a dish, we confirmed both the gripping stability and the ability to perform non-grasping movements, showing that the LBH gripper is suitable for the target dish-collection task. These findings pave the way for the development of new linkage-belt hybrid grippers capable of gripping a variety of objects.
{"title":"LBH gripper: Linkage-belt based hybrid adaptive gripper design for dish collecting robots","authors":"YoungHwan Kim,&nbsp;JeongPil Shin,&nbsp;Jeeho Won,&nbsp;Wonhyoung Lee,&nbsp;TaeWon Seo","doi":"10.1016/j.robot.2024.104886","DOIUrl":"10.1016/j.robot.2024.104886","url":null,"abstract":"<div><div>Recent developments in robot technology have led to the expansion of robot automation in various service fields. In this study, we developed a gripper for a plate retrieval robot for restaurant environments. Plate retrieval grippers are challenged by the variety of types and sizes of plates to be collected. Therefore, a gripper that can adapt to a variety of plate shapes and sizes is needed. To address these issues, we propose a Linkage-Belt Hybrid (LBH) gripper that combines a linkage structure and a soft gripper. The design of the LBH gripper was verified through structural analysis through simulation, and its performance was experimentally evaluated. Results show that the LBH gripper outperforms traditional bar-type, linkage-adaptive, and soft grippers. In an experiment involving randomly grasping 23 objects, the LBH gripper demonstrated the ability to grasp the highest number of objects, with only two objects experiencing a 30% failure rate. Additionally, the gripper’s payload is verified to be 10kg for bowls and 6kg for plates. In the safe release test, it was confirmed that it showed a significantly lower impact amount compared to the bar-type gripper. In addition, by operating the manipulator after gripping the dish, the gripping stability and the possibility of performing non-grabbing movements were confirmed, confirming that the LBH Gripper is suitable for the target dish collection task. These findings pave the way for the development of new connected belt hybrid grippers capable of gripping a variety of objects.</div></div>","PeriodicalId":49592,"journal":{"name":"Robotics and Autonomous Systems","volume":"185 ","pages":"Article 104886"},"PeriodicalIF":4.3,"publicationDate":"2024-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143102634","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Six-degree-of-freedom upper limb rehabilitation robot based on tight-coupled dynamic interactive control: Design and implementation
IF 4.3 | CAS Zone 2 (Computer Science) | Q1 AUTOMATION & CONTROL SYSTEMS | Pub Date: 2024-12-12 | DOI: 10.1016/j.robot.2024.104895
Guanxin Liu, Lingqi Zeng, Qingyun Meng, Xin Xu, Qiaoling Meng, Hongliu Yu

Objective

Upper limb rehabilitation robots are used to restore upper limb function in stroke patients, and their effectiveness has been proven. However, many current upper limb rehabilitation devices suffer from issues such as bulkiness, limited control methods, and poor resistance to interference.

Methods

In response to these challenges, we have designed and implemented a Six-degree-of-freedom Upper Limb Rehabilitation Robot (S-ULRR) based on tightly coupled dynamic interaction control.

Results

The S-ULRR has a simple and compact structure, requiring only two motors to achieve three degrees of freedom of wrist motion. The height and length of the mechanical arm are adjustable, catering to various upper limb users. We introduced a gravity compensation mechanism to prevent unnecessary movements and proposed an interference-resistant control strategy based on a fuzzy sliding mode controller with a nonlinear disturbance observer, which demonstrated superior tracking performance under interference compared to a single observer. The tightly coupled dynamic interaction control system improved the precision of upper limb rehabilitation training. Additionally, we developed an upper-computer interface and a lower-computer interface with an HMI touchscreen, enhancing the flexibility of the control system.
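As a much-simplified illustration of this control idea (our sketch, not the authors' implementation), the snippet below pairs a first-order sliding mode controller with a nonlinear disturbance observer on a one-DOF joint; the plant model, gains, and disturbance are all assumed for the example.

```python
import numpy as np

# One-DOF joint model: J*qdd = u + d, with unknown disturbance d.
# A nonlinear disturbance observer (NDO) estimates d; the sliding mode
# controller (SMC) drives the tracking error onto the surface s = 0.
J = 0.05                        # assumed joint inertia [kg m^2]
lam, k, eta = 8.0, 2.0, 5.0     # surface slope, switching gain, NDO gain
dt = 1e-3

def smc_step(q, qd, q_ref, qd_ref, qdd_ref, d_hat):
    e, ed = q - q_ref, qd - qd_ref
    s = ed + lam * e                                   # sliding surface
    # equivalent control + smoothed switching term, minus estimated disturbance
    return J * (qdd_ref - lam * ed) - d_hat - k * np.tanh(s / 0.05)

# track a sine reference under a constant unknown disturbance
q, qd, z, d_true = 0.0, 0.0, 0.0, 0.3
for i in range(5000):
    t = i * dt
    d_hat = z + eta * J * qd                           # NDO output
    u = smc_step(q, qd, np.sin(t), np.cos(t), -np.sin(t), d_hat)
    qdd = (u + d_true) / J                             # plant response
    qd += qdd * dt
    q += qd * dt
    z += (-eta * (z + eta * J * qd) - eta * u) * dt    # NDO internal state
print(f"tracking error: {q - np.sin(5.0):+.4f}, estimated d: {d_hat:.3f}")
```

The `tanh` term replaces the discontinuous sign function to reduce chattering, a common practical choice in sliding mode designs.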

Conclusion

Relevant experiments have confirmed the achievement of our design objectives and validated the feasibility of the device.

Significance

This study presents a novel rehabilitation device incorporating a new structural design and new control methods, and it provides new ideas for future research.
{"title":"Six-degree-of-freedom upper limb rehabilitation robot based on tight-coupled dynamic interactive control: Design and implementation","authors":"Guanxin Liu ,&nbsp;Lingqi Zeng ,&nbsp;Qingyun Meng ,&nbsp;Xin Xu ,&nbsp;Qiaoling Meng ,&nbsp;Hongliu Yu","doi":"10.1016/j.robot.2024.104895","DOIUrl":"10.1016/j.robot.2024.104895","url":null,"abstract":"<div><h3>Objective</h3><div>The upper limb rehabilitation robot is utilized to enhance the functionality of stroke patients, and its effectiveness has been proven. However, many upper limb rehabilitation devices currently suffer from issues such as bulkiness, limited control methods, and poor resistance to interference.</div></div><div><h3>Methods</h3><div>In response to these challenges, we have designed and implemented a Six-degree-of-freedom Upper Limb Rehabilitation Robot (S-ULRR) based on tightly coupled dynamic interaction control.</div></div><div><h3>Results</h3><div>The S-ULRR boasts a simple and compact structure, requiring only two motors to achieve three degrees of freedom in wrist motion. The mechanical arm's height and length are adjustable, catering to various upper limb users. Introduced a gravity compensation mechanism to prevent unnecessary movements. Proposed an interference-resistant control strategy based on a nonlinear disturbance observer fuzzy sliding mode controller, which has demonstrated superior interference tracking performance compared to a single observer. The design of a tightly coupled dynamic interaction control system has improved the precision of upper limb rehabilitation training. Additionally, developed an upper computer interface and a lower computer interface with an HMI touchscreen, enhancing the flexibility of the control system.</div></div><div><h3>Conclusion</h3><div>Relevant experiments have confirmed the achievement of our design objectives and validated the feasibility of the device.</div></div><div><h3>Significance</h3><div>This study presents a novel rehabilitation device that contains new structural design and control methods and provides new ideas for future research.</div></div>","PeriodicalId":49592,"journal":{"name":"Robotics and Autonomous Systems","volume":"185 ","pages":"Article 104895"},"PeriodicalIF":4.3,"publicationDate":"2024-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143102982","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Tracking control with active and passive training cycle switching for rehabilitation training walker
IF 4.3 | CAS Zone 2 (Computer Science) | Q1 AUTOMATION & CONTROL SYSTEMS | Pub Date: 2024-12-11 | DOI: 10.1016/j.robot.2024.104887
Ping Sun, Peng Zhou, Shuoyu Wang, Hongbin Chang
This study presents a novel tracking control scheme addressing the challenge of seamless cycle switching between active and passive training for a rehabilitation training walker. A mathematical model with switching characteristics is established, leveraging switched stochastic configuration networks to observe uncertain training environments. Furthermore, an active and passive training cycle-switching mode is proposed by designing the training time, with the lower bound of the average training time determined to ensure stability of the tracking system; active and passive training switch automatically when the specified training time is reached. Additionally, to ensure the rehabilitee's safety during training, a velocity decision scheme is introduced in the active training to coordinate the human–robot speed. Finally, simulations and experimental findings corroborate the effectiveness of the proposed strategy.
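A toy sketch of the cycle-switching logic (our illustration, not the paper's controller): modes alternate automatically once each mode's scheduled training time elapses, and every dwell time is kept above a lower bound, mimicking the average-dwell-time condition used for stability. The bound and durations are assumptions.

```python
import itertools

TAU_MIN = 5.0   # assumed lower bound on the dwell time per mode [s]

class TrainingCycle:
    """Alternate active/passive training, switching automatically once
    each mode's scheduled time (>= TAU_MIN) has elapsed."""
    def __init__(self, schedule):
        # schedule: list of (mode, duration) pairs, cycled forever
        assert all(d >= TAU_MIN for _, d in schedule), "dwell time too short"
        self._cycle = itertools.cycle(schedule)
        self.mode, self._remaining = next(self._cycle)

    def step(self, dt):
        self._remaining -= dt
        if self._remaining <= 0.0:                 # automatic switch
            self.mode, self._remaining = next(self._cycle)
        return self.mode

cycle = TrainingCycle([("passive", 30.0), ("active", 20.0)])
t, dt = 0.0, 0.1
while t < 60.0:
    mode = cycle.step(dt)
    # in "active" mode, a velocity decision scheme would additionally
    # cap the reference speed to coordinate human-robot motion (omitted)
    t += dt
print("mode at t = 60 s:", cycle.mode)   # passive (third dwell period)
```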
{"title":"Tracking control with active and passive training cycle switching for rehabilitation training walker","authors":"Ping Sun ,&nbsp;Peng Zhou ,&nbsp;Shuoyu Wang ,&nbsp;Hongbin Chang","doi":"10.1016/j.robot.2024.104887","DOIUrl":"10.1016/j.robot.2024.104887","url":null,"abstract":"<div><div>This study presents a novel tracking control scheme addressing the challenge of seamless cycle switching between active and passive training for a rehabilitation training walker. A mathematical model with switching characteristics is established, leveraging switched stochastic configuration networks to observe uncertain training environments. Furthermore, an active and passive training cycle-switching mode is proposed by designing the training time, with the lower bound of the average training time determined to ensure stability of the tracking system. Specifically, active and passive training can be switched automatically when a specified training time is reached. Additionally, to ensure the safety training of the rehabilitee, a velocity decision scheme is introduced in the active training to coordinate the human–robot speed. Furthermore simulations and experimental findings corroborate the effectiveness of the proposed strategy.</div></div>","PeriodicalId":49592,"journal":{"name":"Robotics and Autonomous Systems","volume":"185 ","pages":"Article 104887"},"PeriodicalIF":4.3,"publicationDate":"2024-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143102630","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Comparison of motion encoding frameworks on human manipulation actions
IF 4.3 | CAS Zone 2 (Computer Science) | Q1 AUTOMATION & CONTROL SYSTEMS | Pub Date: 2024-12-11 | DOI: 10.1016/j.robot.2024.104869
Lennart Jahn, Florentin Wörgötter, Tomas Kulvicius
Movement generation, and especially generalization to unseen situations, plays an important role in robotics. Different types of movement generation methods exist, such as spline-based methods, dynamical-system-based methods, and methods based on Gaussian mixture models (GMMs). Using a large, new dataset on human manipulations, in this paper we provide a highly detailed comparison of five fundamentally different and widely used movement encoding and generation frameworks: dynamic movement primitives (DMPs), time-based Gaussian mixture regression (tbGMR), the stable estimator of dynamical systems (SEDS), Probabilistic Movement Primitives (ProMPs) and Optimal Control Primitives (OCPs). We compare these frameworks with respect to their movement encoding efficiency, reconstruction accuracy, and movement generalization capabilities. The new dataset consists of nine object manipulation actions performed by 12 humans: pick and place, put on top/take down, put inside/take out, hide/uncover, and push/pull, with a total of 7,652 movement examples.
Our analysis shows that, for movement encoding and reconstruction, DMPs and OCPs are the most efficient with respect to the number of parameters and reconstruction accuracy, provided a sufficient number of kernels is used. In the case of movement generalization to new start- and end-point situations, DMPs, OCPs and task-parameterized GMM (TP-GMM, a movement generalization framework based on tbGMR) lead to similar performance, which ProMPs achieve only when many demonstrations are used for learning. All models outperform SEDS, which additionally proves difficult to fit. Furthermore, we observe that TP-GMM and SEDS have problems reaching the end-points of generalizations. These quantitative results will help in selecting the most appropriate models and designing trajectory representations in an improved, task-dependent way in future robotic applications.
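For readers unfamiliar with the frameworks compared above, a minimal 1-D discrete DMP is sketched below (the standard textbook formulation, not the authors' code): a critically damped spring-damper system pulls the state toward the goal while a learned forcing term, active early in the movement, shapes the trajectory and enables generalization to new goals. Gains and kernel count are assumed.

```python
import numpy as np

# Transformation system: tau*z' = az*(bz*(g - y) - z) + f(x);  tau*y' = z
# Canonical system:      tau*x' = -ax*x, so the forcing term f fades out.
az, bz, ax, tau = 25.0, 25.0 / 4.0, 1.0, 1.0
n_k = 20
c = np.exp(-ax * np.linspace(0, 1, n_k))          # kernel centers in x
h = 1.0 / np.gradient(c) ** 2                     # kernel widths

def forcing(x, w, y0, g):
    psi = np.exp(-h * (x - c) ** 2)
    return (psi @ w) / psi.sum() * x * (g - y0)

def fit_weights(demo, dt, g):
    """Locally weighted regression of the forcing term from one demo."""
    y0, yd = demo[0], np.gradient(demo, dt)
    ydd = np.gradient(yd, dt)
    x = np.exp(-ax * np.arange(len(demo)) * dt / tau)
    f_tgt = tau**2 * ydd - az * (bz * (g - demo) - tau * yd)
    psi = np.exp(-h * (x[:, None] - c) ** 2)
    s = x * (g - y0)
    return np.array([(s * psi[:, i]) @ f_tgt / ((s * psi[:, i]) @ s + 1e-10)
                     for i in range(n_k)])

dt = 0.01
demo = np.sin(np.linspace(0, np.pi / 2, 100))     # demonstrated 0 -> 1 reach
w = fit_weights(demo, dt, g=1.0)
y, z, x, y0, g = 0.0, 0.0, 1.0, 0.0, 1.5          # generalize to goal 1.5
for _ in range(300):
    z += dt / tau * (az * (bz * (g - y) - z) + forcing(x, w, y0, g))
    y += dt / tau * z
    x += dt / tau * (-ax * x)
print(f"final position: {y:.3f} (goal {g})")       # converges near the goal
```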
{"title":"Comparison of motion encoding frameworks on human manipulation actions","authors":"Lennart Jahn ,&nbsp;Florentin Wörgötter ,&nbsp;Tomas Kulvicius","doi":"10.1016/j.robot.2024.104869","DOIUrl":"10.1016/j.robot.2024.104869","url":null,"abstract":"<div><div>Movement generation, and especially generalization to unseen situations, plays an important role in robotics. Different types of movement generation methods exist such as spline based methods, dynamical system based methods, and methods based on Gaussian mixture models (GMMs). Using a large, new dataset on human manipulations, in this paper we provide a highly detailed comparison of five fundamentally different and widely used movement encoding and generation frameworks: dynamic movement primitives (DMPs), time based Gaussian mixture regression (tbGMR), stable estimator of dynamical systems (SEDS), Probabilistic Movement Primitives (ProMP) and Optimal Control Primitives (OCP). We compare these frameworks with respect to their movement encoding efficiency, reconstruction accuracy, and movement generalization capabilities. The new dataset consists of nine object manipulation actions performed by 12 humans: pick and place, put on top/take down, put inside/take out, hide/uncover, and push/pull with a total of 7,652 movement examples.</div><div>Our analysis shows that for movement encoding and reconstruction DMPs and OCPs are the most efficient with respect to the number of parameters and reconstruction accuracy, if a sufficient number of kernels is used. In case of movement generalization to new start- and end-point situations, DMPs, OCPs and task parameterized GMM (TP-GMM, movement generalization framework based on tbGMR) lead to similar performance, which ProMPs only achieve when using many demonstrations for learning. All models outperform SEDS, which additionally proves to be difficult to fit. Furthermore we observe that TP-GMM and SEDS suffer from problems reaching the end-points of generalizations. These different quantitative results will help selecting the most appropriate models and designing trajectory representations in an improved task-dependent way in future robotic applications.</div></div>","PeriodicalId":49592,"journal":{"name":"Robotics and Autonomous Systems","volume":"185 ","pages":"Article 104869"},"PeriodicalIF":4.3,"publicationDate":"2024-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143102629","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A SysML-based language for evaluating the integrity of simulation and physical embodiments of Cyber–Physical systems
IF 4.3 | CAS Zone 2 (Computer Science) | Q1 AUTOMATION & CONTROL SYSTEMS | Pub Date: 2024-12-10 | DOI: 10.1016/j.robot.2024.104884
Wojciech Dudek, Narcis Miguel, Tomasz Winiarski
Evaluating early design concepts is crucial, as it impacts quality and cost. This process is often hindered by vague and uncertain design information. This article introduces the SysML-based Simulated–Physical Systems Modelling Language (SPSysML), a domain-specific language for evaluating component reusability in Cyber–Physical Systems that incorporate Digital Twins (DTs) and other simulated parts. The proposed factors assess the design quantitatively. SPSysML uses a requirement-based system structuring method to couple simulated and physical parts with requirements. SPSysML-based systems incorporate DTs that perceive exogenous actions in the simulated world.
SPSysML validation is survey- and application-based. First, we developed a robotic system for an assisted-living project. We propose an SPSysML application procedure, SPSysAP, that manages the development of the considered system by evaluating system designs with the proposed quantitative factors. As a result of applying SPSysML, we observed an integrity improvement between the simulated and physical parts of the system: more system components are shared between the simulated and physical setups. The system was deployed on the physical robot and on two simulators based on ROS and ROS2. Additionally, we share a questionnaire for SPSysML assessment; the feedback received so far is published in this article.
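As a toy illustration of what such a quantitative factor can look like (the definition below is our assumption, not one of the paper's factors), a design can be scored by the fraction of components shared between its simulated and physical embodiments:

```python
# Hypothetical integrity factor: fraction of components reused across the
# simulated and physical embodiments of one system design.
def shared_component_factor(simulated: set[str], physical: set[str]) -> float:
    all_parts = simulated | physical
    return len(simulated & physical) / len(all_parts) if all_parts else 0.0

score = shared_component_factor(
    simulated={"planner", "costmap", "battery_model", "gazebo_driver"},
    physical={"planner", "costmap", "battery_monitor", "motor_driver"},
)
print(f"shared-component factor: {score:.2f}")   # 0.33 for this toy design
```

A higher factor means more of the software stack runs unchanged in both the simulated and physical setups, which is the kind of integrity improvement the authors report.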
{"title":"A SysML-based language for evaluating the integrity of simulation and physical embodiments of Cyber–Physical systems","authors":"Wojciech Dudek ,&nbsp;Narcis Miguel ,&nbsp;Tomasz Winiarski","doi":"10.1016/j.robot.2024.104884","DOIUrl":"10.1016/j.robot.2024.104884","url":null,"abstract":"<div><div>Evaluating early design concepts is crucial as it impacts quality and cost. This process is often hindered by vague and uncertain design information. This article introduces the SysML-based Simulated–Physical Systems Modelling Language (SPSysML). It is a Domain-Specification Language for evaluating component reusability in Cyber–Physical Systems incorporating Digital Twins and other simulated parts. The proposed factors assess the design quantitatively. SPSysML uses a requirement-based system structuring method to couple simulated and physical parts with requirements. SPSysML-based systems incorporate DTs that perceive exogenous actions in the simulated world.</div><div>SPSysML validation is survey- and application-based. First, we develop a robotic system for an assisted living project. We propose an SPSysML application procedure called SPSysAP that manages the considered system development by evaluating the system designs with the proposed quantitative factors. As a result of the SPSysML application, we observed an integrity improvement between the simulated and physical parts of the system. Thus, more system components are shared between the simulated and physical setups. The system was deployed on the physical robot and two simulators based on ROS and ROS2. Additionally, we share a questionnaire for SPSysML assessment. The feedback that we already received is published in this article.</div></div>","PeriodicalId":49592,"journal":{"name":"Robotics and Autonomous Systems","volume":"185 ","pages":"Article 104884"},"PeriodicalIF":4.3,"publicationDate":"2024-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143102979","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A smoothed particle hydrodynamics framework for fluid simulation in robotics
IF 4.3 | CAS Zone 2 (Computer Science) | Q1 AUTOMATION & CONTROL SYSTEMS | Pub Date: 2024-12-08 | DOI: 10.1016/j.robot.2024.104885
Emmanouil Angelidis, Jonathan Arreguit, Jan Bender, Patrick Berggold, Ziyuan Liu, Alois Knoll, Alessandro Crespi, Auke J. Ijspeert
Simulation is a core component of robotics workflows that can shed light on the complex interplay between a physical body, the environment and sensory feedback mechanisms in silico. To this end, several simulation methods, originating in rigid body dynamics and in continuum mechanics, have been employed, enabling the simulation of a plethora of phenomena such as rigid/soft body dynamics, fluid dynamics, muscle simulation, and sensor and actuator dynamics. The physics engines commonly employed in robotics simulation focus on rigid body dynamics, whereas continuum mechanics methods excel at simulating phenomena where deformation plays a crucial role, keeping the two fields relatively separate. Here, we propose a paradigm shift that allows for the accurate simulation of fluids in interaction with rigid bodies within the same robotics simulation framework, based on the continuum-mechanics-based Smoothed Particle Hydrodynamics (SPH) method. The proposed framework is useful for simulations such as swimming robots with complex geometries, robots manipulating fluids, and even robots emitting highly viscous materials such as those used for 3D printing. Scenarios like swimming on the surface, air-water transitions, and locomotion on granular media can be natively simulated within the proposed framework. We first present the overall architecture of our framework and give examples of a concrete software implementation. We then verify our approach by presenting one of the first simulations of its kind: self-propelled swimming robots simulated with an SPH method, compared against real experiments. Finally, we propose a new category of simulations that would benefit from this approach and discuss ways that the sim-to-real gap could be further reduced.
One Sentence Summary:
We present a framework for the interaction of rigid body dynamics with SPH-based fluid simulation in robotics, showcase its application to self-propelled swimming robots, and validate the method by comparing simulations with physical experiments.
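To give a flavor of the method itself, the fragment below shows the classic SPH density summation and a weakly compressible equation of state in 2-D (a textbook building block added for illustration, not part of the authors' framework; all constants are assumed):

```python
import numpy as np

H = 0.1                        # smoothing length [m], assumed
MASS = 0.02                    # particle mass [kg], assumed
REST_DENSITY = 1000.0          # water [kg/m^3]
STIFFNESS = 200.0              # equation-of-state stiffness, assumed
POLY6 = 4.0 / (np.pi * H**8)   # 2-D poly6 kernel normalization

def densities(positions: np.ndarray) -> np.ndarray:
    """rho_i = sum_j m * W(|x_i - x_j|, H) over neighbors within H."""
    diff = positions[:, None, :] - positions[None, :, :]
    r2 = np.einsum("ijk,ijk->ij", diff, diff)
    w = np.where(r2 < H**2, POLY6 * (H**2 - r2) ** 3, 0.0)
    return MASS * w.sum(axis=1)

def pressures(rho: np.ndarray) -> np.ndarray:
    """Weakly compressible EOS: p = k*(rho - rho0), clamped at zero."""
    return np.maximum(STIFFNESS * (rho - REST_DENSITY), 0.0)

pts = np.random.default_rng(0).uniform(0.0, 0.3, size=(200, 2))
rho = densities(pts)
print(f"density range: {rho.min():.1f} .. {rho.max():.1f} kg/m^3")
print(f"max pressure:  {pressures(rho).max():.1f} Pa")
```

Pressure gradients computed from these quantities, together with two-way coupling forces on rigid bodies, are what a full framework like the one above integrates at each time step.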
{"title":"A smoothed particle hydrodynamics framework for fluid simulation in robotics","authors":"Emmanouil Angelidis ,&nbsp;Jonathan Arreguit ,&nbsp;Jan Bender ,&nbsp;Patrick Berggold ,&nbsp;Ziyuan Liu ,&nbsp;Alois Knoll ,&nbsp;Alessandro Crespi ,&nbsp;Auke J. Ijspeert","doi":"10.1016/j.robot.2024.104885","DOIUrl":"10.1016/j.robot.2024.104885","url":null,"abstract":"<div><div>Simulation is a core component of robotics workflows that can shed light on the complex interplay between a physical body, the environment and sensory feedback mechanisms in silico. To this goal several simulation methods, originating in rigid body dynamics and in continuum mechanics have been employed, enabling the simulation of a plethora of phenomena such as rigid/soft body dynamics, fluid dynamics, muscle simulation as well as sensor and actuator dynamics. The physics engines commonly employed in robotics simulation focus on rigid body dynamics, whereas continuum mechanics methods excel on the simulation of phenomena where deformation plays a crucial role, keeping the two fields relatively separate. Here, we propose a shift of paradigm that allows for the accurate simulation of fluids in interaction with rigid bodies within the same robotics simulation framework, based on the continuum mechanics-based Smoothed Particle Hydrodynamics method. The proposed framework is useful for simulations such as swimming robots with complex geometries, robots manipulating fluids and even robots emitting highly viscous materials such as the ones used for 3D printing. Scenarios like swimming on the surface, air-water transitions, locomotion on granular media can be natively simulated within the proposed framework. Firstly, we present the overall architecture of our framework and give examples of a concrete software implementation. We then verify our approach by presenting one of the first of its kind simulation of self-propelled swimming robots with a smooth particle hydrodynamics method and compare our simulations with real experiments. Finally, we propose a new category of simulations that would benefit from this approach and discuss ways that the sim-to-real gap could be further reduced.</div><div>One Sentence Summary:</div><div>We present a framework for the interaction of rigid body dynamics with SPH-based fluid simulation in robotics, showcase its application on self-propelled swimming robots and validate the method by comparing simulations with physical experiments.</div></div>","PeriodicalId":49592,"journal":{"name":"Robotics and Autonomous Systems","volume":"185 ","pages":"Article 104885"},"PeriodicalIF":4.3,"publicationDate":"2024-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143102978","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Advancing autonomous SLAM systems: Integrating YOLO object detection and enhanced loop closure techniques for robust environment mapping
IF 4.3 | CAS Zone 2 (Computer Science) | Q1 AUTOMATION & CONTROL SYSTEMS | Pub Date: 2024-12-08 | DOI: 10.1016/j.robot.2024.104871
Qamar Ul Islam, Fatemeh Khozaei, El Manaa Salah Al Barhoumi, Imran Baig, Dmitry Ignatyev
This research paper introduces an enhanced method for visual Simultaneous Localization and Mapping (SLAM), specifically designed for dynamic environments. Our approach distinguishes itself from traditional visual SLAM methods by integrating feature-based techniques with a lightweight object detection network known as You Only Look Once (YOLO). This integration allows for the extraction of semantic information, enhancing the system's performance. Furthermore, we incorporate sparse optical flow and advanced multi-view geometry to improve the accuracy of localization and mapping. A significant innovation in our method is an improved loop detection algorithm, which optimizes mapping in complex settings. Our system is built on ORB-SLAM3 (Oriented FAST and Rotated BRIEF SLAM 3), enabling real-time performance and demonstrating superior localization accuracy in dynamic environments. We conducted extensive experiments using public datasets, which show that our proposed system surpasses existing deep-learning-based visual SLAM systems: it reduces the absolute trajectory error in dynamic scenarios and enhances mapping accuracy and robustness in complex environments. This system overcomes the limitations of traditional visual SLAM methods and emerges as a promising solution for real-world applications such as autonomous driving and advanced driver assistance systems. The technical novelty of our approach lies in the strategic integration of various innovative techniques, making it a significant advancement over existing methods.
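A common way to couple a detector with feature-based SLAM, sketched below for illustration (our simplification; the full pipeline also uses sparse optical flow and multi-view geometry checks), is to discard keypoints that fall inside detection boxes of movable object classes before tracking and mapping:

```python
# Drop keypoints that land inside YOLO boxes of movable classes.
# The box format and class names here are assumptions for the sketch.
DYNAMIC_CLASSES = {"person", "car", "bicycle", "dog"}

def in_box(pt, box):
    (x, y), (x1, y1, x2, y2) = pt, box
    return x1 <= x <= x2 and y1 <= y <= y2

def filter_static_keypoints(keypoints, detections):
    """keypoints: [(x, y), ...]; detections: [(label, (x1, y1, x2, y2)), ...]"""
    dynamic_boxes = [box for label, box in detections if label in DYNAMIC_CLASSES]
    return [kp for kp in keypoints
            if not any(in_box(kp, box) for box in dynamic_boxes)]

kps = [(50, 60), (320, 240), (600, 400)]
dets = [("person", (300, 200, 400, 300)), ("chair", (580, 380, 640, 480))]
print(filter_static_keypoints(kps, dets))   # keeps (50, 60) and (600, 400)
```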
{"title":"Advancing autonomous SLAM systems: Integrating YOLO object detection and enhanced loop closure techniques for robust environment mapping","authors":"Qamar Ul Islam ,&nbsp;Fatemeh Khozaei ,&nbsp;El Manaa Salah Al Barhoumi ,&nbsp;Imran Baig ,&nbsp;Dmitry Ignatyev","doi":"10.1016/j.robot.2024.104871","DOIUrl":"10.1016/j.robot.2024.104871","url":null,"abstract":"<div><div>This research paper introduces an enhanced method for visual Simultaneous Localization and Mapping (SLAM), specifically designed for dynamic environments. Our approach distinguishes itself from traditional visual SLAM methods by integrating feature-based techniques with a lightweight object identification network known as You Only Look Once (YOLO). This integration allows for the extraction of semantic information, enhancing the system's performance. Furthermore, we incorporate sparse optical flow and advanced multi-view geometry to improve the accuracy of localization and mapping. A significant innovation in our method is the introduction of an improved loop detection algorithm, which optimizes mapping in complex settings. Our system is built upon the foundation of Oriented Features from Accelerated Segment Test (FAST) and Rotated BRIEF-SLAM3 (Binary Robust Independent Elementary Features-Simultaneous Localization and Mapping 3 or ORB-SLAM3), enabling real-time performance and demonstrating superior localization accuracy in dynamic environments. We conducted extensive experiments using public datasets, which show that our proposed system surpasses existing deep learning-based visual SLAM systems. It reduces the absolute trajectory error in dynamic scenarios and enhances mapping accuracy and robustness in complex environments. This system overcomes the limitations of traditional visual SLAM methods and emerges as a promising solution for real-world applications such as autonomous driving and advanced driver assistance systems. The technical novelty of our approach lies in the strategic integration of various innovative techniques, making it a significant advancement over existing methods.</div></div>","PeriodicalId":49592,"journal":{"name":"Robotics and Autonomous Systems","volume":"185 ","pages":"Article 104871"},"PeriodicalIF":4.3,"publicationDate":"2024-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143102633","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
GAFAR revisited—Exploring the limits of point cloud registration on sparse subsets
IF 4.3 | CAS Zone 2 (Computer Science) | Q1 AUTOMATION & CONTROL SYSTEMS | Pub Date: 2024-12-01 | DOI: 10.1016/j.robot.2024.104870
Ludwig Mohr, Ismail Geles, Friedrich Fraundorfer
Robust estimation of a rigid transformation aligning two point clouds is a fundamental building block in many tasks in computer vision and robotics, such as mapping, (re-)localization, SLAM and 3D scanning, and many more. Deep-learning-based registration methods have proven superior in terms of robustness to outliers and bad initializations, yet their resource needs often render them unsuitable for applications where space, energy and real-time constraints come into play. Since conventional registration algorithms like ICP prove to be fast and accurate provided a good initialization, we may relax the requirement of absolute precision for a learning-based matcher and instead focus our attention on efficiency, robustness and generalization.
In prior work, we introduced our small, fast and lightweight registration algorithm GAFAR, which works on sparse subsets of point clouds and exhibits a footprint small enough to be deployed on edge devices in mobile applications, while still providing accurate results and promising generalization ability. Building on this, we develop the idea further towards applicability to real-world data, with improvements to the algorithm as well as further evaluations and analyses. We showcase this by applying it to the KITTI Odometry Benchmark and the 3DMatch dataset, as well as demonstrating its usability on edge devices. The code and trained weights are published at https://github.com/mordecaimalignatius/GAFAR/.
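The geometric core that such a matcher feeds, estimating a rigid transform from a sparse set of correspondences, has the closed-form Kabsch/Umeyama solution sketched below (a standard routine included for illustration; GAFAR itself learns which points correspond). In a full pipeline this estimate would then initialize ICP for refinement.

```python
import numpy as np

def rigid_from_correspondences(src: np.ndarray, dst: np.ndarray):
    """Closed-form least-squares rigid transform (Kabsch): returns R, t
    minimizing sum_i ||R @ src_i + t - dst_i||^2."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)                  # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflection
    R = Vt.T @ D @ U.T
    return R, mu_d - R @ mu_s

# synthetic check: recover a known rotation and translation from 50 matches
rng = np.random.default_rng(1)
src = rng.normal(size=(50, 3))
a = np.pi / 6
R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0, 0.0, 1.0]])
dst = src @ R_true.T + np.array([0.5, -0.2, 1.0])
R, t = rigid_from_correspondences(src, dst)
print(np.allclose(R, R_true), np.round(t, 3))          # True [ 0.5 -0.2  1. ]
```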
{"title":"GAFAR revisited—Exploring the limits of point cloud registration on sparse subsets","authors":"Ludwig Mohr ,&nbsp;Ismail Geles ,&nbsp;Friedrich Fraundorfer","doi":"10.1016/j.robot.2024.104870","DOIUrl":"10.1016/j.robot.2024.104870","url":null,"abstract":"<div><div>Robust estimation of a rigid transformation aligning two point clouds is a fundamental building block in many tasks in computer vision and robotics, such as mapping, (re-)localization, SLAM or 3D scanning, and many more. Deep-Learning based registration methods have proven superior in terms of robustness to outliers and bad initializations, yet their resource needs often render them unsuitable for applications where space, energy and real-time constraints come into play. Since – provided a good initialization – conventional registration algorithms like ICP prove to be fast and accurate, we may relax the requirements of absolute precision of a learning-based matcher and instead focus our attention on efficiency, robustness and generalization.</div><div>In prior work, we introduced our small, fast and light-weight registration algorithm GAFAR, which works on sparse subsets of point clouds and exhibits a small enough footprint to be deployed on edge devices in mobile applications, while still providing accurate results and promising generalization ability. Based thereon, we develop this idea further towards applicability on real world data with improvements to the algorithm as well as further evaluations and analyses. We showcase this by applying it to Kitti Odometry Benchmark and 3DMatch dataset as well as demonstrating its usability on edge devices. The code and trained weights are published in <span><span>https://github.com/mordecaimalignatius/GAFAR/</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":49592,"journal":{"name":"Robotics and Autonomous Systems","volume":"185 ","pages":"Article 104870"},"PeriodicalIF":4.3,"publicationDate":"2024-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143102631","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
MOVRO2: Loosely coupled monocular visual radar odometry using factor graph optimization
IF 4.3 | CAS Zone 2 (Computer Science) | Q1 AUTOMATION & CONTROL SYSTEMS | Pub Date: 2024-11-25 | DOI: 10.1016/j.robot.2024.104860
Vlaho-Josip Štironja, Juraj Peršić, Luka Petrović, Ivan Marković, Ivan Petrović
Ego-motion estimation is an indispensable part of any autonomous system, especially in scenarios where wheel odometry or global pose measurement is unreliable or unavailable. In environments where a global navigation satellite system is not available, conventional solutions for ego-motion estimation rely on the fusion of a LiDAR, a monocular camera and an inertial measurement unit (IMU), which is often plagued by drift. Therefore, complementary sensor solutions are being explored instead of relying on expensive and powerful IMUs. In this paper, we propose a method for estimating ego-motion, which we call MOVRO2, that exploits the complementarity of radar and camera data. It is based on a loosely coupled monocular visual radar odometry approach within a factor graph optimization framework. The adoption of a loosely coupled approach is motivated by its scalability and the possibility of developing sensor models independently. To estimate motion within the proposed framework, we fuse the radar's ego-velocity and scan-to-scan matches with the rotation obtained from consecutive camera frames and the unscaled velocity of the monocular odometry. We evaluate the performance of the proposed method on two open-source datasets and compare it to various one-, two- and three-sensor solutions, where our cost-effective method demonstrates performance comparable to state-of-the-art visual-inertial radar and LiDAR odometry solutions using high-performance 64-line LiDARs.
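As a much-reduced illustration of loosely coupled fusion (our sketch; the paper instead solves a factor graph over full 3-D poses, which this does not reproduce), planar dead reckoning can combine radar ego-velocity in the body frame with per-frame yaw increments from the camera:

```python
import numpy as np

def propagate(pose, v_body, dyaw, dt):
    """pose = (x, y, yaw): rotate the body-frame radar velocity into the
    world frame, advance the position, then apply the camera's yaw step."""
    x, y, yaw = pose
    vx = v_body[0] * np.cos(yaw) - v_body[1] * np.sin(yaw)
    vy = v_body[0] * np.sin(yaw) + v_body[1] * np.cos(yaw)
    return (x + vx * dt, y + vy * dt, yaw + dyaw)

pose, dt = (0.0, 0.0, 0.0), 0.1
for _ in range(100):                 # drive a gentle left-hand arc for 10 s
    radar_v = (1.0, 0.0)             # ego-velocity [m/s], from the radar
    cam_dyaw = 0.005                 # yaw increment [rad], from the camera
    pose = propagate(pose, radar_v, cam_dyaw, dt)
print(f"pose after 10 s: x={pose[0]:.2f} m, y={pose[1]:.2f} m, yaw={pose[2]:.3f} rad")
```

In the actual method, each such measurement instead becomes a factor with its own noise model, and the whole trajectory is optimized jointly rather than integrated forward.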
{"title":"MOVRO2: Loosely coupled monocular visual radar odometry using factor graph optimization","authors":"Vlaho-Josip Štironja ,&nbsp;Juraj Peršić ,&nbsp;Luka Petrović ,&nbsp;Ivan Marković ,&nbsp;Ivan Petrović","doi":"10.1016/j.robot.2024.104860","DOIUrl":"10.1016/j.robot.2024.104860","url":null,"abstract":"<div><div>Ego-motion estimation is an indispensable part of any autonomous system, especially in scenarios where wheel odometry or global pose measurement is unreliable or unavailable. In an environment where a global navigation satellite system is not available, conventional solutions for ego-motion estimation rely on the fusion of a LiDAR, a monocular camera and an inertial measurement unit (IMU), which is often plagued by drift. Therefore, complementary sensor solutions are being explored instead of relying on expensive and powerful IMUs. In this paper, we propose a method for estimating ego-motion, which we call MOVRO2, that utilizes the complementarity of radar and camera data. It is based on a loosely coupled monocular visual radar odometry approach within a factor graph optimization framework. The adoption of a loosely coupled approach is motivated by its scalability and the possibility to develop sensor models independently. To estimate the motion within the proposed framework, we fuse ego-velocity of the radar and scan-to-scan matches with the rotation obtained from consecutive camera frames and the unscaled velocity of the monocular odometry. We evaluate the performance of the proposed method on two open-source datasets and compare it to various mono-, dual- and three-sensor solutions, where our cost-effective method demonstrates performance comparable to state-of-the-art visual-inertial radar and LiDAR odometry solutions using high-performance 64-line LiDARs.</div></div>","PeriodicalId":49592,"journal":{"name":"Robotics and Autonomous Systems","volume":"184 ","pages":"Article 104860"},"PeriodicalIF":4.3,"publicationDate":"2024-11-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142702829","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
CUAHN-VIO: Content-and-uncertainty-aware homography network for visual-inertial odometry
IF 4.3 | CAS Zone 2 (Computer Science) | Q1 AUTOMATION & CONTROL SYSTEMS | Pub Date: 2024-11-22 | DOI: 10.1016/j.robot.2024.104866
Yingfu Xu, Guido C.H.E. de Croon
Learning-based visual ego-motion estimation is promising yet not ready for navigating agile mobile robots in the real world. In this article, we propose CUAHN-VIO, a robust and efficient monocular visual-inertial odometry (VIO) designed for micro aerial vehicles (MAVs) equipped with a downward-facing camera. The vision frontend is a content-and-uncertainty-aware homography network (CUAHN). Content awareness measures the robustness of the network toward non-homography image content, e.g. 3-dimensional objects lying on a planar surface. Uncertainty awareness means that the network not only predicts the homography transformation but also estimates the prediction uncertainty. Training requires no ground truth, which is often difficult to obtain. The network generalizes well, enabling "plug-and-play" deployment in new environments without fine-tuning. A lightweight extended Kalman filter (EKF) serves as the VIO backend and utilizes the mean prediction and variance estimation from the network for visual measurement updates. CUAHN-VIO is evaluated on a high-speed public dataset and shows accuracy rivaling state-of-the-art (SOTA) VIO approaches. Thanks to its robustness to motion blur, low network inference time (~23 ms), and stable processing latency (~26 ms), CUAHN-VIO successfully runs onboard an Nvidia Jetson TX2 embedded processor to navigate a fast autonomous MAV.
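A minimal sketch of the backend idea (ours, not the authors' code): a standard EKF measurement update in which the network supplies both the measurement mean and a per-component variance, and that variance is plugged in directly as the measurement noise covariance R. The state layout and numbers are assumed.

```python
import numpy as np

def ekf_update(x, P, z, z_var, H):
    """EKF update where z is the network's mean prediction and z_var its
    estimated variance, used as the measurement noise covariance R."""
    R = np.diag(z_var)
    S = H @ P @ H.T + R                      # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# toy 4-state example where the network "measures" the first two states
x, P = np.zeros(4), np.eye(4) * 0.5
H = np.hstack([np.eye(2), np.zeros((2, 2))])
z = np.array([0.10, -0.05])                  # network mean prediction
z_var = np.array([0.01, 0.04])               # network variance estimate
x, P = ekf_update(x, P, z, z_var, H)
print(np.round(x, 3))    # the more uncertain component pulls the state less
```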
{"title":"CUAHN-VIO: Content-and-uncertainty-aware homography network for visual-inertial odometry","authors":"Yingfu Xu,&nbsp;Guido C.H.E. de Croon","doi":"10.1016/j.robot.2024.104866","DOIUrl":"10.1016/j.robot.2024.104866","url":null,"abstract":"<div><div>Learning-based visual ego-motion estimation is promising yet not ready for navigating agile mobile robots in the real world. In this article, we propose CUAHN-VIO, a robust and efficient monocular visual-inertial odometry (VIO) designed for micro aerial vehicles (MAVs) equipped with a downward-facing camera. The vision frontend is a content-and-uncertainty-aware homography network (CUAHN). Content awareness measures the robustness of the network toward non-homography image content, <em>e.g.</em> 3-dimensional objects lying on a planar surface. Uncertainty awareness refers that the network not only predicts the homography transformation but also estimates the prediction uncertainty. The training requires no ground truth that is often difficult to obtain. The network has good generalization that enables “plug-and-play” deployment in new environments without fine-tuning. A lightweight extended Kalman filter (EKF) serves as the VIO backend and utilizes the mean prediction and variance estimation from the network for visual measurement updates. CUAHN-VIO is evaluated on a high-speed public dataset and shows rivaling accuracy to state-of-the-art (SOTA) VIO approaches. Thanks to the robustness to motion blur, low network inference time (<span><math><mo>∼</mo></math></span>23 ms), and stable processing latency (<span><math><mo>∼</mo></math></span>26 ms), CUAHN-VIO successfully runs onboard an Nvidia Jetson TX2 embedded processor to navigate a fast autonomous MAV.</div></div>","PeriodicalId":49592,"journal":{"name":"Robotics and Autonomous Systems","volume":"185 ","pages":"Article 104866"},"PeriodicalIF":4.3,"publicationDate":"2024-11-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142744976","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0