Pub Date: 2025-01-01. DOI: 10.1109/LRA.2024.3524912
Dongkyu Lee;I Made Aswin Nahrendra;Minho Oh;Byeongho Yu;Hyun Myung
Unstructured environments such as mountains, caves, construction sites, or disaster areas are challenging for autonomous navigation because of terrain irregularities. In particular, it is crucial to plan a path that avoids risky terrain and reaches the goal quickly and safely. In this paper, we propose a method for safe and distance-efficient path planning, leveraging the Traversal Risk Graph (TRG), a novel graph representation that takes into account geometric traversability of the terrain. TRG nodes represent stability and reachability of the terrain, while edges represent relative traversal risk-weighted path candidates. Additionally, TRG is constructed in a wavefront propagation manner and managed hierarchically, enabling real-time planning even in large-scale environments. Lastly, we formulate a graph optimization problem on TRG that leads the robot to navigate by prioritizing both safe and short paths. Our approach demonstrated superior safety, distance efficiency, and fast processing time compared to conventional methods. It was also validated in several real-world experiments using a quadrupedal robot. Notably, TRG-Planner contributed as the global path planner of the autonomous navigation framework of the DreamSTEP team, which won the Quadruped Robot Challenge at ICRA 2023.
"TRG-Planner: Traversal Risk Graph-Based Path Planning in Unstructured Environments for Safe and Efficient Navigation," IEEE Robotics and Automation Letters, vol. 10, no. 2, pp. 1736–1743.
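The planning idea in this abstract, searching a graph whose edge costs blend metric length with a traversal-risk weight, can be sketched with a plain Dijkstra search. The cost blend `alpha`, the `(length, risk)` edge tuples, and the toy graph below are illustrative assumptions, not the paper's actual TRG cost model or hierarchical construction:

```python
import heapq

def risk_weighted_shortest_path(nodes, edges, start, goal, alpha=0.5):
    """Dijkstra over a graph whose edge cost blends metric length with
    a traversal-risk term, so the optimum trades distance for safety."""
    adj = {n: [] for n in nodes}
    for u, v, length, risk in edges:
        cost = length * (1.0 + alpha * risk)  # risk inflates the edge cost
        adj[u].append((v, cost))
        adj[v].append((u, cost))

    dist = {n: float("inf") for n in nodes}
    prev = {}
    dist[start] = 0.0
    pq = [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            break
        if d > dist[u]:
            continue  # stale queue entry
        for v, c in adj[u]:
            nd = d + c
            if nd < dist[v]:
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))

    # Walk predecessors back from the goal to recover the path.
    path, n = [], goal
    while n != start:
        path.append(n)
        n = prev[n]
    path.append(start)
    return path[::-1], dist[goal]
```

With a risky short edge and a safe detour, the planner prefers the detour once the risk term dominates, which is the qualitative behavior the abstract describes.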
State-of-the-art sensorimotor learning algorithms, whether in the context of reinforcement learning or imitation learning, offer policies that can often produce unstable behaviors, damaging the robot and/or the environment. Moreover, it is very difficult to interpret the optimized controller and analyze its behavior and/or performance. Traditional robot learning, on the contrary, relies on dynamical system-based policies that can be analyzed for stability/safety. Such policies, however, are neither flexible nor generic and usually work only with proprioceptive sensor states. In this work, we bridge the gap between generic neural network policies and dynamical system-based policies, and we introduce Autonomous Neural Dynamic Policies (ANDPs) that: (a) are based on autonomous dynamical systems, (b) always produce asymptotically stable behaviors, and (c) are more flexible than traditional stable dynamical system-based policies. ANDPs are fully differentiable, flexible, generic policies that accept any observation input while ensuring asymptotic stability. Through several experiments, we explore the flexibility and capacity of ANDPs in several imitation learning tasks, including experiments with image observations. The results show that ANDPs combine the benefits of both neural network-based and dynamical system-based methods.
Dionis Totsila;Konstantinos Chatzilygeroudis;Valerio Modugno;Denis Hadjivelichkov;Dimitrios Kanoulas
"Sensorimotor Learning With Stability Guarantees via Autonomous Neural Dynamic Policies," IEEE Robotics and Automation Letters, vol. 10, no. 2, pp. 1760–1767, Jan. 2025. DOI: 10.1109/LRA.2024.3524878.
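One standard way to obtain the kind of stability guarantee described above is to constrain the policy's velocity field with a positive-definite gain built from an unconstrained learned output. The sketch below uses that generic construction; it is not the ANDP architecture itself, and `net` is a stand-in for any learned state-to-matrix module:

```python
import numpy as np

def stable_policy_step(x, target, net, dt=0.01):
    """One Euler step of a dynamical-system policy that is asymptotically
    stable at `target` by construction: W @ W.T + eps*I is positive
    definite for any net output W, so V = ||x - target||^2 decreases."""
    W = net(x)                            # unconstrained (d, d) output
    A = W @ W.T + 1e-3 * np.eye(len(x))   # positive definite gain
    xdot = -A @ (x - target)              # velocity field points inward
    return x + dt * xdot
```

Because stability comes from the parameterization rather than from training, the network can be arbitrary (even image-conditioned) without breaking the guarantee, which is the trade-off the abstract highlights.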
Pub Date: 2025-01-01. DOI: 10.1109/LRA.2024.3524903
Junling Wang;Zhenlin An;Yi Guo
Indoor positioning in environments where GPS cannot be used is a fundamental technology for robot navigation and human-robot interaction. However, existing vision-based localization systems cannot work in low-visibility environments, and existing wireless or acoustic localization systems require specific transceivers, making them expensive and power-intensive, which is particularly challenging for microrobots. This letter proposes a new metasurface-assisted ultrasound positioning system. The key idea is to use a low-cost passive acoustic metasurface to transform any speaker into a directional sound source whose acoustic spectrum varies with direction. This allows any microrobot with a simple, low-cost microphone to capture the modified sound and identify the direction of the sound source. We develop a lightweight convolutional neural network-based localization algorithm that can be efficiently deployed on low-power microcontrollers. We evaluate our system in a large, complex office. It achieves a direction estimation accuracy of 7.26°, improving by 42.2% over systems without the metasurface and matching the performance of a 4-microphone array, with a localization accuracy of 0.35 m.
"MetaSonic: Advancing Robot Localization With Directional Embedded Acoustic Signals," IEEE Robotics and Automation Letters, vol. 10, no. 2, pp. 1704–1711.
Pub Date: 2025-01-01. DOI: 10.1109/LRA.2024.3524889
Benjamin Calmé;Adam Metcalf;Michael Brockdorff;Haneul Jang;Yoonsue Choi;Peter Lloyd;Seok Chang Ryu;Pietro Valdastri
Magnetic Soft Continuum Robots (MSCRs) are used in a wide variety of surgical interventions, including neurological, pancreatic, and cardiovascular procedures. To function effectively, these MSCRs require complex programmable magnetisation. However, they often suffer from limited manoeuvrability and imprecise positioning of the devices that carry them. Tendon-Driven Continuum Robots (TDCRs) have the potential to address these issues. These navigation systems not only enable higher accuracy and precision but also offer the potential for remote control, thereby reducing clinicians' exposure to ionising radiation. Currently, MSCRs are deployed from manual flexible endoscopes without motion compensation, leading to uncertainty and trial-and-error insertion. In this study, the deployment of high-aspect-ratio MSCRs (60 mm long by 1.3 mm in diameter) from a tendon-driven robot (25 cm long with a 2.8 mm diameter) is performed. By precisely positioning the deployment point, this paper evaluates the benefits of different magnetisation profiles. The comparison is carried out for a specific clinical scenario, assessing procedure time, the distance between the external permanent magnet (used for steering) and the MSCR, and the interaction force with the tissue. Clinical relevance is demonstrated through pancreatic and bile duct cannulation in a silicone phantom.
"Hybrid Tendon-Actuated and Soft Magnetic Robotic Platform for Pancreatic Applications," IEEE Robotics and Automation Letters, vol. 10, no. 2, pp. 1784–1791.
Pub Date: 2025-01-01. DOI: 10.1109/LRA.2024.3524905
Chungeng Tian;Ning Hao;Fenghua He
This paper presents a novel approach to address the inconsistency problem caused by observability mismatch in visual-inertial navigation systems (VINS). The key idea involves applying a linear time-varying transformation to the error-state within the Error-State Kalman Filter (ESKF). This transformation ensures that the unobservable subspace of the transformed error-state system becomes independent of the state, thereby preserving the correct observability of the transformed system against variations in linearization points. We introduce the Transformed ESKF (T-ESKF), a consistent VINS estimator that performs state estimation using the transformed error-state system. Furthermore, we develop an efficient propagation technique to accelerate the covariance propagation based on the transformation relationship between the transition and accumulated matrices of T-ESKF and ESKF. We validate the proposed method through extensive simulations and experiments, demonstrating better (or competitive at least) performance compared to state-of-the-art methods.
"T-ESKF: Transformed Error-State Kalman Filter for Consistent Visual-Inertial Navigation," IEEE Robotics and Automation Letters, vol. 10, no. 2, pp. 1808–1815.
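The change-of-coordinates machinery behind this kind of transformed filter can be checked numerically: for an invertible time-varying transform z = T x, propagating the covariance entirely in z-coordinates must agree with transforming the standard ESKF propagation. The sketch below verifies only that generic identity; the specific observability-preserving T designed in T-ESKF is defined in the paper:

```python
import numpy as np

def transformed_propagation(P, Phi, Q, T_k, T_k1):
    """Compare covariance propagation done in transformed coordinates
    against transforming the standard ESKF propagation result.
    Returns (P_z, P_z_direct), which should be equal."""
    # Standard ESKF propagation, then map into z-coordinates.
    P_next = Phi @ P @ Phi.T + Q
    P_z_direct = T_k1 @ P_next @ T_k1.T
    # Equivalent propagation done entirely in z-coordinates:
    # Phi_z = T_{k+1} Phi T_k^{-1}, Q_z = T_{k+1} Q T_{k+1}^T.
    Phi_z = T_k1 @ Phi @ np.linalg.inv(T_k)
    Q_z = T_k1 @ Q @ T_k1.T
    P_z = Phi_z @ (T_k @ P @ T_k.T) @ Phi_z.T + Q_z
    return P_z, P_z_direct
```

The point of the paper is choosing T so that the unobservable subspace of the z-system is state-independent; the algebra above is what makes such a reformulation legal in the first place.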
Pub Date: 2025-01-01. DOI: 10.1109/LRA.2024.3524910
Paolo Rabino;Tatiana Tommasi
Interacting with real-world cluttered scenes poses several challenges to robotic agents that need to understand complex spatial dependencies among the observed objects to determine optimal pick sequences or efficient object retrieval strategies. Existing solutions typically manage simplified scenarios and focus on predicting pairwise object relationships following an initial object detection phase, but often overlook the global context or struggle with handling redundant and missing object relations. In this work, we present a modern take on visual relational reasoning for grasp planning. We introduce D3GD, a novel testbed that includes bin picking scenes with up to 35 objects from 97 distinct categories. Additionally, we propose D3G, a new end-to-end transformer-based dependency graph generation model that simultaneously detects objects and produces an adjacency matrix representing their spatial relationships. Recognizing the limitations of standard metrics, we employ the Average Precision of Relationships for the first time to evaluate model performance, conducting an extensive experimental benchmark. The obtained results establish our approach as the new state-of-the-art for this task, laying the foundation for future research in robotic manipulation.
"A Modern Take on Visual Relationship Reasoning for Grasp Planning," IEEE Robotics and Automation Letters, vol. 10, no. 2, pp. 1712–1719. Open access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10819650
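An adjacency matrix of pairwise relation scores, as produced alongside the detections described above, still has to be decoded into discrete relation triplets before evaluation. The post-processing below is a plausible minimal version; the thresholds, the confidence combination, and the function name are illustrative assumptions, not D3G's actual decoding:

```python
import numpy as np

def relations_from_logits(rel_logits, obj_scores, score_thr=0.5):
    """Turn pairwise relation logits into a ranked list of
    (subject, object, confidence) triplets.

    rel_logits: (N, N) raw scores, entry [i, j] meaning 'i relates to j'
    obj_scores: (N,) per-object detection confidences
    """
    probs = 1.0 / (1.0 + np.exp(-rel_logits))   # sigmoid per pair
    keep = obj_scores >= score_thr              # drop weak detections
    triplets = []
    n = len(obj_scores)
    for i in range(n):
        for j in range(n):
            if i != j and keep[i] and keep[j] and probs[i, j] >= 0.5:
                # Relation confidence combines both detections and the edge.
                conf = obj_scores[i] * obj_scores[j] * probs[i, j]
                triplets.append((i, j, float(conf)))
    triplets.sort(key=lambda t: -t[2])          # ranked for AP-style metrics
    return triplets
```

Ranking triplets by a combined confidence is what makes an Average Precision style evaluation of relationships possible, since AP needs a score ordering over predictions.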
Pub Date: 2025-01-01. DOI: 10.1109/LRA.2024.3524904
Dingchang Hu;Tianyu Sun;Pengwei Xie;Siang Chen;Huazhong Yang;Guijin Wang
Traditional affordance segmentation on 3D point cloud objects requires massive amounts of annotated training data and can only make predictions within predefined classes and affordance tasks. To overcome these limitations, we propose a variation-robust few-shot 3D affordance segmentation network (VRNet) for robotic manipulation, which requires only a few affordance annotations for novel object classes and manipulation tasks. In particular, we design an orientation-tolerant feature extractor to address pose variation between support and query point cloud objects, and present a multi-scale label propagation algorithm for variation in completeness. Extensive experiments on affordance datasets show that VRNet provides the best segmentation performance compared with previous works. Moreover, experiments in real robotic scenarios demonstrate the generalization ability of our method.
"Variation-Robust Few-Shot 3D Affordance Segmentation for Robotic Manipulation," IEEE Robotics and Automation Letters, vol. 10, no. 2, pp. 1672–1679.
Transcranial magnetic stimulation (TMS) is a noninvasive medical procedure that can modulate brain activity, and it is widely used in neuroscience, neurology research, and clinical practice. Compared to manual operators, robots may improve the outcome due to their superior accuracy and repeatability. However, there has not been a widely accepted standard protocol for performing robotic TMS using fine-segmented brain images, resulting in arbitrary planned angles with respect to the true boundaries of the modulated cortex. Given that a recent study in TMS simulation suggests a noticeable difference in outcomes when using different anatomical details, cortical shape should play a more significant role in deciding the optimal TMS coil pose. In this work, we introduce an image-guided robotic system for TMS that focuses on (1) establishing standardized planning methods to define a reference (true zero) for the coil poses and (2) solving the issue that manual coil placement requires expert hand-eye coordination, which often leads to low repeatability of the experiments. To validate the design of our robotic system, a phantom study and a preliminary human subject study were performed. Our results show that the robotic method can halve the positional error and improve the rotational accuracy by up to two orders of magnitude. The accuracy is shown to be repeatable, as the standard deviation over multiple trials is lowered by an order of magnitude. The improved actuation accuracy successfully translates to the TMS application, with a higher and more stable induced voltage in magnetic field sensors and a higher electromyography (EMG) reading in the preliminary human subject study.
Yihao Liu;Jiaming Zhang;Letian Ai;Jing Tian;Shahriar Sefati;Huan Liu;Alejandro Martin-Gomez;Amir Kheradmand;Mehran Armand
"An Image-Guided Robotic System for Transcranial Magnetic Stimulation: System Development and Experimental Evaluation," IEEE Robotics and Automation Letters, vol. 10, no. 2, pp. 1936–1943, Jan. 2025. DOI: 10.1109/LRA.2024.3524900.
Pub Date: 2025-01-01. DOI: 10.1109/LRA.2024.3524898
A. Ouyoucef;Q. Peyron;V. Lebastard;F. Renda;G. Zheng;F. Boyer
The Cosserat rod theory has become a gold standard for modeling the statics and dynamics of serial and parallel continuum robots. Recently, a weak form of these Cosserat rod models called the geometric variable strain model has been derived where the robot deformations are projected on finite-dimensional basis functions. This model has very interesting features for continuum robotics, such as a Lagrangian form close to classical rigid robots and the ability to tune its performances in terms of computation time and accuracy. Two approaches have been proposed to obtain and compute it. The first is based on the Newton-Euler recursive algorithm and the second, on the projection of the strong form equations using Jacobian matrices. Although these approaches yield identical model forms, their disparate implementations and numerical schemes render each uniquely suited to specific applications. Notably, underlying these disparities lies a profound duality between these models, prompting our quest for a comprehensive overview of this duality along with an analysis of their algorithmic differences. Finally, we discuss perspectives for these two approaches, in particular their hybridization, based on the current knowledge of rigid robotics.
"Duality of the Existing Geometric Variable Strain Models for the Dynamic Modeling of Continuum Robots," IEEE Robotics and Automation Letters, vol. 10, no. 2, pp. 1848–1855.
Pub Date: 2025-01-01. DOI: 10.1109/LRA.2024.3524902
Yunpeng Liang;Fulong Yin;Zhen Li;Zhilin Xiong;Zhihui Peng;Yanzheng Zhao;Weixin Yan
The presence of parallel mechanisms in bipedal robots increases the complexity of modeling and control, making it crucial to manage the trade-off between model accuracy and real-time control. In this letter, we propose a reduced-dimensional whole-body controller for series-parallel bipedal robots, utilizing a floating-base multi-rigid body model with kinematic loops. Notably, we neglect the joint acceleration and closed-loop acceleration constraints of the parallel mechanisms, reducing the dimensionality of variables and constraints in the whole-body optimization problem while ensuring compliance with actuated joint torque limits. Quantitative experiments indicate that, compared to the complete series-parallel model, the impact of inertial forces resulting from the parallel joint acceleration is negligible. Additionally, physical locomotion and disturbance tests demonstrate that our proposed controller can enhance computational efficiency by over 20%, with comparable locomotion performance and disturbance rejection ability.
"Reduced-Dimensional Whole-Body Control Based on Model Simplification for Bipedal Robots With Parallel Mechanisms," IEEE Robotics and Automation Letters, vol. 10, no. 2, pp. 1696–1703.