Snake-inspired trajectory planning and control for confined pipeline inspection with hyper-redundant manipulators
Pub Date: 2025-06-03 | DOI: 10.1016/j.birob.2025.100245
Junjie Zhu, Mingming Su, Longchuan Li, Yuxuan Xiang, Jianming Wang, Xuan Xiao
The hyper-redundant manipulator (HRM) can explore narrow and curved pipelines by leveraging its high flexibility and redundancy. However, planning collision-free motion trajectories for HRMs in confined environments remains a significant challenge. To address this issue, a pipeline inspection approach that combines nonlinear model predictive control (NMPC) with a snake-inspired crawling algorithm (SCA) is proposed. The approach consists of three processes: insertion, inspection, and exit. The insertion and exit processes utilize the SCA to significantly reduce path-planning time. The inspection process employs NMPC to generate collision-free motion. A prototype HRM is developed, and inspection experiments are conducted in various complex pipeline scenarios to validate the effectiveness and feasibility of the proposed method. Experimental results demonstrate that the approach effectively reduces the computational cost of path planning, offering a practical solution for HRM applications in pipeline inspection.
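For orientation only, the receding-horizon problem behind NMPC-based collision-free motion can be written in the generic form below; the cost weights Q, R, P, the dynamics f, and the pipe-wall obstacle set O are illustrative assumptions, since the paper's actual formulation is not given in the abstract.

```latex
% Illustrative NMPC problem for collision-free HRM motion (assumed form, not the paper's):
\begin{aligned}
\min_{u_0,\dots,u_{N-1}}\quad & \sum_{k=0}^{N-1}\Big(\|q_k-q_k^{\mathrm{ref}}\|_Q^2+\|u_k\|_R^2\Big)+\|q_N-q_N^{\mathrm{ref}}\|_P^2\\
\text{s.t.}\quad & q_{k+1}=f(q_k,u_k),\qquad u_k\in\mathcal{U},\\
& d\big(\mathrm{link}_i(q_k),\,\mathcal{O}\big)\ \ge\ d_{\mathrm{safe}}\quad \forall\, i,k,
\end{aligned}
```

where q_k is the joint configuration, u_k the joint-rate input, and d(., O) the clearance between each link and the pipe wall; the problem is re-solved at every control step over a horizon of N steps.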
{"title":"Snake-inspired trajectory planning and control for confined pipeline inspection with hyper-redundant manipulators","authors":"Junjie Zhu , Mingming Su , Longchuan Li , Yuxuan Xiang , Jianming Wang , Xuan Xiao","doi":"10.1016/j.birob.2025.100245","DOIUrl":"10.1016/j.birob.2025.100245","url":null,"abstract":"<div><div>The hyper-redundant manipulator (HRM) can explore narrow and curved pipelines by leveraging its high flexibility and redundancy. However, planning collision-free motion trajectories for HRMs in confined environments remains a significant challenge. To address this issue, a pipeline inspection approach that combines nonlinear model predictive control (NMPC) with the snake-inspired crawling algorithm(SCA) is proposed. The approach consists of three processes: insertion, inspection, and exit. The insertion and exit processes utilize the SCA, inspired by snake motion, to significantly reduce path planning time. The inspection process employs NMPC to generate collision-free motion. The prototype HRM is developed, and inspection experiments are conducted in various complex pipeline scenarios to validate the effectiveness and feasibility of the proposed method. Experimental results demonstrate that the approach effectively minimizes the computational cost of path planning, offering a practical solution for HRM applications in pipeline inspection.</div></div>","PeriodicalId":100184,"journal":{"name":"Biomimetic Intelligence and Robotics","volume":"5 3","pages":"Article 100245"},"PeriodicalIF":5.4,"publicationDate":"2025-06-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144886416","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Genetic Informed Trees (GIT*): Path planning via reinforced genetic programming heuristics
Pub Date: 2025-05-20 | DOI: 10.1016/j.birob.2025.100237
Liding Zhang, Kuanqi Cai, Zhenshan Bing, Chaoqun Wang, Alois Knoll
Optimal path planning involves finding a feasible state sequence between a start and a goal that optimizes an objective. This process relies on heuristic functions to guide the search direction. While a robust function can improve search efficiency and solution quality, current methods often overlook available environmental data and simplify the function structure due to the complexity of information relationships. This study introduces Genetic Informed Trees (GIT*), which improves upon Effort Informed Trees (EIT*) by integrating a wider array of environmental data, such as repulsive forces from obstacles and the dynamic importance of vertices, to refine heuristic functions for better guidance. Furthermore, we integrated reinforced genetic programming (RGP), which combines genetic programming with reward system feedback to mutate genotype-generative heuristic functions for GIT*. RGP leverages a multitude of data types, thereby improving computational efficiency and solution quality within a set timeframe. Comparative analyses demonstrate that GIT* surpasses existing single-query, sampling-based planners in problems ranging from R⁴ to R¹⁶ and was tested on a real-world mobile manipulation task. A video showcasing our experimental results is available at https://youtu.be/URjXbc_BiYg.
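As a loose illustration of what folding "repulsive forces from obstacles" into a search heuristic can look like (a hand-written stand-in, not the genetically programmed heuristic that GIT* actually learns), one might inflate the admissible Euclidean cost-to-go with an artificial-potential repulsion term; the obstacle representation and weights below are assumptions.

```python
import numpy as np

def repulsive_potential(x, obstacles, d0=1.0, eta=0.5):
    """Short-range repulsive cost from nearby obstacles (artificial-potential style)."""
    cost = 0.0
    for center, radius in obstacles:          # obstacles: list of (center ndarray, radius) pairs
        d = max(np.linalg.norm(x - center) - radius, 1e-6)
        if d < d0:                            # only obstacles within the influence range d0 contribute
            cost += 0.5 * eta * (1.0 / d - 1.0 / d0) ** 2
    return cost

def informed_heuristic(x, goal, obstacles, w=0.2):
    """Euclidean cost-to-go inflated by an environmental repulsion term."""
    return float(np.linalg.norm(goal - x) + w * repulsive_potential(x, obstacles))
```

In GIT* the analogous function is generated and mutated by reinforced genetic programming rather than fixed by hand as above.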
{"title":"Genetic Informed Trees (GIT*): Path planning via reinforced genetic programming heuristics","authors":"Liding Zhang , Kuanqi Cai , Zhenshan Bing , Chaoqun Wang , Alois Knoll","doi":"10.1016/j.birob.2025.100237","DOIUrl":"10.1016/j.birob.2025.100237","url":null,"abstract":"<div><div>Optimal path planning involves finding a feasible state sequence between a start and a goal that optimizes an objective. This process relies on heuristic functions to guide the search direction. While a robust function can improve search efficiency and solution quality, current methods often overlook available environmental data and simplify the function structure due to the complexity of information relationships. This study introduces Genetic Informed Trees (GIT*), which improves upon Effort Informed Trees (EIT*) by integrating a wider array of environmental data, such as repulsive forces from obstacles and the dynamic importance of vertices, to refine heuristic functions for better guidance. Furthermore, we integrated reinforced genetic programming (RGP), which combines genetic programming with reward system feedback to mutate genotype-generative heuristic functions for GIT*. RGP leverages a multitude of data types, thereby improving computational efficiency and solution quality within a set timeframe. Comparative analyses demonstrate that GIT* surpasses existing single-query, sampling-based planners in problems ranging from <span><math><msup><mrow><mi>R</mi></mrow><mrow><mn>4</mn></mrow></msup></math></span> to <span><math><msup><mrow><mi>R</mi></mrow><mrow><mn>16</mn></mrow></msup></math></span> and was tested on a real-world mobile manipulation task. A video showcasing our experimental results is available at <span><span>https://youtu.be/URjXbc_BiYg</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":100184,"journal":{"name":"Biomimetic Intelligence and Robotics","volume":"5 3","pages":"Article 100237"},"PeriodicalIF":5.4,"publicationDate":"2025-05-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144892904","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Image segmentation network for laparoscopic surgery
Pub Date: 2025-05-06 | DOI: 10.1016/j.birob.2025.100236
Kang Peng, Yaoyuan Chang, Guodong Lang, Jian Xu, Yongsheng Gao, Jiajun Yin, Jie Zhao
Surgical image segmentation serves as the foundation for laparoscopic surgical navigation technology. The indistinct local features of biological tissues in laparoscopic images pose challenges for image segmentation. To address this issue, we develop an image segmentation network tailored for laparoscopic surgery. First, we introduce the Mixed Attention Enhancement (MAE) module, which applies the Channel Attention Enhancement (CAE) module and the Global Feature Enhancement (GFE) module in series. The CAE module enhances the network’s perception of prominent channels, allowing feature maps to exhibit clear local features. The GFE module extracts global features along both the height and width dimensions of images and integrates them into three-dimensional features, improving the network’s ability to capture global features and thereby facilitating the inference of regions with indistinct local features. Second, we propose the Multi-scale Feature Fusion (MFF) module, which expands the feature map into various scales, further enlarging the network’s receptive field and enhancing the perception of features at multiple scales. In addition, we tested the proposed network on the EndoVis 2018 dataset and a human minimally invasive liver resection image segmentation dataset, comparing it against six other advanced image segmentation networks. The comparative test results demonstrate that the proposed network achieves the best performance on both datasets, demonstrating its potential to improve surgical image segmentation outcomes. The code for MAMNet is available at: https://github.com/Pang1234567/MAMNet.
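The abstract does not spell out the internals of the CAE module; as a hedged point of reference, channel attention is commonly realized with a squeeze-and-excitation style block such as the PyTorch sketch below, where the reduction ratio and layer sizes are arbitrary choices, not values from the paper.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """SE-style channel attention: reweight feature channels by globally pooled statistics."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)                     # squeeze: global average pooling
        self.fc = nn.Sequential(                                # excite: per-channel gating weights
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                            # emphasize informative channels
```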
{"title":"Image segmentation network for laparoscopic surgery","authors":"Kang Peng , Yaoyuan Chang , Guodong Lang , Jian Xu , Yongsheng Gao , Jiajun Yin , Jie Zhao","doi":"10.1016/j.birob.2025.100236","DOIUrl":"10.1016/j.birob.2025.100236","url":null,"abstract":"<div><div>Surgical image segmentation serves as the foundation for laparoscopic surgical navigation technology. The indistinct local features of biological tissues in laparoscopic image pose challenges for image segmentation. To address this issue, we develop an image segmentation network tailored for laparoscopic surgery. Firstly, we introduce the Mixed Attention Enhancement (MAE) module that sequentially conducts the Channel Attention Enhancement (CAE) module and the Global Feature Enhancement (GFE) module linked in series. The CAE module enhances the network’s perception of prominent channels, allowing feature maps to exhibit clear local features. The GFE module is capable of extracting global features from both the height and width dimensions of images and integrating them into three-dimensional features. This enhancement improves the network’s ability to capture global features, thereby facilitating the inference of regions with indistinct local features. Secondly, we propose the Multi-scale Feature Fusion (MFF) module. This module expands the feature map into various scales, further enlarging the network’s receptive field and enhancing perception of features at multiple scales. In addition, we tested the proposed network on the EndoVis 2018 and a human minimally invasive liver resection image segmentation dataset, comparing it against six other advanced image segmentation networks. The comparative test results demonstrate that the proposed network achieves the most advanced performance on both datasets, proving its potential in improving surgical image segmentation outcome. The codes of MAMNet are available at: <span><span>https://github.com/Pang1234567/MAMNet</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":100184,"journal":{"name":"Biomimetic Intelligence and Robotics","volume":"5 3","pages":"Article 100236"},"PeriodicalIF":0.0,"publicationDate":"2025-05-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144490238","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An adaptive compensation strategy for sensors based on the degree of degradation
Pub Date: 2025-04-30 | DOI: 10.1016/j.birob.2025.100235
Yanbin Li, Wei Zhang, Zhiguo Zhang, Xiaogang Shi, Ziruo Li, Mingming Zhang, Wenzheng Chi
Simultaneous Localization and Mapping (SLAM) is widely used to solve the localization problem of unmanned devices such as robots. However, in degraded environments, the accuracy of SLAM is greatly reduced due to the lack of constraining features. In this article, we propose a deep learning-based adaptive compensation strategy for sensors. First, we create a dataset dedicated to training a degradation detection model, which contains coordinate data of particle swarms with different distributional features, and endow the model with degradation detection capability through supervised learning. Second, we design a lightweight network model with short computation time and good accuracy for real-time degradation detection tasks. Finally, an adaptive compensation strategy for sensors based on the degree of degradation is designed, in which the SLAM system assigns different weights to the sensor information according to the degree of degradation given by the model, adjusting the contribution of different sensors in the pose optimization process. We demonstrate through simulation and real-world experiments that the robustness of the improved SLAM in degraded environments is significantly enhanced, and the accuracy of localization and mapping is improved.
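The exact weighting rule is not given in the abstract; the sketch below is only a minimal illustration of the idea, assuming the detection model outputs a scalar degradation score in [0, 1] and that two pose increments (scan matching and odometry) are blended before optimization.

```python
import numpy as np

def fuse_pose_updates(dx_scan, dx_odom, degradation):
    """Blend scan-matching and odometry pose increments by the predicted degradation level.

    degradation close to 1.0 means a feature-poor (degraded) scene, so the scan-matching
    constraint is down-weighted and odometry dominates; close to 0.0 trusts the scan.
    """
    w_scan = 1.0 - float(np.clip(degradation, 0.0, 1.0))
    w_odom = 1.0 - w_scan
    return w_scan * np.asarray(dx_scan) + w_odom * np.asarray(dx_odom)
```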
{"title":"An adaptive compensation strategy for sensors based on the degree of degradation","authors":"Yanbin Li , Wei Zhang , Zhiguo Zhang , Xiaogang Shi , Ziruo Li , Mingming Zhang , Wenzheng Chi","doi":"10.1016/j.birob.2025.100235","DOIUrl":"10.1016/j.birob.2025.100235","url":null,"abstract":"<div><div>Simultaneous Localization and Mapping (SLAM) is widely used to solve the localization problem of unmanned devices such as robots. However, in degraded environments, the accuracy of SLAM is greatly reduced due to the lack of constrained features. In this article, we propose a deep learning-based adaptive compensation strategy for sensors. First, we create a dataset dedicated to training a degradation detection model, which contains coordinate data of particle swarms with different distributional features, and endow the model with degradation detection capability through supervised learning. Second, we design a lightweight network model with short computation time and good accuracy for real-time degradation detection tasks. Finally, an adaptive compensation strategy for sensors based on the degree of degradation is designed, where the SLAM is able to assign different weights to the sensor information according to the degree of degradation given by the model, to adjust the contribution of different sensors in the pose optimization process. We demonstrate through simulation experiments and real experiments that the robustness of the improved SLAM in degraded environments is significantly enhanced, and the accuracy of localization and mapping are improved.</div></div>","PeriodicalId":100184,"journal":{"name":"Biomimetic Intelligence and Robotics","volume":"5 4","pages":"Article 100235"},"PeriodicalIF":5.4,"publicationDate":"2025-04-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145266328","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Tendon friction compensation and slack avoidance for trajectory tracking control of the tendon-driven medical continuum manipulator
Pub Date: 2025-04-23 | DOI: 10.1016/j.birob.2025.100234
Pengyu Du, Jianxiong Hao, Kun Qian, Yue Zhang, Zhiqiang Zhang, Chaoyang Shi
Tendon-driven continuum manipulators can perform tasks in confined environments due to their flexibility and curvilinearity, especially in minimally invasive surgeries. However, friction along the tendons and tendon slack present challenges for their motion control. This work proposes a trajectory tracking controller based on adaptive fuzzy sliding mode control (AFSMC) for tendon-driven continuum manipulators. It consists of a sliding mode control (SMC) law with two groups of adaptive fuzzy subcontrollers. The first group estimates and compensates for friction forces along the tendons. The second group adapts the switching terms of the SMC to alleviate the chattering phenomenon and enhance control robustness. To prevent tendon slack, an antagonistic strategy is adopted alongside the AFSMC controller to allocate driving forces. Simulation and experimental studies have been conducted to investigate the efficacy of the proposed controller. In free-space experiments, the AFSMC controller yields an average root-mean-square error (RMSE) of 0.42%, compared with 0.90% for the SMC controller. With a 50 g load, the proposed controller reduces the average RMSE to 1.47%, compared with 4.29% for the SMC controller. These experimental results demonstrate that the proposed AFSMC controller achieves high control accuracy, robustness, and reduced chattering.
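For orientation, a textbook adaptive fuzzy SMC of the kind outlined here can be written as below; the sliding surface, fuzzy basis functions ξ_i, and adaptation gains γ_i are generic assumptions rather than the paper's exact design.

```latex
% Assumed AFSMC structure (illustrative, not the article's specific control law):
s=\dot{e}+\lambda e, \qquad
u=u_{\mathrm{eq}}
  +\hat{F}_{\mathrm{fric}}\!\left(x\,\middle|\,\hat{\theta}_1\right)
  +\hat{k}\!\left(x\,\middle|\,\hat{\theta}_2\right)\mathrm{sat}\!\left(s/\phi\right), \qquad
\dot{\hat{\theta}}_i=\gamma_i\, s\,\xi_i(x),
```

where the first fuzzy system estimates tendon friction, the second adapts the switching gain, and the boundary-layer saturation sat(.) replaces sign(.) to limit chattering.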
{"title":"Tendon friction compensation and slack avoidance for trajectory tracking control of the tendon-driven medical continuum manipulator","authors":"Pengyu Du , Jianxiong Hao , Kun Qian , Yue Zhang , Zhiqiang Zhang , Chaoyang Shi","doi":"10.1016/j.birob.2025.100234","DOIUrl":"10.1016/j.birob.2025.100234","url":null,"abstract":"<div><div>Tendon-driven continuum manipulators can perform tasks in confined environments due to their flexibility and curvilinearity, especially in minimally invasive surgeries. However, the friction along tendons and tendon slack present challenges to their motion control. This work proposes a trajectory tracking controller based on adaptive fuzzy sliding mode control (AFSMC) for the tendon-driven continuum manipulators. It consists of a sliding mode control (SMC) law with two groups of adaptive fuzzy subcontrollers. The first one is utilized to estimate and compensate for friction forces along tendons. The second one adapts the switching terms of SMC to alleviate the chattering phenomenon and enhance control robustness. To prevent tendon slack, an antagonistic strategy along with the AFSMC controller is adopted to allocate driving forces. Simulation and experiment studies have been conducted to investigate the efficacy of the proposed controller. In free space experiments, the AFSMC controller generates an average root-mean-square error (RMSE) of 0.42% compared with 0.90% of the SMC controller. In the case of a 50 g load, the proposed controller reduces the average RMSE to 1.47% compared with 4.29% of the SMC controller. These experimental results demonstrate that the proposed AFSMC controller has high control accuracy, robustness, and reduced chattering.</div></div>","PeriodicalId":100184,"journal":{"name":"Biomimetic Intelligence and Robotics","volume":"5 4","pages":"Article 100234"},"PeriodicalIF":5.4,"publicationDate":"2025-04-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144926625","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Novel 3D instrument navigation in intracranial vascular surgery with multi-source image fusion and self-calibration
Pub Date: 2025-04-22 | DOI: 10.1016/j.birob.2025.100233
Linsen Zhang, Shiqi Liu, Xiaoliang Xie, Xiaohu Zhou, Zengguang Hou, Xinkai Qu, Wenzheng Han, Meng Song, Xiyao Ma, Haining Zhao
In cerebrovascular interventional surgery, spatial position prediction navigation (SPPN) provides 3D spatial information of the vascular lumen, reducing the spatial dimension loss from digital subtraction angiography (DSA) and improving surgical precision. However, SPPN is limited in its adaptability to complex vascular environments and is prone to error accumulation. To address these issues, we propose spatial position prediction-based multimodal navigation (SPPMN), which integrates a minimal number of intraoperative X-ray images to enhance SPPN accuracy. In the first phase, a feature-weighted dynamic time warping (FDTW)-based branch matching algorithm is introduced for 3D topological positioning under non-registered conditions, with a dynamic location repositioning module for real-time corrections. In the second phase, an occlusion correction module, based on the elastic potential energy of the instrument tip, dynamically adjusts the tip’s angle to achieve low-projection occlusion control. Experimental validation using a high-precision electromagnetic tracking system (EMTS) on a 3D vascular model shows that the proposed method achieves an average 3D positioning accuracy of 9.36 mm in intracranial vascular regions, with a 78% reduction in radiation exposure, significantly enhancing both precision and safety in interventional surgeries.
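The FDTW matching itself is not detailed in the abstract; as a rough sketch under stated assumptions, a feature-weighted dynamic time warping distance between a measured branch profile and each candidate branch of the 3D vessel model could be computed as follows, with the feature vectors and weights purely hypothetical.

```python
import numpy as np

def weighted_dtw(seq_a, seq_b, weights):
    """DTW distance with per-feature weights on the local cost.

    seq_a: (n, d) measured branch features, seq_b: (m, d) model branch features,
    weights: (d,) -- a larger weight means that feature matters more in the match.
    """
    n, m = len(seq_a), len(seq_b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.sqrt(np.sum(weights * (seq_a[i - 1] - seq_b[j - 1]) ** 2))
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```

The candidate branch with the smallest distance would then give the instrument's 3D topological position.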
{"title":"Novel 3D instrument navigation in intracranial vascular surgery with multi-source image fusion and self-calibration","authors":"Linsen Zhang , Shiqi Liu , Xiaoliang Xie , Xiaohu Zhou , Zengguang Hou , Xinkai Qu , Wenzheng Han , Meng Song , Xiyao Ma , Haining Zhao","doi":"10.1016/j.birob.2025.100233","DOIUrl":"10.1016/j.birob.2025.100233","url":null,"abstract":"<div><div>In cerebrovascular interventional surgery, spatial position prediction navigation (SPPN) provides 3D spatial information of the vascular lumen, reducing the spatial dimension loss from digital subtraction angiography (DSA) and improving surgical precision. However, it is limited in its adaptability to complex vascular environments and prone to error accumulation. To address these issues, we propose spatial position prediction-based multimodal navigation (SPPMN), integrating minimal intraoperative X-ray images to enhance SPPN accuracy. In the first phase, a feature-weighted dynamic time warping (FDTW)-based branch matching algorithm is introduced for 3D topological positioning under non-registered conditions, with a dynamic location repositioning module for real-time corrections. In the second phase, an occlusion correction module, based on the elastic potential energy of the instrument tip, dynamically adjusts the tip’s angle to achieve low-projection occlusion control. Experimental validation using a high-precision electromagnetic tracking system (EMTS) on a 3D vascular model shows that the proposed method achieves an average 3D positioning accuracy of 9.36 mm in intracranial vascular regions, with a 78% reduction in radiation exposure, significantly enhancing both precision and safety in interventional surgeries.</div></div>","PeriodicalId":100184,"journal":{"name":"Biomimetic Intelligence and Robotics","volume":"5 3","pages":"Article 100233"},"PeriodicalIF":5.4,"publicationDate":"2025-04-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144895044","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Soft objects grasping evaluation using a novel VCFN-YOLOv8 framework
Pub Date: 2025-04-11 | DOI: 10.1016/j.birob.2025.100232
Guoshun Cui, Shiwei Su, Hanyu Gao, Kai Zhuo, Kun Yang, Hang Wu
Humans can quickly perform adaptive grasping of soft objects by using visual perception and judging the grasping angle, which helps prevent the objects from sliding or deforming excessively. However, this seemingly easy task remains a challenge for robots. The grasping states of soft objects can be categorized into four types: sliding, appropriate, excessive, and extreme. Effective recognition of these states is crucial for achieving adaptive grasping of soft objects. To address this problem, a novel visual-curvature fusion network based on YOLOv8 (VCFN-YOLOv8) is proposed to evaluate the grasping state of various soft objects. In this framework, a robotic arm equipped with a wrist camera and a curvature sensor performs generalization grasping and lifting experiments on 11 different objects, and a dataset is built for training and testing the proposed method. The results show a classification accuracy of 99.51% across the four grasping states. A series of grasping evaluation experiments is conducted based on the proposed framework, along with tests of the model’s generality. The experimental results demonstrate that VCFN-YOLOv8 is accurate and efficient in evaluating the grasping state of soft objects and shows a degree of generalization to non-soft objects. It can be widely applied in fields such as automatic control, adaptive grasping, and surgical robotics.
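How the visual and curvature streams are combined inside VCFN-YOLOv8 is not specified in the abstract; the snippet below is only one plausible late-fusion classification head, with all dimensions and layer choices invented for illustration.

```python
import torch
import torch.nn as nn

class VisualCurvatureFusionHead(nn.Module):
    """Late fusion of image features with a scalar curvature reading into a
    4-way grasp-state classifier (sliding / appropriate / excessive / extreme)."""
    def __init__(self, visual_dim: int = 256, hidden: int = 64, n_states: int = 4):
        super().__init__()
        self.curv_mlp = nn.Sequential(nn.Linear(1, hidden), nn.ReLU())
        self.cls = nn.Sequential(
            nn.Linear(visual_dim + hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_states),
        )

    def forward(self, visual_feat: torch.Tensor, curvature: torch.Tensor) -> torch.Tensor:
        # visual_feat: (batch, visual_dim) image embedding; curvature: (batch, 1) sensor reading
        z = torch.cat([visual_feat, self.curv_mlp(curvature)], dim=-1)
        return self.cls(z)      # logits over the four grasping states
```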
{"title":"Soft objects grasping evaluation using a novel VCFN-YOLOv8 framework","authors":"Guoshun Cui , Shiwei Su , Hanyu Gao , Kai Zhuo , Kun Yang , Hang Wu","doi":"10.1016/j.birob.2025.100232","DOIUrl":"10.1016/j.birob.2025.100232","url":null,"abstract":"<div><div>Humans can quickly perform adaptive grasping of soft objects by using visual perception and judgment of the grasping angle, which helps prevent the objects from sliding or deforming excessively. However, this easy task remains a challenge for robots. The grasping states of soft objects can be categorized into four types: sliding, appropriate, excessive and extreme. Effective recognition of different states is crucial for achieving adaptive grasping of soft objects. To address this problem, a novel visual-curvature fusion network based on YOLOv8 (VCFN-YOLOv8) is proposed to evaluate the grasping state of various soft objects. In this framework, the robotic arm equipped with the wrist camera and the curvature sensor is established to perform generalization grasping and lifting experiments on 11 different objects. Meanwhile, the dataset is built for training and testing the proposed method. The results show a classification accuracy of 99.51% on four different grasping states. A series of grasping evaluation experiments is conducted based on the proposed framework, along with tests for the model’s generality. The experiment results demonstrate that VCFN-YOLOv8 is accurate and efficient in evaluating the grasping state of soft objects and shows a certain degree of generalization for non-soft objects. It can be widely applied in fields such as automatic control, adaptive grasping and surgical robot.</div></div>","PeriodicalId":100184,"journal":{"name":"Biomimetic Intelligence and Robotics","volume":"5 3","pages":"Article 100232"},"PeriodicalIF":0.0,"publicationDate":"2025-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144714580","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Hierarchical reinforcement learning for enhancing stability and adaptability of hexapod robots in complex terrains
Pub Date: 2025-03-27 | DOI: 10.1016/j.birob.2025.100231
Shichang Huang, Zhihan Xiao, Minhua Zheng, Wen Shi
In the field of hexapod robot control, the application of central pattern generators (CPG) and deep reinforcement learning (DRL) is becoming increasingly common. Compared to traditional control methods that rely on dynamic models, both the CPG and end-to-end DRL approaches significantly simplify the design of control models. However, relying solely on DRL for control also has drawbacks, such as slow convergence and low exploration efficiency. Moreover, although the CPG can produce rhythmic gaits, its control strategy is relatively rigid, limiting the robot’s ability to adapt to complex terrains. To overcome these limitations, this study proposes a three-layer DRL control architecture. The high-level reinforcement learning controller is responsible for learning the parameters of the middle-level CPG and the low-level mapping functions, while the middle- and low-level controllers coordinate the joint movements within and between legs. By integrating the learning capabilities of DRL with the gait generation characteristics of the CPG, this method significantly enhances the stability and adaptability of hexapod robots in complex terrains. Experimental results show that, compared to pure DRL approaches, this method significantly improves learning efficiency and control performance; when dealing with complex terrains, it considerably enhances the robot’s stability and adaptability compared to pure CPG control.
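To make the three-layer idea concrete, the middle-level CPG can be pictured as a set of coupled phase oscillators (one per leg) whose frequency, amplitudes, coupling, and phase biases are exactly the kind of parameters a high-level policy could output; the formulation below is a generic assumption, not the paper's specific CPG model.

```python
import numpy as np

def cpg_step(phases, amps, freq, coupling, phase_bias, dt=0.01):
    """One Euler step of coupled phase oscillators (one oscillator per leg).

    freq, amps, coupling (n x n) and phase_bias (n x n) are the knobs a high-level
    policy could set; the returned rhythmic signals feed the per-leg joint mappings.
    """
    n = len(phases)
    dphi = 2.0 * np.pi * freq * np.ones(n)
    for i in range(n):
        for j in range(n):
            dphi[i] += coupling[i, j] * np.sin(phases[j] - phases[i] - phase_bias[i, j])
    phases = (phases + dphi * dt) % (2.0 * np.pi)
    return phases, amps * np.sin(phases)          # new phases and per-leg output signals
```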
{"title":"Hierarchical reinforcement learning for enhancing stability and adaptability of hexapod robots in complex terrains","authors":"Shichang Huang , Zhihan Xiao , Minhua Zheng , Wen Shi","doi":"10.1016/j.birob.2025.100231","DOIUrl":"10.1016/j.birob.2025.100231","url":null,"abstract":"<div><div>In the field of hexapod robot control, the application of central pattern generators (CPG) and deep reinforcement learning (DRL) is becoming increasingly common. Compared to traditional control methods that rely on dynamic models, both the CPG and the end-to-end DRL approaches significantly simplify the complexity of designing control models. However, relying solely on DRL for control also has its drawbacks, such as slow convergence speed and low exploration efficiency. Moreover, although the CPG can produce rhythmic gaits, its control strategy is relatively singular, limiting the robot’s ability to adapt to complex terrains. To overcome these limitations, this study proposes a three-layer DRL control architecture. The high-level reinforcement learning controller is responsible for learning the parameters of the middle-level CPG and the low-level mapping functions, while the middle and low level controllers coordinate the joint movements within and between legs. By integrating the learning capabilities of DRL with the gait generation characteristics of CPG, this method significantly enhances the stability and adaptability of hexapod robots in complex terrains. Experimental results show that, compared to pure DRL approaches, this method significantly improves learning efficiency and control performance, when dealing with complex terrains, it considerably enhances the robot’s stability and adaptability compared to pure CPG control.</div></div>","PeriodicalId":100184,"journal":{"name":"Biomimetic Intelligence and Robotics","volume":"5 3","pages":"Article 100231"},"PeriodicalIF":5.4,"publicationDate":"2025-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144879101","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Editorial for the special issue on Advanced Technology in Autonomous Robots and Swarm Intelligence
Pub Date: 2025-03-18 | DOI: 10.1016/j.birob.2025.100230
Weinan Chen, Tao Zhang, Jiyu Cheng, Yangming Lee, Yisheng Guan
{"title":"Editorial for the special issue on Advanced Technology in Autonomous Robots and Swarm Intelligence","authors":"Weinan Chen, Tao Zhang, Jiyu Cheng, Yangming Lee, Yisheng Guan","doi":"10.1016/j.birob.2025.100230","DOIUrl":"10.1016/j.birob.2025.100230","url":null,"abstract":"","PeriodicalId":100184,"journal":{"name":"Biomimetic Intelligence and Robotics","volume":"5 3","pages":"Article 100230"},"PeriodicalIF":0.0,"publicationDate":"2025-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143828691","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A novel magnetoelastic torque sensor with planar spiral coil probes for humanoid robot joints
Pub Date: 2025-03-17 | DOI: 10.1016/j.birob.2025.100229
Zijian Zhang, Zitao Wang, Ming Shao, Yangyang Dong, Fenglei Ni
Humanoid robot joints require real-time torque detection to provide accurate force feedback information for the control system. To meet the measurement requirements and realize the miniaturization of the sensor, a torque sensor based on the magnetoelastic effect is developed, utilizing planar spiral coils as detection probes. In this work, a planar spiral coil mutual inductance calculation model is established to solve for the mutual inductance coefficient, and the mechanical structure and circuit design of the sensor are completed. Finally, a torque loading platform is built to perform calibration experiments, and the hysteresis model is improved to compensate for the hysteresis phenomenon. The calibration results indicate that the sensor shows excellent loaded nonlinearity of 3.08% F.S., unloaded nonlinearity of 2.71% F.S., loaded repeatability of 2.48% F.S., unloaded repeatability of 1.89% F.S., and hysteresis of 1.9% F.S., at a compact probe size of 13.8 × 9.9 × 1.8 mm.
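The paper's mutual-inductance model for the planar spiral probes is not reproduced in the abstract; as a hedged reference point, a standard filament approximation treats each spiral as a set of concentric circular turns and sums Maxwell's formula for coaxial loops over all turn pairs. The turn radii and axial gap below are placeholders.

```python
import numpy as np
from scipy.special import ellipk, ellipe   # complete elliptic integrals, parameter m = k**2

MU0 = 4e-7 * np.pi   # vacuum permeability, H/m

def mutual_inductance_loops(a, b, d):
    """Maxwell's formula for two coaxial circular filaments of radii a, b at axial distance d (metres)."""
    k2 = 4.0 * a * b / ((a + b) ** 2 + d ** 2)
    k = np.sqrt(k2)
    return MU0 * np.sqrt(a * b) * ((2.0 / k - k) * ellipk(k2) - (2.0 / k) * ellipe(k2))

def planar_spiral_mutual(radii_1, radii_2, gap):
    """Approximate two planar spirals as concentric filaments and sum the pairwise terms."""
    return sum(mutual_inductance_loops(a, b, gap) for a in radii_1 for b in radii_2)

# Example with placeholder geometry: two 10-turn spirals, turn radii 1-5 mm, separated by 1 mm.
turns = np.linspace(1e-3, 5e-3, 10)
print(planar_spiral_mutual(turns, turns, 1e-3))
```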
{"title":"A novel magnetoelastic torque sensor with planar spiral coil probes for humanoid robot joints","authors":"Zijian Zhang , Zitao Wang , Ming Shao , Yangyang Dong , Fenglei Ni","doi":"10.1016/j.birob.2025.100229","DOIUrl":"10.1016/j.birob.2025.100229","url":null,"abstract":"<div><div>Humanoid robot joints require real-time torque detection to provide accurate force feedback information for the control system. To meet the measurement requirements and realize the miniaturization of the sensor, a torque sensor based on the magnetoelastic effect is developed, utilizing planar spiral coils as detection probes. In this work, a planar spiral coil mutual inductance calculation model is established to solve the mutual inductance coefficient, and the mechanical structure and circuit design of the sensor are completed. Finally, a torque loading platform is built to perform calibration experiments, and the hysteresis model is improved to compensate for the hysteresis phenomenon. The calibration results indicate that the sensor shows excellent loaded nonlinearity of 3.08%F.S., unloaded nonlinearity of 2.71%F.S., loaded repeatability of 2.48%F.S., unloaded repeatability of 1.89%F.S. and hysteresis of 1.9%F.S., at a compact probe size of 13.8<span><math><mrow><mo>×</mo><mn>9</mn><mo>.</mo><mn>9</mn><mo>×</mo></mrow></math></span>1.8 mm.</div></div>","PeriodicalId":100184,"journal":{"name":"Biomimetic Intelligence and Robotics","volume":"5 3","pages":"Article 100229"},"PeriodicalIF":0.0,"publicationDate":"2025-03-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144634131","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}