A novel distributed architecture for unmanned aircraft systems based on Robot Operating System 2
Lorenzo Bianchi, Daniele Carnevale, Fabio Del Frate, Roberto Masocco, Simone Mattogno, Fabrizio Romanelli, Alessandro Tenaglia
IET Cyber-systems and Robotics, 2 March 2023. DOI: 10.1049/csy2.12083

A novel distributed control architecture for unmanned aircraft systems (UASs), based on the Robot Operating System (ROS) 2 middleware and endowed with industrial-grade tools, is proposed, establishing a new standard for high-reliability distributed systems. The architecture was developed for an autonomous quadcopter as an inclusive solution, ranging from low-level sensor management and soft real-time operating system setup and tuning to perception, exploration, and navigation modules orchestrated by a finite-state machine. It builds on the scalability and soft real-time communication of ROS 2 while adding security and safety features, optimised implementations of localisation algorithms, and an innovative, flexible path planner for UASs. Finally, experimental results collected both in the laboratory and in a realistic environment show the effectiveness of the proposed architecture in terms of reliability, scalability, and flexibility.
Robust state estimation for uncertain linear discrete systems with d-step measurement delay and deterministic input signals
Yu Tian, Fanli Meng, Yao Mao, Junwei Gao, Huabo Liu
IET Cyber-systems and Robotics, 20 February 2023. DOI: 10.1049/csy2.12080

In this study, the state estimation problem for linear discrete systems with uncertain parameters, deterministic input signals and d-step measurement delay is investigated. A robust state estimator with an iterative form similar, and computational complexity comparable, to the Kalman filter is derived based on the state augmentation method and sensitivity penalisation of the innovation process. Steady-state properties of the robust estimator, such as boundedness and convergence, are discussed under the assumption that the system parameters are time invariant. Numerical simulation results show that, compared with the Kalman filter, the obtained estimator is more robust to modelling errors while maintaining good estimation accuracy.
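The state augmentation idea the abstract builds on is standard: stack the last d+1 states so that a d-step-delayed measurement becomes a current measurement of the augmented state. A minimal sketch with a plain Kalman filter (not the paper's robust estimator) on a scalar system, with illustrative parameter values:

```python
import numpy as np

# State augmentation for a d-step measurement delay (sketch with a plain
# Kalman filter, not the paper's robust estimator).
# System: x[k+1] = a*x[k] + u[k] + w,   y[k] = x[k-d] + v.
a, d, q, r = 0.95, 2, 0.01, 0.04
n = d + 1                                 # augmented state (x[k], ..., x[k-d])
A = np.zeros((n, n)); A[0, 0] = a         # propagate the newest component
A[1:, :-1] = np.eye(d)                    # shift the delay line
B = np.zeros(n); B[0] = 1.0
C = np.zeros((1, n)); C[0, -1] = 1.0      # measurement sees x[k-d]
Q = np.zeros((n, n)); Q[0, 0] = q

rng = np.random.default_rng(0)
x = np.zeros(n); xhat = np.zeros(n); P = np.eye(n)
for k in range(300):
    u = np.sin(0.05 * k)                  # deterministic input signal
    x = A @ x + B * u
    x[0] += rng.normal(0, np.sqrt(q))
    y = C @ x + rng.normal(0, np.sqrt(r))
    # predict
    xhat = A @ xhat + B * u
    P = A @ P @ A.T + Q
    # update with the delayed measurement
    S = C @ P @ C.T + r
    K = P @ C.T / S
    xhat = xhat + (K * (y - C @ xhat)).flatten()
    P = (np.eye(n) - K @ C) @ P
print(abs(xhat[0] - x[0]))                # small estimation error
```

The augmented filter has the familiar predict/update form; the paper's contribution is replacing the nominal update with one penalising sensitivity to parameter uncertainty.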
3D semantic map construction based on point cloud and image fusion
Huijun Li, Hailong Zhao, Bin Ye, Yu Zhang
IET Cyber-systems and Robotics, 16 February 2023. DOI: 10.1049/csy2.12078

Accurate and robust positioning and mapping are core functions of autonomous mobile robots, and the ability to analyse and understand scenes is an important criterion for their intelligence. Outdoors, most robots rely on GPS positioning; when the signal is weak, positioning errors interfere with the mapping results and make semantic map construction less robust. This research designs a semantic map construction system for large outdoor scenes that does not rely on GPS signals. A feature extraction scheme is designed around the sampling characteristics of the Livox-AVIA solid-state LiDAR. A factor-graph optimisation model couples frame poses with inertial measurement unit (IMU) pre-integrated poses, and a sliding window fuses the solid-state LiDAR and IMU data. The resulting laser-inertial odometry is then fused with camera target-detection results, and semantic information is assigned to the point cloud with reference to closest-point distance and curvature; semantic segmentation of the point cloud thus realises the construction of a 3D semantic map of outdoor scenes. Experiments verify that the laser-inertial odometry based on factor-graph optimisation achieves better positioning accuracy and lower overall cumulative error at turns, and that the 3D semantic map obtained on this basis performs well.
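One common way to attach semantic labels from camera detections to LiDAR points, in the spirit of the fusion step above, is to project the points into the image and tag those falling inside a detection box. This is a generic sketch (pinhole intrinsics assumed, extrinsics omitted, hypothetical boxes), not the paper's pipeline:

```python
import numpy as np

# Sketch: tag 3D points with a detection label by projecting them into the
# image plane (pinhole intrinsics assumed; camera-LiDAR extrinsics omitted).
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])

def label_points(points_cam, boxes):
    """points_cam: (N,3) in the camera frame; boxes: list of (label, u0, v0, u1, v1)."""
    labels = ["unknown"] * len(points_cam)
    uvw = (K @ points_cam.T).T                 # homogeneous projection
    uv = uvw[:, :2] / uvw[:, 2:3]
    for i, (u, v) in enumerate(uv):
        if points_cam[i, 2] <= 0:              # skip points behind the camera
            continue
        for name, u0, v0, u1, v1 in boxes:
            if u0 <= u <= u1 and v0 <= v <= v1:
                labels[i] = name
                break
    return labels

pts = np.array([[0.0, 0.0, 5.0],    # projects to the image centre
                [2.0, 0.0, 5.0]])   # projects right of centre
print(label_points(pts, [("car", 300, 220, 340, 260)]))  # ['car', 'unknown']
```

The paper additionally refines such label transfer using closest-point distance and curvature, which this sketch omits.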
Drift-free localisation using prior cross-source map for indoor low-light environments
Junyi Tao, Weichen Dai, Da Kong, Jiayan Wan, Bin He, Yu Zhang
IET Cyber-systems and Robotics, 16 February 2023. DOI: 10.1049/csy2.12081

In this study, a localisation system without cumulative errors is proposed. First, depth odometry is achieved using only the depth information from a depth camera. Then, cross-source registration of the point cloud against a prior map is realised by 3D particle filtering to obtain the pose of the point cloud relative to the map. Furthermore, the odometry results are fused with the point-cloud-to-map registration results, so the system operates effectively even when the map is incomplete. Multiple experiments on a self-recorded dataset demonstrate the effectiveness of the system for long-term localisation, localisation with an incomplete map, and localisation in low light. Compared with other methods, the proposed system achieves higher indoor localisation accuracy.
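The particle-filter registration step can be illustrated in its simplest form: hypothesise poses, weight each by how well the observation matches the map, and resample. The sketch below is a generic 1D toy (a single landmark and a range measurement, all values illustrative), far simpler than the paper's 3D cross-source registration:

```python
import numpy as np

# Generic particle-filter pose estimation sketch (1D translation for brevity;
# the paper estimates a full 3D pose against a cross-source map).
rng = np.random.default_rng(1)
map_landmark = 10.0                      # known landmark position in the map
true_pose = 3.0                          # robot position to be estimated
meas = map_landmark - true_pose + rng.normal(0, 0.1)     # noisy range reading

particles = rng.uniform(0, 20, 500)      # pose hypotheses
for _ in range(30):
    pred = map_landmark - particles                        # expected range
    w = np.exp(-0.5 * ((pred - meas) / 0.1) ** 2) + 1e-12  # likelihood weight
    w /= w.sum()
    idx = rng.choice(len(particles), len(particles), p=w)  # resample
    particles = particles[idx] + rng.normal(0, 0.05, len(particles))  # jitter

estimate = particles.mean()
print(estimate)   # close to true_pose = 3.0
```

Fusing such registration results with odometry, as the paper does, keeps the estimate smooth between map matches and bridges regions where the map is incomplete.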
Impedance learning adaptive super-twisting control of a robotic exoskeleton for physical human-robot interaction
Brahim Brahmi, Mohammad Habibur Rahman, Maarouf Saad
IET Cyber-systems and Robotics, 16 February 2023. DOI: 10.1049/csy2.12077

This study addresses two issues in the interaction of an upper-limb rehabilitation robot with individuals who have disabilities. The first is estimating the human's target position (TPH); the second is developing a robust adaptive impedance control scheme. To this end, a novel non-singular terminal sliding mode control combined with an adaptive super-twisting controller is developed, providing highly reliable, continuous tracking of the system's trajectories. The proposed adaptive control strategy reduces matched dynamic uncertainty while also attenuating chattering, the sliding mode's most prominent drawback. The proposed TPH is coupled with adaptive impedance control through a radial basis function neural network, which allows the robotic exoskeleton to track the desired impedance model. To validate the approach in real time, an exoskeleton robot was deployed under controlled experimental conditions. A comparison study shows that the proposed adaptive impedance approach outperforms traditional controllers.
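The super-twisting algorithm the abstract refers to has a standard textbook form: u = -k1·|s|^(1/2)·sign(s) + v with v̇ = -k2·sign(s), which drives the sliding variable s to zero despite a bounded matched disturbance. A minimal simulation sketch (generic form with illustrative gains, not the paper's full adaptive impedance scheme):

```python
import numpy as np

# Standard super-twisting algorithm sketch (generic form, not the paper's
# adaptive controller): drives the sliding variable s toward zero despite
# a bounded disturbance, with less chattering than plain sliding mode.
k1, k2, dt = 1.5, 1.1, 0.001
s, v = 1.0, 0.0
for k in range(20000):
    d = 0.3 * np.sin(0.01 * k)                  # bounded matched disturbance
    u = -k1 * np.sqrt(abs(s)) * np.sign(s) + v  # continuous sqrt term
    v += -k2 * np.sign(s) * dt                  # integral (twisting) term
    s += (u + d) * dt                           # sliding dynamics: s_dot = u + d
print(abs(s))   # small residual after convergence
```

Because the discontinuity sits under the integral, the control signal u itself is continuous, which is why super-twisting attenuates chattering relative to first-order sliding mode.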
Global path guided vehicle obstacle avoidance path planning with artificial potential field method
Yangde Chen, Peiliang Wang, Zichen Lin, Chenhao Sun
IET Cyber-systems and Robotics, 16 February 2023. DOI: 10.1049/csy2.12082

An artificial potential field method based on global path guidance (G-APF) is proposed to address the target-unreachability and local-minima problems of the conventional artificial potential field (APF) method in complex environments with dynamic obstacles. First, for the target-unreachability problem, global path attraction is added to the APF; second, an obstacle detection optimisation method is proposed, and the optimal virtual target point is selected via an evaluation function to mitigate the local-minima problem; finally, based on the obstacle detection optimisation method, the attractive and repulsive processes are improved so that the path passes through narrow channels smoothly and remains collision-free. Experiments show that, on Map 1, the method reduces total path corners by 40.8%, the number of path oscillations by 81.8%, and path length by 4.3%. It can be applied to vehicle obstacle-avoidance path planning in complex environments with dynamic obstacles.
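The conventional APF that G-APF builds on sums an attractive force toward the goal with repulsive forces from nearby obstacles and descends the resulting field. A textbook sketch with illustrative gains (this is the baseline scheme, not G-APF itself):

```python
import numpy as np

# Classic APF gradient-descent step (textbook form; G-APF adds global-path
# attraction and virtual target points on top of this baseline).
def apf_step(q, goal, obstacles, k_att=1.0, k_rep=100.0, rho0=2.0, step=0.05):
    f = k_att * (goal - q)                         # attractive force
    for obs in obstacles:
        diff = q - obs
        rho = np.linalg.norm(diff)
        if 0 < rho < rho0:                         # repulsion only within rho0
            f += k_rep * (1/rho - 1/rho0) / rho**3 * diff
    return q + step * f / max(np.linalg.norm(f), 1e-9)

q = np.array([0.0, 0.0])
goal = np.array([10.0, 0.0])
obstacles = [np.array([5.0, 0.5])]                 # obstacle just off the line
for _ in range(600):
    q = apf_step(q, goal, obstacles)
    if np.linalg.norm(q - goal) < 0.1:
        break
print(np.linalg.norm(q - goal))                    # robot detours and converges
```

With a head-on or boxed-in obstacle layout, this baseline stalls in a local minimum, which is exactly the failure mode the paper's virtual target points are designed to escape.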
A robust RGB-D visual odometry with moving object detection in dynamic indoor scenes
Xianglong Zhang, Haiyang Yu, Yan Zhuang
IET Cyber-systems and Robotics, 16 February 2023. DOI: 10.1049/csy2.12079

Simultaneous localisation and mapping (SLAM) is the basis for many robotic applications. As the front end of SLAM, visual odometry is mainly used to estimate the camera pose. In dynamic scenes, classical methods are degraded by dynamic objects and cannot achieve satisfactory results. To improve the robustness of visual odometry in dynamic scenes, this paper proposes a dynamic region detection method based on RGB-D images. First, all feature points on the RGB image are classified as dynamic or static, using a triangle constraint followed by the epipolar geometric constraint. Meanwhile, the depth image is clustered using the K-Means method. The classified feature points are mapped to the clustered depth image, and each cluster is labelled dynamic or static according to its number of dynamic feature points. Subsequently, a dynamic region mask for the RGB image is generated from the dynamic clusters in the depth image, and all feature points covered by the mask are removed. The remaining static feature points are used to estimate the camera pose. Finally, experimental results demonstrate the feasibility and performance of the method.
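The epipolar geometric constraint used for the dynamic/static classification can be illustrated directly: a static match should lie on its epipolar line x′ᵀFx = 0, so a large point-to-line distance flags a dynamic point. The fundamental matrix below is a hypothetical one for a pure horizontal translation with identity intrinsics, not one estimated from real data:

```python
import numpy as np

# Epipolar-constraint check sketch: for a hypothetical camera translating
# along x (identity intrinsics), F = [t]_x, so static matches keep the same
# image row while dynamic points drift off their epipolar line.
F = np.array([[0.0, 0.0,  0.0],
              [0.0, 0.0, -1.0],
              [0.0, 1.0,  0.0]])

def epipolar_distance(p1, p2, F):
    """Distance of p2 (homogeneous pixels) from the epipolar line F @ p1."""
    line = F @ p1
    return abs(line @ p2) / np.hypot(line[0], line[1])

static_pt  = (np.array([100.0, 50.0, 1.0]), np.array([120.0, 50.0, 1.0]))
dynamic_pt = (np.array([100.0, 50.0, 1.0]), np.array([120.0, 80.0, 1.0]))
thresh = 1.0   # pixels
print(epipolar_distance(*static_pt, F) < thresh,
      epipolar_distance(*dynamic_pt, F) >= thresh)   # True True
```

The paper applies a triangle constraint before this check; thresholding the epipolar distance alone cannot flag objects moving along the epipolar line, which is one reason for combining constraints.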
Sampling visual SLAM with a wide-angle camera for legged mobile robots
Guangyu Fan, Jiaxin Huang, Dingyu Yang, Lei Rao
IET Cyber-systems and Robotics, 4 January 2023. DOI: 10.1049/csy2.12074

Precise localisation and navigation are the two most important tasks for mobile robots, and visual simultaneous localisation and mapping (VSLAM) is widely used in their localisation systems. A wide-angle camera has a broad field of view and captures richer image information, so it is widely used on mobile robots, including legged robots. However, wide-angle cameras complicate the design of visual localisation systems compared with ordinary cameras, posing higher requirements and challenges for wide-angle VSLAM technologies. To resolve the distortion of wide-angle images and improve localisation accuracy, a sampling VSLAM based on a wide-angle camera model is proposed for legged mobile robots. Exploiting the predictability of a legged robot's periodic motion, images are sampled periodically, image blocks with clear texture are selected, and image details are enhanced; feature points are then extracted from these blocks and combined to obtain the feature points of the whole image. Finally, points on the incident rays through the normalised plane are selected as template points; the relationship between the template points and the images is established through the wide-angle camera model, and the pixel coordinates of the template points in the images, together with their descriptors, are computed. Experiments are conducted on the TUM datasets and with a quadruped robot. The results show that the trajectory error and translation error of the proposed method are reduced compared with the VINS-MONO, ORB-SLAM3 and Periodic SLAM systems.
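A common wide-angle camera model of the kind the abstract relies on is the equidistant fisheye model, where the image radius grows linearly with the angle from the optical axis (r = f·θ) rather than with tan θ as in a pinhole. A minimal projection sketch (the equidistant model is an assumption here; the paper's specific model may differ):

```python
import numpy as np

# Equidistant fisheye projection sketch (r = f * theta), a common wide-angle
# camera model; intrinsic values below are illustrative.
def project_equidistant(p, f=300.0, cx=640.0, cy=480.0):
    """Project a 3D point in the camera frame to pixel coordinates."""
    x, y, z = p
    theta = np.arctan2(np.hypot(x, y), z)   # angle from the optical axis
    r = f * theta                           # equidistant mapping
    phi = np.arctan2(y, x)                  # azimuth in the image plane
    return cx + r * np.cos(phi), cy + r * np.sin(phi)

u, v = project_equidistant(np.array([1.0, 0.0, 1.0]))
print(round(u, 1), round(v, 1))   # 45 deg off-axis -> r = 300 * pi/4
```

Unlike the pinhole model, this mapping stays finite as θ approaches 90°, which is what lets fisheye lenses capture fields of view near (or beyond) a hemisphere.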
Advances in legged robots control, perception and learning
Qiuguo Zhu, Rui Song, Jun Wu, Yamakita Masaki, Zhangguo Yu
IET Cyber-systems and Robotics, 30 December 2022. DOI: 10.1049/csy2.12075

This is the Institution of Engineering and Technology (IET) Cyber-systems and Robotics Special Issue on Advances in Legged Robots: Control, Perception and Learning.

Legged animals are found everywhere in nature. They can reach almost anywhere on Earth, adapting to uneven, discontinuous and obstructed environments, and their locomotion patterns are flexible and diverse. Imitating these animals, legged robots have potential advantages over wheeled or tracked vehicles in traversing rough and unstructured terrain. However, many challenges remain for legged systems in solving real-world technical problems.

Control, perception and learning are the key technologies in the field of legged robots. Control is the basis of stable and flexible locomotion; combined with a well-designed mechatronic platform, it enables excellent traversal of continuous stairs, discrete terrain and vertical obstacles. Control methods for legged robots mainly comprise model-based control and learning-based control. The former, after decades of development, has made robots markedly more flexible and stable; the latter is a newer approach combining artificial intelligence, exploring how robots can acquire new motor skills while interacting with the environment and achieve the expected motor abilities. Perception allows the robot to understand the world: autonomous navigation, behavioural decision-making and task operation all require environmental awareness and understanding, making perception an indispensable component of a legged robot. For example, different road surfaces require different gait modes, which is the most direct perceptual demand on a legged robot.

Paper 1 by Haochen Xu, paper 2 by Qiuyue Luo and paper 3 by Wenhan Cai investigate the control problems of biped robots; paper 4 by Linqi Ye studies leg–arm and wheel reconfiguration design and control strategies; and paper 5 by Jinmian Hou extends to multi-legged hexapod robots. Together, these papers discuss the design, control and strategy of legged robots.

In paper 1, Haochen Xu et al. study disturbance rejection for biped robots during walking and running using a control moment gyroscope (CMG). They use the CMG as an auxiliary stabilisation device for fully actuated biped robots and integrate it into the balance control framework. This method effectively helps the robot resist disturbances and remain stable.

In paper 2, Qiuyue Luo et al. develop a self-stabilised walking gait for humanoid robots based on an essential model with internal states. They extend the essential dynamic model to the full dynamic model of humanoid robots based on the zero-dynamics concept. By automatically adjusting the step timing and the landing position of the swing foot, following the robot's intrinsic dynamic characteristics, the humanoid robot achieves robust walking.
Squatting is a basic movement of bipedal robots, essential in actions like jumping or picking up objects. Due to the intrinsically complex dynamics of bipedal robots, perfect squatting motion requires high-performance motion planning and control algorithms. The standard academic solution combines model predictive control (MPC) with whole-body control (WBC), which is usually computationally expensive and difficult to implement on practical robots with limited computing resources. The real-time kinematic prediction (RKP) method is proposed, which considers upcoming reference motion trajectories and combines them with quadratic programming (QP)-based WBC. Since the WBC handles the full robot dynamics and various constraints, the RKP only needs to adopt the linear kinematics in the robot's task space and to softly constrain the desired accelerations. The computational cost of the derived closed-form RKP is therefore greatly reduced. The RKP method is verified in simulation on a heavily loaded bipedal robot. The robot performs rapid, large-amplitude squatting motions that require close-to-limit torque outputs. Compared with the conventional QP-based WBC method, the proposed method exhibits high adaptability to rough planning, which implies much less user interference in the robot's motion planning. Furthermore, like MPC, the proposed method can prepare for upcoming motions in advance but requires much less computation time.
{"title":"Squat motion of a bipedal robot using real-time kinematic prediction and whole-body control","authors":"Wenhan Cai, Qingkai Li, Songrui Huang, Hongjin Zhu, Yong Yang, Mingguo Zhao","doi":"10.1049/csy2.12073","DOIUrl":"10.1049/csy2.12073","url":null,"abstract":"<p>Squatting is a basic movement of bipedal robots, essential in actions like jumping or picking up objects. Due to the intrinsically complex dynamics of bipedal robots, perfect squatting motion requires high-performance motion planning and control algorithms. The standard academic solution combines model predictive control (MPC) with whole-body control (WBC), which is usually computationally expensive and difficult to implement on practical robots with limited computing resources. The real-time kinematic prediction (RKP) method is proposed, which considers upcoming reference motion trajectories and combines them with quadratic programming (QP)-based WBC. Since the WBC handles the full robot dynamics and various constraints, the RKP only needs to adopt the linear kinematics in the robot's task space and to softly constrain the desired accelerations. The computational cost of the derived closed-form RKP is therefore greatly reduced. The RKP method is verified in simulation on a heavily loaded bipedal robot. The robot performs rapid, large-amplitude squatting motions that require close-to-limit torque outputs. Compared with the conventional QP-based WBC method, the proposed method exhibits high adaptability to rough planning, which implies much less user interference in the robot's motion planning. 
Furthermore, like MPC, the proposed method can prepare for upcoming motions in advance but requires much less computation time.</p>","PeriodicalId":34110,"journal":{"name":"IET Cybersystems and Robotics","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2022-12-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/csy2.12073","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42465699","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
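When a QP-based WBC uses only soft (quadratic) constraints on the desired task-space accelerations, as the squat-motion abstract describes, the QP collapses to a weighted least-squares problem with a closed-form solution. The minimal sketch below illustrates that reduction on a toy two-joint model; the Jacobian, bias term, regularisation weight, and function name are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch: soft-constraint task-space control as weighted least squares.
# Task dynamics: a_task = J @ qdd + dJdq. With only quadratic penalties,
# min_qdd ||J qdd + dJdq - a_des||^2 + reg ||qdd||^2 has a closed form.
import numpy as np

def soft_wbc_accel(J, dJdq, a_des, reg=1e-3):
    """Return joint accelerations minimising the soft task-tracking cost."""
    n = J.shape[1]
    H = J.T @ J + reg * np.eye(n)   # Gauss-Newton Hessian of the cost
    g = J.T @ (a_des - dJdq)        # linear term from the tracking error
    return np.linalg.solve(H, g)    # closed-form solution, no QP solver

# Toy example: one vertical (squat) task, two joints (hypothetical numbers).
J = np.array([[1.0, 0.5]])          # task Jacobian
dJdq = np.array([0.0])              # velocity-product bias term
a_des = np.array([-2.0])            # accelerate downwards into the squat
qdd = soft_wbc_accel(J, dJdq, a_des)
print(qdd)                          # ≈ [-1.6, -0.8]
```

Because the solve is a small dense linear system rather than an iterative QP, it runs in microseconds, which is consistent with the abstract's point that the closed-form derivation greatly reduces computational cost; hard constraints (torque or joint limits) would still require a full QP.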