A self-supervised learning approach to acquire representation of concave object manipulation with sparse tactile sensing
Daiki Takamori, Yuichi Kobayashi, Tomohiro Hayakawa, Kosuke Hara, Dotaro Usui
Pub Date: 2026-01-13 | DOI: 10.1016/j.robot.2025.105319
Tactile sensing is essential for improving robotic manipulation, particularly when handling transparent or deformable objects. However, effectively leveraging tactile observations remains a key challenge. In this study, we propose a semi-self-supervised complementary learning framework that integrates visual input with sparse tactile data collected through probing actions. Unlike previous approaches that rely on high-resolution tactile sensors or detailed 3D reconstructions, our method employs sparse tactile sensing to construct object representations via unsupervised learning. The proposed framework enables both complementary and independent recognition through vision and tactile perception, allowing the robot to perform additional probing actions to verify whether its hand has actually reached inside an object. We trained and evaluated our method on opening exploration tasks involving semi-transparent and deformable objects, using a relatively small real-world dataset collected with a robotic hand equipped with a simple tactile sensor.
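As a concrete illustration of the complementary vision–tactile verification described above, the following Python sketch fuses a vision-based belief that the hand has entered an opening with the outcomes of a few tactile probes via a simple Bayesian update. The prior and probe likelihoods are invented for illustration; this is not the authors' learned representation or probing policy.

```python
import numpy as np

# Minimal sketch (not the authors' method): fuse a vision-based belief that the
# hand is inside an object opening with the outcomes of sparse tactile probes.
# The probe likelihoods below are illustrative assumptions.

def fuse_probe(prior_inside: float, contact: bool,
               p_contact_if_inside: float = 0.15,
               p_contact_if_outside: float = 0.85) -> float:
    """Bayesian update of P(hand inside opening) after one probing action.

    A probe that hits material (contact=True) is assumed to be more likely when
    the hand is actually outside the opening (pushing against the rim or wall)."""
    if contact:
        like_in, like_out = p_contact_if_inside, p_contact_if_outside
    else:
        like_in, like_out = 1 - p_contact_if_inside, 1 - p_contact_if_outside
    num = like_in * prior_inside
    den = num + like_out * (1 - prior_inside)
    return num / den

# Vision alone is ambiguous for a semi-transparent container: start at 0.6.
belief = 0.6
for contact in [False, False, True]:      # three probing actions
    belief = fuse_probe(belief, contact)
    print(f"contact={contact}  P(inside)={belief:.3f}")
```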
{"title":"A self-supervised learning approach to acquire representation of concave object manipulation with sparse tactile sensing","authors":"Daiki Takamori , Yuichi Kobayashi , Tomohiro Hayakawa , Kosuke Hara , Dotaro Usui","doi":"10.1016/j.robot.2025.105319","DOIUrl":"10.1016/j.robot.2025.105319","url":null,"abstract":"<div><div>Tactile sensing is essential for improving robotic manipulation, particularly when handling transparent or deformable objects. However, effectively leveraging tactile observations remains a key challenge. In this study, we propose a semi-self-supervised complementary learning framework that integrates visual input with sparse tactile data collected through probing actions. Unlike previous approaches that rely on high-resolution tactile sensors or detailed 3D reconstructions, our method employs sparse tactile sensing to construct object representations via unsupervised learning. The proposed framework enables both complementary and independent recognition through vision and tactile perception, allowing the robot to perform additional probing actions to verify whether its hand has actually reached inside an object. We trained and evaluated our method on opening exploration tasks involving semi-transparent and deformable objects, using a relatively small real-world dataset collected with a robotic hand equipped with a simple tactile sensor.</div></div>","PeriodicalId":49592,"journal":{"name":"Robotics and Autonomous Systems","volume":"198 ","pages":"Article 105319"},"PeriodicalIF":5.2,"publicationDate":"2026-01-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145978329","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Task allocation for heterogeneous multi-AUV system with rechargeable docking stations: A multitask bundling auction approach
Song Han, Jiaao Zhao, Xinbin Li, Liwen Jia, Zhixin Liu
Pub Date: 2026-01-12 | DOI: 10.1016/j.robot.2026.105356
This study proposes a multitask bundling auction task allocation algorithm for a heterogeneous multi-autonomous-underwater-vehicle (AUV) system with rechargeable docking stations. First, a distributed multitask bundling auction model, in which multiple AUVs are allowed to win bids in one auction round, is constructed. Meanwhile, the constructed model allows each winning AUV to take on multiple tasks, thereby greatly improving the auction efficiency. In the bidding phase, each AUV can autonomously generate a multitask bundle in which the continuity of task execution is effectively considered. Therefore, the utility of the multi-AUV system and the distributed allocation efficiency can be greatly improved. Second, a Crab Trap Artificial Intelligence (CTAI) algorithm, which mimics the process of catching crabs with crab traps, is proposed to solve the constructed multitask bundle generation problem effectively. Meanwhile, the continuity of task execution and the recharging timing for each AUV are jointly optimized by the proposed CTAI algorithm, which efficiently generates the most competitive multitask bundle for each AUV. Moreover, a competition balance mechanism, which effectively avoids the extra auction rounds caused by popular and unpopular tasks, is proposed to further improve the auction efficiency. The numerical results validate the superiority of the proposed algorithm.
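To make the bundling-auction mechanics concrete, here is a minimal Python sketch of repeated auction rounds in which several AUVs bid greedily built task bundles and multiple non-conflicting winners are awarded per round. The greedy bundle builder, reward constant, and random 2D positions are illustrative stand-ins; the paper's CTAI metaheuristic, recharging logic, and competition balance mechanism are not reproduced here.

```python
import numpy as np

# Illustrative bundling auction: each AUV proposes one task bundle per round,
# and the award phase accepts several non-overlapping bundles at once.

rng = np.random.default_rng(0)
tasks = {i: rng.uniform(0, 100, 2) for i in range(8)}        # task positions
auvs = {k: rng.uniform(0, 100, 2) for k in range(3)}         # AUV positions

def build_bundle(pos, open_tasks, max_len=3):
    """Greedy nearest-neighbour bundle; returns (task ids, total travel cost)."""
    bundle, cost, cur = [], 0.0, pos
    cand = set(open_tasks)
    while cand and len(bundle) < max_len:
        t = min(cand, key=lambda i: np.linalg.norm(tasks[i] - cur))
        cost += np.linalg.norm(tasks[t] - cur)
        cur = tasks[t]
        bundle.append(t)
        cand.remove(t)
    return bundle, cost

open_tasks, assignment = set(tasks), {}
while open_tasks:
    # bidding phase: each unassigned AUV proposes one bundle with a utility bid
    bids = []
    for k, pos in auvs.items():
        if k in assignment:
            continue
        bundle, cost = build_bundle(pos, open_tasks)
        bids.append((len(bundle) * 50.0 - cost, k, bundle))   # reward - travel cost
    if not bids:
        break
    # award phase: multiple winners per round, skipping conflicting bundles
    claimed = set()
    for utility, k, bundle in sorted(bids, reverse=True):
        if utility <= 0 or claimed & set(bundle):
            continue
        assignment[k] = bundle
        claimed |= set(bundle)
        open_tasks -= set(bundle)
    if not claimed:
        break                                                 # no profitable bids left

print(assignment)
```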
{"title":"Task allocation for heterogeneous multi-AUV system with rechargeable docking stations: A multitask bundling auction approach","authors":"Song Han , Jiaao Zhao , Xinbin Li , Liwen Jia , Zhixin Liu","doi":"10.1016/j.robot.2026.105356","DOIUrl":"10.1016/j.robot.2026.105356","url":null,"abstract":"<div><div>This study proposes a multitask bundling auction task allocation algorithm for heterogeneous multiple autonomous underwater vehicle (AUV) system with rechargeable docking stations. First, a distributed multitask bundling auction model, where multiple AUVs are allowed to win the bid in one auction round, is constructed. Meanwhile, the constructed model allows each winning AUV to achieve multiple tasks, thereby greatly improving the auction efficiency. In the bidding phase, each AUV can autonomously generate a multitask bundle, where the continuity of task execution can be effectively considered. Therefore, the utility of the multi-AUV system and the distributed allocation efficiency can be greatly improved. Second, a Crab Trap Artificial Intelligence (CTAI) algorithm, which mimics the process of catching crabs with crab traps, is proposed to effectively solve the particular constructed multitask bundle generation problem. Meanwhile, the continuity of task execution and the recharging timing for the AUV are comprehensively optimized by the proposed CTAI algorithm, which can efficiently generate the most competitive multitask bundle for each AUV. Moreover, a competition balance mechanism, that can effectively avoid the extra auction rounds caused by popular and unpopular tasks, is proposed to further improve the auction efficiency. The numerical results validate the superiority of the proposed algorithm.</div></div>","PeriodicalId":49592,"journal":{"name":"Robotics and Autonomous Systems","volume":"198 ","pages":"Article 105356"},"PeriodicalIF":5.2,"publicationDate":"2026-01-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145978325","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Spiking control of dielectric elastomer actuators
Lukas Sohlbach, Fernando Pérez-Peña, Karsten Schmidt
Pub Date: 2026-01-09 | DOI: 10.1016/j.robot.2026.105353
Rigid robots are highly specialised and can perform tasks with incredible precision. In contrast, soft robots provide a promising solution for creating robotic systems that are inherently better suited for unstructured and dynamic environments. Artificial muscles comprise one of the core components of soft robots. Among them, dielectric elastomer actuators (DEAs) represent the technology that comes closest to the capabilities of a natural muscle. However, their viscoelastic effects may limit their applicability and are the main reason why suitable control methods are required. Thus, the objective of this work is to investigate bioinspired spiking closed-loop control systems. In doing so, the research takes a step towards creating true soft robots that are bioinspired in all of their subsystems. A spiking neural network (SNN) is developed that comprises the main part of the controller and whose output is used as the control value. All information inside the controller is represented via spikes, and the controller is implemented on neuromorphic hardware. During the validation, the general functionality was proven and a frequency-dependent tracking performance was observed. In a frequency range comparable to other works (≤ 0.5 Hz), the qualitative evaluation shows good tracking performance even with a sinusoidal input.
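The sketch below illustrates the general idea of spike-based closed-loop control: the tracking error is rate-coded by two integrate-and-fire neurons (one per error sign) and the decoded spikes increment the actuator command, here applied to a toy first-order plant. All neuron parameters, gains, and the plant model are assumptions for illustration and do not reflect the paper's SNN architecture, neuromorphic implementation, or DEA dynamics.

```python
import numpy as np

# Spike-coded error feedback on a toy first-order plant (illustrative only).

dt, v_th = 1e-3, 1.0                       # time step [s], spiking threshold
gain_in, gain_out = 40.0, 0.02             # error->input current, spike->command gains

def if_step(v, i_in):
    """One Euler step of a non-leaky integrate-and-fire neuron; returns (v, spiked)."""
    v = v + dt * i_in
    if v >= v_th:
        return 0.0, 1                      # reset after emitting a spike
    return v, 0

y, u = 0.0, 0.0                            # plant output and integrated command
v_pos = v_neg = 0.0                        # membrane potentials for +/- error channels
ref = 1.0                                  # step reference
for k in range(4000):
    e = ref - y
    v_pos, s_pos = if_step(v_pos, gain_in * max(e, 0.0))
    v_neg, s_neg = if_step(v_neg, gain_in * max(-e, 0.0))
    u += gain_out * (s_pos - s_neg)        # decode spikes by integration
    y += dt * (u - y) / 0.1                # toy plant with 0.1 s time constant
    if k % 1000 == 0:
        print(f"t={k*dt:.1f}s  y={y:.3f}")
```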
{"title":"Spiking control of dielectric elastomer actuators","authors":"Lukas Sohlbach , Fernando Pérez-Peña , Karsten Schmidt","doi":"10.1016/j.robot.2026.105353","DOIUrl":"10.1016/j.robot.2026.105353","url":null,"abstract":"<div><div>Rigid robots are highly specialised and can perform tasks with incredible precision. In contrast, soft robots provide a promising solution for creating robotic systems that are inherently better suited for unstructured and dynamic environments. Artificial muscles comprise one of the core components of soft robots. Among them, dielectric elastomer actuators (DEAs) represent the technology that comes closest to the capabilities of a natural muscle. However, their viscoelastic effects may limit the applicability and represent the main reason why suitable control methods are required. Thus, the objective of this work is to have a look at bioinspired spiking closed-loop control systems. By doing so, the research attempts to take a step towards creating true soft robots, which are bioinspired in all systems. A spiking neural network (SNN) is developed that comprised the main part of the controller and whose output is used as the control value. All information inside the controller was represented via spikes and the controller was implemented on neuromorphic hardware. During the validation, the general functionality was proven and a frequency-dependent tracking performance was observed. In a frequency range comparable to other works (≤ 0.5 Hz), the qualitative evaluation shows a good tracking performance even with a sinusoidal input.</div></div>","PeriodicalId":49592,"journal":{"name":"Robotics and Autonomous Systems","volume":"198 ","pages":"Article 105353"},"PeriodicalIF":5.2,"publicationDate":"2026-01-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145978330","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Hybrid attention-guided RRT*: Learning spatial sampling priors for accelerated path planning
Asmaa Loulou, Mustafa Unel
Pub Date: 2026-01-09 | DOI: 10.1016/j.robot.2026.105338
Sampling-based planners such as RRT* are widely used for motion planning in high-dimensional and complex environments. However, their reliance on uniform sampling often leads to slow convergence and inefficiency, especially in scenarios with narrow passages or long-range dependencies. To address this, we propose HAGRRT*, a Hybrid Attention-Guided RRT* algorithm that learns to generate spatially informed sampling priors. Our method introduces a new neural architecture that fuses multi-scale convolutional features with a lightweight cross-attention mechanism, explicitly conditioned on the start and goal positions. These features are decoded via a DPT-inspired module to produce 2D probability maps that guide the sampling process. Additionally, we propose an obstacle-aware loss function that penalizes disconnected and infeasible predictions, further encouraging the network to focus on traversable, goal-directed regions. Extensive experiments on both structured (maze) and unstructured (forest) environments show that HAGRRT* achieves significantly faster convergence and improved path quality compared to both classical RRT* and recent deep-learning-guided variants. Our method consistently requires fewer iterations and samples and generalizes across varying dataset types. In structured scenarios, our method achieves an average reduction of 39.6% in the number of samples and an average reduction of 24.4% in planning time compared to recent deep learning methods. On unstructured forest maps, our method reduces the number of samples by 71.5% and planning time by 81.7% compared to recent deep learning methods, and improves the success rate from 67% to 93%. These results highlight the robustness, efficiency, and generalization ability of our approach across a wide range of planning environments.
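A learned sampling prior of this kind is typically used inside the planner as a biased sampler. The sketch below assumes the network's output is a normalized 2D probability map and mixes map-guided and uniform sampling; the hand-made "corridor" map and the bias value are placeholders, not HAGRRT*'s actual predictions or integration details.

```python
import numpy as np

# Biased sampling from a predicted probability map, as used by learned RRT*
# variants. The map here is a synthetic stand-in for the network output.

rng = np.random.default_rng(1)
H, W = 64, 64
prob_map = np.full((H, W), 1e-3)
prob_map[20:44, 10:54] = 1.0               # pretend "corridor" highlighted by the net
prob_map /= prob_map.sum()

def sample_point(prob_map, bias=0.8):
    """Return a continuous (x, y) sample, guided by the map with probability `bias`."""
    H, W = prob_map.shape
    if rng.random() < bias:
        idx = rng.choice(H * W, p=prob_map.ravel())
        r, c = divmod(idx, W)
    else:
        r, c = rng.integers(H), rng.integers(W)
    # jitter inside the chosen cell so samples are not grid-locked
    return c + rng.random(), r + rng.random()

samples = np.array([sample_point(prob_map) for _ in range(1000)])
inside = ((samples[:, 1] >= 20) & (samples[:, 1] < 44) &
          (samples[:, 0] >= 10) & (samples[:, 0] < 54)).mean()
print(f"fraction of samples inside the highlighted region: {inside:.2f}")
```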
{"title":"Hybrid attention-guided RRT*: Learning spatial sampling priors for accelerated path planning","authors":"Asmaa Loulou , Mustafa Unel","doi":"10.1016/j.robot.2026.105338","DOIUrl":"10.1016/j.robot.2026.105338","url":null,"abstract":"<div><div>Sampling-based planners such as RRT* are widely used for motion planning in high-dimensional and complex environments. However, their reliance on uniform sampling often leads to slow convergence and inefficiency, especially in scenarios with narrow passages or long-range dependencies. To address this, we propose HAGRRT*, a Hybrid Attention-Guided RRT* algorithm that learns to generate spatially informed sampling priors. Our method introduces a new neural architecture that fuses multi-scale convolutional features with a lightweight cross-attention mechanism, explicitly conditioned on the start and goal positions. These features are decoded via a DPT-inspired module to produce 2D probability maps that guide the sampling process. Additionally, we propose an obstacle-aware loss function that penalizes disconnected and infeasible predictions which further encourages the network to focus on traversable, goal-directed regions. Extensive experiments on both structured (maze) and unstructured (forest) environments show that HAGRRT* achieves significantly faster convergence and improved path quality compared to both classical RRT* and recent deep-learning guided variants. Our method consistently requires fewer iterations and samples and is able to generalize across varying dataset types. On structured scenarios, our method achieves an average reduction of <strong>39.6%</strong> in the number of samples and an average of <strong>24.4%</strong> reduction in planning time compared to recent deep learning methods. On unstructured forest maps, our method reduces the number of samples by <strong>71.5%</strong>, and planning time by <strong>81.7%</strong> compared to recent deep learning methods, and improves the success rate from <strong>67%</strong> to <strong>93%</strong>. These results highlight the robustness, efficiency, and generalization ability of our approach across a wide range of planning environments.</div></div>","PeriodicalId":49592,"journal":{"name":"Robotics and Autonomous Systems","volume":"198 ","pages":"Article 105338"},"PeriodicalIF":5.2,"publicationDate":"2026-01-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145978328","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An Efficient Geometry-Informed Inverse Kinematics of a 7 DOF Cable-Driven Manipulator with Non-Sphere Shoulder and Wrist
Zhiwei Wu, Lei Yan, Tianhong Cheng, Wenfu Xu, Bin Liang
Pub Date: 2026-01-08 | DOI: 10.1016/j.robot.2026.105335
The cable-driven redundant manipulator (CDRM) is characterized by its light weight, low inertia, and inherent compliance, enabling a wide range of applications in fields such as home services and medical rehabilitation. However, because of its complicated cable drive and transmission mechanism, additional coupled cable kinematics is introduced into the inverse kinematics compared with a traditional redundant manipulator. Further, as the number of coupled equivalent joints of the CDRM increases, it becomes challenging to obtain an efficient and stable inverse kinematics solution. In this paper, we propose an efficient geometry-informed inverse kinematics method that combines a geometry-based analytical approach with a gradient-based numerical approach. First, the CDRM with 11 equivalent kinematic joints is reconstructed as a 7-DOF manipulator without joint offset. Based on the geometric characteristics, the analytical inverse kinematics of the reconstructed offset-free manipulator is derived to provide physically explainable initial values, located in the approximate solution space, for the numerical approach. Several numerical results demonstrate that our method inherits the advantages of the analytical approach, achieving accurate IK solutions while improving computational efficiency and increasing the number of feasible solutions. It also addresses the divergence issue caused by poor selection of initial values in the numerical approach. Furthermore, the solution space can be comprehensively exploited by intuitively adjusting the arm-shape parameters and optimizing the manipulator’s configuration in order to avoid surrounding obstacles and optimize the cable-tension distribution.
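The core "analytical seed + numerical refinement" idea can be shown on a much simpler system. In the sketch below, the geometric solution of an offset-free planar 3-link arm initializes damped-least-squares iterations on a model with a small elbow offset; link lengths, the offset, and all gains are illustrative, and the CDRM's cable coupling and 11-joint reconstruction are not modeled.

```python
import numpy as np

# Analytical initial guess from a simplified offset-free model, refined by
# damped least squares (DLS) on the "true" kinematics (illustrative only).

L1, L2, L3, OFF = 0.5, 0.4, 0.2, 0.03      # true model has a small elbow offset [m]

def fk(q):
    """True forward kinematics: returns (x, y, phi) of the end effector."""
    q1, q2, q3 = q
    x = L1*np.cos(q1) + OFF*np.cos(q1+q2+np.pi/2) + L2*np.cos(q1+q2) + L3*np.cos(q1+q2+q3)
    y = L1*np.sin(q1) + OFF*np.sin(q1+q2+np.pi/2) + L2*np.sin(q1+q2) + L3*np.sin(q1+q2+q3)
    return np.array([x, y, q1 + q2 + q3])

def analytic_seed(target):
    """Geometric 2R solution of the offset-free model (elbow-down branch)."""
    x, y, phi = target
    wx, wy = x - L3*np.cos(phi), y - L3*np.sin(phi)       # wrist position
    c2 = np.clip((wx**2 + wy**2 - L1**2 - L2**2) / (2*L1*L2), -1.0, 1.0)
    q2 = np.arccos(c2)
    q1 = np.arctan2(wy, wx) - np.arctan2(L2*np.sin(q2), L1 + L2*np.cos(q2))
    return np.array([q1, q2, phi - q1 - q2])

def dls_refine(q, target, lam=0.05, iters=50):
    for _ in range(iters):
        err = target - fk(q)
        err[2] = np.arctan2(np.sin(err[2]), np.cos(err[2]))   # wrap orientation error
        if np.linalg.norm(err) < 1e-8:
            break
        J = np.zeros((3, 3))                                  # numerical Jacobian
        for j in range(3):
            dq = np.zeros(3)
            dq[j] = 1e-6
            J[:, j] = (fk(q + dq) - fk(q)) / 1e-6
        q = q + np.linalg.solve(J.T @ J + lam**2*np.eye(3), J.T @ err)
    return q

target = np.array([0.7, 0.3, 0.4])                            # (x, y, phi)
q0 = analytic_seed(target)
q = dls_refine(q0, target)
print("seed error   :", np.linalg.norm(target - fk(q0)))
print("refined error:", np.linalg.norm(target - fk(q)))
```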
{"title":"An Efficient Geometry-Informed Inverse Kinematics of a 7 DOF Cable-Driven Manipulator with Non-Sphere Shoulder and Wrist","authors":"Zhiwei Wu , Lei Yan , Tianhong Cheng , Wenfu Xu , Bin Liang","doi":"10.1016/j.robot.2026.105335","DOIUrl":"10.1016/j.robot.2026.105335","url":null,"abstract":"<div><div>The cable-driven redundant manipulator (CDRM) is characterized by its lightweight, low inertia, and inherent compliance, enabling a wide range of applications in fields such as home services and medical rehabilitation. However, due to its complicated cable drive and transmission mechanism, compared with traditional redundant manipulator, the additional coupling cable kinematics is introduced into the inverse kinematics. Further, as the number of coupled equivalent joints of CDRM increases, it becomes challenging to obtain an efficient as well as stable inverse kinematics solution. In this paper, we propose an efficient geometry-informed inverse kinematics method by combining the geometry-based analytical approach and gradient-based numerical approach. First, the CDRM with 11 equivalent kinematic joints is reconstructed into a 7-DOF manipulator without joint offset. Based on the geometric characteristics, the analytical inverse kinematics of the reconstructed offset-free manipulator is derived to provide physically explainable iterative initial values in approximate solution space for numerical approach. Several numerical calculation results demonstrate that our method inherits the advantages of analytical approach, achieving accurate IK solutions, and improving the computational efficiency and the number of feasible solutions. Additionally, it also addresses the divergence issue resulting from irrational selection of initial values in numerical approach. Furthermore, the solution space can be comprehensively exploited by intuitively adjusting the arm-shape parameters and optimizing the manipulator’s configuration, in order to avoid surrounding obstacles, and optimize cable-tension distribution.</div></div>","PeriodicalId":49592,"journal":{"name":"Robotics and Autonomous Systems","volume":"198 ","pages":"Article 105335"},"PeriodicalIF":5.2,"publicationDate":"2026-01-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145978332","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Obstacle crossing in revolute and prismatic knee underactuated biped robots
Krishnendu Roy, R. Prasanth Kumar
Pub Date: 2026-01-08 | DOI: 10.1016/j.robot.2026.105340
Obstacle crossing is an important ability for biped and humanoid robots designed to traverse unstructured terrain. We consider the problem of determining the maximum (a) height, (b) width, (c) cross-sectional area, (d) thin vertical barrier height, and (e) square area of the obstacle that an underactuated biped robot with point feet can cross while walking slowly. Two different biped robot configurations are compared for obstacle crossing: revolute knee and prismatic knee. The path needed to overcome the obstacle without touching it is determined with the help of a binary occupancy grid in the sagittal plane and genetic algorithm (GA)-based maximization for each of the five cases, considering both thin and thick links for the biped robots. The determined collision-free path for obstacle crossing is implemented as a trajectory and demonstrated in dynamic simulation in the MuJoCo environment. In order to control the position of the zero moment point (ZMP) and the ground projection of the center of mass for stability, a reaction wheel in the torso is utilized. It is observed that increasing the thickness of the biped robot links generally reduces the maximum size of the obstacle that can be crossed. Further, the prismatic-knee biped robot performs better than the revolute-knee biped robot in crossing large obstacles, especially with thick links. Experiments on a prismatic-knee biped robot further validate the results of the GA and MuJoCo simulations.
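The feasibility test at the heart of such a search can be sketched as a collision check of a candidate swing-foot trajectory against a binary occupancy grid, wrapped in a crude random search standing in for the GA. Grid resolution, obstacle size, and the half-sine foot arc below are illustrative assumptions; the paper instead maximizes obstacle dimensions with a full GA and validates the result dynamically.

```python
import numpy as np

# Occupancy-grid feasibility test for a sagittal-plane swing-foot path,
# plus a random search for the lowest apex that still clears the obstacle.

RES = 0.01                                              # 1 cm grid cells
grid = np.zeros((60, 120), dtype=bool)                  # rows = z, cols = x
grid[:25, 50:70] = True                                 # 0.25 m tall, 0.2 m wide box

def collision_free(path_xz, grid, res=RES):
    for x, z in path_xz:
        c, r = int(x / res), int(z / res)
        if r < 0 or c < 0 or r >= grid.shape[0] or c >= grid.shape[1]:
            return False
        if grid[r, c]:
            return False
    return True

def foot_arc(step_len, apex, n=200):
    """Half-sine swing-foot trajectory starting at x = 0.3 m."""
    s = np.linspace(0.0, 1.0, n)
    return np.stack([0.3 + step_len * s, apex * np.sin(np.pi * s)], axis=1)

rng = np.random.default_rng(2)
best = None
for _ in range(500):                                    # crude stand-in for the GA
    apex = rng.uniform(0.0, 0.55)
    if (best is None or apex < best) and collision_free(foot_arc(0.6, apex), grid):
        best = apex
print(f"lowest feasible swing apex: {best:.3f} m")
```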
{"title":"Obstacle crossing in revolute and prismatic knee underactuated biped robots","authors":"Krishnendu Roy , R. Prasanth Kumar","doi":"10.1016/j.robot.2026.105340","DOIUrl":"10.1016/j.robot.2026.105340","url":null,"abstract":"<div><div>Obstacle crossing is an important ability in biped and humanoid robots that are designed to traverse unstructured terrain. We consider the problem of determining the maximum (a) height, (b) width, (c) cross-sectional area, (d) thin vertical barrier height, and (e) square area of the obstacle that an underactuated biped robot with point-feet can cross while walking slowly. Two different biped robot configurations are compared for obstacle crossing: revolute knee and prismatic knee. The path needed to overcome the obstacle without touching it is determined with the help of binary occupancy grid in the sagittal plane and using genetic algorithm based maximization for each of the five cases, considering thin links as well as thick links for the biped robots. The determined collision free path for obstacle crossing is implemented as a trajectory and demonstrated in dynamic simulation in Mujoco simulation environment. In order to control the position of zero moment point (ZMP) and the ground projection of center of mass for stability, a reaction wheel in the torso is utilized. It is observed that increasing the thicknesses of the biped robot links in general has an effect of reducing the maximum size of the obstacle that can be crossed. Further, prismatic knee biped robot performs better than revolute knee biped robot in crossing large obstacles, especially with thick links. Experiments on a prismatic-knee biped robot further validate the results of GA and MuJoCo simulations.</div></div>","PeriodicalId":49592,"journal":{"name":"Robotics and Autonomous Systems","volume":"198 ","pages":"Article 105340"},"PeriodicalIF":5.2,"publicationDate":"2026-01-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145939678","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
RoCeDiRNet-3DoF: Robust center direction network for cloth grasping point localization in complex scenes
Jiaxiang Luo, Yufan Hu
Pub Date: 2026-01-08 | DOI: 10.1016/j.robot.2026.105328
Cloth grasping poses a fundamental challenge in robotics and computer vision, and constitutes a key capability for service robots. Accurate and robust localization of fabric grasping points is a critical prerequisite for efficient and dexterous fabric manipulation. While recent deep learning approaches have achieved promising results on standard benchmarks, their performance still degrades substantially in complex scenarios involving severe folding, occlusion, and unknown backgrounds. This limitation primarily arises from insufficient global contextual modeling in existing dense regression frameworks, which rely heavily on local feature extraction while lacking the ability to capture the global dependencies crucial for understanding fabric deformation patterns. To address these challenges, we propose RoCeDiRNet-3DoF, a novel framework built upon an InceptionNeXt encoder and a Wavelet Multiscale Convolutional Attention Decoder (WMCAD). WMCAD adopts a three-stage hierarchical architecture, incorporating Wavelet Convolution Blocks for global feature extraction, Dynamic Wavelet Upsampling Blocks to preserve semantic details during interpolation, and Multi-scale Mixed Attention Gates for effective cross-layer feature fusion. By leveraging WMCAD’s enhanced global feature modeling, RoCeDiRNet-3DoF achieves superior performance in challenging scenarios, attaining state-of-the-art (SOTA) results on the ViCoS dataset with an F1 score of 82.6%. Furthermore, across various complex scenario configurations, RoCeDiRNet-3DoF consistently outperforms competing methods, representing the current best solution for this task. The source code is available at: https://github.com/hyf381752569-stack/RoCeDiRNet-3DoF.
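Downstream of such a network, the 3-DoF grasp is typically decoded from dense predictions. The sketch below recovers a grasp point (u, v) and orientation theta from a center heatmap and a two-channel (cos, sin) direction field; the synthetic arrays and the confidence threshold are placeholders and do not reflect RoCeDiRNet-3DoF's actual output format or post-processing.

```python
import numpy as np

# Post-processing sketch: decode (u, v, theta) from dense predictions.
# The heatmap and direction field below are synthetic stand-ins.

H, W = 128, 128
heatmap = np.zeros((H, W), dtype=np.float32)
heatmap[70, 40] = 1.0                                   # pretend peak at (v=70, u=40)
direction = np.zeros((2, H, W), dtype=np.float32)
direction[0, 70, 40] = np.cos(0.8)                      # cos(theta) channel
direction[1, 70, 40] = np.sin(0.8)                      # sin(theta) channel

def decode_grasp(heatmap, direction, conf_thresh=0.3):
    """Return (u, v, theta, score) of the best grasp point, or None if too weak."""
    v, u = np.unravel_index(np.argmax(heatmap), heatmap.shape)
    score = float(heatmap[v, u])
    if score < conf_thresh:
        return None
    theta = float(np.arctan2(direction[1, v, u], direction[0, v, u]))
    return int(u), int(v), theta, score

print(decode_grasp(heatmap, direction))    # approx. (40, 70, 0.8, 1.0)
```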
{"title":"RoCeDiRNet-3DoF: Robust center direction network for cloth grasping point localization in complex scenes","authors":"Jiaxiang Luo, Yufan Hu","doi":"10.1016/j.robot.2026.105328","DOIUrl":"10.1016/j.robot.2026.105328","url":null,"abstract":"<div><div>Cloth grasping poses a fundamental challenge in robotics and computer vision, and constitutes a key capability for service robots. Accurate and robust localization of fabric grasping points is a critical prerequisite for enabling efficient and dexterous fabric manipulation. While recent deep learning approaches have achieved promising results on standard benchmarks, their performance still degrades substantially in complex scenarios involving severe folding, occlusion, and unknown backgrounds. This limitation primarily arises from insufficient global contextual modeling in existing dense regression frameworks, which rely heavily on local feature extraction while lacking the ability to capture global dependencies crucial for understanding fabric deformation patterns. To address these challenges, we propose <strong>RoCeDiRNet-3DoF</strong>, a novel framework built upon an InceptionNeXt encoder and a <strong>Wavelet Multiscale Convolutional Attention Decoder (WMCAD)</strong>. WMCAD adopts a three-stage hierarchical architecture, incorporating Wavelet Convolution Blocks for global feature extraction, Dynamic Wavelet Upsampling Blocks to preserve semantic details during interpolation, and Multi-scale Mixed Attention Gates for effective cross-layer feature fusion. By leveraging WMCAD’s enhanced global feature modeling, RoCeDiRNet-3DoF achieves superior performance in challenging scenarios, achieving state-of-the-art (SOTA) results on the ViCoS dataset with an F1 score of 82.6%. Furthermore, across various complex scenario configurations, RoCeDiRNet-3DoF consistently outperforms competing methods, representing the current optimal solution for this task. The source code is available at: <span><span>https://github.com/hyf381752569-stack/RoCeDiRNet-3DoF</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":49592,"journal":{"name":"Robotics and Autonomous Systems","volume":"198 ","pages":"Article 105328"},"PeriodicalIF":5.2,"publicationDate":"2026-01-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145940038","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Novel prescribed performance control for a kind of back-support exoskeleton with time delays
Xiaogang Li, Yuan Ke
Pub Date: 2026-01-08 | DOI: 10.1016/j.robot.2026.105350
Back-support exoskeletons (BSEs) offer significant potential for reducing lumbar load and preventing lower-back injuries; however, their control is complicated by strong inter-channel coupling, time delays in actuation and sensing, and uncertainties induced by human–robot interaction. This paper develops a unified dynamic–kinematic model for a multi-input multi-output BSE that explicitly accounts for coupling effects, compliance, and delay characteristics, providing a control-oriented and physically consistent foundation. Based on this model, a prescribed performance control (PPC) framework is proposed to guarantee bounded tracking errors with predefined transient and steady-state behaviour in the presence of multidimensional, time-varying delays, without requiring model decoupling. To enhance robustness against lumped uncertainties, a hybrid observer integrating a linear extended state observer and a sliding mode observer is designed for real-time disturbance estimation and compensation. Simulation results obtained on a biomechanically realistic BSE platform demonstrate that the proposed PPC–LESO–SMO scheme achieves superior tracking accuracy, robustness, and convergence speed compared with conventional PPC and existing observer-based control approaches.
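For readers unfamiliar with the two named ingredients, the single-channel sketch below combines a prescribed performance funnel with a logarithmic error transformation and a linear extended state observer (LESO) on a toy double-integrator plant. All gains, the funnel parameters, and the disturbance are assumptions; the paper's MIMO exoskeleton model, time delays, and sliding mode observer are not reproduced.

```python
import numpy as np

# Single-channel PPC + LESO demo on a double integrator with an unknown
# sinusoidal disturbance (illustrative only).

dt, b0 = 1e-3, 1.0
rho0, rho_inf, decay = 1.0, 0.05, 2.0          # funnel rho(t) = (rho0-rho_inf)e^{-lt}+rho_inf
k_eps, omega_o = 4.0, 30.0                     # controller gain, observer bandwidth

x1 = x2 = 0.0                                  # plant state (position, velocity)
z1 = z2 = z3 = 0.0                             # LESO states (pos, vel, disturbance)
for k in range(4000):
    t = k * dt
    ref = 0.5 * np.sin(2 * np.pi * 0.5 * t)
    rho = (rho0 - rho_inf) * np.exp(-decay * t) + rho_inf
    e = x1 - ref
    z = np.clip(e / rho, -0.999, 0.999)        # normalised error, kept inside the funnel
    eps = np.log((1 + z) / (1 - z))            # prescribed performance transformation
    u = (-k_eps * eps - z2 - z3) / b0          # cancel estimated velocity + disturbance

    # LESO update with gains (3w, 3w^2, w^3) for  x1' = x2,  x2' = b0*u + f
    err_o = x1 - z1
    z1 += dt * (z2 + 3 * omega_o * err_o)
    z2 += dt * (b0 * u + z3 + 3 * omega_o**2 * err_o)
    z3 += dt * (omega_o**3 * err_o)

    # plant with an unknown disturbance f(t)
    f = 0.8 * np.sin(3 * t)
    x1 += dt * x2
    x2 += dt * (b0 * u + f)
    if k % 1000 == 0:
        print(f"t={t:.1f}s  |e|={abs(e):.4f}  rho={rho:.3f}")
```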
{"title":"Novel prescribed performance control for a kind of back-support exoskeleton with time delays","authors":"Xiaogang Li , Yuan Ke","doi":"10.1016/j.robot.2026.105350","DOIUrl":"10.1016/j.robot.2026.105350","url":null,"abstract":"<div><div>Back-support exoskeletons (BSEs) offer significant potential for reducing lumbar load and preventing lower-back injuries; however, their control is complicated by strong inter-channel coupling, time delays in actuation and sensing, and uncertainties induced by human–robot interaction. This paper develops a unified dynamic–kinematic model for a multi-input multi-output BSE that explicitly accounts for coupling effects, compliance, and delay characteristics, providing a control-oriented and physically consistent foundation. Based on this model, a prescribed performance control (PPC) framework is proposed to guarantee bounded tracking errors with predefined transient and steady-state behaviour in the presence of multidimensional, time-varying delays, without requiring model decoupling. To enhance robustness against lumped uncertainties, a hybrid observer integrating a linear extended state observer and a sliding mode observer is designed for real-time disturbance estimation and compensation. Simulation results obtained on a biomechanically realistic BSE platform demonstrate that the proposed PPC–LESO–SMO scheme achieves superior tracking accuracy, robustness, and convergence speed compared with conventional PPC and existing observer-based control approaches.</div></div>","PeriodicalId":49592,"journal":{"name":"Robotics and Autonomous Systems","volume":"198 ","pages":"Article 105350"},"PeriodicalIF":5.2,"publicationDate":"2026-01-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145978326","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Collision cone based time-efficient method for 3D collision avoidance for UAVs: A purely heading-based solution
Manaram Gnanasekera, Jay Katupitiya
Pub Date: 2026-01-07 | DOI: 10.1016/j.robot.2026.105332
The increasing deployment of unmanned aerial vehicles (UAVs) across various fields, from agriculture to disaster management, has raised critical concerns about mid-air collisions in increasingly congested airspaces. While previous research has extensively explored collision avoidance techniques, most solutions focus either on static or low-density dynamic environments, leaving a gap in addressing UAV navigation in densely cluttered, dynamic 3D environments. This paper introduces a novel collision cone-based approach designed to enhance time-efficiency and precision in 3D UAV collision avoidance scenarios, particularly in complex and dynamic environments with multiple obstacles. Through both simulation and real-world experiments, the method demonstrates superior time-efficiency compared to a benchmark method, while maintaining robust performance in unpredictable environments. The contributions of this work include the development of a real-time adaptable algorithm that recalculates optimal paths based on dynamic changes and its practical validation in realistic, high-density scenarios. This work fills a significant research gap by addressing the limitations of previous 2D approaches and static obstacle methods, providing a comprehensive solution for UAVs operating in highly dynamic 3D spaces.
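The basic collision-cone test can be written down compactly: a conflict exists when the relative velocity lies inside the cone whose half-angle is arcsin(R_safe/distance), and a purely heading-based avoidance picks the smallest yaw change that exits the cone. The geometry, safety radius, and discrete yaw scan below are illustrative and do not reproduce the paper's time-optimal heading selection.

```python
import numpy as np

# Collision-cone conflict test and a simple heading-only avoidance scan.

def in_collision_cone(p_uav, v_uav, p_obs, v_obs, r_safe):
    rel_p = p_obs - p_uav                       # line of sight to the obstacle
    rel_v = v_uav - v_obs                       # velocity of UAV relative to obstacle
    dist = np.linalg.norm(rel_p)
    if dist <= r_safe:
        return True                             # already inside the safety sphere
    if np.linalg.norm(rel_v) < 1e-9:
        return False
    half_angle = np.arcsin(r_safe / dist)
    cos_between = rel_p @ rel_v / (dist * np.linalg.norm(rel_v))
    return np.arccos(np.clip(cos_between, -1.0, 1.0)) < half_angle

def avoidance_heading(p_uav, v_uav, p_obs, v_obs, r_safe):
    """Smallest yaw change (speed preserved) that exits the collision cone."""
    speed, yaw0 = np.linalg.norm(v_uav[:2]), np.arctan2(v_uav[1], v_uav[0])
    for dyaw in sorted(np.deg2rad(np.arange(-60, 61, 5)), key=abs):
        cand = np.array([speed*np.cos(yaw0+dyaw), speed*np.sin(yaw0+dyaw), v_uav[2]])
        if not in_collision_cone(p_uav, cand, p_obs, v_obs, r_safe):
            return yaw0 + dyaw
    return yaw0                                 # no feasible heading in the scan

p_uav, v_uav = np.array([0., 0., 10.]), np.array([5., 0., 0.])
p_obs, v_obs = np.array([40., 2., 10.]), np.array([-3., 0., 0.])
print("conflict:", in_collision_cone(p_uav, v_uav, p_obs, v_obs, 5.0))
print("new yaw (deg):", np.degrees(avoidance_heading(p_uav, v_uav, p_obs, v_obs, 5.0)))
```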
{"title":"Collision cone based time-efficient method for 3D collision avoidance for UAVs: A purely heading-based solution","authors":"Manaram Gnanasekera, Jay Katupitiya","doi":"10.1016/j.robot.2026.105332","DOIUrl":"10.1016/j.robot.2026.105332","url":null,"abstract":"<div><div>The increasing deployment of unmanned aerial vehicles (UAVs) across various fields, from agriculture to disaster management, has raised critical concerns about mid-air collisions in increasingly congested airspaces. While previous research has extensively explored collision avoidance techniques, most solutions focus either on static or low-density dynamic environments, leaving a gap in addressing UAV navigation in densely cluttered, dynamic 3D environments. This paper introduces a novel collision cone-based approach designed to enhance time-efficiency and precision in 3D UAV collision avoidance scenarios, particularly in complex and dynamic environments with multiple obstacles. Through both simulation and real-world experiments, the method demonstrates superior time-efficiency compared to a benchmark method, while maintaining robust performance in unpredictable environments. The contributions of this work include the development of a real-time adaptable algorithm that recalculates optimal paths based on dynamic changes and its practical validation in realistic, high-density scenarios. This work fills a significant research gap by addressing the limitations of previous 2D approaches and static obstacle methods, providing a comprehensive solution for UAVs operating in highly dynamic 3D spaces.</div></div>","PeriodicalId":49592,"journal":{"name":"Robotics and Autonomous Systems","volume":"198 ","pages":"Article 105332"},"PeriodicalIF":5.2,"publicationDate":"2026-01-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145978331","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Unified motion control of 4WIS-4WID WMR with unlimited steering and load transfer consideration
Dongwoo Seo, Seokki Moon, Jaeyoung Kang
Pub Date: 2026-01-07 | DOI: 10.1016/j.robot.2026.105341
This paper presents a unified motion control framework for a four-wheel independent steering (4WIS) and four-wheel independent drive (4WID) wheeled mobile robot (WMR) equipped with an unlimited steering angle system. Unlike conventional methods that rely on mode-specific kinematic controllers, the proposed controller does not require prior classification of driving modes. The controller defines the necessary forces at the tire positions to track the desired velocity profile in the body-fixed frame, considering tire slip dynamics, load transfer effects, and friction constraints based on the Magic Formula. The steering angles and in-wheel motor torques are then determined to generate the required forces. Due to the difficulty of directly measuring tire forces, a disturbance observer (DOB) is used to estimate these forces in real time. Simulation results demonstrate that the proposed controller outperforms conventional kinematics-based approaches in velocity tracking accuracy, while maintaining stable tire force distribution and vertical load limits, ensuring the robot’s stability and effective maneuverability even under highly dynamic conditions characterized by large lateral accelerations and the resulting tire slip and load transfer.
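The control-allocation step of such a framework can be sketched as a least-squares mapping from a desired body wrench to planar forces at the four wheel positions, from which each wheel's steering angle and drive-force magnitude follow. Wheel geometry and the example wrench are assumptions; tire slip, the Magic Formula, load transfer, and the disturbance observer from the paper are not modeled here.

```python
import numpy as np

# Minimum-norm allocation of a body wrench (Fx, Fy, Mz) to planar tire forces,
# then steering angle and drive-force magnitude per wheel (illustrative only).

wheels = np.array([[ 0.4,  0.3],     # front-left  (x, y) in the body frame [m]
                   [ 0.4, -0.3],     # front-right
                   [-0.4,  0.3],     # rear-left
                   [-0.4, -0.3]])    # rear-right

# Allocation matrix B (3 x 8):  [Fx, Fy, Mz]^T = B @ [fx1, fy1, ..., fx4, fy4]^T
B = np.zeros((3, 8))
for i, (px, py) in enumerate(wheels):
    B[0, 2*i]     = 1.0              # fx_i contributes to Fx
    B[1, 2*i + 1] = 1.0              # fy_i contributes to Fy
    B[2, 2*i]     = -py              # yaw moment from fx_i
    B[2, 2*i + 1] =  px              # yaw moment from fy_i

wrench = np.array([800.0, 200.0, 150.0])          # desired [N, N, N*m]
f = np.linalg.pinv(B) @ wrench                    # minimum-norm force distribution

for i in range(4):
    fx, fy = f[2*i], f[2*i + 1]
    steer = np.degrees(np.arctan2(fy, fx))        # unlimited steering angle
    drive = np.hypot(fx, fy)                      # force along the steered wheel
    print(f"wheel {i}: steer = {steer:6.1f} deg, drive force = {drive:6.1f} N")
```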
{"title":"Unified motion control of 4WIS-4WID WMR with unlimited steering and load transfer consideration","authors":"Dongwoo Seo, Seokki Moon, Jaeyoung Kang","doi":"10.1016/j.robot.2026.105341","DOIUrl":"10.1016/j.robot.2026.105341","url":null,"abstract":"<div><div>This paper presents a unified motion control framework for a four-wheel independent steering (4WIS) and four-wheel independent drive (4WID) wheeled mobile robot (WMR) equipped with an unlimited steering angle system. Unlike conventional methods that rely on mode-specific kinematic controllers, the proposed controller does not require prior classification of driving modes. The controller defines the necessary forces at the tire positions to track the desired velocity profile in the body-fixed frame, considering tire slip dynamics, load transfer effects, and friction constraints based on the Magic Formula. The steering angles and in-wheel motor torques are then determined to generate the required forces. Due to the difficulty of directly measuring tire forces, a disturbance observer (DOB) is used to estimate these forces in real-time. Simulation results demonstrate that the proposed controller outperforms conventional kinematics-based approaches in velocity tracking accuracy, while maintaining stable tire force distribution and vertical load limits, ensuring the robot’s stability and effective maneuverability even under highly dynamic conditions characterized by large lateral accelerations and the resulting tire slip and load transfer.</div></div>","PeriodicalId":49592,"journal":{"name":"Robotics and Autonomous Systems","volume":"198 ","pages":"Article 105341"},"PeriodicalIF":5.2,"publicationDate":"2026-01-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145978321","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}