
Robotics and Autonomous Systems: Latest Publications

Adaptive learning-based model predictive control strategy for drift vehicles
IF 4.3 · CAS Tier 2 (Computer Science) · Q1 AUTOMATION & CONTROL SYSTEMS · Pub Date: 2025-02-14 · DOI: 10.1016/j.robot.2025.104941
Bei Zhou, Cheng Hu, Jun Zeng, Zhouheng Li, Johannes Betz, Lei Xie, Hongye Su
Drift vehicle control offers valuable insights to support safe autonomous driving in extreme conditions, and it hinges on tracking a particular path while maintaining the vehicle states near the drift equilibrium points (DEP). However, conventional tracking methods are not adaptable to drift vehicles because the steering angle and yaw rate have opposite signs during a drift. In this paper, we propose an adaptive path tracking (APT) control method to dynamically adjust drift states to follow the reference path, improving on commonly used predictive path tracking methods with a reduced computational burden. Furthermore, existing control strategies necessitate a precise system model to calculate the DEP, which can be intractable due to the highly nonlinear drift dynamics and sensitive vehicle parameters. To tackle this problem, an adaptive learning-based model predictive control (ALMPC) strategy is proposed based on the APT method, where an upper-level Bayesian optimization is employed to learn the DEP and the APT control law to instruct a lower-level MPC drift controller. This hierarchical system architecture also resolves the inherent control conflict between path tracking and drifting by separating these objectives into different layers. The ALMPC strategy is verified on the Matlab-Carsim platform, and simulation results demonstrate its effectiveness in controlling the drift vehicle to follow a clothoid-based reference path even with a misidentified road friction parameter.
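As an illustration of the hierarchical structure described in this abstract (an upper layer searching for the DEP and APT parameters, a lower layer evaluating them through an MPC rollout), here is a minimal Python sketch. All function names and bounds are hypothetical, and the Bayesian optimization step is replaced by plain random search for brevity; it shows the two-layer idea, not the paper's implementation.

```python
import numpy as np

# Hypothetical lower-level evaluation: given candidate drift-equilibrium
# parameters (DEP) and an APT gain, run the closed loop and return a tracking
# cost.  A real implementation would call a vehicle model and an MPC solver;
# a toy surrogate cost stands in here.
def mpc_rollout_cost(dep, apt_gain):
    beta_eq, r_eq, v_eq = dep          # sideslip, yaw rate, speed at equilibrium
    return ((beta_eq + 0.6) ** 2 + (r_eq - 1.2) ** 2
            + 0.05 * (v_eq - 10.0) ** 2 + 0.1 / apt_gain)

# Upper layer: search over DEP and APT gain.  The paper uses Bayesian
# optimization; random search stands in for it in this sketch.
rng = np.random.default_rng(0)
best_cost, best_params = np.inf, None
for _ in range(200):
    dep = rng.uniform([-1.0, 0.5, 5.0], [0.0, 2.0, 15.0])   # candidate equilibrium
    apt_gain = rng.uniform(0.1, 5.0)                         # candidate APT gain
    cost = mpc_rollout_cost(dep, apt_gain)
    if cost < best_cost:
        best_cost, best_params = cost, (dep, apt_gain)

print("best candidate:", best_params, "cost:", best_cost)
```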
{"title":"Adaptive learning-based model predictive control strategy for drift vehicles","authors":"Bei Zhou ,&nbsp;Cheng Hu ,&nbsp;Jun Zeng ,&nbsp;Zhouheng Li ,&nbsp;Johannes Betz ,&nbsp;Lei Xie ,&nbsp;Hongye Su","doi":"10.1016/j.robot.2025.104941","DOIUrl":"10.1016/j.robot.2025.104941","url":null,"abstract":"<div><div>Drift vehicle control offers valuable insights to support safe autonomous driving in extreme conditions, which hinges on tracking a particular path while maintaining the vehicle states near the drift equilibrium points (DEP). However, conventional tracking methods are not adaptable for drift vehicles due to their opposite steering angle and yaw rate. In this paper, we propose an adaptive path tracking (APT) control method to dynamically adjust drift states to follow the reference path, improving the commonly utilized predictive path tracking methods with released computation burden. Furthermore, existing control strategies necessitate a precise system model to calculate the DEP, which can be more intractable due to the highly nonlinear drift dynamics and sensitive vehicle parameters. To tackle this problem, an adaptive learning-based model predictive control (ALMPC) strategy is proposed based on the APT method, where an upper-level Bayesian optimization is employed to learn the DEP and APT control law to instruct a lower-level MPC drift controller. This hierarchical system architecture can also resolve the inherent control conflict between path tracking and drifting by separating these objectives into different layers. The ALMPC strategy is verified on the Matlab-Carsim platform, and simulation results demonstrate its effectiveness in controlling the drift vehicle to follow a clothoid-based reference path even with the misidentified road friction parameter.</div></div>","PeriodicalId":49592,"journal":{"name":"Robotics and Autonomous Systems","volume":"188 ","pages":"Article 104941"},"PeriodicalIF":4.3,"publicationDate":"2025-02-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143463548","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Leveraging motion perceptibility and deep reinforcement learning for visual control of nonholonomic mobile robots
IF 4.3 · CAS Tier 2 (Computer Science) · Q1 AUTOMATION & CONTROL SYSTEMS · Pub Date: 2025-02-13 · DOI: 10.1016/j.robot.2025.104920
Takieddine Soualhi, Nathan Crombez, Alexandre Lombard, Yassine Ruichek, Stéphane Galland
This paper introduces a novel deep reinforcement learning framework to tackle the problem of visual servoing of nonholonomic mobile robots. The visual control of nonholonomic mobile robots becomes particularly challenging within the classical paradigm of visual servoing, mainly due to motion and visibility constraints, which make it impossible to reach a given desired pose for certain configurations without losing essential visual information from the camera field of view. Previous work has demonstrated the effectiveness of deep reinforcement learning in addressing various vision-based robotics tasks. In light of this, we propose a framework that integrates deep recurrent policies, intrinsic motivation, and a novel auxiliary task that leverages the interaction matrix, the core of classical visual servoing approaches, to address the problem of vision-based control of nonholonomic robotic systems. First, we analyze the influence of the nonholonomic constraints on control policy learning. Subsequently, we validate and evaluate our approach in both simulated and real-world environments. Our approach exhibits an emergent control behavior that enables the robot to accurately attain the desired pose while maintaining the desired visual content within the camera’s field of view. The proposed method outperforms state-of-the-art approaches, demonstrating its effectiveness, robustness, and accuracy.
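For readers unfamiliar with the interaction matrix the auxiliary task builds on, the sketch below computes the classical point-feature interaction (image Jacobian) matrix from visual servoing. How the paper wires it into the auxiliary loss is not specified here; using it to predict feature motion from the camera twist is only one plausible illustration.

```python
import numpy as np

def interaction_matrix(points_xy, depths):
    """Stack the classical 2xN-row interaction matrix for normalized image
    points (x, y) with estimated depth Z (Chaumette-style formulation)."""
    rows = []
    for (x, y), Z in zip(points_xy, depths):
        rows.append([-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y])
        rows.append([0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x])
    return np.asarray(rows)

# Hypothetical auxiliary target: feature velocities predicted from a camera
# twist v = (vx, vy, vz, wx, wy, wz); a policy head could regress this
# quantity alongside the reinforcement learning objective.
L = interaction_matrix([(0.1, -0.2), (0.05, 0.3)], [1.5, 2.0])
v = np.array([0.1, 0.0, 0.0, 0.0, 0.0, 0.2])
s_dot = L @ v                      # predicted normalized-feature velocities
print(s_dot)
```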
{"title":"Leveraging motion perceptibility and deep reinforcement learning for visual control of nonholonomic mobile robots","authors":"Takieddine Soualhi,&nbsp;Nathan Crombez,&nbsp;Alexandre Lombard,&nbsp;Yassine Ruichek,&nbsp;Stéphane Galland","doi":"10.1016/j.robot.2025.104920","DOIUrl":"10.1016/j.robot.2025.104920","url":null,"abstract":"<div><div>This paper introduces a novel deep reinforcement learning framework to tackle the problem of visual servoing of nonholonomic mobile robots. The visual control of nonholonomic mobile robots becomes particularly challenging within the classical paradigm of visual servoing, mainly due to motion and visibility constraints, which makes it impossible to reach a given desired pose for certain configurations without losing essential visual information from the camera field of view. Previous work has demonstrated the effectiveness of deep reinforcement learning in addressing various vision-based robotics tasks. In light of this, we propose a framework that integrates deep recurrent policies, intrinsic motivation, and a novel auxiliary task that leverages the interaction matrix, the core of classical visual servoing approaches, to address the problem of vision-based control of nonholonomic robotic systems. Firstly, we analyze the influence of the nonholonomic constraints on control policy learning. Subsequently, we validate and evaluate our approach in both simulated and real-world environments. Our approach exhibits an emergent control behavior that enables the robot to accurately attain the desired pose while maintaining the desired visual content within the camera’s field of view. The proposed method outperforms the state-of-the-art approaches, demonstrating its effectiveness, robustness, and accuracy.</div></div>","PeriodicalId":49592,"journal":{"name":"Robotics and Autonomous Systems","volume":"189 ","pages":"Article 104920"},"PeriodicalIF":4.3,"publicationDate":"2025-02-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143580444","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Enhancing free-space transparency with discrete energy-based compensation in physical human–robot interaction
IF 4.3 · CAS Tier 2 (Computer Science) · Q1 AUTOMATION & CONTROL SYSTEMS · Pub Date: 2025-02-11 · DOI: 10.1016/j.robot.2025.104940
Seung Ho Lee, Ji Min Baek, Hyungpil Moon, Hyouk Ryeol Choi, Ja Choon Koo
In physical human–robot interaction (pHRi), free-space transparency reflects how accurately a robot interprets and follows human motion intentions. This paper presents a novel discrete energy-based compensator designed to enhance transparency by leveraging an admittance controller that requires real-time input compensation. Transparency, defined as the work performed by interaction forces per unit distance, is improved by analyzing human dynamics to minimize interaction forces linked to transparency. The proposed compensator incorporates time delay control to compute necessary real-time compensation based on interactions between human and robot dynamics represented by admittance parameters. The method was validated through simulations and experiments on a physical robot system, demonstrating its effectiveness in enhancing transparency while addressing practical limitations. This study emphasizes the importance of dynamic analysis in pHRi and proposes a cost-effective approach to compensate for both interaction and robot dynamics.
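To make the admittance-control setting concrete, here is a minimal 1-DoF discrete admittance loop: the measured interaction force drives a virtual mass-damper, and the resulting motion is commanded to the robot. The paper's energy-based compensator injects an additional real-time compensation input; in this sketch that term is only a placeholder argument, and all gains are illustrative.

```python
class Admittance1D:
    def __init__(self, M=2.0, D=15.0, dt=0.002):
        self.M, self.D, self.dt = M, D, dt   # virtual inertia, damping, sample time
        self.v = 0.0                          # commanded velocity state

    def step(self, f_measured, f_compensation=0.0):
        # virtual dynamics: M*a + D*v = f_measured + f_compensation
        a = (f_measured + f_compensation - self.D * self.v) / self.M
        self.v += a * self.dt
        return self.v * self.dt               # position increment sent to the robot

adm = Admittance1D()
for _ in range(5):                            # a constant 5 N push ramps up the motion
    print(adm.step(5.0))
```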
{"title":"Enhancing free-space transparency with discrete energy-based compensation in physical human–robot interaction","authors":"Seung Ho Lee,&nbsp;Ji Min Baek,&nbsp;Hyungpil Moon,&nbsp;Hyouk Ryeol Choi,&nbsp;Ja Choon Koo","doi":"10.1016/j.robot.2025.104940","DOIUrl":"10.1016/j.robot.2025.104940","url":null,"abstract":"<div><div>In physical human–robot interaction (pHRi), free-space transparency reflects how accurately a robot interprets and follows human motion intentions. This paper presents a novel discrete energy-based compensator designed to enhance transparency by leveraging an admittance controller that requires real-time input compensation. Transparency, defined as the work performed by interaction forces per unit distance, is improved by analyzing human dynamics to minimize interaction forces linked to transparency. The proposed compensator incorporates time delay control to compute necessary real-time compensation based on interactions between human and robot dynamics represented by admittance parameters. The method was validated through simulations and experiments on a physical robot system, demonstrating its effectiveness in enhancing transparency while addressing practical limitations. This study emphasizes the importance of dynamic analysis in pHRi and proposes a cost-effective approach to compensate for both interaction and robot dynamics.</div></div>","PeriodicalId":49592,"journal":{"name":"Robotics and Autonomous Systems","volume":"187 ","pages":"Article 104940"},"PeriodicalIF":4.3,"publicationDate":"2025-02-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143429581","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Background suppression and comprehensive prototype pyramid distillation for few-shot object detection
IF 4.3 · CAS Tier 2 (Computer Science) · Q1 AUTOMATION & CONTROL SYSTEMS · Pub Date: 2025-02-08 · DOI: 10.1016/j.robot.2025.104938
Ning Li, Mingliang Wang, Gaochao Yang, Bo Li, Baohua Yuan, Shoukun Xu, Jun Qi
Few-Shot Object Detection (FSOD) methods can detect novel classes with only a small number of annotated samples and have received widespread attention in recent years. Meta-learning has proven to be a key technology for addressing few-shot problems. Typically, meta-learning-based methods require an additional support branch to extract class prototypes for the few-shot classes, and the detection head performs classification and detection by measuring the distance between the class prototypes and the query features. Since the input to the support branch is the object image annotated with a bounding box, it often contains a large amount of background information, which degrades the quality of the class prototypes. Through careful observation, we found that the center of the bounding box is often the core feature area of the object. Based on this, we designed a lightweight Background Suppression (BS) module that suppresses background features by measuring the similarity between the peripheral and central features of the support features, thereby providing high-quality support features for class prototype extraction. Additionally, for class prototype extraction, we designed a more robust Comprehensive Prototype Pyramid Distillation (CPPD) module. This module first captures multi-scale feature information of the object from the background-suppressed support features, and then uses a pyramid structure to hierarchically distill the multi-scale features to extract more comprehensive and purer class prototypes. Extensive experimental results on the PASCAL VOC and COCO datasets show that, compared with other models using the same architecture and techniques, our method achieves the best results.
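The background-suppression idea (weight peripheral support features by their similarity to the box-center feature) can be sketched in a few lines of PyTorch. Shapes, the cosine-similarity choice, and the clamping are assumptions for illustration, not the paper's exact module.

```python
import torch
import torch.nn.functional as F

def background_suppression(support_feat):
    """Attenuate spatial locations of a support feature map that are
    dissimilar to the central (box-center) feature."""
    # support_feat: (C, H, W) feature map cropped from the support box
    C, H, W = support_feat.shape
    center = support_feat[:, H // 2, W // 2]                 # (C,) central feature
    flat = support_feat.view(C, -1)                          # (C, H*W)
    sim = F.cosine_similarity(flat, center[:, None], dim=0)  # (H*W,)
    weights = sim.clamp(min=0).view(1, H, W)                 # suppress dissimilar areas
    return support_feat * weights

feat = torch.randn(256, 7, 7)
print(background_suppression(feat).shape)                    # torch.Size([256, 7, 7])
```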
{"title":"Background suppression and comprehensive prototype pyramid distillation for few-shot object detection","authors":"Ning Li ,&nbsp;Mingliang Wang ,&nbsp;Gaochao Yang ,&nbsp;Bo Li ,&nbsp;Baohua Yuan ,&nbsp;Shoukun Xu ,&nbsp;Jun Qi","doi":"10.1016/j.robot.2025.104938","DOIUrl":"10.1016/j.robot.2025.104938","url":null,"abstract":"<div><div>Few-Shot Object Detection (FSOD) methods can achieve detection of novel classes with only a small number of annotated samples and have received widespread attention in recent years. Meta-learning has been proven to be a key technology for addressing few-shot problems. Typically, meta-learning-based methods require an additional support branch to extract class prototypes for the few-shot classes, and the detection head performs classification and detection by measuring the distance between the class prototypes and the query features. Since the input to the support branch is the object image annotated with a bounding box, it often contains a large amount of background information, which degrades the quality of the class prototypes. Through our meticulous observation, we found that the center of the bounding box is often the core feature area of the object. Based on this, we designed a lightweight Background Suppression (BS) module that suppresses background features by measuring the similarity between the peripheral and central features of the support features, thereby providing high-quality support features for class prototype extraction. Additionally, in terms of class prototype extraction, we designed a more robust Comprehensive Prototype Pyramid Distillation (CPPD) module. This module first captures the multi-scale feature information of the object from the background-suppressed support features, and then uses a pyramid structure to hierarchically distill the multi-scale features to extract more comprehensive and purer class prototypes. Extensive experimental results on the PASCAL VOC and COCO datasets show that compared to other models under the same architecture and techniques, we achieved the best results.</div></div>","PeriodicalId":49592,"journal":{"name":"Robotics and Autonomous Systems","volume":"187 ","pages":"Article 104938"},"PeriodicalIF":4.3,"publicationDate":"2025-02-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143379328","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Achieving adaptive tasks from human instructions for robots using large language models and behavior trees
IF 4.3 · CAS Tier 2 (Computer Science) · Q1 AUTOMATION & CONTROL SYSTEMS · Pub Date: 2025-02-06 · DOI: 10.1016/j.robot.2025.104937
Haotian Zhou, Yunhan Lin, Longwu Yan, Huasong Min
Combining Large Language Models (LLMs) with Behavior Trees (BTs) provides an alternative way to achieve robot adaptive tasks from human instructions. BTs that contain goal conditions are generated by LLMs based on user instructions and then expanded by BT planners to accomplish tasks and handle disturbances. However, current LLMs struggle to handle unclear human instructions and have a relatively weak understanding of the spatial geometry between objects, which results in suboptimal BT planning. To address these problems, this paper proposes a two-stage framework. In the first stage, a Feedback module is designed to handle unclear user instructions and guide the LLM to communicate with users, thus making the goal conditions of the BTs complete. In the second stage, a BT Adaptive Update algorithm is proposed to optimize the execution order of the goal conditions, thereby improving the task efficiency of the BT planner for multi-goal tasks. Experimental results from simulations and the real world indicate that our method enables the robot to generate complete goal conditions from user instructions and accomplish multi-goal tasks efficiently.
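A possible shape of the first-stage feedback loop is sketched below: extract goal conditions, and keep asking the user clarification questions until they are complete. The `llm` and `ask_user` callables, the prompts, and the "on(object, location)" goal format are all illustrative assumptions, not the paper's interface.

```python
def clarify_goal_conditions(instruction, llm, ask_user):
    """Loop until the LLM can extract complete goal conditions for the BT planner."""
    extract = (lambda text: llm(
        f"Extract goal conditions as 'on(object, location)' tuples from: {text}. "
        f"Reply UNCLEAR if anything is missing."))
    goals = extract(instruction)
    while "UNCLEAR" in goals:
        question = llm(f"The instruction '{instruction}' is ambiguous. "
                       f"Write one short question to resolve it.")
        instruction += " " + ask_user(question)   # append the user's answer
        goals = extract(instruction)
    return goals                                   # handed to the BT planner
```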
{"title":"Achieving adaptive tasks from human instructions for robots using large language models and behavior trees","authors":"Haotian Zhou,&nbsp;Yunhan Lin,&nbsp;Longwu Yan,&nbsp;Huasong Min","doi":"10.1016/j.robot.2025.104937","DOIUrl":"10.1016/j.robot.2025.104937","url":null,"abstract":"<div><div>Combining Large Language Models (LLMs) with Behavior Trees (BTs) provides an alternative to achieve robot adaptive tasks from human instructions. BTs that contain goal conditions are generated by LLMs based on user instructions and then expanded by BT planners to accomplish tasks and handle disturbances. However, current LLMs struggle to handle unclear human instructions and have a relatively weak understanding of spatial geometry between objects, which results in suboptimal BT planning. To address these problems, this paper proposes a two-stage framework. In the first stage, a Feedback module is designed to handle unclear user instructions and guide the LLM to communicate with users, thus making the goal conditions of BTs complete. In the second stage, a BT Adaptive Update algorithm is proposed to optimize the execution order of the goal conditions, thereby improving the task efficiency of BT planner for multi-goal tasks. Experimental results from simulations and the real world indicate that our method enables the robot to generate complete goal conditions from user instructions and accomplish multi-goal tasks efficiently.</div></div>","PeriodicalId":49592,"journal":{"name":"Robotics and Autonomous Systems","volume":"187 ","pages":"Article 104937"},"PeriodicalIF":4.3,"publicationDate":"2025-02-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143377812","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Motion priority optimization framework towards automated and teleoperated robot cooperation in industrial recovery scenarios
IF 4.3 · CAS Tier 2 (Computer Science) · Q1 AUTOMATION & CONTROL SYSTEMS · Pub Date: 2025-02-01 · DOI: 10.1016/j.robot.2024.104833
Shunki Itadera, Yukiyasu Domae
In this study, we introduce an optimization framework to enhance the efficiency of motion priority design in scenarios involving automated and teleoperated robots within an industrial recovery context. The increasing utilization of industrial robots at manufacturing sites has been instrumental in reducing human workload. Nevertheless, achieving effective human–robot collaboration/cooperation (HRC) remains a challenge, especially when human workers and robots share a workspace for collaborative tasks. For instance, when an industrial robot encounters a failure, such as dropping an assembling part, it triggers the suspension of the corresponding factory cell for safe recovery. Given the limited capacity of pre-programmed robots to rectify such failures, human intervention becomes imperative, requiring entry into the robot workspace to address the dropped object while the robot system is halted. This discontinuous manufacturing process results in productivity loss. Robotic teleoperation has emerged as a promising technology enabling human workers to undertake high-risk tasks remotely and safely. Our study advocates for the incorporation of robotic teleoperation in the recovery process during manufacturing failure scenarios, which is referred to as “Cooperative Tele-Recovery”. Our proposed approach involves formulating priority rules designed to facilitate collision avoidance between manufacturing and recovery robots. This, in turn, ensures a continuous manufacturing process with minimal production loss within a configurable risk limitation. We present a comprehensive motion priority optimization framework composed of an HRC simulator and a cooperative multi-robot controller to identify optimal parameters for the priority function. The framework dynamically adjusts the allocation of motion priorities for manufacturing and recovery robots while adhering to predefined risk limitations. Through quantitative and qualitative assessments, we validate the novelty of our concept and demonstrate its feasibility.
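To make the notion of a motion priority rule concrete, here is a toy sketch of the kind of scoring the framework could optimize in its HRC simulator: at a predicted workspace conflict, the robot with the lower score yields, and a hard risk cap forces yielding regardless of priority. The weights, the risk limit, and the yielding policy are illustrative assumptions, not the paper's formulation.

```python
def motion_priority(task_criticality, predicted_risk, w_task=1.0, w_risk=2.0,
                    risk_limit=0.3):
    if predicted_risk > risk_limit:        # hard risk limitation: always yield
        return float("-inf")
    return w_task * task_criticality - w_risk * predicted_risk

def who_yields(manufacturing, recovery):
    """Each argument: (task_criticality, predicted_risk) for that robot."""
    p_man = motion_priority(*manufacturing)
    p_rec = motion_priority(*recovery)
    return "recovery yields" if p_man >= p_rec else "manufacturing yields"

print(who_yields((0.9, 0.1), (0.6, 0.05)))   # manufacturing keeps priority here
```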
{"title":"Motion priority optimization framework towards automated and teleoperated robot cooperation in industrial recovery scenarios","authors":"Shunki Itadera,&nbsp;Yukiyasu Domae","doi":"10.1016/j.robot.2024.104833","DOIUrl":"10.1016/j.robot.2024.104833","url":null,"abstract":"<div><div>In this study, we introduce an optimization framework to enhance the efficiency of motion priority design in scenarios involving automated and teleoperated robots within an industrial recovery context. The increasing utilization of industrial robots at manufacturing sites has been instrumental in reducing human workload. Nevertheless, achieving effective human–robot collaboration/cooperation (HRC) remains a challenge, especially when human workers and robots share a workspace for collaborative tasks. For instance, when an industrial robot encounters a failure, such as dropping an assembling part, it triggers the suspension of the corresponding factory cell for safe recovery. Given the limited capacity of pre-programmed robots to rectify such failures, human intervention becomes imperative, requiring entry into the robot workspace to address the dropped object while the robot system is halted. This discontinuous manufacturing process results in productivity loss. Robotic teleoperation has emerged as a promising technology enabling human workers to undertake high-risk tasks remotely and safely. Our study advocates for the incorporation of robotic teleoperation in the recovery process during manufacturing failure scenarios, which is referred to as “Cooperative Tele-Recovery”. Our proposed approach involves formulating priority rules designed to facilitate collision avoidance between manufacturing and recovery robots. This, in turn, ensures a continuous manufacturing process with minimal production loss within a configurable risk limitation. We present a comprehensive motion priority optimization framework composed of an HRC simulator and a cooperative multi-robot controller to identify optimal parameters for the priority function. The framework dynamically adjusts the allocation of motion priorities for manufacturing and recovery robots while adhering to predefined risk limitations. Through quantitative and qualitative assessments, we validate the novelty of our concept and demonstrate its feasibility.</div></div>","PeriodicalId":49592,"journal":{"name":"Robotics and Autonomous Systems","volume":"184 ","pages":"Article 104833"},"PeriodicalIF":4.3,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143166753","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Path planning strategy for a space relocatable robotic manipulator based on improved GBNN algorithm
IF 4.3 · CAS Tier 2 (Computer Science) · Q1 AUTOMATION & CONTROL SYSTEMS · Pub Date: 2025-02-01 · DOI: 10.1016/j.robot.2025.104939
Yidao Ji, Cheng Zhou, Ruiyi Lin, Qiqi Liu
To assist astronauts in performing specific operations on orbital space stations, the space relocatable robotic manipulator has been studied and applied by many countries. In practical application scenarios, optimal mobility and resource efficiency are the primary technical requirements for this manipulator. Therefore, we have developed an improved path-planning strategy based on the Glasius Bio-inspired Neural Network (GBNN) algorithm. This approach reduces computational resource consumption, dynamically avoids obstacles, and accounts for physical constraints. To simplify the complex process of 3D map rasterization, our method directly abstracts and constructs a topologically connected graph over the grasping points. Furthermore, the improved algorithm enhances the energy efficiency of path planning by incorporating a function that integrates global information. It also employs a diffusive updating method, enabling the rapid propagation of neuron activity values to target points within a single iteration. To further advance the practical application of the algorithm, we also account for the kinematic properties and physical constraints of the manipulator. Finally, we developed a dual-layer planning strategy that enables the manipulator to move efficiently across the surface of a non-regular space station. The effectiveness and advantages of the improved algorithm have been thoroughly evaluated through comprehensive comparisons with existing algorithms.
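A minimal sketch of Glasius-style activity propagation on a topological graph of grasping points is shown below: the goal node drives the network, activity decays by a factor per hop, and the path climbs the activity gradient. The transfer function, gains, and sweep schedule are simplifications; in particular, the paper's diffusive update propagates activity in a single iteration, whereas this version simply sweeps the nodes |V| times.

```python
def gbnn_plan(adj, start, goal, beta=0.7):
    """Plan on an adjacency-list graph by propagating neural activity from the goal."""
    x = {i: 0.0 for i in adj}
    x[goal] = 1.0                                   # target drives the network
    for _ in range(len(adj)):                       # simplified multi-sweep propagation
        for i in adj:
            if i != goal:
                x[i] = beta * max((x[j] for j in adj[i]), default=0.0)
    path = [start]
    while path[-1] != goal:
        nxt = max(adj[path[-1]], key=lambda j: x[j])
        if x[nxt] <= x[path[-1]]:
            break                                   # no higher-activity neighbor: stop
        path.append(nxt)
    return path

adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}        # toy chain of grasping points
print(gbnn_plan(adj, start=0, goal=3))              # [0, 1, 2, 3]
```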
{"title":"Path planning strategy for a space relocatable robotic manipulator based on improved GBNN algorithm","authors":"Yidao Ji,&nbsp;Cheng Zhou,&nbsp;Ruiyi Lin,&nbsp;Qiqi Liu","doi":"10.1016/j.robot.2025.104939","DOIUrl":"10.1016/j.robot.2025.104939","url":null,"abstract":"<div><div>For the purpose of assisting astronauts in performing specific operations on orbital space stations, the space relocatable robotic manipulator has been studied and applied by many countries. In practical application scenarios, achieving optimal mobility and resource efficiency are the primary technical requirements for this manipulator. Therefore, we have developed an improved path-planning strategy based on the Glasius Bio-inspired Neural Network algorithm. This approach reduces computational resource consumption, dynamically avoids obstacles, and accounts for physical constraints. To simplify the complex process of 3D map rasterization, our method directly abstracts and constructs a topologically connected graph for the grasping points. Furthermore, the improved algorithm enhances the energy efficiency of path planning by incorporating a function that integrates global information. It also employs a diffusive updating method, enabling the rapid propagation of neuron activity values to target points within a single iteration. To further advance the practical application of the algorithm, we have considered the kinematic properties and physical constraints of the manipulator. Finally, we developed a dual-layer planning strategy that enables the manipulator to move efficiently across the surface of a non-regular space station. The effectiveness and advantages of the improved algorithm have been thoroughly evaluated through comprehensive comparisons with existing algorithms.</div></div>","PeriodicalId":49592,"journal":{"name":"Robotics and Autonomous Systems","volume":"187 ","pages":"Article 104939"},"PeriodicalIF":4.3,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143175502","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Entropy-based tuning approach for Q-learning in an unstructured environment
IF 4.3 · CAS Tier 2 (Computer Science) · Q1 AUTOMATION & CONTROL SYSTEMS · Pub Date: 2025-01-31 · DOI: 10.1016/j.robot.2025.104924
Yu-Jen Chen, Wei-Cheng Jiang
In reinforcement learning applications, achieving a balance between exploration and exploitation is a crucial problem during the learning process. This study proposes an entropy-based tuning approach that uses value-difference-based exploration to solve this problem in an unstructured environment. In such an environment, a learning agent can manage its exploration rate in each state instead of using a constant rate for all states. Moreover, some obstacles may block the agent’s path to the destination. Accordingly, the proposed approach enables the agent to adaptively increase its exploration rates in states undergoing transitions; thus, the agent is encouraged to explore in those states. This paper presents simulations of maze environments and the car-parking problem to verify the proposed approach. The simulation results demonstrate that our approach enables the agent to adjust its policy quickly to adapt to changing environments.
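For intuition, here is a minimal sketch of per-state exploration-rate tuning in the spirit of value-difference-based exploration: each state keeps its own epsilon, which grows while the Q-values there are still changing (large TD error) and shrinks as they settle. The update function, constants, and toy state space are illustrative, not the paper's entropy-based formulation.

```python
import numpy as np

n_states, n_actions = 25, 4
alpha, gamma, sigma, delta = 0.1, 0.95, 0.5, 1.0 / n_actions
Q = np.zeros((n_states, n_actions))
eps = np.ones(n_states)                       # per-state exploration rate

def select_action(s, rng):
    if rng.random() < eps[s]:
        return int(rng.integers(n_actions))   # explore
    return int(np.argmax(Q[s]))               # exploit

def q_update(s, a, r, s_next):
    td = r + gamma * np.max(Q[s_next]) - Q[s, a]
    Q[s, a] += alpha * td
    # exploration-rate update driven by the magnitude of the value change
    f = (1 - np.exp(-abs(alpha * td) / sigma)) / (1 + np.exp(-abs(alpha * td) / sigma))
    eps[s] = delta * f + (1 - delta) * eps[s]

rng = np.random.default_rng(0)
a = select_action(0, rng)
q_update(0, a, r=1.0, s_next=1)
print(eps[0])                                 # epsilon for state 0 after one surprise
```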
{"title":"Entropy-based tuning approach for Q-learning in an unstructured environment","authors":"Yu-Jen Chen ,&nbsp;Wei-Cheng Jiang","doi":"10.1016/j.robot.2025.104924","DOIUrl":"10.1016/j.robot.2025.104924","url":null,"abstract":"<div><div>In reinforcement learning applications, achieving a balance between exploration and exploitation is a crucial problem during the learning process. This study proposes an entropy-based tuning approach that uses the value different based exploration theory is proposed to solve this problem in an unstructured environment. In such an environment, a learning agent can manage its exploration rates in each state instead of using a constant rate for all states. Moreover, some obstacles may block the agent’s path to the destination. Accordingly, the proposed approach enables the agent to adaptively increase its exploration rates in some states undergoing transitions; thus, the agent is encouraged to explore in those states. This paper presents simulations of maze environments and the car parking problem to verify the proposed approach. The simulation results demonstrate that our approach enables the agent to adjust its policy quickly to adapt to changing environments.</div></div>","PeriodicalId":49592,"journal":{"name":"Robotics and Autonomous Systems","volume":"187 ","pages":"Article 104924"},"PeriodicalIF":4.3,"publicationDate":"2025-01-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143369532","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Aerial-ground testbed for simulating compliant bimanual on-orbit operations: From aerial to space robotic manipulation
IF 4.3 · CAS Tier 2 (Computer Science) · Q1 AUTOMATION & CONTROL SYSTEMS · Pub Date: 2025-01-29 · DOI: 10.1016/j.robot.2025.104927
Alejandro Suarez, Antonio Gonzalez-Morgado, Miguel Ruiz, Alejandro Lucena, Alessandro M. Giordano, Anibal Ollero
This paper presents the design and development of an aerial-ground robotics testbed for simulating bimanual manipulation operations on orbit using aerial robotics platforms, considering as a representative use case the capture and maintenance of a non-cooperative free-floating satellite. The proposed testbed design is intended to facilitate simulations involving physical interaction, benefiting from technologies derived from aerial robotic manipulation, and to serve as a complementary or alternative solution to existing ground testbed facilities. The system consists of a fully actuated multi-rotor (FAMR) that emulates the free-flying/free-floating dynamics of a target satellite, and a lightweight and compliant anthropomorphic dual arm system (LiCAS) that conducts the manipulation task, with the dynamics simulation implemented in Simscape Multibody. The human size and human-like kinematics of the LiCAS allow it to replicate the manipulation skills of human operators, whereas its very low weight (2.5 kg) makes it possible to mount it on lightweight industrial robotic arms used to reproduce the spacecraft motion. Two types of compliant interaction are considered. On the one hand, impedance control for the post-contact phase is implemented in the simulation layer, using the right arm to hold the target and maintain the relative pose with the base while the left arm conducts the manipulation task. On the other hand, collision detection and passive accommodation are evaluated on the physical testing system, relying on the mechanical joint compliance of the LiCAS dual arm. Collision reflexes between the free-floating FAMR and the compliant arm are also experimentally evaluated by applying the principle of momentum conservation on the multi-rotor. The proposed approach benefits from the similarities between space and aerial robotic manipulation in terms of dynamic modeling, presenting simulation and experimental results in an indoor testbed to validate the developed framework.
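The post-contact impedance behaviour mentioned above can be illustrated by a minimal Cartesian impedance law: the holding arm renders a virtual spring-damper between its end-effector pose and the grasp pose, so relative motion produces a restoring wrench rather than a rigid constraint. The gains and the translation-only simplification are illustrative assumptions.

```python
import numpy as np

K = np.diag([400.0, 400.0, 400.0])        # translational stiffness [N/m]
D = np.diag([40.0, 40.0, 40.0])           # damping [N s/m]

def impedance_wrench(x_des, x, v_des, v):
    """Commanded Cartesian force from pose/velocity error (translation only)."""
    return K @ (x_des - x) + D @ (v_des - v)

f = impedance_wrench(np.array([0.5, 0.0, 0.3]), np.array([0.52, 0.01, 0.3]),
                     np.zeros(3), np.array([0.05, 0.0, 0.0]))
print(f)   # restoring force opposing the drift of the held target
```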
{"title":"Aerial-ground testbed for simulating compliant bimanual on-orbit operations: From aerial to space robotic manipulation","authors":"Alejandro Suarez ,&nbsp;Antonio Gonzalez-Morgado ,&nbsp;Miguel Ruiz ,&nbsp;Alejandro Lucena ,&nbsp;Alessandro M. Giordano ,&nbsp;Anibal Ollero","doi":"10.1016/j.robot.2025.104927","DOIUrl":"10.1016/j.robot.2025.104927","url":null,"abstract":"<div><div>This paper presents the design and development of an aerial-ground robotics testbed for simulating bimanual manipulation operations on orbit relying on aerial robotics platforms, considering as representative use case the capture and maintenance of a non-cooperative free-floating satellite. The proposed testbed design is intended to facilitate the realization of simulations involving physical interaction, taking benefit of the technologies derived from aerial robotic manipulation, to be used as a complementary or alternative solution to existing ground testbed facilities. The system consists of a fully actuated multi-rotor (FAMR) that emulates the free flying/free floating dynamics of a target satellite, and a lightweight and compliant anthropomorphic dual arm system (LiCAS) to conduct the manipulation task, implementing the dynamics simulation in Simscape Multibody. The human-size and human-like kinematics of the LiCAS allow to replicate the manipulation skills of human operators, whereas its very low weight (2.5 kg) makes it possible to mount it on lightweight industrial robotic arms used to reproduce the spacecraft motion. Two types of compliant interactions are considered. On the one hand, impedance control for the post-contact phase is implemented in the simulation layer, using the right arm to hold the target and maintain the relative pose with the base while the left arm conducts the manipulation task. On the other hand, collision detection and passive accommodation is evaluated in the physical testing system relying on the mechanical joint compliance of the LiCAS dual arm. Collision reflexes between the free-floating FAMR and the compliant arm will be also experimentally evaluated by applying the principle of momentum conservation on the multi-rotor. The proposed approach takes benefit of the similarities between space and aerial robotic manipulation in terms of dynamic modeling, presenting simulation and experimental results in an indoor testbed to validate the developed framework.</div></div>","PeriodicalId":49592,"journal":{"name":"Robotics and Autonomous Systems","volume":"187 ","pages":"Article 104927"},"PeriodicalIF":4.3,"publicationDate":"2025-01-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143175501","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Tightly-coupled LiDAR-IMU-wheel odometry with an online neural kinematic model learning via factor graph optimization
IF 4.3 · CAS Tier 2 (Computer Science) · Q1 AUTOMATION & CONTROL SYSTEMS · Pub Date: 2025-01-28 · DOI: 10.1016/j.robot.2025.104929
Taku Okawara, Kenji Koide, Shuji Oishi, Masashi Yokozuka, Atsuhiko Banno, Kentaro Uno, Kazuya Yoshida
Environments lacking geometric features (e.g., tunnels and long straight corridors) are challenging for LiDAR-based odometry algorithms because LiDAR point clouds degenerate in such environments. For wheeled robots, a wheel kinematic model (i.e., wheel odometry) can improve the reliability of the odometry estimation. However, the kinematic model suffers from complex motions (e.g., wheel slippage, lateral movement), particularly in the case of skid-steering robots, because this type of robot rotates by skidding its wheels. Furthermore, these errors change nonlinearly when the wheel slippage is large (e.g., drifting) and are subject to terrain-dependent parameters. To simultaneously tackle point cloud degeneration and the kinematic model errors, we developed a LiDAR-IMU-wheel odometry algorithm incorporating online training of a neural network that learns the nonlinear kinematic model of wheeled robots. We propose to train the neural network online on a factor graph along with the robot states, allowing the learning-based kinematic model to adapt to the current terrain condition. The proposed method jointly solves the online training of the neural network and LiDAR-IMU-wheel odometry on a unified factor graph to retain the consistency of all those constraints. Through experiments, we first verified that the proposed network adapted to a changing environment, resulting in accurate odometry estimation across different environments. We then confirmed that the proposed odometry estimation algorithm was robust against point cloud degeneration and the nonlinearity of the kinematic model (e.g., large wheel slippage caused by drifting). The summary video is available at: https://www.youtube.com/watch?v=CvRVhdda7Cw
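As a rough illustration of the online-learned kinematic model, the sketch below maps left/right wheel velocities to a 2D body twist with a small network and refines it from twist measurements derived from the odometry estimate. In the paper this training is embedded as constraints in the factor graph; here it is shown as a plain PyTorch gradient step, and the network size and inputs are assumptions.

```python
import torch
import torch.nn as nn

class NeuralKinematics(nn.Module):
    """Wheel velocities (left, right) -> body twist (vx, vy, omega)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 3))

    def forward(self, wheel_vel):          # wheel_vel: (batch, 2)
        return self.net(wheel_vel)         # predicted twist: (batch, 3)

model = NeuralKinematics()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

def online_step(wheel_vel, measured_twist):
    """One online refinement step from new odometry-derived twist samples."""
    pred = model(wheel_vel)
    loss = nn.functional.mse_loss(pred, measured_twist)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

print(online_step(torch.randn(8, 2), torch.randn(8, 3)))   # toy data, loss value
```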
{"title":"Tightly-coupled LiDAR-IMU-wheel odometry with an online neural kinematic model learning via factor graph optimization","authors":"Taku Okawara ,&nbsp;Kenji Koide ,&nbsp;Shuji Oishi ,&nbsp;Masashi Yokozuka ,&nbsp;Atsuhiko Banno ,&nbsp;Kentaro Uno ,&nbsp;Kazuya Yoshida","doi":"10.1016/j.robot.2025.104929","DOIUrl":"10.1016/j.robot.2025.104929","url":null,"abstract":"<div><div>Environments lacking geometric features (e.g., tunnels and long straight corridors) are challenging for LiDAR-based odometry algorithms because LiDAR point clouds degenerate in such environments. For wheeled robots, a wheel kinematic model (i.e., wheel odometry) can improve the reliability of the odometry estimation. However, the kinematic model suffers from complex motions (e.g., wheel slippage, lateral movement) in the case of skid-steering robots particularly because this robot model rotates by skidding its wheels. Furthermore, these errors change nonlinearly when the wheel slippage is large (e.g., drifting) and are subject to terrain-dependent parameters. To simultaneously tackle point cloud degeneration and the kinematic model errors, we developed a LiDAR-IMU-wheel odometry algorithm incorporating online training of a neural network that learns the kinematic model of wheeled robots with nonlinearity. We propose to train the neural network online on a factor graph along with robot states, allowing the learning-based kinematic model to adapt to the current terrain condition. The proposed method jointly solves online training of the neural network and LiDAR-IMU-wheel odometry on a unified factor graph to retain the consistency of all those constraints. Through experiments, we first verified that the proposed network adapted to a changing environment, resulting in an accurate odometry estimation across different environments. We then confirmed that the proposed odometry estimation algorithm was robust against point cloud degeneration and nonlinearity (e.g., large wheel slippage by drifting) of the kinematic model. The summary video is available at: <span><span>https://www.youtube.com/watch?v=CvRVhdda7Cw</span><svg><path></path></svg></span></div></div>","PeriodicalId":49592,"journal":{"name":"Robotics and Autonomous Systems","volume":"187 ","pages":"Article 104929"},"PeriodicalIF":4.3,"publicationDate":"2025-01-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143176577","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0