
Latest Publications in Robotics and Autonomous Systems

A stable and safe method for two-leg balancing of a quadruped robot using a neural-network-based controller
IF 4.3 CAS Zone 2 (Computer Science) Q1 AUTOMATION & CONTROL SYSTEMS Pub Date: 2024-12-28 DOI: 10.1016/j.robot.2024.104901
Alessia Li Noce , Luca Patanè , Paolo Arena
This study presents a control strategy that uses a neural controller to achieve postural control in underactuated quadrupedal robots, such as balancing on two feet that are constrained to remain fixed. Such a configuration, as in climbing animals, is the most appropriate solution for traversing uneven, slippery terrain with few safe footholds. This is one of the most challenging poses to achieve and maintain under dynamic stability in a complex, high-order, underactuated robotic structure with two fixed contact points. The neural network learns by mimicking an optimal controller on a variation-based linearized model of the robot. A hybrid training strategy, formulated within a Linear Matrix Inequality (LMI) framework, was developed to minimize the classical accuracy index while incorporating additional constraints, based on Lyapunov theory, that ensure stability and safety. For the first time, a Lyapunov neural controller was successfully applied to an underactuated dynamic system to maintain critical stability conditions, extending the region of attraction of the desired equilibrium beyond that of the optimal base controller used as a teacher. The neural controller remains effective under disturbances and novel reference poses not encountered during training, showing strong generalization capabilities. Another key advantage is the significantly increased bandwidth of the neural control loop, several orders of magnitude higher than that of currently used recursive optimal controllers. The strategy is validated in a realistic dynamic simulation framework.
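A minimal sketch of the training idea, assuming a soft-penalty variant: the network imitates the teacher's torque commands while a quadratic Lyapunov candidate V(x) = xᵀPx penalizes transitions where V grows. The dimensions, the choice of P, the penalty weight, and the synthetic batches are all illustrative assumptions; the paper instead encodes the stability and safety conditions as hard LMI constraints.

```python
import torch
import torch.nn as nn

# Illustrative dimensions: 12-D state, 4-D torque command (assumptions).
policy = nn.Sequential(nn.Linear(12, 64), nn.Tanh(), nn.Linear(64, 4))
P = torch.eye(12)  # assumed quadratic Lyapunov candidate V(x) = x^T P x

def lyapunov_penalty(x, x_next):
    # Soft penalty on increases of V along sampled transitions; the paper
    # enforces the decrease condition via LMIs rather than a penalty.
    v = (x * (x @ P)).sum(dim=1)
    v_next = (x_next * (x_next @ P)).sum(dim=1)
    return torch.relu(v_next - v).mean()

opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
for _ in range(100):
    # Stand-in batch; in practice these would come from rollouts of the
    # optimal teacher controller on the linearized robot model.
    x = torch.randn(64, 12)
    u_teacher = torch.randn(64, 4)
    x_next = 0.95 * x
    loss = ((policy(x) - u_teacher) ** 2).mean() + 10.0 * lyapunov_penalty(x, x_next)
    opt.zero_grad(); loss.backward(); opt.step()
```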
{"title":"A stable and safe method for two-leg balancing of a quadruped robot using a neural-network-based controller","authors":"Alessia Li Noce ,&nbsp;Luca Patanè ,&nbsp;Paolo Arena","doi":"10.1016/j.robot.2024.104901","DOIUrl":"10.1016/j.robot.2024.104901","url":null,"abstract":"<div><div>This study presents a control strategy using a neural controller to achieve postural control in underactuated quadrupedal robots, such as balancing on two feet constrained to be fixed. Such a configuration, as in climbing animals, is the most appropriate solution for traversing uneven, slippery terrains with few safe footholds. This is one of the most challenging poses to achieve and maintain under dynamic stability in a complex, high-order, underactuated robotic structure with two fixed points. The neural network learns by mimicking an optimal controller on a variation-based linearized model of the robot. A hybrid training strategy, formulated within a Linear Matrix Inequality framework, was developed to minimize the classical accuracy index while incorporating additional constraints to ensure stability and safety based on Lyapunov theory.For the first time, a Lyapunov neural controller was successfully applied to an underactuated dynamic system to maintain critical stability conditions, extending the region of attraction for the desired equilibrium beyond that of the optimal base controller used as a teacher. The neural controller demonstrates its efficiency against disturbances and novel reference poses not encountered during training, showcasing impressive generalization capabilities. Another key advantage is the significantly increased bandwidth of the neural control loop, which is several orders of magnitude higher than that of currently used recursive optimal controllers. This strategy is validated using a realistic dynamic simulation framework.</div></div>","PeriodicalId":49592,"journal":{"name":"Robotics and Autonomous Systems","volume":"186 ","pages":"Article 104901"},"PeriodicalIF":4.3,"publicationDate":"2024-12-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143096426","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
POE-based kinematic calibration for serial robots using left-invariant error representation and decomposed iterative method
IF 4.3 CAS Zone 2 (Computer Science) Q1 AUTOMATION & CONTROL SYSTEMS Pub Date: 2024-12-27 DOI: 10.1016/j.robot.2024.104896
Qi Huang , Xuanchen Zhang , Hongtao Pan , Kun Lu , Yong Cheng
Most kinematic calibration algorithms overlook the impact of the pose error representation of the robot end-effector on calibration accuracy. In this paper, we demonstrate that the left-invariant error representation (LIEP) provides better pose accuracy than the right-invariant error representation (RIEP) in robot kinematic calibration. Standard product-of-exponentials (POE) kinematic calibration algorithms naturally satisfy the continuity and completeness of the error parameters but lack minimality. We introduce a novel minimal parameterization of the error parameters by analyzing the ineffective error updates during iteration. The number of identifiable parameters is determined to be 4r+2p+6, where r and p are the numbers of revolute and prismatic joints, respectively. In addition, we propose a decomposed iterative method to address the issue of the condition number of the identification Jacobian matrix being affected by the position data, thereby improving the convergence and robustness of the algorithm. Finally, we present a POE-based calibration algorithm using the left-invariant error representation and the decomposed iterative method, which satisfies completeness, continuity, and minimality. Several factors affecting calibration accuracy in POE-based kinematic calibration algorithms are discussed through simulations and experiments. Both simulations and experiments support our claims, showing that our algorithm outperforms existing methods in terms of orientation and position accuracy.
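For readers unfamiliar with the two error conventions, the sketch below contrasts them on SE(3) using matrix logarithms, under the common convention that the left-invariant error is expressed in the body frame; the function names and this convention choice are assumptions, not the paper's code. It also reproduces the 4r+2p+6 parameter count.

```python
import numpy as np
from scipy.linalg import logm

def left_invariant_error(T_meas, T_nom):
    # Body-frame (left-invariant) pose error: xi^ = log(T_nom^{-1} T_meas);
    # unchanged if both poses are left-multiplied by the same transform.
    return logm(np.linalg.inv(T_nom) @ T_meas)

def right_invariant_error(T_meas, T_nom):
    # Spatial-frame (right-invariant) pose error: xi^ = log(T_meas T_nom^{-1}).
    return logm(T_meas @ np.linalg.inv(T_nom))

def identifiable_params(r, p):
    # Identifiable kinematic error parameters per the paper: 4r + 2p + 6.
    return 4 * r + 2 * p + 6

T_nom = np.eye(4)
T_meas = np.eye(4); T_meas[0, 3] = 0.01      # 1 cm offset along x
print(np.round(left_invariant_error(T_meas, T_nom), 4))
print(identifiable_params(r=6, p=0))          # 6R arm: 4*6 + 2*0 + 6 = 30
```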
{"title":"POE-based kinematic calibration for serial robots using left-invariant error representation and decomposed iterative method","authors":"Qi Huang ,&nbsp;Xuanchen Zhang ,&nbsp;Hongtao Pan ,&nbsp;Kun Lu ,&nbsp;Yong Cheng","doi":"10.1016/j.robot.2024.104896","DOIUrl":"10.1016/j.robot.2024.104896","url":null,"abstract":"<div><div>Most kinematic calibration algorithms overlook the impact of the pose error representation of the robot end-effector on calibration accuracy. In this paper, we demonstrate that the left-invariant error representation (LIEP) provides better pose accuracy than the right-invariant error representation (RIEP) in robot kinematic calibration. Standard product-of-exponentials (POE) kinematic calibration algorithms naturally satisfy the continuity and completeness of the error parameters but lack minimality. We introduce a novel minimal parameterization for the error parameters by analyzing the ineffective error updates during iteration. The number of identifiable parameters is determined as <span><math><mrow><mn>4</mn><mi>r</mi><mo>+</mo><mn>2</mn><mi>p</mi><mo>+</mo><mn>6</mn></mrow></math></span>, where <span><math><mi>r</mi></math></span> and <span><math><mi>p</mi></math></span> represent the number of revolute and prismatic joints, respectively. In addition, we propose a decomposed iterative method to address the issue of the condition number in the identification Jacobian matrix being affected by the position data, thereby improving the convergence and robustness of algorithm. Finally, we present a POE-based calibration algorithm using the left-invariant error representation and decomposed iterative method, which satisfies completeness, continuity, and minimality. Several factors affecting calibration accuracy in POE-based kinematic calibration algorithms are discussed through simulations and experiments. Both simulations and experiments support our claims, showing that our algorithm outperforms existing methods in terms of orientation and position accuracy.</div></div>","PeriodicalId":49592,"journal":{"name":"Robotics and Autonomous Systems","volume":"186 ","pages":"Article 104896"},"PeriodicalIF":4.3,"publicationDate":"2024-12-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143096427","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
CMFFN: An efficient cross-modal feature fusion network for semantic segmentation
IF 4.3 CAS Zone 2 (Computer Science) Q1 AUTOMATION & CONTROL SYSTEMS Pub Date: 2024-12-26 DOI: 10.1016/j.robot.2024.104900
Yingjian Zhang, Ning Li, Jichao Jiao, Jiawen Ai, Zheng Yan, Yingchao Zeng, Tianxiang Zhang, Qian Li
Multimodal information fusion can improve the accuracy and robustness of semantic segmentation. However, the differences between modalities, the complementarity of the information they provide, and the effectiveness of the fusion process await further exploration. In addition, an extra modality significantly increases computational complexity, parameter count, and model training cost. To address these issues, we propose a novel end-to-end multimodal dual-stream semantic segmentation network, called the cross-modal filtering fusion network (CMFFN), which efficiently fuses features from the RGB and raw point cloud modalities. In particular, CMFFN requires neither mapping raw point clouds into image formats nor additional modality-alignment designs. For the point cloud branch, to alleviate the growth of computational resources with the number of input points, we propose a lightweight backbone network based on a sparse query attention mechanism, which balances feature extraction performance and training resource utilization on ModelNet40. For multimodal fusion, we introduce the cross-modal scoring feature selection and fusion module (CMSSF) with learnable parameters, which provides a general approach to reducing the expensive cost of cross-modal interaction. Benefiting from effective pruning and a fine-grained framework, CMFFN achieves up to 67.39% mIoU, a 3.64% gain over the latest state-of-the-art model CMNeXt on KITTI-360 raw data, while reducing parameters and computational workload by 14.4% and 15.6%, respectively. Furthermore, since CMFFN directly models raw 3D point cloud data, it produces visually higher-quality masks than the ground truth in certain aspects, such as locating relative spatial positions of targets and predicting distant objects.
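A hedged sketch of the scored-fusion idea behind CMSSF: learnable scores soft-gate the contribution of each modality's features. The gating form, layer sizes, and class name are my assumptions; the actual CMSSF design is not specified in the abstract.

```python
import torch
import torch.nn as nn

class ScoredFusion(nn.Module):
    """Illustrative stand-in for a cross-modal scoring fusion module:
    a learnable layer scores each modality and fuses by a soft gate."""
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Linear(2 * dim, 2)  # one score per modality

    def forward(self, f_rgb, f_pts):
        # Softmax over the two scores yields per-sample fusion weights.
        w = torch.softmax(self.score(torch.cat([f_rgb, f_pts], dim=-1)), dim=-1)
        return w[..., :1] * f_rgb + w[..., 1:] * f_pts

fuse = ScoredFusion(dim=256)
out = fuse(torch.randn(8, 256), torch.randn(8, 256))
print(out.shape)  # torch.Size([8, 256]) — fused features
```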
{"title":"CMFFN: An efficient cross-modal feature fusion network for semantic segmentation","authors":"Yingjian Zhang,&nbsp;Ning Li,&nbsp;Jichao Jiao,&nbsp;Jiawen Ai,&nbsp;Zheng Yan,&nbsp;Yingchao Zeng,&nbsp;Tianxiang Zhang,&nbsp;Qian Li","doi":"10.1016/j.robot.2024.104900","DOIUrl":"10.1016/j.robot.2024.104900","url":null,"abstract":"<div><div>Multimodal information fusion can improve the accuracy and robustness of semantic segmentation results. However, the differences between modalities, the complementarity of the information provided, and the effectiveness of the fusion process await further exploration. In addition, another modality significantly increases the computational complexity, parameters and model training cost. To address these issues, we propose a novel end-to-end multimodal dual-stream semantic segmentation network, called cross-modal filtering fusion network (CMFFN), which efficiently fuses features from RGB and raw point cloud modalities. In particular, our CMFFN does not require mapping raw point clouds into image formats or additional modality alignment designs. For the point cloud branch of CMFFN, to alleviate the problem of increasing computational resources with the number of input point clouds, we propose a lightweight backbone network based on a sparse query attention mechanism, which achieves a balance between feature extraction performance and training resource utilization on the ModelNet40. For multimodal fusion, we introduce the cross-modal scoring feature selection and fusion module (CMSSF) with learnable parameters, which provides a general approach to reduce expensive costs caused by modalities interaction. Benefiting from the effective pruning and fine-grained framework, CMFFN achieves up to 67.39% in mIoU with a 3. 64% gain compared to the latest state-of-the-art model CMNeXt on KITTI-360 raw data, while reducing parameters and computational workload by 14.4% and 15.6%, respectively. Furthermore, since CMFFN directly models raw 3D point cloud data, it produces visually higher quality masks than the ground truth in certain aspects, such as locating relative spatial positions of targets and predicting distant objects.</div></div>","PeriodicalId":49592,"journal":{"name":"Robotics and Autonomous Systems","volume":"186 ","pages":"Article 104900"},"PeriodicalIF":4.3,"publicationDate":"2024-12-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143096429","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
MBCNN-EATCFNet: A multi-branch neural network with efficient attention mechanism for decoding EEG-based motor imagery
IF 4.3 CAS Zone 2 (Computer Science) Q1 AUTOMATION & CONTROL SYSTEMS Pub Date: 2024-12-21 DOI: 10.1016/j.robot.2024.104899
Shiming Xiong, Li Wang, Guoxian Xia, Jiaxian Deng
The decoding performance of motor imagery (MI) from electroencephalogram (EEG) signals limits the practical applications of brain-computer interfaces (BCI). In this paper, we propose a decoding approach for MI classification tasks based on a multi-branch convolutional neural network (MBCNN) and a temporal convolution fusion network with an efficient attention mechanism (EATCFN). In MBCNN-EATCFNet, the combination of multi-branch and multi-scale structures captures spatiotemporal features at different scales. Additionally, to extract more discriminative temporal features from EEG signals, an efficient channel attention (ECA) block is integrated into the temporal convolutional network (TCN), so that the module captures bidirectional cross-channel interactions and long-term dependencies in the temporal sequence. Finally, to improve the adaptability of the model, a novel adaptive feature fusion method is proposed to weigh the importance of bidirectional features. Our model achieves classification accuracies of 81.34% (subject-dependent) and 69.46% (subject-independent) on BCI Competition IV dataset 2a, and 87.45% (subject-dependent) and 83.63% (subject-independent) on BCI Competition IV dataset 2b. On dataset 2a, compared to eight baseline models, our approach achieves average improvements of 10.15% (subject-dependent) and 4.34% (subject-independent); on dataset 2b, the average improvements are 2.76% (subject-dependent) and 1.55% (subject-independent). Furthermore, ablation experiments validate the effectiveness of each module. This model has significant potential for clinical and practical applications of MI-based BCI systems, promoting the further development of BCI technology.
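Under the standard reading, the ECA block here is the efficient channel attention of Wang et al. (2020); a 1-D version suitable for temporal EEG feature maps might look like the following sketch. The kernel size and its placement inside the TCN are assumptions.

```python
import torch
import torch.nn as nn

class ECA(nn.Module):
    """Efficient channel attention for 1-D feature maps: global average
    pooling, a cheap 1-D conv across channels, then a sigmoid gate."""
    def __init__(self, k=3):
        super().__init__()
        self.conv = nn.Conv1d(1, 1, kernel_size=k, padding=k // 2, bias=False)

    def forward(self, x):                     # x: (batch, channels, time)
        y = x.mean(dim=-1, keepdim=True)      # squeeze: per-channel descriptor
        y = self.conv(y.transpose(1, 2))      # local interaction across channels
        y = torch.sigmoid(y.transpose(1, 2))  # per-channel gate in (0, 1)
        return x * y                          # rescale the feature map

x = torch.randn(16, 32, 250)   # e.g. 32 feature channels, 250 time samples
print(ECA()(x).shape)          # torch.Size([16, 32, 250])
```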
{"title":"MBCNN-EATCFNet: A multi-branch neural network with efficient attention mechanism for decoding EEG-based motor imagery","authors":"Shiming Xiong,&nbsp;Li Wang,&nbsp;Guoxian Xia,&nbsp;Jiaxian Deng","doi":"10.1016/j.robot.2024.104899","DOIUrl":"10.1016/j.robot.2024.104899","url":null,"abstract":"<div><div>The decoding performance of motor imagery (MI) based on electroencephalogram (EEG) limits the practical applications of brain-computer interface (BCI). In this paper, we propose a decoding approach for MI classification tasks based on a multi-branch convolutional neural network (MBCNN) and a time convolution fusion network with efficient attention mechanism (EATCFN). In MBCNN-EATCFNet, the combination of multi-branch and multi-scale structures is employed to capture spatiotemporal features at different scales. Additionally, to extract more discriminative temporal features from EEG signals, by integrating ECA into TCN, the module captures integrated information of bidirectional cross-channel interactions and long-term dependencies in the temporal sequence. Finally, to improve the adaptability of the model, a novel adaptive feature fusion method is proposed to distinguish the importance of bidirectional features. Our proposed model achieves classification results of 81.34 % (subject-dependent) and 69.46 % (subject-independent) on the BCI Competition IV dataset 2a, and 87.45 % (subject-dependent) and 83.63 % (subject-independent) on the BCI Competition IV dataset 2b, respectively. On dataset 2a, compared to eight baseline models, our approach achieves an average improvement of 10.15 % (subject-dependent) and 4.34 % (subject-independent), respectively. On dataset 2b, it achieves an average improvement of 2.76 % (subject-dependent) and 1.55 % (subject-independent). Furthermore, ablation experiments have been conducted to validate the effectiveness of each module. This model has significant potential in the clinical and practical application of MI-based BCI systems, thus promoting the further development of BCI technology.</div></div>","PeriodicalId":49592,"journal":{"name":"Robotics and Autonomous Systems","volume":"185 ","pages":"Article 104899"},"PeriodicalIF":4.3,"publicationDate":"2024-12-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143103069","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Finding degenerate conics by in-plane rotations: A direct perspective-three-point solver
IF 4.3 CAS Zone 2 (Computer Science) Q1 AUTOMATION & CONTROL SYSTEMS Pub Date: 2024-12-20 DOI: 10.1016/j.robot.2024.104897
Yi Zhang, Baoqiong Wang, Yueqiang Zhang, Zechun Lin, Wenjun Chen, Xiaolin Liu, Qifeng Yu
The perspective-three-point (P3P) method, which estimates the pose of a calibrated camera from three 2D/3D point correspondences, plays a crucial role in various visual pipelines. While most methods simplify the problem directly to a quartic, some contemporary approaches solve it via degenerate conics, offering benefits in efficiency, duplicate-solution prevention, and correct rate. However, existing degenerate-conic-based methods rely on the indirect framework, which is acknowledged to be less efficient and stable than the direct framework. In addition, they formulate degenerate conics algebraically, lacking clear geometric interpretations. To this end, a direct, degenerate-conic-based P3P method is proposed in this work, which furthermore provides geometric interpretations of the degenerate conics. Concretely, it demonstrates that the conics degenerate under certain in-plane rotations, whose angles are determined by a cubic equation. The proposed method is carefully designed to circumvent numerically risky computations and the unnecessary computational burden of recovering geometrically invalid solutions. Both simulated and real experiments show that it excels in efficiency, noise robustness, and correct rate, while avoiding the duplicate solutions commonly encountered in quartic-based methods. Additionally, it achieves the highest efficiency when combined with RANSAC. Code is at https://github.com/Johnnyzyzy/our_p3p.
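The computational core suggested by the abstract is extracting the in-plane rotation angles from a cubic. A generic sketch, under the assumption that the cubic is solved for a tangent-like variable t = tan(θ) — the paper's actual parametrization may differ:

```python
import numpy as np

def rotation_angle_candidates(c3, c2, c1, c0):
    """Real roots of c3*t^3 + c2*t^2 + c1*t + c0 = 0, mapped to candidate
    in-plane rotation angles under the assumption t = tan(theta)."""
    roots = np.roots([c3, c2, c1, c0])
    real = roots[np.abs(roots.imag) < 1e-9].real  # keep numerically real roots
    return np.arctan(real)

# Example cubic with roots 1, 2, 3 -> three candidate angles.
print(rotation_angle_candidates(1.0, -6.0, 11.0, -6.0))
```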
{"title":"Finding degenerate conics by in-plane rotations: A direct perspective-three-point solver","authors":"Yi Zhang,&nbsp;Baoqiong Wang,&nbsp;Yueqiang Zhang,&nbsp;Zechun Lin,&nbsp;Wenjun Chen,&nbsp;Xiaolin Liu,&nbsp;Qifeng Yu","doi":"10.1016/j.robot.2024.104897","DOIUrl":"10.1016/j.robot.2024.104897","url":null,"abstract":"<div><div>The perspective-three-point (P3P) method, which estimates pose of a calibrated camera using three 2D/3D point correspondences, plays a crucial role in various visual pipelines. While most methods directly simplify the problem to a quartic, some contemporary approaches solve by degenerate conics, offering benefits in efficiency, preventing duplicate solutions and correct rate. However, existing degenerate-conic-based methods rely on the indirect framework, which is acknowledged to be less efficient and stable than the direct framework. In addition, they formulate degenerate conics algebraically, lacking clear geometric interpretations. To this end, a direct and degenerate-conic-based P3P method has been proposed in this work, which furthermore provides geometric interpretations to the degenerate conics. Concretely, it demonstrates that the conics undergo degeneration under certain in-plane rotations, angles of which are determined by a cubic. The proposed method is carefully designed to circumvent numerically risky computations and unnecessary computational burdens on recovering geometrically-invalid solutions. Both simulated and real experiments show that it excels in efficiency, noise robustness and correct rate, while avoiding the duplicate solutions commonly encountered in quartic-based methods. Additionally, it has the highest efficiency when combined with RANSAC. Code is at <span><span>https://github.com/Johnnyzyzy/our_p3p</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":49592,"journal":{"name":"Robotics and Autonomous Systems","volume":"185 ","pages":"Article 104897"},"PeriodicalIF":4.3,"publicationDate":"2024-12-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143102984","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Safe peg-in-hole automatic assembly using virtual guiding force: A deep reinforcement learning solution
IF 4.3 CAS Zone 2 (Computer Science) Q1 AUTOMATION & CONTROL SYSTEMS Pub Date: 2024-12-16 DOI: 10.1016/j.robot.2024.104894
Yujia Zang , Zitong Wang , Mingfeng Pan, Zhixuan Hou, Ziqin Ding, Mingyang Zhao
This paper proposes an automatic peg-in-hole assembly method based on deep reinforcement learning, incorporating a feedforward virtual guiding force with safety considerations. Unlike traditional approaches that involve positional trajectories, our method draws inspiration from human dragging movements and utilizes feedforward virtual guiding forces as actions. This enables the reinforcement learning agent to drag the end effector to achieve peg-in-hole assembly and effectively addresses the safety issues caused by random actions throughout the training and testing processes. Experimental validation involves testing scenarios with pegs and holes featuring varying chamfers and clearances, as well as different levels of positioning uncertainty and initial search positions. The experiments demonstrate that our approach not only tackles the safety challenges but also exhibits good performance in cylindrical peg-in-hole tasks with initial positioning uncertainty and different chamfer/clearance structures, achieving high success rates.
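One hedged way to picture "virtual guiding force as action" is an admittance-controlled end effector whose feedforward force term is set by the policy: damping keeps the motion bounded even for exploratory actions, which is one route to the safety property the abstract claims. The point-mass model, gains, and time step below are illustrative assumptions, not the paper's controller.

```python
import numpy as np

def admittance_step(x, x_dot, f_guide, f_contact, dt=0.002, M=1.0, D=50.0):
    """One step of a point-mass admittance model: the RL action is the
    feedforward guiding force f_guide, which 'drags' the tool while
    measured contact forces f_contact push back."""
    x_ddot = (f_guide + f_contact - D * x_dot) / M
    x_dot = x_dot + x_ddot * dt
    x = x + x_dot * dt
    return x, x_dot

x, x_dot = np.zeros(3), np.zeros(3)
action = np.array([0.0, 0.0, -5.0])   # policy output: push toward the hole
x, x_dot = admittance_step(x, x_dot, action, f_contact=np.zeros(3))
print(x, x_dot)
```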
{"title":"Safe peg-in-hole automatic assembly using virtual guiding force: A deep reinforcement learning solution","authors":"Yujia Zang ,&nbsp;Zitong Wang ,&nbsp;Mingfeng Pan,&nbsp;Zhixuan Hou,&nbsp;Ziqin Ding,&nbsp;Mingyang Zhao","doi":"10.1016/j.robot.2024.104894","DOIUrl":"10.1016/j.robot.2024.104894","url":null,"abstract":"<div><div>This paper proposes an automatic peg-in-hole assembly method based on deep reinforcement learning, incorporating a feedforward virtual guiding force with safety considerations. Unlike traditional approaches that involve positional trajectories, our method draws inspiration from human dragging movements and utilizes feedforward virtual guiding forces as actions. This enables the reinforcement learning agent to drag the end effector to achieve peg-in-hole assembly and effectively addresses the safety issues caused by random actions throughout the training and testing processes. Experimental validation involves testing scenarios with pegs and holes featuring varying chamfers and clearances, as well as different levels of positioning uncertainty and initial search positions. The experiments demonstrate that our approach not only tackles the safety challenges but also exhibits good performance in cylindrical peg-in-hole tasks with initial positioning uncertainty and different chamfer/clearance structures, achieving high success rates.</div></div>","PeriodicalId":49592,"journal":{"name":"Robotics and Autonomous Systems","volume":"185 ","pages":"Article 104894"},"PeriodicalIF":4.3,"publicationDate":"2024-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143102980","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Occupation-aware planning method for robotic monitoring missions in dynamic environments
IF 4.3 CAS Zone 2 (Computer Science) Q1 AUTOMATION & CONTROL SYSTEMS Pub Date: 2024-12-16 DOI: 10.1016/j.robot.2024.104892
Yaroslav Marchukov, Luis Montano
This paper presents a method for robotic monitoring missions in the presence of moving obstacles. Although the scenario map is known, the robot lacks information about the movement of dynamic obstacles during the monitoring mission. Numerous local planners have been developed in recent years for navigating highly dynamic environments. However, the absence of a global planner for these environments can result in unavoidable collisions or the inability to complete missions in densely populated areas, such as the scenario-monitoring mission considered here. This work addresses the development and evaluation of a global planner, MADA (Monitoring Avoiding Dynamic Areas), aimed at enhancing the deployment of robots in such challenging conditions. The robot plans and executes the mission using the proposed two-step approach. The first step selects the observation goal based on the environment's distribution and estimated monitoring costs. In the second step, the robot identifies areas with moving obstacles and plans paths that avoid densely occupied dynamic regions according to their estimated occupation. Quantitative and qualitative results from simulations and real-world experiments confirm that the proposed method allows the robot to effectively monitor most of the environment while avoiding densely occupied dynamic areas.
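A sketch of the second step's occupation-aware planning under a simple assumption: the traversal cost of each free cell is inflated by its estimated dynamic occupation, so shortest paths skirt densely occupied regions. The cost model 1 + w·occ and the grid Dijkstra are my stand-ins, not the MADA formulation.

```python
import heapq
import numpy as np

def occupancy_aware_path(grid, occ, start, goal, w=10.0):
    """Dijkstra over a 2-D grid; occ in [0, 1] is each cell's estimated
    dynamic occupation and inflates its traversal cost by w * occ."""
    h, wid = grid.shape
    dist, prev = {start: 0.0}, {}
    pq = [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            break
        for du, dv in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            v = (u[0] + du, u[1] + dv)
            if 0 <= v[0] < h and 0 <= v[1] < wid and grid[v] == 0:
                nd = d + 1.0 + w * occ[v]
                if nd < dist.get(v, np.inf):
                    dist[v], prev[v] = nd, u
                    heapq.heappush(pq, (nd, v))
    path = [goal]
    while path[-1] != start:
        path.append(prev[path[-1]])
    return path[::-1]

grid = np.zeros((20, 20), dtype=int)           # 0 = free, 1 = static obstacle
occ = np.zeros((20, 20)); occ[8:12, :] = 0.9   # a band of moving obstacles
print(occupancy_aware_path(grid, occ, (0, 0), (19, 19))[:5])
```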
{"title":"Occupation-aware planning method for robotic monitoring missions in dynamic environments","authors":"Yaroslav Marchukov,&nbsp;Luis Montano","doi":"10.1016/j.robot.2024.104892","DOIUrl":"10.1016/j.robot.2024.104892","url":null,"abstract":"<div><div>This paper presents a method for robotic monitoring missions in the presence of moving obstacles. Although the scenario map is known, the robot lacks information about the movement of dynamic obstacles during the monitoring mission. Numerous local planners have been developed in recent years for navigating highly dynamic environments. However, the absence of a global planner for these environments can result in unavoidable collisions or the inability to successfully complete missions in densely populated areas, such as a scenario monitoring in our case. This work addresses the development and evaluation of a global planner, <span><math><mrow><mi>M</mi><mi>A</mi><mi>D</mi><mi>A</mi></mrow></math></span> (Monitoring Avoiding Dynamic Areas), aimed at enhancing the deployment of robots in such challenging conditions. The robot plans and executes the mission using the proposed two-step approach. The first step involves selecting the observation goal based on the environment’s distribution and estimated monitoring costs. In the second step, the robot identifies areas with moving obstacles and obtains paths avoiding densely occupied dynamic regions based on their occupation. Quantitative and qualitative results based on simulations and on real-world experimentation, confirm that the proposed method allows the robot to effectively monitor most of the environment while avoiding densely occupied dynamic areas.</div></div>","PeriodicalId":49592,"journal":{"name":"Robotics and Autonomous Systems","volume":"185 ","pages":"Article 104892"},"PeriodicalIF":4.3,"publicationDate":"2024-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143102983","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Hierarchical probabilistic graphical models for multi-UAV cooperative pursuit in dynamic environments
IF 4.3 CAS Zone 2 (Computer Science) Q1 AUTOMATION & CONTROL SYSTEMS Pub Date: 2024-12-15 DOI: 10.1016/j.robot.2024.104890
Yixin Huang , Xiaojia Xiang , Chao Yan , Han Zhou , Dengqing Tang , Jun Lai
Pursuing a non-cooperative moving target with multiple unmanned aerial vehicles (multi-UAV) remains challenging, especially in complex environments with dynamic obstacles. This article proposes a self-organizing multi-UAV cooperative pursuit approach based on hierarchical probabilistic graphical models. First, we establish double-integrator kinematic models of the UAVs and provide a mathematical description of the pursuit task. Subsequently, a task-specific hierarchical probabilistic graphical model is designed for the autonomous decision-making of UAVs. In the model, local perception states and individual motion capabilities are integrated to estimate the probability distribution parameters of each node. To enhance pursuit efficiency, the pursuit task is segmented into multiple phases and a "dispersed encirclement" strategy, inspired by wolf-pack hunting behavior, is devised. Finally, numerical simulations and real-world experiments validate the scalability, adaptability, and robustness of the proposed approach.
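A purely geometric reading of the "dispersed encirclement" phase: spread the pursuers evenly on a circle around the estimated target position. The radius and equal angular spacing are assumptions for illustration; the paper derives the behavior from its probabilistic model rather than a fixed formation.

```python
import numpy as np

def encirclement_goals(target, n_uav, radius=5.0):
    """Evenly disperse n_uav pursuers on a circle of given radius
    around the estimated 2-D target position."""
    angles = 2 * np.pi * np.arange(n_uav) / n_uav
    return target + radius * np.stack([np.cos(angles), np.sin(angles)], axis=1)

print(encirclement_goals(np.array([10.0, 4.0]), n_uav=4))
```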
{"title":"Hierarchical probabilistic graphical models for multi-UAV cooperative pursuit in dynamic environments","authors":"Yixin Huang ,&nbsp;Xiaojia Xiang ,&nbsp;Chao Yan ,&nbsp;Han Zhou ,&nbsp;Dengqing Tang ,&nbsp;Jun Lai","doi":"10.1016/j.robot.2024.104890","DOIUrl":"10.1016/j.robot.2024.104890","url":null,"abstract":"<div><div>Pursuing a non-cooperative moving target through multiple unmanned aerial vehicles (multi-UAV) is still challenging, especially in complex environments with dynamic obstacles. This article proposes a self-organizing multi-UAV cooperative pursuit approach based on hierarchical probabilistic graphical models. Firstly, we establish the UAV double-integrator kinematic models and provide a mathematical description of the pursuit task. Subsequently, a task-specific hierarchical probabilistic graphical model is designed for autonomous decision-making of UAVs. In the model, local perception states and individual motion capabilities are integrated to estimate the probability distribution parameters for each node. To enhance pursuit efficiency, the pursuit task is segmented into multiple phases and a “dispersed encirclement” strategy is devised inspired by wolf pack hunting behavior. Finally, numerical simulations and real-world experiments are conducted to validate the scalability, adaptability, and robustness of the proposed approach.</div></div>","PeriodicalId":49592,"journal":{"name":"Robotics and Autonomous Systems","volume":"185 ","pages":"Article 104890"},"PeriodicalIF":4.3,"publicationDate":"2024-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143102977","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
3D3A class of parallel mechanisms: Design and performance evaluation
IF 4.3 CAS Zone 2 (Computer Science) Q1 AUTOMATION & CONTROL SYSTEMS Pub Date: 2024-12-13 DOI: 10.1016/j.robot.2024.104891
Savaş Yılmaz , Serdar Küçük , Metin Toz
This paper presents a study of 20 types of six-degree-of-freedom (DoF) Generalized Stewart–Gough Platform (GSP) mechanisms in the 3D3A class, characterized by three distance and three angular constraints between their base platforms (BPs) and moving platforms (MPs). The mechanisms are categorized into two groups based on the geometric shapes of their BPs and MPs. The platform shapes are determined by the types of joints and their connection configurations on the platforms, and the platforms are designed with regular polygonal shapes. The two groups are generated according to whether the geometric shapes of the two platforms are the same or different. The first group contains 10 mechanism types whose BP and MP shapes are identical. The second group also includes 10 types, each with two design variations, yielding 20 designs; in total, 30 mechanisms are designed. The reachable workspaces and dexterities of these manipulators are computed to assess the impact of design parameters such as the ratio between the BP and MP radii and the rotation of one end of the BP edges. The reachable workspace volume and the Global Conditioning Index (GCI) are selected as the performance metrics. The results for all mechanisms are presented, with the two best mechanisms compared against the classical 3×3 Stewart Platform. The study shows that the D₄³A₁³ type mechanism achieves a large reachable workspace while maintaining a high GCI value.
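The GCI is conventionally the workspace average of the reciprocal Jacobian condition number. A Monte Carlo estimate under that standard definition — the jacobian and pose-sampling callbacks here are placeholders, not the paper's mechanism models:

```python
import numpy as np

def gci_monte_carlo(jacobian, sample_pose, n=10000):
    """Estimate GCI = mean over the reachable workspace of 1/cond(J).
    `sample_pose` draws poses uniformly; `jacobian` returns J for a
    reachable pose or None otherwise."""
    vals = []
    for _ in range(n):
        J = jacobian(sample_pose())
        if J is not None:
            vals.append(1.0 / np.linalg.cond(J))
    return np.mean(vals) if vals else 0.0

# Toy stand-in: a random 6x6 Jacobian per pose, just to show the call.
rng = np.random.default_rng(0)
print(gci_monte_carlo(jacobian=lambda q: rng.normal(size=(6, 6)),
                      sample_pose=lambda: rng.uniform(-1, 1, size=6), n=200))
```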
{"title":"3D3A class of parallel mechanisms: Design and performance evaluation","authors":"Savaş Yılmaz ,&nbsp;Serdar Küçük ,&nbsp;Metin Toz","doi":"10.1016/j.robot.2024.104891","DOIUrl":"10.1016/j.robot.2024.104891","url":null,"abstract":"<div><div>This paper presents a study of 20 six-degrees-of-freedom (DoF) Generalized Stewart–Gough Platform (GSP) type mechanisms in 3D3A class. These mechanisms are characterized by three distance and three angular constraints between their base platforms (BPs) and moving platforms (MPs). The mechanisms are categorized into two groups based on the geometric shapes of their BPs and MPs. The platform shapes are determined by the types of joints and their connection configurations on the platforms. The platforms are designed with regular polygonal shapes. Two groups are generated according to whether the geometric shapes of the platforms are the same or different. There are 10 types of mechanisms in the first group, where the BP and the MP shapes of each are the same. The second group also includes 10 types of mechanisms, each with two design variations, resulting in 20 designs. Thus, a total of 30 mechanisms are designed. The reachable workspaces and dexterities of these manipulators are computed to assess the impact of design parameters such as the ratio between the BP and MP radii and the rotation of one end of the BP edges. The reachable workspace volume and the Global Conditioning Index (GCI) are selected as the performance metrics. The results of all mechanisms are presented in the paper, with a comparison of the two best mechanisms against the classical 3×3 Stewart Platform. The study shows that the <span><math><mrow><msubsup><mi>D</mi><mn>4</mn><mn>3</mn></msubsup><mspace></mspace><msubsup><mi>A</mi><mn>1</mn><mn>3</mn></msubsup></mrow></math></span> type mechanism achieves a large reachable workspace while maintaining a high GCI value.</div></div>","PeriodicalId":49592,"journal":{"name":"Robotics and Autonomous Systems","volume":"185 ","pages":"Article 104891"},"PeriodicalIF":4.3,"publicationDate":"2024-12-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143102632","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Hybrid brain-computer interfacing paradigm for assistive robotics
IF 4.3 CAS Zone 2 (Computer Science) Q1 AUTOMATION & CONTROL SYSTEMS Pub Date: 2024-12-13 DOI: 10.1016/j.robot.2024.104893
Ahona Ghosh , Lidia Ghosh , Sriparna Saha
Human-computer interfaces controlled by eye movements and brain signals have recently seen widespread use as intelligent rehabilitation aids. Although some exciting literature exists on robot-assisted physical therapy enhancing the quality of neurological rehabilitation, most works support only a small range of the control commands required for real-time robot navigation. In this paper, a hybrid brain-computer interfacing system to control a robotic arm is proposed, in which an electrooculography (EOG) sensor selects the joint of the robot and an electroencephalography (EEG) sensor controls the movement of the selected joint in the required direction. The proposed technique, which applies interquartile-range-based data augmentation to extracted time-domain features, effectively handles outliers and prevents overfitting. A novel variant of a two-dimensional convolutional neural network is then employed to classify the EOG signals, while a phase-sensitive common spatial pattern induced linear discriminant analysis classifies the EEG data. The classifiers exhibit satisfactory performance, with 98.45% and 96.61% accuracy for EOG and EEG respectively, enabling an online real-time robot navigation system. The system integrates EEG-based signals into the robotic control loop, detecting errors in the end-effector trajectory in real time through error-related potential signals and confirming task completion or target attainment via P300 detection. The proposed framework yields an average steady-state error, peak overshoot, and settling time of 0.036, 2.5%, and 30 s, respectively. Moreover, the average target-reaching rate is 95%, making it a suitable choice for real-time rehabilitative platforms in prosthetics design.
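One plausible reading of the interquartile-range-based preprocessing is Tukey-fence clipping of per-channel outliers before augmentation; the authors' exact scheme may differ, so treat this as a sketch of the general technique.

```python
import numpy as np

def iqr_clip(X, k=1.5):
    """Clamp per-channel outliers to the Tukey fences
    [Q1 - k*IQR, Q3 + k*IQR], computed along the time axis."""
    q1, q3 = np.percentile(X, [25, 75], axis=-1, keepdims=True)
    iqr = q3 - q1
    return np.clip(X, q1 - k * iqr, q3 + k * iqr)

eog = np.random.randn(4, 1000)   # 4 EOG channels, 1000 samples (toy data)
eog[0, 500] = 40.0               # inject a blink-like outlier
print(iqr_clip(eog)[0, 500])     # clamped toward the channel's upper fence
```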
{"title":"Hybrid brain-computer interfacing paradigm for assistive robotics","authors":"Ahona Ghosh ,&nbsp;Lidia Ghosh ,&nbsp;Sriparna Saha","doi":"10.1016/j.robot.2024.104893","DOIUrl":"10.1016/j.robot.2024.104893","url":null,"abstract":"<div><div>Human-computer interfacing, which can be controlled by eye movements and brain signals, is of widespread use in recent times as an intelligent rehabilitation aid. Although some exciting literature exists on robot-assisted physical therapy, enhancing the quality of neurological rehabilitation, most can only focus on a smaller range of control commands required for real-time robot navigation. In this paper, a hybrid brain-computer interfacing system to control a robotic arm has been proposed where an Electrooculography (EOG) and an Electroencephalography (EEG) sensor, respectively, have been used to select the joints of the robot and to control the movement of the selected joint in the required direction. The proposed technique, which applies interquartile range-based data augmentation to extracted time-domain features, can effectively deal with the outliers and prevent overfitting. Afterwards, a novel variant of the Two-Dimensional Convolutional Neural Network is employed for the classification of EOG signals. On the other hand, a Phase sensitive Common Spatial Pattern induced Linear Discriminant Analysis is utilized for classifying the EEG data. The classifiers exhibit satisfactory performance with 98.45 % and 96.61 % accuracy for EOG and EEG, respectively, leading to the implementation of an online robot navigation system in real-time. The system integrates EEG-based signals into the robotic control loop, enabling real-time error detection in the end-effector trajectory of the Robotic arm through Error Related Potential signals and confirming task completion or target attainment via P300 detection. The proposed framework yields an average steady-state error, peak overshoot, and settling time of 0.036, 2.5 %, and 30 s, respectively. Moreover, the average target reaching rate is 95 %, making it a suitable choice for real-time rehabilitative platforms in prosthetics design.</div></div>","PeriodicalId":49592,"journal":{"name":"Robotics and Autonomous Systems","volume":"185 ","pages":"Article 104893"},"PeriodicalIF":4.3,"publicationDate":"2024-12-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143102981","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0