Trajectory Tracking Control of Fixed-Wing Hybrid Aerial Underwater Vehicle Subject to Wind and Wave Disturbances
Pub Date: 2024-06-20 | DOI: 10.1007/s10846-024-02099-y
Junping Li, Yufei Jin, Rui Hu, Yulin Bai, Di Lu, Zheng Zeng, Lian Lian
The hybrid aerial underwater vehicle (HAUV), which can operate both in the air and underwater, offers great convenience for aerial and underwater exploration, and the fixed-wing HAUV (FHAUV) has time, space, and cost advantages for future large-scale applications. However, the large difference between the aerial and underwater environments poses a control challenge, especially during the air/water transition. For FHAUV control, there is a lack of research on the phenomena and problems caused by the large changes of the air/water transition, and the effects of wind, waves, and other factors and conditions on motion control have not been investigated. This paper presents the first control study of these issues. A motion model of the FHAUV is developed that includes the effects of wind and wave disturbances. The paper then improves a cascade gain-scheduling (CGS) PID for the different media (air and water) and proposes a cascade state feedback (CSF) control strategy to address the convergence problem of FHAUV control caused by the large speed change in the air/water transition. In comparisons of the two control schemes across various tracking cases, including different trajectory slopes, reference speeds, and wind and wave disturbances, CSF shows better control performance, convergence rate, and robustness. The key factors and conditions of the air/water transition are investigated, and the critical relations and feasible domains of trajectory slope and reference speed that the FHAUV must satisfy to successfully exit the water and enter the air are obtained: the critical slope decreases as the reference speed increases, and the feasible domain of CSF is larger than that of CGS, revealing that CSF is superior to CGS for exiting the water.
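To make the gain-scheduling idea concrete, the sketch below shows a PID loop whose gains are switched between an air set and a water set according to the sensed medium. This is an illustrative reconstruction only: the gain values and the `medium` flag are hypothetical, not the authors' CGS design.

```python
# Minimal gain-scheduled PID sketch: gains switch with the sensed medium.
# Illustrative only -- gain values and scheduling signal are hypothetical.

class GainScheduledPID:
    # Hypothetical gain sets for the two media (air vs. water).
    GAINS = {
        "air":   dict(kp=2.0, ki=0.1, kd=0.5),
        "water": dict(kp=8.0, ki=0.4, kd=2.0),  # denser medium -> stiffer gains
    }

    def __init__(self, dt: float):
        self.dt = dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error: float, medium: str) -> float:
        g = self.GAINS[medium]          # schedule gains on the current medium
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return g["kp"] * error + g["ki"] * self.integral + g["kd"] * derivative

pid = GainScheduledPID(dt=0.01)
u = pid.update(error=0.3, medium="water")  # control command for one step
```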
{"title":"Trajectory Tracking Control of Fixed-Wing Hybrid Aerial Underwater Vehicle Subject to Wind and Wave Disturbances","authors":"Junping Li, Yufei Jin, Rui Hu, Yulin Bai, Di Lu, Zheng Zeng, Lian Lian","doi":"10.1007/s10846-024-02099-y","DOIUrl":"https://doi.org/10.1007/s10846-024-02099-y","url":null,"abstract":"<p>The hybrid aerial underwater vehicle (HAUV) could operate in the air and underwater might provide a great convenience for aerial and underwater exploration, and the fixed-wing HAUV (FHAUV) has time, space and cost advantages in future large-scale applications, while the large difference between the aerial and underwater environments is a challenge to control, especially in the air/water transition. However, for FHAUV control, there is a lack of research on phenomena or problems caused by large changes in the air/water transition. In addition, the effects of wind, wave, other factors and conditions on motion control are not investigated. This paper presents the first control study on the above issues. The motion model of FHAUV is developed, with the effects of wind and wave disturbances. Then, this paper improves a cascade gain scheduling (CGS) PID for different media environments (air and water) and proposes a cascade state feedback (CSF) control strategy to address the convergence problem of FHAUV control caused by large speed change in the air/water transition. In the comparisons of the two control schemes in various tracking cases including trajectory slopes, reference speeds, wind and wave disturbances, CSF has a better control effect, convergence rate and robustness; the key factors and conditions of the air/water transition are investigated, the critical relations and feasible domains of the trajectory slopes and reference speeds that the FHAUV must meet to successfully exit the water and enter the air are obtained, the critical slope decreases as the reference speed increases, and the feasible domain of CSF is larger than that of CGS, revealing that CSF is superior than CGS for exiting the water.</p>","PeriodicalId":54794,"journal":{"name":"Journal of Intelligent & Robotic Systems","volume":"94 1","pages":""},"PeriodicalIF":3.3,"publicationDate":"2024-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141504661","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Dynamic Modelling and Optimal Sliding Mode Control of the Wearable Rehabilitative Bipedal Cable Robot with 7 Degrees of Freedom
Pub Date: 2024-06-20 | DOI: 10.1007/s10846-024-02122-2
A. Sajedifar, M. H. Korayem, F. Allahverdi
Although robot-assisted physiotherapy has gained increasing attention in recent years, wearable rehabilitation robots for the lower limbs suffer reduced efficiency because additional equipment and motors located at the centers of the joints increase complexity and the load on disabled patients. This paper proposes a novel rehabilitation approach that removes the motors and equipment from the joint centers and places them on a fixed platform, using cable-based power transmission. A model of a bipedal robot driven by 14 cables with 7 degrees of freedom is used to model the corresponding lower-limb rehabilitation robot. The dynamic equations of the robot are derived using the Euler-Lagrange method. The sliding mode control technique provides accurate tracking of desired trajectories, ensuring smoothness despite disturbances and reducing tracking errors; this helps prevent patients from falling and supports them in maintaining balance during rehabilitative exercises. To ensure that the cables exert positive tension, the sliding mode controller is combined with quadratic programming optimization, minimizing path error while constraining the controller input torque to be non-negative. The performance of the proposed controller was assessed over several control gains, with K = 10 identified as the most effective. The feasibility of this approach is demonstrated by numerical results in MATLAB simulation, which show RMSE values for the right and left hip and thigh angles of 0.29, 0.37, 0.31, and 0.44, respectively, verifying an improved rehabilitation process. In addition, the correlation coefficient between the Adams and MATLAB simulation results for motor torque was 0.98, indicating a high degree of agreement between the two simulations.
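The positive-tension requirement maps naturally onto a non-negative least-squares problem: allocate cable tensions t ≥ 0 so that the cable direction matrix maps them to the wrench requested by the sliding-mode law. Below is a minimal sketch using SciPy's `nnls`; the planar three-cable geometry is a toy assumption, not the paper's 14-cable robot.

```python
import numpy as np
from scipy.optimize import nnls

# Toy tension allocation: solve A @ t ~= f_desired subject to t >= 0,
# where the columns of A are unit vectors along each cable at the
# attachment point. Geometry is a hypothetical planar 3-cable example,
# not the paper's 14-cable robot.
cable_dirs = np.array([
    [np.cos(a), np.sin(a)] for a in (0.6, 1.8, 2.9)  # cable angles (rad)
]).T                                                  # shape (2, 3)

f_desired = np.array([1.0, 4.0])   # wrench requested by the SMC law (N)

tensions, residual = nnls(cable_dirs, f_desired)  # enforces tensions >= 0
print("tensions:", tensions, "residual:", residual)
```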
{"title":"Dynamic Modelling and Optimal Sliding Mode Control of the Wearable Rehabilitative Bipedal Cable Robot with 7 Degrees of Freedom","authors":"A. Sajedifar, M. H. Korayem, F. Allahverdi","doi":"10.1007/s10846-024-02122-2","DOIUrl":"https://doi.org/10.1007/s10846-024-02122-2","url":null,"abstract":"<p>Although robot-assisted physiotherapy has gained increasing attention in recent years, the use of wearable rehabilitation robots for lower limbs has shown reduced efficiency due to additional equipment and motors located at the center of the joint, increasing complexity and load on disabled patients. This paper proposes a novel rehabilitation approach by eliminating motors and equipment from the center of joints and placing them on a fixed platform using cable-based power transmission. A proposed model of a 14 cable-driven bipedal robot with 7 degrees of freedom has been used to model a lower limb rehabilitation robot corresponding to it. The dynamic equations of the robot are derived using the Euler-Lagrange method. The sliding mode control technique is utilized to offer accurate control for tracking desired trajectories, ensuring smoothness despite disturbances, and reducing tracking errors. This approach is employed to help prevent patients from falling and support them in maintaining balance during rehabilitative exercises. To ensure that cables exert positive tension, the sliding mode controller was combined with quadratic programming optimization, minimizing path error while constraining the controller input torque to be non-negative. The performance of the proposed controller was assessed by considering several control gains resulting in K = 10 identified as the most effective one. The feasibility of this approach to rehabilitation is demonstrated by the numerical results in MATLAB simulation, which show that the RMSE amount of the right and left hip and thigh angles are 0.29, 0.37, 0.31, and 0.44, respectively which verified an improved rehabilitation process. Also, the correlation coefficient between the Adams and MATLAB simulation results for motor torque was found to be 0.98, indicating a high degree of correlation between the two simulation results.</p>","PeriodicalId":54794,"journal":{"name":"Journal of Intelligent & Robotic Systems","volume":"86 1","pages":""},"PeriodicalIF":3.3,"publicationDate":"2024-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141504659","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Graph-Based vs. Error State Kalman Filter-Based Fusion of 5G and Inertial Data for MAV Indoor Pose Estimation
Pub Date: 2024-06-07 | DOI: 10.1007/s10846-024-02111-5
Meisam Kabiri, Claudio Cimarelli, Hriday Bavle, Jose Luis Sanchez-Lopez, Holger Voos
5G New Radio Time of Arrival (ToA) data has the potential to revolutionize indoor localization for micro aerial vehicles (MAVs). However, its performance under varying network setups, especially when combined with IMU data for real-time localization, has not yet been fully explored. In this study, we develop an Error State Kalman Filter (ESKF) and a Pose Graph Optimization (PGO) approach to address this gap. We systematically evaluate the performance of the derived approaches for real-time MAV localization in realistic scenarios with 5G base stations in Line-of-Sight (LOS), demonstrating the potential of 5G technologies in this domain. To experimentally test and compare our localization approaches, we augment the EuRoC MAV visual-inertial odometry benchmark dataset with simulated yet highly realistic 5G ToA measurements. Our experimental results comprehensively assess the impact of varying network setups, including the number of base stations and the network configuration, on ToA-based MAV localization performance. The findings show promising results for seamless and robust localization using 5G ToA measurements, achieving an accuracy of 15 cm over the entire trajectory within the graph-based framework with five 5G base stations, and an accuracy of up to 34 cm for ESKF-based localization. Additionally, we measure the run time of both algorithms and show that both are fast enough for real-time implementation.
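At the heart of both estimators is the ToA range update: the predicted range to a base station, its Jacobian with respect to position, and the usual gain/correction step. The sketch below shows this update for a plain EKF state (the paper's error-state injection and reset are omitted); all numeric values are placeholders.

```python
import numpy as np

def toa_range_update(x, P, p_bs, rho_meas, sigma_rho):
    """One EKF update with a ToA-derived range rho to base station p_bs.

    x: state whose first three entries are position (m); P: covariance.
    Standard range update sketch; the paper's error-state formulation
    adds an injection/reset step not shown here.
    """
    p = x[:3]
    delta = p - p_bs
    rho_pred = np.linalg.norm(delta)              # predicted range h(x)
    H = np.zeros((1, x.size))
    H[0, :3] = delta / rho_pred                   # d(range)/d(position)
    S = H @ P @ H.T + sigma_rho**2                # innovation covariance
    K = P @ H.T / S                               # Kalman gain (1-D measurement)
    x = x + (K * (rho_meas - rho_pred)).ravel()   # state correction
    P = (np.eye(x.size) - K @ H) @ P              # covariance update
    return x, P

x = np.zeros(9); P = np.eye(9)                    # toy 9-D state
x, P = toa_range_update(x, P, p_bs=np.array([10., 0., 3.]),
                        rho_meas=10.6, sigma_rho=0.3)
```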
{"title":"Graph-Based vs. Error State Kalman Filter-Based Fusion of 5G and Inertial Data for MAV Indoor Pose Estimation","authors":"Meisam Kabiri, Claudio Cimarelli, Hriday Bavle, Jose Luis Sanchez-Lopez, Holger Voos","doi":"10.1007/s10846-024-02111-5","DOIUrl":"https://doi.org/10.1007/s10846-024-02111-5","url":null,"abstract":"<p>5G New Radio Time of Arrival (ToA) data has the potential to revolutionize indoor localization for micro aerial vehicles (MAVs). However, its performance under varying network setups, especially when combined with IMU data for real-time localization, has not been fully explored so far. In this study, we develop an Error State Kalman Filter (ESKF) and a Pose Graph Optimization (PGO) approach to address this gap. We systematically evaluate the performance of the derived approaches for real-time MAV localization in realistic scenarios with 5G base stations in Line-Of-Sight (LOS), demonstrating the potential of 5G technologies in this domain. In order to experimentally test and compare our localization approaches, we augment the EuRoC MAV benchmark dataset for visual-inertial odometry with simulated yet highly realistic 5G ToA measurements. Our experimental results comprehensively assess the impact of varying network setups, including varying base station numbers and network configurations, on ToA-based MAV localization performance. The findings show promising results for seamless and robust localization using 5G ToA measurements, achieving an accuracy of 15 cm throughout the entire trajectory within a graph-based framework with five 5G base stations, and an accuracy of up to 34 cm in the case of ESKF-based localization. Additionally, we measure the run time of both algorithms and show that they are both fast enough for real-time implementation.</p>","PeriodicalId":54794,"journal":{"name":"Journal of Intelligent & Robotic Systems","volume":"49 1","pages":""},"PeriodicalIF":3.3,"publicationDate":"2024-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141552210","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Towards Reliable Identification and Tracking of Drones Within a Swarm
Pub Date: 2024-06-05 | DOI: 10.1007/s10846-024-02115-1
Nisha Kumari, Kevin Lee, Jan Carlo Barca, Chathurika Ranaweera
Drone swarms consist of multiple drones that can together achieve tasks that individual drones cannot, such as search and recovery or surveillance over a large area. A swarm’s internal structure typically consists of multiple drones operating autonomously. Reliable detection and tracking of the swarm and of individual drones enables a greater understanding of a swarm’s behaviour and movement, which in turn allows better coordination, collision avoidance, and performance monitoring of the individual drones. Motivated by this need for a deeper understanding of swarm dynamics, the research presented in this paper proposes a deep learning-based approach for reliable detection and tracking of individual drones within a swarm in real time, using stereo-vision cameras. The proposed solution provides a precise tracking system and accounts for the highly dense and dynamic behaviour of drones. The approach is evaluated in both sparse and dense swarms in a variety of configurations. Its accuracy and efficiency were analysed through a series of comparative experiments, which demonstrate reasonable accuracy in detecting and tracking drones within a swarm.
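The stereo-vision element reduces, at its core, to triangulating a drone's 3D position from the disparity between the two views. A minimal pinhole-camera sketch follows; the focal length, baseline, and pixel coordinates are placeholder values, not the paper's rig.

```python
# Pinhole stereo triangulation: depth Z = f * B / d, where d is disparity.
# Placeholder intrinsics -- not the paper's camera parameters.
f_px = 800.0           # focal length in pixels
baseline = 0.12        # stereo baseline in metres
cx, cy = 640.0, 360.0  # principal point

def drone_position(u_left, v_left, u_right):
    d = u_left - u_right               # disparity in pixels
    Z = f_px * baseline / d            # depth from disparity
    X = (u_left - cx) * Z / f_px       # back-project to the camera frame
    Y = (v_left - cy) * Z / f_px
    return X, Y, Z

print(drone_position(700.0, 400.0, 660.0))  # ~2.4 m ahead of the camera
```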
{"title":"Towards Reliable Identification and Tracking of Drones Within a Swarm","authors":"Nisha Kumari, Kevin Lee, Jan Carlo Barca, Chathurika Ranaweera","doi":"10.1007/s10846-024-02115-1","DOIUrl":"https://doi.org/10.1007/s10846-024-02115-1","url":null,"abstract":"<p>Drone swarms consist of multiple drones that can achieve tasks that individual drones can not, such as search and recovery or surveillance over a large area. A swarm’s internal structure typically consists of multiple drones operating autonomously. Reliable detection and tracking of swarms and individual drones allow a greater understanding of the behaviour and movement of a swarm. Increased understanding of drone behaviour allows better coordination, collision avoidance, and performance monitoring of individual drones in the swarm. The research presented in this paper proposes a deep learning-based approach for reliable detection and tracking of individual drones within a swarm using stereo-vision cameras in real time. The motivation behind this research is in the need to gain a deeper understanding of swarm dynamics, enabling improved coordination, collision avoidance, and performance monitoring of individual drones within a swarm. The proposed solution provides a precise tracking system and considers the highly dense and dynamic behaviour of drones. The approach is evaluated in both sparse and dense networks in a variety of configurations. The accuracy and efficiency of the proposed solution have been analysed by implementing a series of comparative experiments that demonstrate reasonable accuracy in detecting and tracking drones within a swarm.</p>","PeriodicalId":54794,"journal":{"name":"Journal of Intelligent & Robotic Systems","volume":"43 1","pages":""},"PeriodicalIF":3.3,"publicationDate":"2024-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141256760","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Image moment-based visual positioning and robust tracking control of ultra-redundant manipulator
Pub Date: 2024-05-30 | DOI: 10.1007/s10846-024-02103-5
Zhongcan Li, Yufei Zhou, Mingchao Zhu, Yongzhi Chu, Qingwen Wu
Image moment features can describe general target patterns and have good decoupling properties. However, the image moment features that control the camera’s rotational motion around the x-axis and y-axis depend mainly on the target image itself. This paper advocates an image moment-based visual positioning and robust tracking control method for an ultra-redundant manipulator. First, six image moment features for controlling camera motion around the x-axis and the y-axis are proposed, together with a novel method for selecting image features. For tracking a moving target, a Kalman filter combined with adaptive fuzzy sliding mode control is proposed, which estimates the changes in image features caused by the target’s motion online and compensates for estimation errors. Finally, an experimental system based on a LabVIEW Real-Time system and an ultra-redundant manipulator is used to verify the real-time performance and practicality of the algorithm. Experimental results illustrate the validity of the image features and the tracking method.
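For intuition, the low-order moments underlying such features can be computed directly from a target mask; the sketch below uses OpenCV's `cv2.moments` and builds a classic normalized-moment feature vector. The features shown are a textbook choice, not necessarily the six rotational features proposed in the paper.

```python
import cv2
import numpy as np

# Toy binary target mask (a filled rectangle stands in for the target).
mask = np.zeros((480, 640), dtype=np.uint8)
cv2.rectangle(mask, (300, 200), (380, 260), 255, thickness=-1)

m = cv2.moments(mask, binaryImage=True)
area = m["m00"]                                      # zeroth moment: area
xg, yg = m["m10"] / m["m00"], m["m01"] / m["m00"]    # centroid

# Classic IBVS feature vector built from moments: normalized area and
# centroid -- a textbook choice, not the paper's six rotation features.
Z_star, a_star = 1.0, area        # desired depth and area (placeholders)
a_n = Z_star * np.sqrt(a_star / area)
features = np.array([a_n * xg / 640.0, a_n * yg / 480.0, a_n])
print(features)
```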
{"title":"Image moment-based visual positioning and robust tracking control of ultra-redundant manipulator","authors":"Zhongcan Li, Yufei Zhou, Mingchao Zhu, Yongzhi Chu, Qingwen Wu","doi":"10.1007/s10846-024-02103-5","DOIUrl":"https://doi.org/10.1007/s10846-024-02103-5","url":null,"abstract":"<p>Image moment features can describe more general target patterns and have good decoupling properties. However, the image moment features that control the camera’s rotation motion around the x-axis and y-axis mainly depend on the target image itself. In this paper, the ultra-redundant manipulator visual positioning and robust tracking control method based on the image moments are advocated.First, six image moment features used to control camera motion around the x-axis and around the y-axis are proposed. And then, a novel method is proposed to use to select image features. For tracking a moving target, a kalman filter combined with adaptive fuzzy sliding mode control method is proposed to achieve tracking control of moving targets, which can estimate changes in image features caused by the target’s motion on-line and compensate for estimation errors. Finally, the experimental system based on Labview-RealTime system and ultra-redundant manipulator is used to verify the real-time performance and practicability of the algorithm. Experimental results are presented to illustrate the validity of the image features and tracking method.</p>","PeriodicalId":54794,"journal":{"name":"Journal of Intelligent & Robotic Systems","volume":"33 1","pages":""},"PeriodicalIF":3.3,"publicationDate":"2024-05-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141195119","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Very Low Level Flight Rules for Manned and Unmanned Aircraft Operations
Pub Date: 2024-05-29 | DOI: 10.1007/s10846-024-02084-5
Anna Konert, Piotr Kasprzyk
An analysis of the development of legal regulations regarding unmanned civil aviation leads to the conclusion that the current air traffic rules are among the key issues requiring amendment. Are drones allowed to fly at any height? Can drones fly freely over a person’s house or garden, several meters above the ground? What is the minimum allowable height for drone flights? The study’s method consisted of content analysis of the existing legislation; current doctrines were confronted with the existing regulations, documents, materials, safety reports, and statistics. The results show that the existing air traffic rules for aircraft operations performed by manned and unmanned aviation at very low heights are decidedly practical in nature. First, in most countries violations of air traffic rules are prohibited acts subject to criminal penalty. Second, determining the principles of air traffic for air operations is crucial for determining legally permissible interference with property ownership. Urban air mobility is outside the scope of this research.
{"title":"Very Low Level Flight Rules for Manned and Unmanned Aircraft Operations","authors":"Anna Konert, Piotr Kasprzyk","doi":"10.1007/s10846-024-02084-5","DOIUrl":"https://doi.org/10.1007/s10846-024-02084-5","url":null,"abstract":"<p>An analysis of the development of legal regulations regarding unmanned civil aviation leads to the conclusion that the current air traffic rules are among the key issues that require amending. Are drones allowed to fly at any height? Can drones fly freely over a person’s house or garden, several meters above the ground? What is the minimum allowable height for drone flights? The method of study consisted of content analysis of the existing legislation. Current doctrines were confronted with the existing regulations, documents, materials<b>,</b> safety reports, and statistics. The results of the study show that the existing air traffic rules, precisely in the case of aircraft operations performed by manned and unmanned aviation at very low heights, are definitely practical in nature. First, in most countries violations of air traffic rules are prohibited acts subject to criminal penalty. Second, determining the principles of air traffic for air operations is of crucial importance for determining legally permissible interference in property ownership. The urban air mobility is outside the scope of this research.</p>","PeriodicalId":54794,"journal":{"name":"Journal of Intelligent & Robotic Systems","volume":"57 1","pages":""},"PeriodicalIF":3.3,"publicationDate":"2024-05-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141195026","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Stereovision-based Approach for Retrieving Variable Force Feedback in Robotic-Assisted Surgery Using Modified Inception ResNet V2 Networks
Pub Date: 2024-05-27 | DOI: 10.1007/s10846-024-02100-8
P. V. Sabique, Ganesh Pasupathy, S. Kalaimagal, G. Shanmugasundar, V. K. Muneer
The surge of haptic technology has greatly impacted robotic-assisted surgery in recent years due to inspirational advancements in the field. Delivering tactile feedback to the surgeon plays a significant role in improving the user experience in robot-assisted minimally invasive surgery (RAMIS). This work proposes a modified Inception ResNet V2 network, along with dimensionality reduction, to regenerate the variable force produced during surgical intervention. The relevant dataset was collected from two ex vivo porcine skins and one ex vivo artificial skin for validation of the results. The proposed framework models both the spatial and temporal data collected from the sensors, tissue, manipulators, and surgical tools. The evaluations are based on three distinct datasets with modest variations in tissue properties. The proposed framework improves force prediction accuracy by 10.81% over RNN, 6.02% over RNN + LSTM, and 3.81% over the CNN + LSTM framework, and torque prediction accuracy by 12.41% over RNN, 5.75% over RNN + LSTM, and 3.75% over CNN + LSTM. The sensitivity study shows that torque (96.93%), deformation (94.02%), position (93.98%), vision (92.12%), stiffness (87.95%), tool diameter (89.24%), rotation (65.10%), and orientation (62.51%) each influence the predicted force to the indicated degree. The quality of the predicted force improved by 2.18% when feature selection and dimensionality reduction were performed on the features collected from tool, manipulator, tissue, and vision data and these were processed simultaneously in all four architectures. The method has potential applications in online surgical tasks and surgeon training.
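The described architecture (image input, continuous force output) can be approximated in Keras by taking the stock Inception-ResNet-V2 backbone and replacing the classifier with a small regression head. This is a generic sketch under assumed input size and head design, not the authors' modified network.

```python
import tensorflow as tf

# Generic Inception-ResNet-V2 regression sketch: stock backbone plus a
# small head predicting a 1-D force value. Input size and head design
# are assumptions, not the paper's modified architecture.
backbone = tf.keras.applications.InceptionResNetV2(
    include_top=False, weights=None, input_shape=(299, 299, 3))

x = tf.keras.layers.GlobalAveragePooling2D()(backbone.output)
x = tf.keras.layers.Dense(64, activation="relu")(x)
force = tf.keras.layers.Dense(1, name="force")(x)   # regressed force (N)

model = tf.keras.Model(backbone.input, force)
model.compile(optimizer="adam", loss="mse")         # regression objective
model.summary()
```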
{"title":"A Stereovision-based Approach for Retrieving Variable Force Feedback in Robotic-Assisted Surgery Using Modified Inception ResNet V2 Networks","authors":"P. V. Sabique, Ganesh Pasupathy, S. Kalaimagal, G. Shanmugasundar, V. K. Muneer","doi":"10.1007/s10846-024-02100-8","DOIUrl":"https://doi.org/10.1007/s10846-024-02100-8","url":null,"abstract":"<p>The surge of haptic technology has greatly impacted Robotic-assisted surgery in recent years due to its inspirational advancement in the field. Delivering tactile feedback to the surgeon has a significant role in improving the user experience in RAMIS. This work proposes a Modified inception ResNet network along with dimensionality reduction to regenerate the variable force produced during the surgical intervention. This work collects the relevant dataset from two ex vivo porcine skins and one ex vivo artificial skin for the validation of the results. The proposed framework is used to model both spatial and temporal data collected from the sensors, tissue, manipulators, and surgical tools. The evaluations are based on three distinct datasets with modest variations in tissue properties. The results of the proposed framework show an improvement of force prediction accuracy by 10.81% over RNN, 6.02% over RNN + LSTM, and 3.81% over the CNN + LSTM framework, and torque prediction accuracy by 12.41% over RNN, 5.75% over RNN + LSTM, and 3.75% over CNN + LSTM. The sensitivity study demonstrates that features such as torque (96.93%), deformation (94.02%), position (93.98%), vision (92.12%), stiffness (87.95%), tool diameter (89.24%), rotation (65.10%), and orientation (62.51%) have respective influences on the anticipated force. It was observed that the quality of the predicted force improved by 2.18% when performing feature selection and dimensionality reduction on features collected from tool, manipulator, tissue, and vision data and processing them simultaneously in all four architectures. The method has potential applications for online surgical tasks and surgeon training.</p>","PeriodicalId":54794,"journal":{"name":"Journal of Intelligent & Robotic Systems","volume":"257 1","pages":""},"PeriodicalIF":3.3,"publicationDate":"2024-05-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141172512","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Deep Visual-guided and Deep Reinforcement Learning Algorithm Based for Multip-Peg-in-Hole Assembly Task of Power Distribution Live-line Operation Robot
Pub Date: 2024-05-18 | DOI: 10.1007/s10846-024-02079-2
Li Zheng, Jiajun Ai, Yahao Wang, Xuming Tang, Shaolei Wu, Sheng Cheng, Rui Guo, Erbao Dong
The inspection and maintenance of the power distribution network are crucial for efficiently delivering electricity to consumers. Because of the high voltage of power distribution lines, manual live-line operations are difficult, risky, and inefficient. This paper investigates a Power Distribution Network Live-line Operation Robot (PDLOR) with autonomous tool assembly capabilities to replace humans in various high-risk electrical maintenance tasks. To address the challenges of tool assembly in the dynamic and unstructured work environments of the PDLOR, we propose a framework consisting of deep visual-guided coarse localization and a prior-knowledge and fuzzy-logic driven deep deterministic policy gradient (PKFD-DPG) high-precision assembly algorithm. First, we propose a multiscale identification and localization network based on YOLOv5, which enables the peg and hole to approach each other quickly and reduces ineffective exploration. Second, we design a combined main-auxiliary reward system, in which the main-line reward uses the hindsight experience replay mechanism and the auxiliary reward is based on a fuzzy-logic inference mechanism, addressing ineffective exploration and sparse rewards in the learning process. We validate the effectiveness and advantages of the proposed algorithm through simulations and physical experiments and compare its performance with other assembly algorithms. The experimental results show that for single-tool assembly tasks, the success rate of PKFD-DPG is 15.2% higher than that of DDPG with functionalized reward functions and 51.7% higher than that of the PD force control method; for multi-tool assembly tasks, the success rate of PKFD-DPG is 17% and 53.4% higher than those of the other two methods, respectively.
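The main-auxiliary reward split can be illustrated with a small sketch: a sparse main success reward (the component that hindsight experience replay re-labels) plus a dense auxiliary shaping term. Since the paper's fuzzy rule base is not given, a simple triangular distance membership stands in for the fuzzy inference; all tolerances are toy values.

```python
import numpy as np

def assembly_reward(peg_pos, hole_pos, tol=0.002):
    """Main + auxiliary reward sketch for peg-in-hole (toy values).

    The main reward is sparse success (the part hindsight experience
    replay re-labels); the auxiliary term stands in for the paper's
    fuzzy inference with a triangular membership on peg-hole distance.
    """
    dist = np.linalg.norm(np.asarray(peg_pos) - np.asarray(hole_pos))
    main = 1.0 if dist < tol else 0.0        # sparse success signal
    near = max(0.0, 1.0 - dist / 0.05)       # "near hole" membership in [0, 1]
    aux = 0.1 * near                         # small dense shaping term
    return main + aux

print(assembly_reward(peg_pos=[0.01, 0.0, 0.0], hole_pos=[0.0, 0.0, 0.0]))
```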
{"title":"Deep Visual-guided and Deep Reinforcement Learning Algorithm Based for Multip-Peg-in-Hole Assembly Task of Power Distribution Live-line Operation Robot","authors":"Li Zheng, Jiajun Ai, Yahao Wang, Xuming Tang, Shaolei Wu, Sheng Cheng, Rui Guo, Erbao Dong","doi":"10.1007/s10846-024-02079-2","DOIUrl":"https://doi.org/10.1007/s10846-024-02079-2","url":null,"abstract":"<p>The inspection and maintenance of power distribution network are crucial for efficiently delivering electricity to consumers. Due to the high voltage of power distribution network lines, manual live-line operations are difficult, risky, and inefficient. This paper researches a Power Distribution Network Live-line Operation Robot (PDLOR) with autonomous tool assembly capabilities to replace humans in various high-risk electrical maintenance tasks. To address the challenges of tool assembly in dynamic and unstructured work environments for PDLOR, we propose a framework consisting of deep visual-guided coarse localization and prior knowledge and fuzzy logic driven deep deterministic policy gradient (PKFD-DPG) high-precision assembly algorithm. First, we propose a multiscale identification and localization network based on YOLOv5, which enables the peg-hole close quickly and reduces ineffective exploration. Second, we design a main-auxiliary combined reward system, where the main-line reward uses the hindsight experience replay mechanism, and the auxiliary reward is based on fuzzy logic inference mechanism, addressing ineffective exploration and sparse reward in the learning process. In addition, we validate the effectiveness and advantages of the proposed algorithm through simulations and physical experiments, and also compare its performance with other assembly algorithms. The experimental results show that, for single-tool assembly tasks, the success rate of PKFD-DPG is 15.2% higher than the DDPG with functionized reward functions and 51.7% higher than the PD force control method; for multip-tools assembly tasks, the success rate of PKFD-DPG method is 17% and 53.4% higher than the other methods.</p>","PeriodicalId":54794,"journal":{"name":"Journal of Intelligent & Robotic Systems","volume":"48 1","pages":""},"PeriodicalIF":3.3,"publicationDate":"2024-05-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141061484","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Minimalistic 3D Self-Organized UAV Flocking Approach for Desert Exploration
Pub Date: 2024-05-18 | DOI: 10.1007/s10846-024-02108-0
Thulio Amorim, Tiago Nascimento, Akash Chaudhary, Eliseo Ferrante, Martin Saska
In this work, we propose a minimalistic swarm flocking approach for multirotor unmanned aerial vehicles (UAVs). Our approach allows the swarm to achieve cohesive and aligned flocking (collective motion) in a random direction, without externally provided directional information exchange (alignment control). The method has minimalistic sensory requirements, as it uses only the relative range and bearing of swarm agents in local proximity, obtained through onboard sensors. Our method is thus able to stabilize and control a flock of general shape above steep terrain without any explicit communication between swarm members. To implement proximal control in three dimensions, the Lennard-Jones potential function is used to maintain cohesiveness and avoid collisions between robots. The performance of the proposed approach was tested in real-world conditions in experiments with a team of nine UAVs. The experiments also demonstrate the use of our approach on UAVs that are independent of external positioning systems such as the Global Navigation Satellite System (GNSS): relying only on relative visual localization through the ultraviolet direction and ranging (UVDAR) system, previously proposed by our group, they verify that our system can be applied in GNSS-denied environments. The achieved degree of alignment and cohesiveness was evaluated using the order and steady-state value metrics.
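The Lennard-Jones proximal term can be written directly in terms of the range-and-bearing measurements the abstract describes: each neighbour within sensing range contributes a force along the bearing whose magnitude is the negative potential gradient. Parameter values below are illustrative, not the paper's tuning.

```python
import numpy as np

def lj_force(r, eps=1.0, sigma=1.0):
    """Magnitude of the Lennard-Jones force -dU/dr at separation r."""
    return 4.0 * eps * (12.0 * sigma**12 / r**13 - 6.0 * sigma**6 / r**7)

def proximal_control(rel_positions, sense_range=3.0):
    """Sum LJ forces over neighbours given their relative 3D positions.

    rel_positions plays the role of the onboard range/bearing
    measurements; eps, sigma, and the sensing range are illustrative.
    """
    u = np.zeros(3)
    for rel in rel_positions:          # rel points from agent to neighbour
        r = np.linalg.norm(rel)
        if 0.0 < r < sense_range:
            direction = -rel / r       # from the neighbour toward the agent
            u += lj_force(r) * direction  # repel when close, attract when far
    return u

neighbours = [np.array([1.5, 0.0, 0.2]), np.array([-0.8, 0.9, 0.0])]
print(proximal_control(neighbours))
```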
{"title":"A Minimalistic 3D Self-Organized UAV Flocking Approach for Desert Exploration","authors":"Thulio Amorim, Tiago Nascimento, Akash Chaudhary, Eliseo Ferrante, Martin Saska","doi":"10.1007/s10846-024-02108-0","DOIUrl":"https://doi.org/10.1007/s10846-024-02108-0","url":null,"abstract":"<p>In this work, we propose a minimalistic swarm flocking approach for multirotor unmanned aerial vehicles (UAVs). Our approach allows the swarm to achieve cohesively and aligned flocking (collective motion), in a random direction, without externally provided directional information exchange (alignment control). The method relies on minimalistic sensory requirements as it uses only the relative range and bearing of swarm agents in local proximity obtained through onboard sensors on the UAV. Thus, our method is able to stabilize and control the flock of a general shape above a steep terrain without any explicit communication between swarm members. To implement proximal control in a three-dimensional manner, the Lennard-Jones potential function is used to maintain cohesiveness and avoid collisions between robots. The performance of the proposed approach was tested in real-world conditions by experiments with a team of nine UAVs. Experiments also present the usage of our approach on UAVs that are independent of external positioning systems such as the Global Navigation Satellite System (GNSS). Relying only on a relative visual localization through the ultraviolet direction and ranging (UVDAR) system, previously proposed by our group, the experiments verify that our system can be applied in GNSS-denied environments. The degree achieved of alignment and cohesiveness was evaluated using the metrics of order and steady-state value.</p>","PeriodicalId":54794,"journal":{"name":"Journal of Intelligent & Robotic Systems","volume":"54 1","pages":""},"PeriodicalIF":3.3,"publicationDate":"2024-05-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141061468","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
OffRoadSynth Open Dataset for Semantic Segmentation using Synthetic-Data-Based Weight Initialization for Autonomous UGV in Off-Road Environments
Pub Date: 2024-05-18 | DOI: 10.1007/s10846-024-02114-2
Konrad Małek, Jacek Dybała, Andrzej Kordecki, Piotr Hondra, Katarzyna Kijania
This article concerns image semantic segmentation for the machine vision system of an autonomous Unmanned Ground Vehicle (UGV) moving in an off-road environment. Determining the meaning (semantics) of the areas visible in the recorded image provides a complete understanding of the scene surrounding the autonomous vehicle and is crucial for correctly determining a passable route. Semantic segmentation is nowadays generally solved with convolutional neural networks (CNNs), which take an image as input and output the segmented image. However, proper training of a neural network requires large amounts of data, which becomes problematic given the low availability of large, dedicated image datasets that cover diverse off-road situations: driving on various types of roads, surrounded by diverse vegetation, and in various weather and lighting conditions. This study introduces a synthetic image dataset called “OffRoadSynth” to address the scarcity of training data for off-road scenarios. Pre-training the neural network on this synthetic dataset is shown to improve image segmentation accuracy compared to other methods, such as random initialization of network weights or the use of larger, generic datasets. The results suggest that using a smaller but domain-dedicated set of synthetic images to initialize network weights before training on the target real-world dataset may be an effective way to improve semantic segmentation of images, including those from off-road environments.
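The weight-initialization strategy boils down to a standard two-stage recipe: train a segmentation network on the synthetic set, save the weights, then load them before training on the real target set. A minimal PyTorch sketch follows; the model choice, class count, and file name are assumptions, not the paper's setup.

```python
import torch
import torchvision

# Two-stage recipe sketched from the abstract: pretrain on OffRoadSynth,
# then fine-tune on the real off-road set. Model choice, class count, and
# file name are assumptions, not the paper's setup.
NUM_CLASSES = 8
model = torchvision.models.segmentation.deeplabv3_resnet50(
    weights=None, num_classes=NUM_CLASSES)

# Stage 1 (done earlier): train on the synthetic images, then
# torch.save(model.state_dict(), "offroadsynth_pretrained.pt")

# Stage 2: initialize from the synthetic-data weights and fine-tune.
state = torch.load("offroadsynth_pretrained.pt", map_location="cpu")
model.load_state_dict(state)

optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
# ... continue training on the real-world target dataset ...
```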
{"title":"OffRoadSynth Open Dataset for Semantic Segmentation using Synthetic-Data-Based Weight Initialization for Autonomous UGV in Off-Road Environments","authors":"Konrad Małek, Jacek Dybała, Andrzej Kordecki, Piotr Hondra, Katarzyna Kijania","doi":"10.1007/s10846-024-02114-2","DOIUrl":"https://doi.org/10.1007/s10846-024-02114-2","url":null,"abstract":"<p>This article concerns the issue of image semantic segmentation for the machine vision system of an autonomous Unmanned Ground Vehicle (UGV) moving in an off-road environment. Determining the meaning (semantics) of the areas visible in the recorded image provides a complete understanding of the scene surrounding the autonomous vehicle. It is crucial for the correct determination of a passable route. Nowadays, semantic segmentation is generally solved using convolutional neural networks (CNN), which can take an image as input and output the segmented image. However, proper training of the neural network requires the use of large amounts of data, which becomes problematic in the situation of low availability of large, dedicated image data sets that consider various off-road situations - driving on various types of roads, surrounded by diverse vegetation and in various weather and light conditions. This study introduces a synthetic image dataset called “OffRoadSynth” to address the training data scarcity for off-road scenarios. It has been shown that pre-training the neural network on this synthetic dataset improves image segmentation accuracy compared to other methods, such as random network weight initialization or using larger, generic datasets. Results suggest that using a smaller but domain-dedicated set of synthetic images to initialize network weights for training on the target real-world dataset may be an effective approach to improving semantic segmentation results of images, including those from off-road environments.</p>","PeriodicalId":54794,"journal":{"name":"Journal of Intelligent & Robotic Systems","volume":"131 1","pages":""},"PeriodicalIF":3.3,"publicationDate":"2024-05-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141061377","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}