ROV-Based Autonomous Maneuvering for Ship Hull Inspection with Coverage Monitoring
Pub Date: 2024-04-11 | DOI: 10.1007/s10846-024-02095-2
Alexandre Cardaillac, Roger Skjetne, Martin Ludvigsen
Hull inspection is an important task for ensuring the sustainability of ships. To efficiently overcome the challenges of inspecting hull structures in an underwater environment, an autonomous hull inspection system has to be developed. In this paper, a new approach to underwater ship hull inspection is proposed, aiming to develop the basis for an end-to-end autonomous solution. The real-time aspect is an important part of this work, as it allows operators and inspectors to receive feedback about the inspection as it happens. A reference mission plan is generated and adapted online based on the inspection findings. This is done by processing data from a multibeam forward-looking sonar to estimate the pose of the hull relative to the drone. An inspection map is incrementally built in a novel way, incorporating uncertainty estimates to better represent the inspection state, quality, and observation confidence. The proposed methods are tested in real time on real ships and demonstrate their applicability for quickly understanding what has been covered during an inspection.
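As a rough illustration of the incremental, uncertainty-aware inspection map described above, the following sketch maintains a coverage grid over the hull surface whose cells fuse per-observation confidence values. The grid resolution, fusion rule, and class names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

class InspectionMap:
    """Toy coverage grid over a flattened hull surface: each cell stores an
    inspection confidence in [0, 1], updated from sonar observations."""

    def __init__(self, width_m, height_m, cell_m=0.25):
        self.cell = cell_m
        self.conf = np.zeros((int(height_m / cell_m), int(width_m / cell_m)))

    def update(self, u_m, v_m, quality):
        """Fuse one observation (hull-surface coordinates, quality in [0, 1])
        with the stored confidence, never decreasing it."""
        i, j = int(v_m / self.cell), int(u_m / self.cell)
        # independent-observation fusion: c <- 1 - (1 - c)(1 - q)
        self.conf[i, j] = 1.0 - (1.0 - self.conf[i, j]) * (1.0 - quality)

    def coverage(self, threshold=0.8):
        """Fraction of cells considered adequately inspected."""
        return float((self.conf >= threshold).mean())
```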
{"title":"ROV-Based Autonomous Maneuvering for Ship Hull Inspection with Coverage Monitoring","authors":"Alexandre Cardaillac, Roger Skjetne, Martin Ludvigsen","doi":"10.1007/s10846-024-02095-2","DOIUrl":"https://doi.org/10.1007/s10846-024-02095-2","url":null,"abstract":"<p>Hull inspection is an important task to ensure sustainability of ships. To overcome the challenges of hull structure inspection in an underwater environment in an efficient way, an autonomous system for hull inspection has to be developed. In this paper, a new approach to underwater ship hull inspection is proposed. It aims at developing the basis for an end-to-end autonomous solution. The real-time aspect is an important part of this work, as it allows the operators and inspectors to receive feedback about the inspection as it happens. A reference mission plan is generated and adapted online based on the inspection findings. This is done through the processing of a multibeam forward looking sonar to estimate the pose of the hull relative to the drone. An inspection map is incrementally built in a novel way, incorporating uncertainty estimates to better represent the inspection state, quality, and observation confidence. The proposed methods are experimentally tested in real-time on real ships and demonstrate the applicability to quickly understand what has been done during the inspection.</p>","PeriodicalId":54794,"journal":{"name":"Journal of Intelligent & Robotic Systems","volume":"56 1","pages":""},"PeriodicalIF":3.3,"publicationDate":"2024-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140577681","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Innovative Exploration of a Bio-Inspired Sensor Fusion Algorithm: Enhancing Micro Satellite Functionality through Touretsky's Decentralized Neural Networks
Pub Date: 2024-04-11 | DOI: 10.1007/s10846-024-02089-0
S. M. Mehdi. Hassani. N, Jafar Roshanian
Insect-inspired sensor fusion algorithms have presented a promising avenue for the development of robust and efficient systems, owing to insects' ability to process numerous streams of noisy sensory data. The ring attractor neural network architecture has been identified as a noteworthy model for the optimal integration of diverse insect sensors. Expanding on this, our research presents an innovative bio-inspired ring attractor neural network architecture designed to augment the performance of microsatellite attitude determination systems through the fusion of data from multiple gyroscopic sensors. Extensive simulations using a nonlinear model of the microsatellite, incorporating specific navigational disturbances, have been conducted to ascertain the viability and effectiveness of this approach. The results obtained have been superior to those of alternative methodologies, highlighting the potential of our proposed bio-inspired fusion technique. The findings indicate that this approach could significantly improve the accuracy and robustness of microsatellite systems across a wide range of applications.
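To make the ring attractor idea concrete, here is a minimal rate-based ring attractor that settles to a single activity bump when fed several noisy angle estimates; the fused estimate is read out at the bump's peak. The network size, recurrent kernel, and weighting scheme are assumptions for illustration and do not reproduce the paper's architecture.

```python
import numpy as np

N = 64                                   # neurons around the ring
theta = np.linspace(0, 2 * np.pi, N, endpoint=False)

# recurrent kernel: local excitation, broad inhibition (cosine profile)
W = 0.9 * np.cos(theta[:, None] - theta[None, :]) / N

def bump(angle, kappa=8.0):
    """Von Mises-shaped input bump centred on one sensor's angle estimate."""
    return np.exp(kappa * (np.cos(theta - angle) - 1.0))

def fuse(sensor_angles, weights, steps=200, dt=0.1):
    """Settle the ring to a single activity bump; its peak is the fused angle."""
    r = np.zeros(N)
    ext = sum(w * bump(a) for a, w in zip(sensor_angles, weights))
    for _ in range(steps):
        r += dt * (-r + np.maximum(0.0, W @ r + ext))
    return theta[np.argmax(r)]

# e.g. three noisy gyro-derived heading estimates, reliability as weight
print(fuse([0.50, 0.55, 0.42], weights=[1.0, 0.8, 0.6]))
```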
{"title":"Innovative Exploration of a Bio-Inspired Sensor Fusion Algorithm: Enhancing Micro Satellite Functionality through Touretsky's Decentralized Neural Networks","authors":"S. M. Mehdi. Hassani. N, Jafar Roshanian","doi":"10.1007/s10846-024-02089-0","DOIUrl":"https://doi.org/10.1007/s10846-024-02089-0","url":null,"abstract":"<p>Insect-inspired sensor fusion algorithms have presented a promising avenue in the development of robust and efficient systems, owing to the insects' ability to process numerous streams of noisy sensory data. The ring attractor neural network architecture has been identified as a noteworthy model for the optimal integration of diverse insect sensors. Expanding on this, our research presents an innovative bio-inspired ring attractor neural network architecture designed to augment the performance of microsatellite attitude determination systems through the fusion of data from multiple gyroscopic sensors.Extensive simulations using a nonlinear model of the microsatellite, while incorporating specific navigational disturbances, have been conducted to ascertain the viability and effectiveness of this approach. The results obtained have been superior to those of alternative methodologies, thus highlighting the potential of our proposed bio-inspired fusion technique. The findings indicate that this approach could significantly improve the accuracy and robustness of microsatellite systems across a wide range of applications.</p>","PeriodicalId":54794,"journal":{"name":"Journal of Intelligent & Robotic Systems","volume":"52 1","pages":""},"PeriodicalIF":3.3,"publicationDate":"2024-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140577690","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Vision-state Fusion: Improving Deep Neural Networks for Autonomous Robotics
Elia Cereda, Stefano Bonato, Mirko Nava, Alessandro Giusti, Daniele Palossi
Pub Date: 2024-04-10 | DOI: 10.1007/s10846-024-02091-6
Vision-based deep learning perception fulfills a paramount role in robotics, facilitating solutions to many challenging scenarios, such as acrobatic maneuvers of autonomous unmanned aerial vehicles (UAVs) and robot-assisted high-precision surgery. Control-oriented end-to-end perception approaches, which directly output control variables for the robot, commonly take advantage of the robot's state estimation as an auxiliary input. When intermediate outputs are estimated and fed to a lower-level controller, i.e., in mediated approaches, the robot's state is commonly used as an input only for egocentric tasks, which estimate physical properties of the robot itself. In this work, we propose to apply a similar approach, for the first time to the best of our knowledge, to non-egocentric mediated tasks, where the estimated outputs refer to an external subject. We show how our general methodology improves the regression performance of deep convolutional neural networks (CNNs) on a broad class of non-egocentric 3D pose estimation problems, with minimal computational cost. By analyzing three highly different use cases, spanning from grasping with a robotic arm to following a human subject with a pocket-sized UAV, our results consistently improve the R² regression metric, by up to +0.51, compared to their stateless baselines. Finally, we validate the in-field performance of a closed-loop autonomous cm-scale UAV on the human pose estimation task. Our results show a significant reduction, 24% on average, in the mean absolute error of our stateful CNN compared to a state-of-the-art stateless counterpart.
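The late-fusion pattern the abstract describes, concatenating the robot's state with visual features before the regression head, can be sketched as below. The backbone, layer sizes, and state/output dimensions are placeholders, not the authors' network.

```python
import torch
import torch.nn as nn

class VisionStateRegressor(nn.Module):
    """Illustrative mediated-perception net: image features and the robot's
    state vector are concatenated before the pose-regression head."""

    def __init__(self, state_dim=6, out_dim=3):
        super().__init__()
        self.backbone = nn.Sequential(           # stand-in for any CNN
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.head = nn.Sequential(
            nn.Linear(32 + state_dim, 64), nn.ReLU(),
            nn.Linear(64, out_dim))              # e.g. the subject's 3D position

    def forward(self, image, state):
        feat = self.backbone(image)
        return self.head(torch.cat([feat, state], dim=1))
```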
{"title":"Vision-state Fusion: Improving Deep Neural Networks for Autonomous Robotics","authors":"Elia Cereda, Stefano Bonato, Mirko Nava, Alessandro Giusti, Daniele Palossi","doi":"10.1007/s10846-024-02091-6","DOIUrl":"https://doi.org/10.1007/s10846-024-02091-6","url":null,"abstract":"<p>Vision-based deep learning perception fulfills a paramount role in robotics, facilitating solutions to many challenging scenarios, such as acrobatic maneuvers of autonomous unmanned aerial vehicles (UAVs) and robot-assisted high-precision surgery. Control-oriented end-to-end perception approaches, which directly output control variables for the robot, commonly take advantage of the robot’s state estimation as an auxiliary input. When intermediate outputs are estimated and fed to a lower-level controller, i.e., mediated approaches, the robot’s state is commonly used as an input only for egocentric tasks, which estimate physical properties of the robot itself. In this work, we propose to apply a similar approach for the first time – to the best of our knowledge – to non-egocentric mediated tasks, where the estimated outputs refer to an external subject. We prove how our general methodology improves the regression performance of deep convolutional neural networks (CNNs) on a broad class of non-egocentric 3D pose estimation problems, with minimal computational cost. By analyzing three highly-different use cases, spanning from grasping with a robotic arm to following a human subject with a pocket-sized UAV, our results consistently improve the R<span>(^{2})</span> regression metric, up to +0.51, compared to their stateless baselines. Finally, we validate the in-field performance of a closed-loop autonomous cm-scale UAV on the human pose estimation task. Our results show a significant reduction, i.e., 24% on average, on the mean absolute error of our stateful CNN, compared to a State-of-the-Art stateless counterpart.</p>","PeriodicalId":54794,"journal":{"name":"Journal of Intelligent & Robotic Systems","volume":"84 1","pages":""},"PeriodicalIF":3.3,"publicationDate":"2024-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140577790","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Efficient Stacking and Grasping in Unstructured Environments
Fei Wang, Yue Liu, Manyi Shi, Chao Chen, Shangdong Liu, Jinbiao Zhu
Pub Date: 2024-04-01 | DOI: 10.1007/s10846-024-02078-3
Robotics has been booming in recent years. With the development of artificial intelligence in particular, more and more researchers have devoted themselves to the field, but the multi-task operation of robots still has many shortcomings. Reinforcement learning has achieved good performance in manipulator manipulation, especially in grasping, but grasping is only the first step in a robot's actions, and such work often ignores the stacking, assembly, placement, and other tasks that follow. Such long-horizon tasks still face the problems of high time cost, dead-end exploration, and process reversal. Hierarchical reinforcement learning has some advantages in solving these problems, but not all tasks can be learned hierarchically. This paper tackles complex, continuous multi-action manipulation tasks by improving hierarchical reinforcement learning, proposing a framework for long-sequence tasks such as stacking and alignment. Our framework is evaluated in simulation on various tasks and improves the success rate from 78.3% to 94.8% when cleaning cluttered toys. In the toy-stacking experiment, training is nearly three times faster than the baseline method, and our method generalizes to other long-horizon tasks. Experiments show that the more complex the task, the greater the advantage of our framework.
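A minimal sketch of the hierarchical structure the abstract alludes to: a high-level policy picks low-level skills (options) that run to termination, and it is updated on the returns they collect. The skill set, the `env.run_skill` interface, and the tabular Q-learning update are illustrative assumptions only, not the paper's method.

```python
import random

LOW_LEVEL_SKILLS = ["reach", "grasp", "place", "stack"]

def high_level_policy(state, q_table, eps=0.1):
    """Epsilon-greedy choice of the next sub-task (option) given task state."""
    if random.random() < eps:
        return random.choice(LOW_LEVEL_SKILLS)
    return max(LOW_LEVEL_SKILLS, key=lambda s: q_table.get((state, s), 0.0))

def run_episode(env, q_table, alpha=0.1, gamma=0.99):
    """Options-style loop: the high level picks skills, each skill runs to
    termination, and the high level is updated on the reward it collected.
    `env` is a hypothetical interface exposing reset() and run_skill()."""
    state, done = env.reset(), False
    while not done:
        skill = high_level_policy(state, q_table)
        next_state, reward, done = env.run_skill(skill)  # skill runs to its end
        best_next = max(q_table.get((next_state, s), 0.0)
                        for s in LOW_LEVEL_SKILLS)
        q = q_table.get((state, skill), 0.0)
        q_table[(state, skill)] = q + alpha * (reward + gamma * best_next - q)
        state = next_state
```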
{"title":"Efficient Stacking and Grasping in Unstructured Environments","authors":"Fei Wang, Yue Liu, Manyi Shi, Chao Chen, Shangdong Liu, Jinbiao Zhu","doi":"10.1007/s10846-024-02078-3","DOIUrl":"https://doi.org/10.1007/s10846-024-02078-3","url":null,"abstract":"<p>Robotics has been booming in recent years. Especially with the development of artificial intelligence, more and more researchers have devoted themselves to the field of robotics, but there are still many shortcomings in the multi-task operation of robots. Reinforcement learning has achieved good performance in manipulator manipulation, especially in grasping, but grasping is only the first step for the robot to perform actions, and it often ignores the stacking, assembly, placement, and other tasks to be carried out later. Such long-horizon tasks still face the problems of expensive time, dead-end exploration, and process reversal. Hierarchical reinforcement learning has some advantages in solving the above problems, but not all tasks can be learned hierarchically. This paper mainly solves the complex manipulation task of continuous multi-action of the manipulator by improving the method of hierarchical reinforcement learning, aiming to solve the task of long sequences such as stacking and alignment by proposing a framework. Our framework completes simulation experiments on various tasks and improves the success rate from 78.3% to 94.8% when cleaning cluttered toys. In the stacking toy experiment, the training speed is nearly three times faster than the baseline method. And our method can be generalized to other long-horizon tasks. Experiments show that the more complex the task, the greater the advantage of our framework.</p>","PeriodicalId":54794,"journal":{"name":"Journal of Intelligent & Robotic Systems","volume":"72 1","pages":""},"PeriodicalIF":3.3,"publicationDate":"2024-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140577770","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Nonlinearly Optimized Dual Stereo Visual Odometry Fusion
Pub Date: 2024-03-28 | DOI: 10.1007/s10846-024-02069-4
Elizabeth Viviana Cabrera-Ávila, Bruno Marques Ferreira da Silva, Luiz Marcos Garcia Gonçalves
Visual odometry (VO) is an important problem studied in robotics and computer vision, in which the relative camera motion is computed from visual information. In this work, we propose to reduce the error accumulation of a dual stereo VO system (4 cameras) computing 6-degree-of-freedom poses by fusing two independent stereo odometries with a nonlinear optimization. Our approach computes two stereo odometries employing the LIBVISO2 algorithm and later merges them by using image correspondences between the stereo pairs and minimizing the reprojection error with graph-based bundle adjustment. Experiments carried out on the KITTI odometry datasets show that our method computes more accurate estimates (measured as the Relative Positioning Error) in comparison to traditional stereo odometry (stereo bundle adjustment). In addition, the proposed method has similar or better odometry accuracy compared to the ORB-SLAM2 and UCOSLAM algorithms.
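The fusion step can be pictured, in simplified form, as a covariance-weighted combination of the two odometries' relative-pose estimates. The paper itself minimizes reprojection error with graph-based bundle adjustment; this closed-form sketch, with its quaternion conventions and weighting choices, is only a stand-in.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def fuse_relative_poses(t1, q1, P1, t2, q2, P2):
    """Information-weighted fusion of two relative-pose estimates:
    translation t (3,), quaternion q [x, y, z, w], translation covariance P."""
    I1, I2 = np.linalg.inv(P1), np.linalg.inv(P2)
    # covariance-weighted mean of the two translations
    t = np.linalg.solve(I1 + I2, I1 @ t1 + I2 @ t2)
    # weighted rotation average, trusting the lower-covariance estimate more
    w1, w2 = 1.0 / np.trace(P1), 1.0 / np.trace(P2)
    q = R.from_quat([q1, q2]).mean(weights=[w1, w2]).as_quat()
    return t, q
```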
{"title":"Nonlinearly Optimized Dual Stereo Visual Odometry Fusion","authors":"Elizabeth Viviana Cabrera-Ávila, Bruno Marques Ferreira da Silva, Luiz Marcos Garcia Gonçalves","doi":"10.1007/s10846-024-02069-4","DOIUrl":"https://doi.org/10.1007/s10846-024-02069-4","url":null,"abstract":"<p>Visual odometry (VO) is an important problem studied in robotics and computer vision in which the relative camera motion is computed through visual information. In this work, we propose to reduce the error accumulation of a dual stereo VO system (4 cameras) computing 6 degrees of freedom poses by fusing two independent stereo odometry with a nonlinear optimization. Our approach computes two stereo odometries employing the LIBVISO2 algorithm and later merge them by using image correspondences between the stereo pairs and minimizing the reprojection error with graph-based bundle adjustment. Experiments carried out on the KITTI odometry datasets show that our method computes more accurate estimates (measured as the Relative Positioning Error) in comparison to the traditional stereo odometry (stereo bundle adjustment). In addition, the proposed method has a similar or better odometry accuracy compared to ORB-SLAM2 and UCOSLAM algorithms.</p>","PeriodicalId":54794,"journal":{"name":"Journal of Intelligent & Robotic Systems","volume":"172 1","pages":""},"PeriodicalIF":3.3,"publicationDate":"2024-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140316758","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Trading-Off Safety with Agility Using Deep Pose Error Estimation and Reinforcement Learning for Perception-Driven UAV Motion Planning
Pub Date: 2024-03-27 | DOI: 10.1007/s10846-024-02085-4
Mehmetcan Kaymaz, Recep Ayzit, Onur Akgün, Kamil Canberk Atik, Mustafa Erdem, Baris Yalcin, Gürkan Cetin, Nazım Kemal Ure
Navigation and planning for unmanned aerial vehicles (UAVs) based on visual-inertial sensors have been a popular research area in recent years. However, most visual sensors are prone to high error rates when exposed to disturbances such as excessive brightness and blur, which can lead to catastrophic performance drops in perception and motion planning systems. This study proposes a novel framework to address the coupled perception-planning problem in high-risk environments. This is achieved by developing algorithms that automatically adjust the agility of the UAV maneuvers based on the predicted error rate of the pose estimation system. The fundamental idea behind our work is to demonstrate that highly agile maneuvers become infeasible to execute when visual measurements are noisy; thus, agility should be traded off against safety to enable efficient risk management. Our study focuses on navigating a quadcopter through a sequence of gates on an unknown map, and we rely on existing deep learning methods for visual gate-pose estimation. In addition, we develop an architecture for estimating the pose error under high-disturbance visual inputs. We use the estimated pose errors to train a reinforcement learning agent that tunes the parameters of the motion planning algorithm to safely navigate the environment while minimizing the track completion time. Simulation results demonstrate that our proposed approach yields significantly fewer crashes and higher track completion rates compared to approaches that do not utilize reinforcement learning.
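The underlying safety-agility trade-off can be illustrated with a hand-tuned stand-in for the learned tuner: scale the planner's velocity bound down as the predicted pose error grows. The set points and linear schedule below are assumptions; in the paper this mapping is learned by an RL agent.

```python
def agility_limit(pred_pose_error_m,
                  v_max=12.0, v_min=2.0, err_lo=0.05, err_hi=0.50):
    """Map the predicted visual pose-estimation error (meters) to a velocity
    bound for the motion planner: full agility below err_lo, minimum agility
    above err_hi, linear interpolation in between."""
    if pred_pose_error_m <= err_lo:
        return v_max
    if pred_pose_error_m >= err_hi:
        return v_min
    frac = (pred_pose_error_m - err_lo) / (err_hi - err_lo)
    return v_max - frac * (v_max - v_min)

print(agility_limit(0.20))  # degraded vision -> slower, safer maneuvers
```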
{"title":"Trading-Off Safety with Agility Using Deep Pose Error Estimation and Reinforcement Learning for Perception-Driven UAV Motion Planning","authors":"Mehmetcan Kaymaz, Recep Ayzit, Onur Akgün, Kamil Canberk Atik, Mustafa Erdem, Baris Yalcin, Gürkan Cetin, Nazım Kemal Ure","doi":"10.1007/s10846-024-02085-4","DOIUrl":"https://doi.org/10.1007/s10846-024-02085-4","url":null,"abstract":"<p>Navigation and planning for unmanned aerial vehicles (UAVs) based on visual-inertial sensors has been a popular research area in recent years. However, most visual sensors are prone to high error rates when exposed to disturbances such as excessive brightness and blur, which can lead to catastrophic performance drops in perception and motion planning systems. This study proposes a novel framework to address the coupled perception-planning problem in high-risk environments. This achieved by developing algorithms that can automatically adjust the agility of the UAV maneuvers based on the predicted error rate of the pose estimation system. The fundamental idea behind our work is to demonstrate that highly agile maneuvers become infeasible to execute when visual measurements are noisy. Thus, agility should be traded-off with safety to enable efficient risk management. Our study focuses on navigating a quadcopter through a sequence of gates on an unknown map, and we rely on existing deep learning methods for visual gate-pose estimation. In addition, we develop an architecture for estimating the pose error under high disturbance visual inputs. We use the estimated pose errors to train a reinforcement learning agent to tune the parameters of the motion planning algorithm to safely navigate the environment while minimizing the track completion time. Simulation results demonstrate that our proposed approach yields significantly fewer crashes and higher track completion rates compared to approaches that do not utilize reinforcement learning.</p>","PeriodicalId":54794,"journal":{"name":"Journal of Intelligent & Robotic Systems","volume":"51 1","pages":""},"PeriodicalIF":3.3,"publicationDate":"2024-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140315507","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Power Transmission Line Inspections: Methods, Challenges, Current Status and Usage of Unmanned Aerial Systems
Pub Date: 2024-03-26 | DOI: 10.1007/s10846-024-02061-y
Faiyaz Ahmed, J. C. Mohanta, Anupam Keshari
Condition monitoring of power transmission lines is an essential aspect of improving transmission efficiency and ensuring an uninterrupted power supply. Efficient inspection methods play a critical role in carrying out regular inspections with less effort and cost, minimal labour engagement, and ease of execution in any geographical and environmental conditions. Earlier, methods such as manual inspection, roll-on wire robotic inspection, and helicopter-based inspection were preferred. At present, Unmanned Aerial System (UAS) based inspection techniques are becoming increasingly suitable in terms of working speed, flexibility to be programmed for difficult circumstances, accuracy of data collection, and cost minimization. This paper reports a state-of-the-art study on the inspection of power transmission line systems and the various methods utilized therein, whose merits and demerits are explained and compared. A review is also carried out of the existing visual inspection systems utilized for power line inspection. In addition, blockchain utilities for power transmission line inspection are discussed, illustrating next-generation data management possibilities, automation of effective inspection, and solutions for current challenges. Overall, the review develops a concept for the synergic integration of deep learning, navigation control concepts, and advanced sensors, so that UAVs with advanced computation techniques can be analyzed from different implementation perspectives.
{"title":"Power Transmission Line Inspections: Methods, Challenges, Current Status and Usage of Unmanned Aerial Systems","authors":"Faiyaz Ahmed, J. C. Mohanta, Anupam Keshari","doi":"10.1007/s10846-024-02061-y","DOIUrl":"https://doi.org/10.1007/s10846-024-02061-y","url":null,"abstract":"<p>Condition monitoring of power transmission lines is an essential aspect of improving transmission efficiency and ensuring an uninterrupted power supply. Wherein, efficient inspection methods play a critical role for carrying out regular inspections with less effort & cost, minimum labour engagement and ease of execution in any geographical & environmental conditions. Earlier various methods such as manual inspection, roll-on wire robotic inspection and helicopter-based inspection are preferably utilized. In the present days, Unmanned Aerial System (UAS) based inspection techniques are gradually increasing its suitability in terms of working speed, flexibility to program for difficult circumstances, accuracy in data collection and cost minimization. This paper reports a state-of-the-art study on the inspection of power transmission line systems and various methods utilized therein, along with their merits and demerits, which are explained and compared. Furthermore, a review was also carried out for the existing visual inspection systems utilized for power line inspection. In addition to that, blockchain utilities for power transmission line inspection are discussed, which illustrates next-generation data management possibilities, automating an effective inspection and providing solutions for the current challenges. Overall, the review demonstrates a concept for synergic integration of deep learning, navigation control concepts and the utilization of advanced sensors so that UAVs with advanced computation techniques can be analyzed with different aspects of implementation.</p>","PeriodicalId":54794,"journal":{"name":"Journal of Intelligent & Robotic Systems","volume":"27 1","pages":""},"PeriodicalIF":3.3,"publicationDate":"2024-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140302761","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Performance Guarantee for Autonomous Robotic Missions using Resource Management: The PANORAMA Approach
Pub Date: 2024-03-22 | DOI: 10.1007/s10846-024-02058-7
Philippe Lambert, Karen Godary-Dejean, Lionel Lapierre, Lotfi Jaiem, Didier Crestani
This paper proposes the PANORAMA approach, which is designed to dynamically and autonomously manage the allocation of a robot's hardware and software resources during a fully autonomous mission. This behavioral autonomy approach guarantees the satisfaction of the mission's performance constraints. The article clarifies the concept of performance for autonomous robotic missions, details the different phases of the PANORAMA approach, and finally focuses on an experimental implementation on a patrolling mission example.
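As an illustration of constraint-driven resource allocation in this spirit, a toy selector might filter candidate hardware/software configurations by the mission's performance constraints and pick the cheapest feasible one. The metric names, values, and cost field below are invented for the example and are not PANORAMA's actual model.

```python
def pick_configuration(configs, constraints):
    """Keep configurations satisfying every mission performance constraint
    (metric >= required value), then pick the cheapest in energy."""
    feasible = [c for c in configs
                if all(c[k] >= v for k, v in constraints.items())]
    if not feasible:
        raise RuntimeError("no configuration meets the mission constraints")
    return min(feasible, key=lambda c: c["energy_cost"])

configs = [
    {"localization_acc": 0.9, "coverage_rate": 0.7, "energy_cost": 40},
    {"localization_acc": 0.8, "coverage_rate": 0.9, "energy_cost": 55},
]
print(pick_configuration(configs,
                         {"localization_acc": 0.75, "coverage_rate": 0.8}))
```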
{"title":"Performance Guarantee for Autonomous Robotic Missions using Resource Management: The PANORAMA Approach","authors":"Philippe Lambert, Karen Godary-Dejean, Lionel Lapierre, Lotfi Jaiem, Didier Crestani","doi":"10.1007/s10846-024-02058-7","DOIUrl":"https://doi.org/10.1007/s10846-024-02058-7","url":null,"abstract":"<p>This paper proposes the PANORAMA approach, which is designed to dynamically and autonomously manage the allocation of a robot’s hardware and software resources during fully autonomous mission. This behavioral autonomy approach guarantees the satisfaction of the mission performance constraints. This article clarifies the concept of performance for autonomous robotic missions and details the different phases of the PANORAMA approach. Finally, it focuses on an experimental implementation on a patrolling mission example.</p>","PeriodicalId":54794,"journal":{"name":"Journal of Intelligent & Robotic Systems","volume":"25 1","pages":""},"PeriodicalIF":3.3,"publicationDate":"2024-03-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140204073","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Decomposition and a Scheduling Framework for Enabling Aerial 3D Printing
Pub Date: 2024-03-22 | DOI: 10.1007/s10846-024-02081-8
Marios-Nektarios Stamatopoulos, Avijit Banerjee, George Nikolakopoulos
Aerial 3D printing is a pioneering technology, still at the conceptual stage, that combines the frontiers of 3D printing and unmanned aerial vehicles (UAVs), aiming to construct large-scale structures in remote and hard-to-reach locations autonomously. The envisioned technology will enable a paradigm shift in the construction and manufacturing industries by utilizing UAVs as precision flying construction workers. However, the limited payload-carrying capacity of UAVs, along with the intricate dexterity required for manipulation and planning, imposes a formidable barrier to overcome. Aiming to surpass these issues, a novel decomposition- and scheduling-based aerial 3D printing framework is presented in this article, which computes a near-optimal decomposition of the model's original 3D shape into smaller, more manageable sub-parts called chunks. This is achieved by searching for planar cuts based on a heuristic function incorporating the necessary constraints on the interconnectivity between sub-parts, while avoiding any possibility of collision between the UAV's extruder and previously generated chunks. Additionally, an autonomous task allocation framework is presented, which determines a priority-based sequence assigning each printable chunk to a UAV for manufacturing. The efficacy of the proposed framework is demonstrated using the physics-based Gazebo simulation engine, where various primitive CAD-based aerial 3D constructions are established, accounting for the nonlinear UAV dynamics, associated motion planning, and reactive navigation through model predictive control.
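The priority-based chunk scheduling could be sketched as a dependency-respecting round-robin assignment: a chunk becomes printable once every chunk it rests on is finished. The dependency encoding and round-robin rule below are illustrative, not the paper's heuristic.

```python
from collections import deque

def schedule_chunks(deps, num_uavs):
    """Assign chunks to UAVs in dependency order.
    deps maps chunk id -> set of chunk ids it depends on (rests on)."""
    indegree = {c: len(d) for c, d in deps.items()}
    children = {c: [] for c in deps}
    for chunk, parents in deps.items():
        for p in parents:
            children[p].append(chunk)
    ready = deque(sorted(c for c, n in indegree.items() if n == 0))
    plan, uav = [], 0
    while ready:
        chunk = ready.popleft()
        plan.append((chunk, uav))          # (chunk, assigned UAV)
        uav = (uav + 1) % num_uavs
        for child in children[chunk]:
            indegree[child] -= 1
            if indegree[child] == 0:
                ready.append(child)
    return plan

# D rests on B and C, which both rest on A; two UAVs share the work
print(schedule_chunks({"A": set(), "B": {"A"}, "C": {"A"}, "D": {"B", "C"}}, 2))
```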
{"title":"A Decomposition and a Scheduling Framework for Enabling Aerial 3D Printing","authors":"Marios-Nektarios Stamatopoulos, Avijit Banerjee, George Nikolakopoulos","doi":"10.1007/s10846-024-02081-8","DOIUrl":"https://doi.org/10.1007/s10846-024-02081-8","url":null,"abstract":"<p>Aerial 3D printing is a pioneering technology yet in its conceptual stage that combines frontiers of 3D printing and Unmanned aerial vehicles (UAVs) aiming to construct large-scale structures in remote and hard-to-reach locations autonomously. The envisioned technology will enable a paradigm shift in the construction and manufacturing industries by utilizing UAVs as precision flying construction workers. However, the limited payload-carrying capacity of the UAVs, along with the intricate dexterity required for manipulation and planning, imposes a formidable barrier to overcome. Aiming to surpass these issues, a novel aerial decomposition-based and scheduling 3D printing framework is presented in this article, which considers a near-optimal decomposition of the original 3D shape of the model into smaller, more manageable sub-parts called chunks. This is achieved by searching for planar cuts based on a heuristic function incorporating necessary constraints associated with the interconnectivity between subparts, while avoiding any possibility of collision between the UAV’s extruder and generated chunks. Additionally, an autonomous task allocation framework is presented, which determines a priority-based sequence to assign each printable chunk to a UAV for manufacturing. The efficacy of the proposed framework is demonstrated using the physics-based Gazebo simulation engine, where various primitive CAD-based aerial 3D constructions are established, accounting for the nonlinear UAVs dynamics, associated motion planning and reactive navigation through Model predictive control.</p>","PeriodicalId":54794,"journal":{"name":"Journal of Intelligent & Robotic Systems","volume":"157 1","pages":""},"PeriodicalIF":3.3,"publicationDate":"2024-03-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140203979","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A UAV Autonomous Landing System Integrating Locating, Tracking, and Landing in the Wild Environment
Pub Date: 2024-03-21 | DOI: 10.1007/s10846-023-02041-8
Jinge Si, Bin Li, Liang Wang, Chencheng Deng, Junzheng Wang, Shoukun Wang
High-reliability landing systems for unmanned aerial vehicles (UAVs) have gained extensive attention for their applicability in complex wild environments. Accurate locating, flexible tracking, and reliable recovery are the main challenges in drone landing. In this paper, a novel UAV autonomous landing system and its control framework are proposed and implemented, comprising an environmental perception system, an unmanned ground vehicle (UGV), and a Stewart platform to locate, track, and recover the drone autonomously. First, a recognition algorithm based on multi-sensor fusion is developed to locate the target in real time with the help of a one-dimensional turntable. Second, a dual-stage tracking strategy composed of the UGV and a landing platform is proposed for dynamically tracking the landing drone. Over a wide range, the UGV handles fast tracking using artificial potential field (APF) path planning and model predictive control (MPC) tracking algorithms, while trapezoidal speed planning is employed in the platform controller to compensate for the UGV's tracking error, realizing precise tracking of the drone over a small range. Furthermore, a recovery algorithm including an attitude compensation controller and an impedance controller is designed for the Stewart platform, ensuring a horizontal and compliant landing of the drone. Finally, extensive simulations and experiments verify the feasibility and reliability of the developed system and framework, indicating that it is well suited to UAV autonomous landing in wild environments such as grasslands, slopes, and snow.
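For reference, a single APF update of the kind used for the UGV's wide-range tracking combines an attractive pull toward the goal with repulsion from nearby obstacles. The gains, influence radius, and step size below are arbitrary example values, not the paper's tuning.

```python
import numpy as np

def apf_step(pos, goal, obstacles, k_att=1.0, k_rep=2.0, d0=3.0, step=0.1):
    """One artificial-potential-field step in 2D: attraction to the goal plus
    repulsion from obstacles within influence radius d0."""
    force = k_att * (goal - pos)                       # attractive term
    for obs in obstacles:
        diff = pos - obs
        d = np.linalg.norm(diff)
        if 1e-6 < d < d0:
            # gradient of the standard repulsive potential 0.5*k*(1/d - 1/d0)^2
            force += k_rep * (1.0 / d - 1.0 / d0) / d**2 * (diff / d)
    return pos + step * force / (np.linalg.norm(force) + 1e-9)

pos = np.array([0.0, 0.0])
print(apf_step(pos, goal=np.array([10.0, 0.0]),
               obstacles=[np.array([5.0, 0.5])]))
```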
{"title":"A UAV Autonomous Landing System Integrating Locating, Tracking, and Landing in the Wild Environment","authors":"Jinge Si, Bin Li, Liang Wang, Chencheng Deng, Junzheng Wang, Shoukun Wang","doi":"10.1007/s10846-023-02041-8","DOIUrl":"https://doi.org/10.1007/s10846-023-02041-8","url":null,"abstract":"<p>High-reliability landing systems for unmanned aerial vehicles (UAVs) have gained extensive attention for their applicability in complex wild environments. Accurate locating, flexible tracking, and reliable recovery are the main challenges in drone landing. In this paper, a novel UAV autonomous landing system and its control framework are proposed and implemented. It’s comprised of an environmental perception system, an unmanned ground vehicle (UGV), and a Stewart platform to locate, track, and recover the drone autonomously. Firstly, a recognition algorithm based on multi-sensor fusion is developed to locate the target in real time with the help of a one-dimensional turntable. Secondly, a dual-stage tracking strategy composed of a UGV and a landing platform is proposed for dynamically tracking the landing drone. In a wide range, the UGV is in charge of fast-tracking through the artificial potential field (APF) path planning and the model predictive control (MPC) tracking algorithms. While the trapezoidal speed planning is employed in platform controller to compensate for the tracking error of the UGV, realizing the precise tracking to the drone in a small range. Furthermore, a recovery algorithm including an attitude compensation controller and an impedance controller is designed for the Stewart platform, ensuring horizontal and compliant landing of the drone. Finally, extensive simulations and experiments are dedicated to verifying the feasibility and reliability of the developed system and framework, indicating that it is a superior case of UAV autonomous landing in wild environments such as grasslands, slopes, and snow.</p>","PeriodicalId":54794,"journal":{"name":"Journal of Intelligent & Robotic Systems","volume":"21 1","pages":""},"PeriodicalIF":3.3,"publicationDate":"2024-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140204074","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}