
Journal of Intelligent & Robotic Systems: Latest Articles

ROV-Based Autonomous Maneuvering for Ship Hull Inspection with Coverage Monitoring
IF 3.3 · CAS Tier 4, Computer Science · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2024-04-11 · DOI: 10.1007/s10846-024-02095-2
Alexandre Cardaillac, Roger Skjetne, Martin Ludvigsen

Hull inspection is an important task for ensuring the sustainability of ships. To overcome the challenges of hull structure inspection in an underwater environment in an efficient way, an autonomous system for hull inspection has to be developed. In this paper, a new approach to underwater ship hull inspection is proposed, aiming to develop the basis for an end-to-end autonomous solution. The real-time aspect is an important part of this work, as it allows operators and inspectors to receive feedback about the inspection as it happens. A reference mission plan is generated and adapted online based on the inspection findings. This is done by processing data from a multibeam forward-looking sonar to estimate the pose of the hull relative to the drone. An inspection map is incrementally built in a novel way, incorporating uncertainty estimates to better represent the inspection state, quality, and observation confidence. The proposed methods are tested in real time on real ships and demonstrate that the approach makes it possible to quickly understand what has been covered during an inspection.
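To make the incremental, uncertainty-aware inspection map concrete, the sketch below maintains a coverage grid over an unrolled hull surface and fuses a per-observation confidence into each cell. The grid resolution, the complement-product fusion rule, and all names (CoverageMap, covered_fraction) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Minimal sketch: a 2D coverage grid over the hull surface (unrolled to a plane),
# where each cell stores an observation-confidence value in [0, 1].
# Cell size, fusion rule, and sensor model are illustrative assumptions.

class CoverageMap:
    def __init__(self, width_m, height_m, cell_m=0.1):
        self.cell = cell_m
        self.conf = np.zeros((int(height_m / cell_m), int(width_m / cell_m)))

    def update(self, u_m, v_m, footprint_m, meas_conf):
        """Fuse one sonar observation centred at (u_m, v_m) on the hull plane.

        meas_conf is the confidence of this single observation (e.g. derived
        from range and incidence angle); each covered cell is updated with the
        complement-product rule  c <- 1 - (1 - c) * (1 - meas_conf).
        """
        r = int(footprint_m / (2 * self.cell))
        ci, cj = int(v_m / self.cell), int(u_m / self.cell)
        i0, i1 = max(ci - r, 0), min(ci + r + 1, self.conf.shape[0])
        j0, j1 = max(cj - r, 0), min(cj + r + 1, self.conf.shape[1])
        patch = self.conf[i0:i1, j0:j1]
        self.conf[i0:i1, j0:j1] = 1.0 - (1.0 - patch) * (1.0 - meas_conf)

    def covered_fraction(self, threshold=0.8):
        return float((self.conf >= threshold).mean())

m = CoverageMap(width_m=50.0, height_m=8.0)
m.update(u_m=12.3, v_m=2.0, footprint_m=1.5, meas_conf=0.6)
print(m.covered_fraction())
```

A summary quantity like covered_fraction is the kind of real-time feedback that lets operators see at a glance what has already been inspected and with what confidence.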

Citations: 0
Innovative Exploration of a Bio-Inspired Sensor Fusion Algorithm: Enhancing Micro Satellite Functionality through Touretsky's Decentralized Neural Networks
IF 3.3 · CAS Tier 4, Computer Science · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2024-04-11 · DOI: 10.1007/s10846-024-02089-0
S. M. Mehdi. Hassani. N, Jafar Roshanian

Insect-inspired sensor fusion algorithms offer a promising avenue for developing robust and efficient systems, owing to insects' ability to process numerous streams of noisy sensory data. The ring attractor neural network architecture has been identified as a noteworthy model for the optimal integration of diverse insect sensors. Expanding on this, our research presents an innovative bio-inspired ring attractor neural network architecture designed to augment the performance of microsatellite attitude determination systems through the fusion of data from multiple gyroscopic sensors. Extensive simulations using a nonlinear model of the microsatellite, incorporating specific navigational disturbances, have been conducted to ascertain the viability and effectiveness of this approach. The results obtained are superior to those of alternative methodologies, highlighting the potential of the proposed bio-inspired fusion technique. The findings indicate that this approach could significantly improve the accuracy and robustness of microsatellite systems across a wide range of applications.
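As a rough illustration of the ring-attractor fusion idea (not Touretsky's decentralized network and not the paper's tuned architecture), the toy sketch below keeps a bump of activity on a ring of neurons, shifts it each step by a weighted combination of several noisy gyro rates, and reads the fused heading back out with population-vector decoding. The neuron count, per-gyro weights, and update rule are assumptions chosen for readability.

```python
import numpy as np

N = 64                                                  # neurons on the ring
prefs = np.linspace(0, 2 * np.pi, N, endpoint=False)   # preferred headings
activity = np.exp(np.cos(prefs))                        # bump centred at 0 rad
activity /= activity.sum()

def step(activity, gyro_rates, weights, dt=0.01):
    """One integration step: rotate the activity bump by the fused angular rate."""
    omega = float(np.dot(weights, gyro_rates))          # weighted fusion of the gyros
    shift = omega * dt
    rotated = np.interp((prefs - shift) % (2 * np.pi), prefs, activity,
                        period=2 * np.pi)               # circular shift of the bump
    return rotated / rotated.sum()

def decode(activity):
    """Population-vector decoding of the bump position (heading in rad)."""
    return float(np.angle(np.sum(activity * np.exp(1j * prefs))))

weights = np.array([0.4, 0.35, 0.25])                   # per-gyro trust (assumed)
for _ in range(100):
    noisy_rates = 0.5 + 0.05 * np.random.randn(3)       # three gyros, ~0.5 rad/s
    activity = step(activity, noisy_rates, weights)
print(decode(activity))                                 # ~0.5 rad after 1 s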

Citations: 0
Vision-state Fusion: Improving Deep Neural Networks for Autonomous Robotics
IF 3.3 · CAS Tier 4, Computer Science · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2024-04-10 · DOI: 10.1007/s10846-024-02091-6
Elia Cereda, Stefano Bonato, Mirko Nava, Alessandro Giusti, Daniele Palossi

Vision-based deep learning perception fulfills a paramount role in robotics, facilitating solutions to many challenging scenarios, such as acrobatic maneuvers of autonomous unmanned aerial vehicles (UAVs) and robot-assisted high-precision surgery. Control-oriented end-to-end perception approaches, which directly output control variables for the robot, commonly take advantage of the robot's state estimation as an auxiliary input. When intermediate outputs are estimated and fed to a lower-level controller, i.e., mediated approaches, the robot's state is commonly used as an input only for egocentric tasks, which estimate physical properties of the robot itself. In this work, we propose to apply a similar approach for the first time, to the best of our knowledge, to non-egocentric mediated tasks, where the estimated outputs refer to an external subject. We show how our general methodology improves the regression performance of deep convolutional neural networks (CNNs) on a broad class of non-egocentric 3D pose estimation problems, with minimal computational cost. By analyzing three highly different use cases, spanning from grasping with a robotic arm to following a human subject with a pocket-sized UAV, our results consistently improve the R² regression metric, by up to +0.51, compared to their stateless baselines. Finally, we validate the in-field performance of a closed-loop autonomous cm-scale UAV on the human pose estimation task. Our results show a significant reduction, 24% on average, in the mean absolute error of our stateful CNN compared to a state-of-the-art stateless counterpart.
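The core architectural idea, concatenating the robot's state estimate with the image features before the regression head, can be sketched in a few lines of PyTorch. The layer sizes, the 6-D state vector, and the 3-D output below are placeholders, not the network used in the paper.

```python
import torch
import torch.nn as nn

# Minimal sketch of "vision + state" fusion for a mediated (non-egocentric)
# regression task: image features and the robot's own state estimate are
# concatenated before the pose-regression head. All sizes are illustrative.

class VisionStateRegressor(nn.Module):
    def __init__(self, state_dim=6, out_dim=3):           # e.g. subject x, y, z
        super().__init__()
        self.backbone = nn.Sequential(                     # tiny CNN stand-in
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Sequential(
            nn.Linear(32 + state_dim, 64), nn.ReLU(),
            nn.Linear(64, out_dim),
        )

    def forward(self, image, state):
        feat = self.backbone(image)                        # (B, 32) visual features
        return self.head(torch.cat([feat, state], dim=1))  # fuse with robot state

model = VisionStateRegressor()
img = torch.randn(2, 3, 96, 96)                            # batch of 2 RGB frames
st = torch.randn(2, 6)                                     # matching state estimates
print(model(img, st).shape)                                # torch.Size([2, 3])
```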

Citations: 0
Efficient Stacking and Grasping in Unstructured Environments
IF 3.3 · CAS Tier 4, Computer Science · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2024-04-01 · DOI: 10.1007/s10846-024-02078-3
Fei Wang, Yue Liu, Manyi Shi, Chao Chen, Shangdong Liu, Jinbiao Zhu

Robotics has been booming in recent years. Especially with the development of artificial intelligence, more and more researchers have devoted themselves to the field, yet multi-task robot operation still has many shortcomings. Reinforcement learning has achieved good performance in manipulator manipulation, especially grasping, but grasping is only the first step of a robot's action sequence, and the stacking, assembly, placement, and other tasks that follow are often ignored. Such long-horizon tasks still face the problems of high time cost, dead-end exploration, and process reversal. Hierarchical reinforcement learning has some advantages in solving these problems, but not all tasks can be learned hierarchically. This paper addresses complex, continuous multi-action manipulation tasks by improving the hierarchical reinforcement learning method, proposing a framework that targets long-sequence tasks such as stacking and alignment. Our framework completes simulation experiments on various tasks and improves the success rate from 78.3% to 94.8% when cleaning cluttered toys. In the toy-stacking experiment, training is nearly three times faster than the baseline method, and the method generalizes to other long-horizon tasks. Experiments show that the more complex the task, the greater the advantage of our framework.
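The control flow of a hierarchical policy for such long-horizon tasks can be pictured schematically: a high-level policy picks the next sub-task and a low-level policy executes it, with the high level replanning only when a sub-task finishes. Both policies below are trivial stand-ins (the paper learns them with reinforcement learning), and the sub-task names are assumptions.

```python
import random

# Schematic sketch of a two-level (hierarchical) control loop for a
# long-horizon manipulation task. Not the paper's learned networks.

SUBTASKS = ["locate", "grasp", "place", "stack"]

def high_level_policy(observation, completed):
    """Pick the first not-yet-completed sub-task (stand-in for a learned policy)."""
    for task in SUBTASKS:
        if task not in completed:
            return task
    return None

def low_level_policy(subtask, observation):
    """Return (success, n_steps); stand-in for a learned motor policy."""
    return random.random() > 0.2, random.randint(5, 20)

observation, completed, total_steps = {}, set(), 0
while (subtask := high_level_policy(observation, completed)) is not None:
    success, steps = low_level_policy(subtask, observation)
    total_steps += steps
    if success:
        completed.add(subtask)      # advance the long-horizon plan
    # on failure the high level simply retries the same sub-task
print(completed, total_steps)
```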

Citations: 0
Nonlinearly Optimized Dual Stereo Visual Odometry Fusion
IF 3.3 · CAS Tier 4, Computer Science · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2024-03-28 · DOI: 10.1007/s10846-024-02069-4
Elizabeth Viviana Cabrera-Ávila, Bruno Marques Ferreira da Silva, Luiz Marcos Garcia Gonçalves

Visual odometry (VO) is an important problem studied in robotics and computer vision in which the relative camera motion is computed from visual information. In this work, we propose to reduce the error accumulation of a dual stereo VO system (4 cameras) computing 6-degree-of-freedom poses by fusing two independent stereo odometries with a nonlinear optimization. Our approach computes two stereo odometries using the LIBVISO2 algorithm and later merges them by using image correspondences between the stereo pairs and minimizing the reprojection error with graph-based bundle adjustment. Experiments carried out on the KITTI odometry datasets show that our method computes more accurate estimates (measured as the Relative Positioning Error) in comparison to traditional stereo odometry (stereo bundle adjustment). In addition, the proposed method has similar or better odometry accuracy compared to the ORB-SLAM2 and UCOSLAM algorithms.
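The quantity minimized in the graph-based bundle adjustment is the reprojection error of shared landmarks. The sketch below shows a generic pinhole reprojection residual with an axis-angle pose parameterization; the intrinsics, test values, and helper names are illustrative assumptions and do not reflect LIBVISO2's internals.

```python
import numpy as np

# Minimal sketch of the reprojection-error residual that a graph-based
# bundle adjustment would minimise when fusing two stereo odometries.

def rodrigues(rvec):
    """Axis-angle vector to rotation matrix."""
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        return np.eye(3)
    k = rvec / theta
    K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def reprojection_residual(pose, points_3d, observations, K):
    """pose = [rvec(3), t(3)]; points_3d: (N,3) world points; observations: (N,2) pixels."""
    R, t = rodrigues(pose[:3]), pose[3:]
    cam = points_3d @ R.T + t                  # world -> camera frame
    proj = cam @ K.T                           # pinhole projection
    pix = proj[:, :2] / proj[:, 2:3]
    return (pix - observations).ravel()        # stacked pixel errors to minimise

K = np.array([[700.0, 0, 320], [0, 700.0, 240], [0, 0, 1]])
pts = np.array([[0.5, 0.2, 4.0], [-0.3, 0.1, 6.0]])
obs = np.array([[400.0, 270.0], [290.0, 250.0]])
print(reprojection_residual(np.zeros(6), pts, obs, K))
```

In a full pipeline, residuals of this form over all inter-pair correspondences would be stacked and handed to a nonlinear least-squares solver to refine the fused pose graph.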

Citations: 0
Trading-Off Safety with Agility Using Deep Pose Error Estimation and Reinforcement Learning for Perception-Driven UAV Motion Planning
IF 3.3 · CAS Tier 4, Computer Science · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2024-03-27 · DOI: 10.1007/s10846-024-02085-4
Mehmetcan Kaymaz, Recep Ayzit, Onur Akgün, Kamil Canberk Atik, Mustafa Erdem, Baris Yalcin, Gürkan Cetin, Nazım Kemal Ure

Navigation and planning for unmanned aerial vehicles (UAVs) based on visual-inertial sensors has been a popular research area in recent years. However, most visual sensors are prone to high error rates when exposed to disturbances such as excessive brightness and blur, which can lead to catastrophic performance drops in perception and motion planning systems. This study proposes a novel framework to address the coupled perception-planning problem in high-risk environments. This is achieved by developing algorithms that can automatically adjust the agility of the UAV maneuvers based on the predicted error rate of the pose estimation system. The fundamental idea behind our work is to demonstrate that highly agile maneuvers become infeasible to execute when visual measurements are noisy. Thus, agility should be traded off with safety to enable efficient risk management. Our study focuses on navigating a quadcopter through a sequence of gates on an unknown map, and we rely on existing deep learning methods for visual gate-pose estimation. In addition, we develop an architecture for estimating the pose error under high-disturbance visual inputs. We use the estimated pose errors to train a reinforcement learning agent to tune the parameters of the motion planning algorithm to safely navigate the environment while minimizing the track completion time. Simulation results demonstrate that our proposed approach yields significantly fewer crashes and higher track completion rates compared to approaches that do not utilize reinforcement learning.
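The safety-agility trade-off can be pictured as a mapping from the predicted pose-estimation error to the planner's aggressiveness. The sketch below uses a fixed linear interpolation between a fast and a conservative speed limit purely for illustration; in the paper this tuning is performed by a reinforcement-learning agent, and the thresholds shown are assumed values.

```python
# Illustrative sketch: scale the planner's speed limit down as the predicted
# pose-estimation error grows. Fixed linear mapping and bounds are assumptions.

V_MAX, V_MIN = 8.0, 1.5         # m/s, most and least aggressive settings
ERR_LOW, ERR_HIGH = 0.05, 0.40  # predicted gate-pose error (m) considered low/high

def agility_from_predicted_error(err_pred):
    """Interpolate the speed limit between V_MAX (clean vision) and V_MIN (noisy)."""
    x = (err_pred - ERR_LOW) / (ERR_HIGH - ERR_LOW)
    x = min(max(x, 0.0), 1.0)   # clamp to [0, 1]
    return V_MAX - x * (V_MAX - V_MIN)

for e in (0.02, 0.15, 0.50):
    print(e, agility_from_predicted_error(e))
```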

Citations: 0
Power Transmission Line Inspections: Methods, Challenges, Current Status and Usage of Unmanned Aerial Systems
IF 3.3 · CAS Tier 4, Computer Science · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2024-03-26 · DOI: 10.1007/s10846-024-02061-y
Faiyaz Ahmed, J. C. Mohanta, Anupam Keshari

Condition monitoring of power transmission lines is essential for improving transmission efficiency and ensuring an uninterrupted power supply. Efficient inspection methods play a critical role in carrying out regular inspections with less effort and cost, minimal labour engagement, and ease of execution in any geographical and environmental conditions. Earlier, methods such as manual inspection, roll-on wire robotic inspection, and helicopter-based inspection were the preferred options. Today, Unmanned Aerial System (UAS) based inspection techniques are becoming increasingly suitable in terms of working speed, flexibility in handling difficult circumstances, accuracy of data collection, and cost minimization. This paper reports a state-of-the-art study on the inspection of power transmission line systems and the various methods used, explaining and comparing their merits and demerits. Furthermore, existing visual inspection systems used for power line inspection are reviewed. In addition, blockchain utilities for power transmission line inspection are discussed, illustrating next-generation data management possibilities, automating effective inspection, and providing solutions for current challenges. Overall, the review presents a concept for the synergistic integration of deep learning, navigation control concepts, and advanced sensors, so that UAVs with advanced computation techniques can be analyzed from different implementation aspects.

Citations: 0
Performance Guarantee for Autonomous Robotic Missions using Resource Management: The PANORAMA Approach
IF 3.3 · CAS Tier 4, Computer Science · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2024-03-22 · DOI: 10.1007/s10846-024-02058-7
Philippe Lambert, Karen Godary-Dejean, Lionel Lapierre, Lotfi Jaiem, Didier Crestani

This paper proposes the PANORAMA approach, which is designed to dynamically and autonomously manage the allocation of a robot's hardware and software resources during fully autonomous missions. This behavioral autonomy approach guarantees satisfaction of the mission performance constraints. The article clarifies the concept of performance for autonomous robotic missions and details the different phases of the PANORAMA approach. Finally, it focuses on an experimental implementation on a patrolling mission example.

Citations: 0
A Decomposition and a Scheduling Framework for Enabling Aerial 3D Printing
IF 3.3 · CAS Tier 4, Computer Science · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2024-03-22 · DOI: 10.1007/s10846-024-02081-8
Marios-Nektarios Stamatopoulos, Avijit Banerjee, George Nikolakopoulos

Aerial 3D printing is a pioneering technology, still at the conceptual stage, that combines the frontiers of 3D printing and unmanned aerial vehicles (UAVs) with the aim of autonomously constructing large-scale structures in remote and hard-to-reach locations. The envisioned technology will enable a paradigm shift in the construction and manufacturing industries by utilizing UAVs as precision flying construction workers. However, the limited payload-carrying capacity of UAVs, along with the intricate dexterity required for manipulation and planning, imposes a formidable barrier to overcome. Aiming to surpass these issues, a novel aerial decomposition-based and scheduling 3D printing framework is presented in this article, which considers a near-optimal decomposition of the model's original 3D shape into smaller, more manageable sub-parts called chunks. This is achieved by searching for planar cuts based on a heuristic function incorporating the necessary constraints on the interconnectivity between sub-parts, while avoiding any possibility of collision between the UAV's extruder and the generated chunks. Additionally, an autonomous task allocation framework is presented, which determines a priority-based sequence for assigning each printable chunk to a UAV for manufacturing. The efficacy of the proposed framework is demonstrated using the physics-based Gazebo simulation engine, where various primitive CAD-based aerial 3D constructions are established, accounting for the nonlinear UAV dynamics, associated motion planning, and reactive navigation through model predictive control.
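The priority-based allocation step can be illustrated with a small dependency-aware scheduler: a chunk becomes printable once the chunks it rests on are finished, and ready chunks are assigned to idle UAVs in priority order. The dependency graph, priorities, and UAV names below are toy values, not the output of the paper's decomposition.

```python
# Schematic sketch of priority-based chunk-to-UAV allocation under
# interconnectivity (dependency) constraints. All values are illustrative.

chunks = {                      # chunk -> set of chunks it depends on
    "A": set(), "B": set(), "C": {"A"}, "D": {"A", "B"}, "E": {"C", "D"},
}
priority = {"A": 0, "B": 1, "C": 2, "D": 3, "E": 4}   # lower prints earlier
uavs = ["uav1", "uav2"]

done, schedule = set(), []
while len(done) < len(chunks):
    # A chunk is ready when it is unfinished and all its supports are done.
    ready = sorted((c for c, dep in chunks.items()
                    if c not in done and dep <= done), key=priority.get)
    batch = ready[:len(uavs)]                 # one chunk per idle UAV this round
    schedule.append(list(zip(uavs, batch)))
    done.update(batch)
print(schedule)
```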

Citations: 0
A UAV Autonomous Landing System Integrating Locating, Tracking, and Landing in the Wild Environment
IF 3.3 · CAS Tier 4, Computer Science · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2024-03-21 · DOI: 10.1007/s10846-023-02041-8
Jinge Si, Bin Li, Liang Wang, Chencheng Deng, Junzheng Wang, Shoukun Wang

High-reliability landing systems for unmanned aerial vehicles (UAVs) have gained extensive attention for their applicability in complex wild environments. Accurate locating, flexible tracking, and reliable recovery are the main challenges in drone landing. In this paper, a novel UAV autonomous landing system and its control framework are proposed and implemented. The system comprises an environmental perception system, an unmanned ground vehicle (UGV), and a Stewart platform to locate, track, and recover the drone autonomously. First, a recognition algorithm based on multi-sensor fusion is developed to locate the target in real time with the help of a one-dimensional turntable. Second, a dual-stage tracking strategy composed of a UGV and a landing platform is proposed for dynamically tracking the landing drone. Over a wide range, the UGV is in charge of fast tracking through artificial potential field (APF) path planning and model predictive control (MPC) tracking algorithms, while trapezoidal speed planning is employed in the platform controller to compensate for the UGV's tracking error, realizing precise tracking of the drone at close range. Furthermore, a recovery algorithm including an attitude compensation controller and an impedance controller is designed for the Stewart platform, ensuring horizontal and compliant landing of the drone. Finally, extensive simulations and experiments verify the feasibility and reliability of the developed system and framework, indicating that it is a strong case of UAV autonomous landing in wild environments such as grasslands, slopes, and snow.
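The compliant-landing behaviour of the recovery stage can be illustrated with a one-dimensional impedance model: the platform yields under the contact force of the touching-down drone instead of rejecting it. The virtual mass, damping, and stiffness values below are assumptions for illustration, not the gains of the paper's Stewart-platform controller.

```python
# Minimal sketch of a 1-DOF (vertical) impedance model for compliant landing.
# M*z_ddot + B*z_dot + K*(z - z_ref) = f_ext, integrated with forward Euler.

M, B, K = 4.0, 30.0, 150.0        # virtual mass, damping, stiffness (assumed)
dt = 0.002
z, z_dot, z_ref = 0.0, 0.0, 0.0   # platform height offset and its reference

history = []
for k in range(1500):
    f_ext = 40.0 if 200 <= k < 400 else 0.0          # simulated touchdown force
    z_ddot = (f_ext - B * z_dot - K * (z - z_ref)) / M
    z_dot += z_ddot * dt
    z += z_dot * dt
    history.append(z)
print(max(history))               # peak compliance displacement under contact
```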

Citations: 0