
Journal of Intelligent & Robotic Systems: Latest Publications

Trajectory Tracking Control of Fixed-Wing Hybrid Aerial Underwater Vehicle Subject to Wind and Wave Disturbances
IF 3.3 | CAS Zone 4 (Computer Science) | Q2 (Computer Science, Artificial Intelligence) | Pub Date: 2024-06-20 | DOI: 10.1007/s10846-024-02099-y
Junping Li, Yufei Jin, Rui Hu, Yulin Bai, Di Lu, Zheng Zeng, Lian Lian

The hybrid aerial underwater vehicle (HAUV), which can operate both in the air and underwater, offers great convenience for aerial and underwater exploration, and the fixed-wing HAUV (FHAUV) has time, space, and cost advantages for future large-scale applications. However, the large difference between the aerial and underwater environments poses a control challenge, especially during the air/water transition. For FHAUV control, there is a lack of research on the phenomena and problems caused by the large changes in the air/water transition, and the effects of wind, waves, and other factors and conditions on motion control have not been investigated. This paper presents the first control study addressing these issues. A motion model of the FHAUV is developed that includes the effects of wind and wave disturbances. The paper then improves a cascade gain scheduling (CGS) PID controller for the two media (air and water) and proposes a cascade state feedback (CSF) control strategy to address the convergence problem caused by the large speed change in the air/water transition. In comparisons of the two control schemes across various tracking cases, including different trajectory slopes, reference speeds, and wind and wave disturbances, CSF achieves better control performance, convergence rate, and robustness. The key factors and conditions of the air/water transition are investigated, and the critical relations and feasible domains of trajectory slope and reference speed that the FHAUV must satisfy to successfully exit the water and enter the air are obtained: the critical slope decreases as the reference speed increases, and the feasible domain of CSF is larger than that of CGS, revealing that CSF is superior to CGS for exiting the water.
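
As an illustration of the gain-scheduling idea mentioned in the abstract, the sketch below shows a PID controller whose gain set is switched with the sensed medium (air vs. water). It is not the authors' CGS or CSF controller; the gain values and the density threshold are placeholder assumptions.

class GainScheduledPID:
    # Placeholder gain sets for each medium; a real design would tune these.
    GAINS = {
        "air":   dict(kp=2.0, ki=0.1, kd=0.5),
        "water": dict(kp=8.0, ki=0.4, kd=2.0),
    }

    def __init__(self):
        self.integral = 0.0
        self.prev_error = 0.0

    def select_medium(self, fluid_density):
        # Crude scheduling variable: switch gain sets when the sensed density
        # indicates the vehicle has crossed the air/water interface (threshold assumed).
        return "water" if fluid_density > 500.0 else "air"   # kg/m^3

    def update(self, error, fluid_density, dt):
        g = self.GAINS[self.select_medium(fluid_density)]
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return g["kp"] * error + g["ki"] * self.integral + g["kd"] * derivative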

Citations: 0
Dynamic Modelling and Optimal Sliding Mode Control of the Wearable Rehabilitative Bipedal Cable Robot with 7 Degrees of Freedom
IF 3.3 | CAS Zone 4 (Computer Science) | Q2 (Computer Science, Artificial Intelligence) | Pub Date: 2024-06-20 | DOI: 10.1007/s10846-024-02122-2
A. Sajedifar, M. H. Korayem, F. Allahverdi

Although robot-assisted physiotherapy has gained increasing attention in recent years, wearable lower-limb rehabilitation robots suffer reduced efficiency because additional equipment and motors located at the centers of the joints increase the complexity and the load on disabled patients. This paper proposes a novel rehabilitation approach that removes motors and equipment from the joint centers and places them on a fixed platform, transmitting power through cables. A model of a 14-cable-driven bipedal robot with 7 degrees of freedom is used to represent the corresponding lower-limb rehabilitation robot. The dynamic equations of the robot are derived using the Euler-Lagrange method. A sliding mode control technique provides accurate tracking of desired trajectories, ensuring smoothness despite disturbances and reducing tracking errors; this helps prevent patients from falling and supports them in maintaining balance during rehabilitative exercises. To ensure that the cables exert only positive tension, the sliding mode controller is combined with quadratic programming optimization, minimizing path error while constraining the controller input torques to be non-negative. The performance of the proposed controller was assessed over several control gains, with K = 10 identified as the most effective. The feasibility of this approach to rehabilitation is demonstrated by numerical results in MATLAB simulation, which show RMSE values for the right and left hip and thigh angles of 0.29, 0.37, 0.31, and 0.44, respectively, verifying an improved rehabilitation process. In addition, the correlation coefficient between the Adams and MATLAB simulation results for motor torque was found to be 0.98, indicating a high degree of agreement between the two simulations.
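
The tension-allocation step described above can be illustrated with a small sketch: a non-negative least-squares solve stands in for the quadratic program that keeps all cable tensions non-negative while tracking a sliding-mode torque command. The structure matrix W, its dimensions, and the random test values are illustrative assumptions, not the authors' formulation.

import numpy as np
from scipy.optimize import nnls

def allocate_tensions(W, tau_cmd):
    """Return cable tensions t >= 0 minimizing ||W @ t - tau_cmd||_2."""
    t, residual = nnls(W, tau_cmd)
    return t, residual

# Toy example: 7 joint torques produced by 14 cables (dimensions from the abstract).
rng = np.random.default_rng(0)
W = rng.standard_normal((7, 14))      # placeholder structure matrix
tau_cmd = rng.standard_normal(7)      # placeholder sliding-mode torque command
t, res = allocate_tensions(W, tau_cmd)
assert (t >= 0).all()                 # all cable tensions are non-negative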

Citations: 0
Graph-Based vs. Error State Kalman Filter-Based Fusion of 5G and Inertial Data for MAV Indoor Pose Estimation
IF 3.3 | CAS Zone 4 (Computer Science) | Q2 (Computer Science, Artificial Intelligence) | Pub Date: 2024-06-07 | DOI: 10.1007/s10846-024-02111-5
Meisam Kabiri, Claudio Cimarelli, Hriday Bavle, Jose Luis Sanchez-Lopez, Holger Voos

5G New Radio Time of Arrival (ToA) data has the potential to revolutionize indoor localization for micro aerial vehicles (MAVs). However, its performance under varying network setups, especially when combined with IMU data for real-time localization, has not been fully explored so far. In this study, we develop an Error State Kalman Filter (ESKF) and a Pose Graph Optimization (PGO) approach to address this gap. We systematically evaluate the performance of the derived approaches for real-time MAV localization in realistic scenarios with 5G base stations in Line-Of-Sight (LOS), demonstrating the potential of 5G technologies in this domain. In order to experimentally test and compare our localization approaches, we augment the EuRoC MAV benchmark dataset for visual-inertial odometry with simulated yet highly realistic 5G ToA measurements. Our experimental results comprehensively assess the impact of varying network setups, including varying base station numbers and network configurations, on ToA-based MAV localization performance. The findings show promising results for seamless and robust localization using 5G ToA measurements, achieving an accuracy of 15 cm throughout the entire trajectory within a graph-based framework with five 5G base stations, and an accuracy of up to 34 cm in the case of ESKF-based localization. Additionally, we measure the run time of both algorithms and show that they are both fast enough for real-time implementation.
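
To illustrate how a 5G ToA range enters a Kalman-type update, the sketch below implements a single range-measurement update for a filter state whose first three entries are the MAV position. The authors' full ESKF additionally carries velocity, attitude, and IMU biases, so this is only a simplified illustration under that assumption.

import numpy as np

def toa_update(x, P, p_bs, range_meas, sigma_r):
    """One range update: h(x) = ||p - p_bs|| for base station position p_bs."""
    p = x[:3]
    diff = p - p_bs
    pred = np.linalg.norm(diff)
    H = np.zeros((1, x.size))
    H[0, :3] = diff / pred                 # Jacobian of the range w.r.t. position
    R = np.array([[sigma_r ** 2]])         # range measurement noise variance
    y = np.array([range_meas - pred])      # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x_new = x + (K @ y).ravel()
    P_new = (np.eye(x.size) - K @ H) @ P
    return x_new, P_new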

Citations: 0
Towards Reliable Identification and Tracking of Drones Within a Swarm
IF 3.3 | CAS Zone 4 (Computer Science) | Q2 (Computer Science, Artificial Intelligence) | Pub Date: 2024-06-05 | DOI: 10.1007/s10846-024-02115-1
Nisha Kumari, Kevin Lee, Jan Carlo Barca, Chathurika Ranaweera

Drone swarms consist of multiple drones that can achieve tasks that individual drones cannot, such as search and recovery or surveillance over a large area. A swarm's internal structure typically consists of multiple drones operating autonomously. Reliable detection and tracking of swarms and of individual drones allows a deeper understanding of swarm dynamics and the movement of a swarm, which in turn enables improved coordination, collision avoidance, and performance monitoring of individual drones within the swarm. The research presented in this paper proposes a deep learning-based approach for reliable detection and tracking of individual drones within a swarm in real time using stereo-vision cameras. The proposed solution provides a precise tracking system and accounts for the highly dense and dynamic behaviour of drones. The approach is evaluated in both sparse and dense networks in a variety of configurations. The accuracy and efficiency of the proposed solution have been analysed through a series of comparative experiments that demonstrate reasonable accuracy in detecting and tracking drones within a swarm.

Citations: 0
Image moment-based visual positioning and robust tracking control of ultra-redundant manipulator
IF 3.3 | CAS Zone 4 (Computer Science) | Q2 (Computer Science, Artificial Intelligence) | Pub Date: 2024-05-30 | DOI: 10.1007/s10846-024-02103-5
Zhongcan Li, Yufei Zhou, Mingchao Zhu, Yongzhi Chu, Qingwen Wu

Image moment features can describe more general target patterns and have good decoupling properties. However, the image moment features that control the camera's rotational motion around the x-axis and y-axis depend mainly on the target image itself. In this paper, an image-moment-based visual positioning and robust tracking control method for an ultra-redundant manipulator is presented. First, six image moment features used to control camera motion around the x-axis and the y-axis are proposed. A novel method for selecting image features is then introduced. For tracking a moving target, a Kalman filter combined with an adaptive fuzzy sliding mode control method is proposed, which estimates the changes in image features caused by the target's motion online and compensates for estimation errors. Finally, an experimental system based on the LabVIEW Real-Time system and an ultra-redundant manipulator is used to verify the real-time performance and practicability of the algorithm. Experimental results illustrate the validity of the image features and the tracking method.
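
For readers unfamiliar with image moments, the sketch below computes raw, central, and scale-normalized moments of an image region with NumPy. The six rotation-related features proposed in the paper are particular combinations of such moments and are not reproduced here.

import numpy as np

def image_moments(img, p_max=2, q_max=2):
    """Raw moments m_pq, central moments mu_pq, and normalized moments eta_pq."""
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    m = {(p, q): np.sum((xs ** p) * (ys ** q) * img)
         for p in range(p_max + 1) for q in range(q_max + 1)}
    xc, yc = m[1, 0] / m[0, 0], m[0, 1] / m[0, 0]          # centroid
    mu = {(p, q): np.sum(((xs - xc) ** p) * ((ys - yc) ** q) * img)
          for p in range(p_max + 1) for q in range(q_max + 1)}
    # Scale-normalized central moments: eta_pq = mu_pq / mu_00^(1 + (p+q)/2)
    eta = {k: v / m[0, 0] ** (1 + (k[0] + k[1]) / 2) for k, v in mu.items()}
    return m, mu, eta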

Citations: 0
Very Low Level Flight Rules for Manned and Unmanned Aircraft Operations
IF 3.3 | CAS Zone 4 (Computer Science) | Q2 (Computer Science, Artificial Intelligence) | Pub Date: 2024-05-29 | DOI: 10.1007/s10846-024-02084-5
Anna Konert, Piotr Kasprzyk

An analysis of the development of legal regulations regarding unmanned civil aviation leads to the conclusion that the current air traffic rules are among the key issues that require amending. Are drones allowed to fly at any height? Can drones fly freely over a person’s house or garden, several meters above the ground? What is the minimum allowable height for drone flights? The method of study consisted of content analysis of the existing legislation. Current doctrines were confronted with the existing regulations, documents, materials, safety reports, and statistics. The results of the study show that the existing air traffic rules, precisely in the case of aircraft operations performed by manned and unmanned aviation at very low heights, are definitely practical in nature. First, in most countries violations of air traffic rules are prohibited acts subject to criminal penalty. Second, determining the principles of air traffic for air operations is of crucial importance for determining legally permissible interference in property ownership. The urban air mobility is outside the scope of this research.

Citations: 0
A Stereovision-based Approach for Retrieving Variable Force Feedback in Robotic-Assisted Surgery Using Modified Inception ResNet V2 Networks
IF 3.3 | CAS Zone 4 (Computer Science) | Q2 (Computer Science, Artificial Intelligence) | Pub Date: 2024-05-27 | DOI: 10.1007/s10846-024-02100-8
P. V. Sabique, Ganesh Pasupathy, S. Kalaimagal, G. Shanmugasundar, V. K. Muneer

The surge of haptic technology has greatly impacted robotic-assisted surgery in recent years owing to its rapid advancement in the field. Delivering tactile feedback to the surgeon plays a significant role in improving the user experience in robot-assisted minimally invasive surgery (RAMIS). This work proposes a modified Inception ResNet network, combined with dimensionality reduction, to regenerate the variable force produced during surgical intervention. The relevant datasets are collected from two ex vivo porcine skins and one artificial skin for validation of the results. The proposed framework models both the spatial and the temporal data collected from the sensors, tissue, manipulators, and surgical tools. The evaluations are based on three distinct datasets with modest variations in tissue properties. The results show an improvement in force prediction accuracy of 10.81% over RNN, 6.02% over RNN + LSTM, and 3.81% over the CNN + LSTM framework, and in torque prediction accuracy of 12.41% over RNN, 5.75% over RNN + LSTM, and 3.75% over CNN + LSTM. The sensitivity study demonstrates that features such as torque (96.93%), deformation (94.02%), position (93.98%), vision (92.12%), stiffness (87.95%), tool diameter (89.24%), rotation (65.10%), and orientation (62.51%) have corresponding influences on the predicted force. The quality of the predicted force improved by 2.18% when feature selection and dimensionality reduction were performed on features collected from tool, manipulator, tissue, and vision data and processed simultaneously in all four architectures. The method has potential applications in online surgical tasks and surgeon training.
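
A rough sketch of the general backbone-plus-regression-head idea is given below, using the stock Keras InceptionResNetV2. The authors' specific modifications and dimensionality-reduction scheme are not reproduced; the input size, head width, and six-component force/torque output are placeholder assumptions.

import tensorflow as tf

def build_force_regressor(input_shape=(299, 299, 3), n_outputs=6):
    # Stock backbone; the paper's "modified" variant is not reproduced here.
    backbone = tf.keras.applications.InceptionResNetV2(
        include_top=False, weights="imagenet", input_shape=input_shape)
    x = tf.keras.layers.GlobalAveragePooling2D()(backbone.output)
    x = tf.keras.layers.Dense(128, activation="relu")(x)   # assumed reduction layer
    out = tf.keras.layers.Dense(n_outputs, activation="linear")(x)
    return tf.keras.Model(backbone.input, out)

model = build_force_regressor()
model.compile(optimizer="adam", loss="mse")   # regression on force/torque targets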

Citations: 0
Deep Visual-guided and Deep Reinforcement Learning Algorithm Based for Multip-Peg-in-Hole Assembly Task of Power Distribution Live-line Operation Robot
IF 3.3 | CAS Zone 4 (Computer Science) | Q2 (Computer Science, Artificial Intelligence) | Pub Date: 2024-05-18 | DOI: 10.1007/s10846-024-02079-2
Li Zheng, Jiajun Ai, Yahao Wang, Xuming Tang, Shaolei Wu, Sheng Cheng, Rui Guo, Erbao Dong

The inspection and maintenance of the power distribution network are crucial for efficiently delivering electricity to consumers. Because of the high voltage of distribution lines, manual live-line operations are difficult, risky, and inefficient. This paper studies a Power Distribution Network Live-line Operation Robot (PDLOR) with autonomous tool-assembly capabilities to replace humans in various high-risk electrical maintenance tasks. To address the challenges of tool assembly in dynamic and unstructured work environments, we propose a framework consisting of deep visual-guided coarse localization and a prior-knowledge- and fuzzy-logic-driven deep deterministic policy gradient (PKFD-DPG) high-precision assembly algorithm. First, we propose a multiscale identification and localization network based on YOLOv5, which brings the peg close to the hole quickly and reduces ineffective exploration. Second, we design a combined main-auxiliary reward system, in which the main-line reward uses the hindsight experience replay mechanism and the auxiliary reward is based on a fuzzy logic inference mechanism, addressing ineffective exploration and sparse rewards during learning. In addition, we validate the effectiveness and advantages of the proposed algorithm through simulations and physical experiments and compare its performance with other assembly algorithms. The experimental results show that, for single-tool assembly tasks, the success rate of PKFD-DPG is 15.2% higher than that of DDPG with functionized reward functions and 51.7% higher than that of the PD force control method; for multi-tool assembly tasks, the success rate of PKFD-DPG is 17% and 53.4% higher than those of the other methods, respectively.
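
The main-auxiliary reward idea can be sketched as a sparse main-line reward (which hindsight experience replay would relabel during replay) plus a fuzzy-inference auxiliary term computed from alignment error and insertion progress. The membership functions, rule weights, and variable choices below are illustrative assumptions, not the paper's exact PKFD-DPG reward.

import numpy as np

def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    return float(np.clip(min((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0, 1.0))

def auxiliary_reward(align_err_mm, depth_ratio):
    small_err = tri(align_err_mm, -2.0, 0.0, 2.0)               # peaks at zero alignment error
    large_err = float(np.clip((align_err_mm - 1.0) / 4.0, 0.0, 1.0))  # saturates at 5 mm
    deep = float(np.clip(depth_ratio, 0.0, 1.0))                # insertion progress in [0, 1]
    # Rules: small error with deep insertion is rewarded; large error is penalized.
    fire = np.array([min(small_err, deep), small_err, large_err])
    consequent = np.array([1.0, 0.3, -0.5])
    return float(np.dot(fire, consequent) / (fire.sum() + 1e-9))  # weighted-average defuzzification

def total_reward(success, align_err_mm, depth_ratio, aux_weight=0.1):
    main = 1.0 if success else 0.0       # sparse main-line reward
    return main + aux_weight * auxiliary_reward(align_err_mm, depth_ratio)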

Citations: 0
A Minimalistic 3D Self-Organized UAV Flocking Approach for Desert Exploration
IF 3.3 | CAS Zone 4 (Computer Science) | Q2 (Computer Science, Artificial Intelligence) | Pub Date: 2024-05-18 | DOI: 10.1007/s10846-024-02108-0
Thulio Amorim, Tiago Nascimento, Akash Chaudhary, Eliseo Ferrante, Martin Saska

In this work, we propose a minimalistic swarm flocking approach for multirotor unmanned aerial vehicles (UAVs). Our approach allows the swarm to achieve cohesive and aligned flocking (collective motion) in a random direction without externally provided directional information exchange (alignment control). The method has minimalistic sensory requirements, as it uses only the relative range and bearing of swarm agents in local proximity obtained through onboard sensors. Thus, our method is able to stabilize and control a flock of general shape above steep terrain without any explicit communication between swarm members. To implement proximal control in three dimensions, the Lennard-Jones potential function is used to maintain cohesiveness and avoid collisions between robots. The performance of the proposed approach was tested in real-world conditions in experiments with a team of nine UAVs. The experiments also demonstrate the use of our approach on UAVs that are independent of external positioning systems such as the Global Navigation Satellite System (GNSS). Relying only on relative visual localization through the ultraviolet direction and ranging (UVDAR) system previously proposed by our group, the experiments verify that our system can be applied in GNSS-denied environments. The degree of alignment and cohesiveness achieved was evaluated using the order and steady-state value metrics.
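
The Lennard-Jones-based proximal control can be sketched directly from the potential V(r) = 4*eps*((sigma/r)^12 - (sigma/r)^6): each neighbour's relative range and bearing contributes an attractive or repulsive velocity term. The gain, eps, and sigma values below are placeholders, and the published controller includes details not shown here.

import numpy as np

def lj_force(r, eps=1.0, sigma=2.0):
    """Magnitude of -dV/dr: positive = repulsive (r small), negative = attractive (r large)."""
    return 4.0 * eps * (12.0 * sigma**12 / r**13 - 6.0 * sigma**6 / r**7)

def proximal_velocity_command(neighbours, gain=0.1, max_speed=1.0):
    """neighbours: list of (range_m, bearing_rad, elevation_rad) from onboard relative sensing."""
    cmd = np.zeros(3)
    for r, bearing, elevation in neighbours:
        # Unit vector from this UAV towards the neighbour in the body frame.
        u = np.array([np.cos(elevation) * np.cos(bearing),
                      np.cos(elevation) * np.sin(bearing),
                      np.sin(elevation)])
        cmd += -gain * lj_force(r) * u   # repulsion pushes away, attraction pulls closer
    speed = np.linalg.norm(cmd)
    return cmd if speed <= max_speed else cmd * (max_speed / speed)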

Citations: 0
OffRoadSynth Open Dataset for Semantic Segmentation using Synthetic-Data-Based Weight Initialization for Autonomous UGV in Off-Road Environments
IF 3.3 | CAS Zone 4 (Computer Science) | Q2 (Computer Science, Artificial Intelligence) | Pub Date: 2024-05-18 | DOI: 10.1007/s10846-024-02114-2
Konrad Małek, Jacek Dybała, Andrzej Kordecki, Piotr Hondra, Katarzyna Kijania

This article concerns the issue of image semantic segmentation for the machine vision system of an autonomous Unmanned Ground Vehicle (UGV) moving in an off-road environment. Determining the meaning (semantics) of the areas visible in the recorded image provides a complete understanding of the scene surrounding the autonomous vehicle. It is crucial for the correct determination of a passable route. Nowadays, semantic segmentation is generally solved using convolutional neural networks (CNN), which can take an image as input and output the segmented image. However, proper training of the neural network requires the use of large amounts of data, which becomes problematic in the situation of low availability of large, dedicated image data sets that consider various off-road situations - driving on various types of roads, surrounded by diverse vegetation and in various weather and light conditions. This study introduces a synthetic image dataset called “OffRoadSynth” to address the training data scarcity for off-road scenarios. It has been shown that pre-training the neural network on this synthetic dataset improves image segmentation accuracy compared to other methods, such as random network weight initialization or using larger, generic datasets. Results suggest that using a smaller but domain-dedicated set of synthetic images to initialize network weights for training on the target real-world dataset may be an effective approach to improving semantic segmentation results of images, including those from off-road environments.
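
The weight-initialization strategy described above amounts to two-stage training: pre-train a segmentation network on the synthetic set, then load those weights before fine-tuning on the real target dataset. The sketch below assumes PyTorch with a generic DeepLabV3 model, an assumed class count, and placeholder file names; it is not the authors' exact training setup.

import torch
import torchvision

NUM_CLASSES = 12   # assumed number of off-road classes

def make_model():
    return torchvision.models.segmentation.deeplabv3_resnet50(
        weights=None, num_classes=NUM_CLASSES)

# Stage 1: pre-train on the synthetic dataset (training loop omitted), then save the weights.
model = make_model()
# ... train on the synthetic images here ...
torch.save(model.state_dict(), "offroadsynth_pretrained.pth")

# Stage 2: initialize a fresh model from the synthetic weights and fine-tune on real data.
model_ft = make_model()
model_ft.load_state_dict(torch.load("offroadsynth_pretrained.pth"))
optimizer = torch.optim.Adam(model_ft.parameters(), lr=1e-4)
# ... fine-tune on the real off-road dataset here ...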

Citations: 0