
Latest Publications in Journal of Field Robotics

Degeneracy-Resistant LiDAR-SLAM Algorithm Based on Geometric and Visual Features' Fusion
IF 5.2 | CAS Zone 2, Computer Science | Q2 ROBOTICS | Pub Date: 2025-08-25 | DOI: 10.1002/rob.70026
Daping Yang, Wenzhong Shi, Shuyu Zhang, Mingyan Nie, Yitao Wei, Qiru Zhong, Shengyu Lu, Ameer Hamza Khan

Simultaneous localization and mapping (SLAM) is at the core of robotics automation, relying on sensors such as Laser Detection and Ranging (LiDAR) and cameras to digitally construct a robot's environment and determine its position within it. LiDAR-based SLAM outperforms visual SLAM, especially in low-visibility and challenging lighting conditions. However, these systems still face challenges such as scene degradation when dealing with feature-deficient degenerate environments like long corridors or tunnels. Traditional LiDAR SLAM algorithms focus primarily on extracting geometric features from the scene and make little use of visual information such as LiDAR-generated reflectivity (commonly referred to as intensity imagery) and depth imagery. In this study, we explore the potential of fusing both geometric and LiDAR-generated image features into the SLAM system in various forms, aiming to enhance the system's adaptability in diverse environments and its robustness against environmental degeneracy. We propose a new multifeature-modality SLAM designed for robust real-time localization and mapping in challenging environments. Our method enhances and extracts visual features from LiDAR-generated images, which are then fused with geometric features through a holistic residual function for pose optimization. We also integrate a deep learning-based object removal algorithm to reduce sensitivity to moving objects and sensor noise. This article compares the proposed algorithm in depth with several leading methods in terms of scan matching accuracy, robustness, odometry, and mapping. The experimental results demonstrate the superiority of our method in achieving high scan matching success rates and strong resilience against random outliers and Gaussian noise across various challenging scenarios, compared with existing LiDAR SLAM methods that rely solely on geometric features.
Extensive field experiments on publicly available data sets, along with independently developed backpack-based and robotic platforms, validated the robustness and accuracy of the proposed approach in both indoor and outdoor environments. In 3D mapping, we quantified the precision of 3D points by comparison against point clouds collected by high-precision Mobile Laser Scanning (MLS) and Terrestrial Laser Scanning (TLS). Our method outperforms existing approaches in absolute pose error (APE) and point cloud matching quality. Based on the fitted Weibull distribution, the root mean square (RMS) of point-to-plane distances improved by 20%. Additionally, ablation tests revealed the efficacy of the different components of our system.
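The abstract describes fusing geometric and visual features through a holistic residual function for pose optimization. As an illustration only (the paper's actual residual terms, names, and weights are not given here and are assumed), a least-squares solver might stack point-to-plane geometric residuals with intensity-image photometric residuals like this:

```python
import numpy as np

def point_to_plane_residual(p_world, q, n):
    # Geometric residual: signed distance from a transformed LiDAR point
    # p_world to the local plane through q with unit normal n.
    return float(np.dot(n, p_world - q))

def intensity_residual(i_obs, i_map):
    # Visual residual from a LiDAR intensity image: photometric difference
    # between the observed and the map-predicted intensity.
    return float(i_obs - i_map)

def holistic_residual(geo_terms, vis_terms, w_geo=1.0, w_vis=0.5):
    """Stack weighted geometric and visual residuals into one vector,
    as a nonlinear least-squares solver would consume them.
    (Weights here are hypothetical placeholders.)"""
    r = [w_geo * point_to_plane_residual(p, q, n) for (p, q, n) in geo_terms]
    r += [w_vis * intensity_residual(a, b) for (a, b) in vis_terms]
    return np.array(r)
```

A Gauss-Newton or Levenberg-Marquardt solver would then minimize `0.5 * r @ r` over the pose parameters that determine `p_world`.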

{"title":"Degeneracy-Resistant LiDAR-SLAM Algorithm Based on Geometric and Visual Features' Fusion","authors":"Daping Yang,&nbsp;Wenzhong Shi,&nbsp;Shuyu Zhang,&nbsp;Mingyan Nie,&nbsp;Yitao Wei,&nbsp;Qiru Zhong,&nbsp;Shengyu Lu,&nbsp;Ameer Hamza Khan","doi":"10.1002/rob.70026","DOIUrl":"https://doi.org/10.1002/rob.70026","url":null,"abstract":"<div>\u0000 \u0000 <p>Simultaneous localization and mapping (SLAM) is at the core of robotics automation, relying on sensors such as Laser Detection and Ranging (LiDAR) and cameras to digitally construct a robot's environment and determine its position within. LiDAR-based SLAM outperforms visual-SLAM, especially in low visibility and challenging lighting conditions. However, these systems still face challenges like scene degradation when dealing with feature-deficient degenerate environments such as long corridors or tunnels. Traditional LiDAR SLAM algorithms primarily focus on the extraction of geometric features from the scene, with less utilization of visual information, for example, LiDAR-generated reflectivity (also commonly referred to as intensity image) and depth imagery. In this study, we explore the potential of fusing both geometric and LiDAR-generated image features into the SLAM system in various forms, aiming to enhance the system's adaptability in diverse environments and its robustness against environment degeneracy. We propose a new multifeature-modality SLAM designed for robust real-time localization and mapping in challenging environments. Our method enhances and extracts visual features from LiDAR-generated images, which are then fused with geometric features through a holistic residual function for pose optimization. We also integrate a deep learning-based object removal algorithm to reduce sensitivity to moving objects and sensor noise. 
This article conducts an in-depth comparison of the proposed algorithm with several leading technologies in terms of scan matching accuracy, robustness, odometry, and mapping. The experimental results vividly showcase the superiority of our method in achieving high scan matching success rates and strong resilience against random outliers and Gaussian noise across various challenging scenarios, compared to the existing LiDAR SLAM methods that rely solely on geometric features. Extensive field experiments conducted on publicly available data sets, along with independently developed backpack-based and robotic platforms, validated the robustness and accuracy of the proposed approach in both indoor and outdoor environments. In 3D mapping, we quantified the precision of 3D points by comparing point clouds collected by high-precision Mobile Laser Scanning (MLS) and Terrestrial Laser Scanning (TLS). Our method outperforms in terms of absolute pose errors (APE) and point cloud matching quality. Based on the fitted Weibull distribution, the root mean square error (RMS) of point-to-plane distances improved by 20%. Additionally, ablation tests revealed the efficacy of different components within our system.</p>\u0000 </div>","PeriodicalId":192,"journal":{"name":"Journal of Field Robotics","volume":"43 1","pages":"330-352"},"PeriodicalIF":5.2,"publicationDate":"2025-08-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145792462","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Development and Field Evaluation of a Hydraulic Arm Integrated With an E-Vehicle for Robotic Cotton Picker
IF 5.2 | CAS Zone 2, Computer Science | Q2 ROBOTICS | Pub Date: 2025-08-25 | DOI: 10.1002/rob.70045
Rahul Yadav, Manjeet Singh, Aseem Verma, Naresh Kumar Chhuneja, Derminder Singh, Paramjit Singh

In cotton picking, there is a need for a robotic arm that can pick bolls efficiently with minimal loss and minimal damage to immature bolls. This study addresses that gap by developing a hydraulic arm that works effectively in unstructured cotton fields. A three-degrees-of-freedom hydraulic arm was developed with vertical, radial, and rotational movements, equipped with a roller-type end-effector and a vacuum suction system to pick cotton bolls from the plants and collect them into a storage tank. The performance of the hydraulic arm was evaluated in the field under varying plant-to-plant spacings (250, 350, and 450 mm) and suction pressures (25, 50, and 75 mmHg). Key performance parameters such as picking efficiency, picking capacity, damage to green bolls, cotton boll losses, and trash content were evaluated. The results indicated that the maximum picking efficiency (92.79%) and capacity (567 bolls picked/h) were achieved at 450 mm plant-to-plant spacing and 75 mmHg suction pressure, with minimal damage to green bolls (126 bolls/ha) and minimal boll losses (3.79%). The lowest trash content (2.06%) was observed at 450 mm spacing and 25 mmHg suction pressure. Finally, the hydraulic arm achieved precise and accurate cotton picking in complex cotton fields, performing best at 450 mm plant-to-plant spacing and 75 mmHg suction pressure.
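The performance parameters reported above can be related by simple definitions. A minimal sketch, assuming efficiency and loss rate are percentages of the bolls available on the plants and capacity is bolls picked per operating hour (the paper's exact definitions may differ):

```python
def picking_efficiency(picked, total_on_plants):
    # Assumed definition: percentage of available bolls successfully picked.
    return 100.0 * picked / total_on_plants

def picking_capacity(picked, hours):
    # Bolls picked per hour of operation.
    return picked / hours

def loss_rate(lost, total_on_plants):
    # Assumed definition: percentage of available bolls lost during picking.
    return 100.0 * lost / total_on_plants
```

Under these assumed definitions, picking 463 of 500 available bolls would give an efficiency of 92.6%, close to the 92.79% reported in the study.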

{"title":"Development and Field Evaluation of a Hydraulic Arm Integrated With an E-Vehicle for Robotic Cotton Picker","authors":"Rahul Yadav,&nbsp;Manjeet Singh,&nbsp;Aseem Verma,&nbsp;Naresh Kumar Chhuneja,&nbsp;Derminder Singh,&nbsp;Paramjit Singh","doi":"10.1002/rob.70045","DOIUrl":"https://doi.org/10.1002/rob.70045","url":null,"abstract":"<div>\u0000 \u0000 <p>In cotton picking, there is a need for a robotic arm that can pick bolls efficiently with minimal loss and damage to immature bolls. This study addresses that gap by developing a hydraulic arm to work effectively in unstructured cotton fields. A three-degrees-of-freedom hydraulic arm was developed with vertical, radial, and rotational movements, equipped with a roller-type end-effector and vacuum suction system to pick cotton bolls from the plants and collect them into a storage tank. The performance of the hydraulic arm was evaluated in the field under varying conditions of plant-to-plant spacing (250, 350, and 450 mm) and at suction pressures (25, 50, and 75 mmHg). Key performance parameters such as picking efficiency, picking capacity, damage of green bolls, losses of cotton bolls, and trash content were evaluated. The results indicated that the maximum picking efficiency (92.79%) and capacity (567 bolls picked/h) were achieved at 450 mm plant-to-plant spacing and 75 mmHg suction pressure, with minimal damage of green bolls (126 bolls/ha) and losses of cotton bolls (3.79%). The lowest trash content (2.06%) was observed at 450 mm spacing and 25 mmHg suction pressure. 
Finally, the hydraulic arm achieved precise and accurate cotton picking in complex cotton fields, showing the best performance at 450 mm plant-to-plant spacing and 75 mmHg suction pressure.</p>\u0000 </div>","PeriodicalId":192,"journal":{"name":"Journal of Field Robotics","volume":"43 1","pages":"409-420"},"PeriodicalIF":5.2,"publicationDate":"2025-08-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145792464","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Monocular Visual-Based State Estimator for Online Navigation in Complex and Unstructured Underwater Environments
IF 5.2 | CAS Zone 2, Computer Science | Q2 ROBOTICS | Pub Date: 2025-08-25 | DOI: 10.1002/rob.70032
Francesco Ruscio, Simone Tani, Andrea Caiti, Riccardo Costanzi

This paper proposes a monocular visual-based navigation state estimator designed to operate onboard cost-effective Autonomous Underwater Vehicles (AUVs) in monitoring and inspection applications. The estimator exploits a monocular visual odometry solution, named Mono UVO (Monocular Underwater Visual Odometry), integrating acoustic range information to make the scale observable and provide an estimate of the robot's linear velocity in complex and unstructured underwater scenarios. Using an Extended Kalman Filter, the visual-based linear velocity is fused with robot attitude and depth measurements to retrieve the AUV navigation state. The proposed navigation framework was extensively tested offline using heterogeneous data sets of real underwater images collected during several experimental campaigns. Moreover, online validation of the navigation state estimator was performed onboard an AUV to accomplish a closed-loop autonomous survey at sea. The performance of the state estimator is evaluated by comparing its output with reference signals obtained from Doppler Velocity Log measurements and, when available, GPS. The results demonstrate the feasibility of the proposed visual-based state estimator in providing a reliable AUV navigation state in very different and challenging underwater environments. As part of the contributions, the source code of the Mono UVO algorithm is made available online, together with the release of an underwater data set.
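The abstract states that an Extended Kalman Filter fuses the visual linear-velocity estimate with attitude and depth measurements. A minimal linear Kalman step (not the paper's actual filter; the state layout, matrices, and noise values below are illustrative) shows the predict/update structure such a fusion relies on:

```python
import numpy as np

def kf_step(x, P, z_meas, H, R, F, Q):
    """One Kalman filter cycle: predict with motion model F, Q,
    then update with measurement z_meas = H x + noise (covariance R)."""
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update
    S = H @ P @ H.T + R              # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
    x = x + K @ (z_meas - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P
```

For a hypothetical 2-state model [depth, vertical velocity] with time step dt, one could use F = [[1, dt], [0, 1]] and fuse a depth reading and the visual velocity through H = I with their respective measurement covariances in R.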

{"title":"Monocular Visual-Based State Estimator for Online Navigation in Complex and Unstructured Underwater Environments","authors":"Francesco Ruscio,&nbsp;Simone Tani,&nbsp;Andrea Caiti,&nbsp;Riccardo Costanzi","doi":"10.1002/rob.70032","DOIUrl":"https://doi.org/10.1002/rob.70032","url":null,"abstract":"<div>\u0000 \u0000 <p>This paper proposes a monocular visual-based navigation state estimator designed to operate onboard cost-effective Autonomous Underwater Vehicles (AUVs) in monitoring and inspection applications. The estimator exploits a monocular visual odometry solution, named Mono UVO (Monocular Underwater Visual Odometry), integrating acoustic range information to make the scale observable and provide an estimate of the robot linear velocity in complex and unstructured underwater scenarios. By utilizing an Extended Kalman Filter, the visual-based linear velocity is fused with robot attitude and depth measurements to retrieve the AUV navigation state. The proposed navigation framework was extensively tested offline using heterogeneous data sets of real underwater images collected during several experimental campaigns. Moreover, online validation of the navigation state estimator was performed onboard an AUV to accomplish a closed-loop autonomous survey at sea. The performance of the state estimator is evaluated by comparing the estimation output with reference signals obtained from Doppler Velocity Log measurements and GPS when available. The results demonstrate the feasibility of the proposed visual-based state estimator in providing reliable AUV navigation state in very different and challenging underwater environments. 
Among the contributions, the source code of the Mono UVO algorithm is made available online, together with the release of an underwater data set.</p>\u0000 </div>","PeriodicalId":192,"journal":{"name":"Journal of Field Robotics","volume":"43 1","pages":"353-375"},"PeriodicalIF":5.2,"publicationDate":"2025-08-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145792463","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
LMFFNet: Lightweight Multiscale Feature Fusion Network for Underwater Structural Defect Detection
IF 5.2 | CAS Zone 2, Computer Science | Q2 ROBOTICS | Pub Date: 2025-08-25 | DOI: 10.1002/rob.70041
Chunyan Ma, Huibin Wang, Kai Zhang, Guangze Shen, Zhe Chen

Complex physicochemical environmental effects result in highly intricate and diverse backgrounds in underwater object images, posing significant challenges for detecting structural defects. Moreover, current methods overlook the tradeoff between detection accuracy and computational cost. To address these issues, we present a lightweight multiscale feature fusion network (LMFFNet) for underwater structural defect detection. To enhance feature representability, spatial attention and bidirectional pyramid modules are jointly employed to fuse multiscale features. A parameter-sharing head module is designed to reduce the number of model parameters. A dynamic nonmonotonic focusing mechanism is introduced into the loss function, improving defect detection performance on degraded underwater images. Comprehensive experiments on a real-world underwater data set demonstrate the superiority of LMFFNet over existing state-of-the-art methods.
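The abstract names a dynamic nonmonotonic focusing mechanism in the loss function without giving its form. As an assumed illustration only, the nonmonotonic gain popularized by Wise-IoU v3 weights each sample by its outlier degree beta (its loss relative to a running mean), boosting ordinarily hard samples while suppressing both easy samples and extreme outliers:

```python
def nonmonotonic_focus_weight(loss, mean_loss, alpha=1.9, delta=3.0):
    """Hypothetical focusing weight in the Wise-IoU v3 style; the paper's
    actual mechanism may differ. alpha and delta are shape hyperparameters."""
    # Dynamic part: outlier degree relative to the (moving) mean loss.
    beta = loss / max(mean_loss, 1e-12)
    # Nonmonotonic part: rises for moderate beta, decays exponentially for
    # large beta; equals 1 exactly when beta == delta.
    return beta / (delta * alpha ** (beta - delta))
```

The "dynamic" aspect comes from normalizing by the running mean loss, so the notion of an outlier adapts as training progresses.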

{"title":"LMFFNet: Lightweight Multiscale Feature Fusion Network for Underwater Structural Defect Detection","authors":"Chunyan Ma,&nbsp;Huibin Wang,&nbsp;Kai Zhang,&nbsp;Guangze Shen,&nbsp;Zhe Chen","doi":"10.1002/rob.70041","DOIUrl":"https://doi.org/10.1002/rob.70041","url":null,"abstract":"<div>\u0000 \u0000 <p>Complex physicochemical environmental effects result in highly intricate and diverse backgrounds in underwater object images, posing significant challenges for detecting structural defects. Besides, current methods overlook the tradeoff between the detection accuracy and computational cost. To effectively address the aforementioned issues, we present a lightweight multiscale feature fusion network (LMFFNet) for underwater structural defect detection. Aiming to enhance the feature representability, spatial attention, and bidirectional pyramid modules are jointly employed for fusing multiscale features. A parameter-sharing header module is designed to reduce the model parameters. The dynamic nonmonotonic focusing mechanism is introduced in the loss function, which can improve the defect detection performance on degraded underwater images. Comprehensive experiments on a real-world underwater data set demonstrate the superiority of the LMFFNet over existing state-of-the-art methods.</p>\u0000 </div>","PeriodicalId":192,"journal":{"name":"Journal of Field Robotics","volume":"43 1","pages":"376-386"},"PeriodicalIF":5.2,"publicationDate":"2025-08-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145779595","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
The Effect of Tip Design on Technological Performance During the Exploration of Earth, Lunar, and Martian Soil Environments
IF 5.2 | CAS Zone 2, Computer Science | Q2 ROBOTICS | Pub Date: 2025-08-25 | DOI: 10.1002/rob.70043
Serena Rosa Maria Pirrone, Emanuela Del Dottore, Gunter Just, Barbara Mazzolai, Luc Sibille

This paper investigates the penetration performance of soil-burrowing probes with different tip designs during shallow-depth penetration in various media, including terrestrial soils (Hostun sand) and well-characterized planetary soil simulants (LHS-1 Lunar regolith simulant and MGS-1 Martian regolith simulant). The analysis evaluates performance based on the pressure required to successfully penetrate the soil, comparing a conical tip design (i.e., the traditional penetrometer tip) with a plant root-inspired design. For each soil type, three levels of soil compaction were considered to verify how initial soil porosity affects penetration performance. The study involves both experimental and numerical analyses. Experimentally, penetration tests were conducted in chambers filled with Hostun sand, LHS-1, and MGS-1. Numerically, a three-dimensional Discrete Element Model was developed to simulate probe penetration in soil packings with the geomechanical properties of Hostun sand, LHS-1, and MGS-1, respectively. In accordance with the experimental findings, the modeling results show significant advantages of the plant root-inspired tip design over the conical tip. Indeed, the plant root-inspired design encountered lower soil resistance pressure during penetration across all soil types and compaction levels: relative to the non-bioinspired tip, the arithmetic mean pressure reduction achieved with the bioinspired tip was 25.5% experimentally and 25.4% numerically.

{"title":"The Effect of Tip Design on Technological Performance During the Exploration of Earth, Lunar, and Martian Soil Environments","authors":"Serena Rosa Maria Pirrone,&nbsp;Emanuela Del Dottore,&nbsp;Gunter Just,&nbsp;Barbara Mazzolai,&nbsp;Luc Sibille","doi":"10.1002/rob.70043","DOIUrl":"https://doi.org/10.1002/rob.70043","url":null,"abstract":"<p>This paper investigates the penetration performance of soil-burrowing probes with different tip designs during shallow-depth penetration in various media, including terrestrial soils (Hostun sand) and well-characterized planetary soil simulants (LHS-1 Lunar regolith simulant and MGS-1 Martian regolith simulant). The analysis evaluates performance based on the pressure required to successfully penetrate the soil, comparing a conical tip design (i.e., the traditional tip design of penetrometers) with a plant root-inspired design. For each soil type, three different levels of soil compaction were considered to verify how initial soil porosity affects penetration performance. The study involves both experimental and numerical analyses. Experimentally, penetration tests were conducted in chambers filled with Hostun sand, LHS-1, and MGS-1. Numerically, a three-dimensional Discrete Element Model was developed to simulate probe penetration in soil packings with geomechanical properties of Hostun sand, LHS-1, and MGS-1, respectively. In accordance with the experimental findings, the modeling results show significant advantages of the plant root-inspired tip design over the conical tip. 
Indeed, the plant root-inspired design encountered lower soil resistance pressure during penetration across all soil types and compaction levels: the arithmetic mean values of the pressure reductions associated with the use of the bioinspired tip design resulted 25.5% experimentally and 25.4% numerically compared with the non-bioinspired tip.</p>","PeriodicalId":192,"journal":{"name":"Journal of Field Robotics","volume":"43 1","pages":"387-408"},"PeriodicalIF":5.2,"publicationDate":"2025-08-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/rob.70043","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145792465","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
PhytoPatholoBot: Autonomous Ground Robot for Near-Real-Time Disease Scouting in the Vineyard
IF 5.2 | CAS Zone 2, Computer Science | Q2 ROBOTICS | Pub Date: 2025-08-25 | DOI: 10.1002/rob.70049
Ertai Liu, Kaitlin M. Gold, Lance Cadle-Davidson, Kathleen Kanaley, David Combs, Yu Jiang

The grape and wine industry suffers substantial losses annually due to diseases such as downy mildew and grapevine leafroll-associated virus 3. Effective control of these diseases hinges on precise and timely diagnosis, which is often hindered by the shortage of highly skilled disease scouts. This highlights the urgent need for alternative, scalable solutions. We introduce PhytoPatholoBot (PPB), a fully autonomous ground robot equipped with a custom imaging system and onboard analysis pipeline for near-real-time disease detection and severity quantification, enabling rapid disease assessments in vineyards. The imaging system uses active illumination to enhance image quality and consistency, addressing a key challenge in ensuring the generalizability of analysis models. The analysis pipeline incorporates a custom segmentation model for disease mapping, designed for deployment on low-power edge computing devices to allow near-real-time inference. PPB was deployed in both research and commercial vineyards for field-based disease scouting. Experimental results demonstrated that its disease detection and severity quantification performance was comparable to that of experienced human scouts and advanced offline computer vision models, while maintaining the high computational efficiency and low power consumption suited to field robots. PPB's ability to map disease progression over the growing season and manage multiple disease types in previously unseen vineyards highlights its potential to advance agricultural research and improve vineyard disease management practices.

{"title":"PhytoPatholoBot: Autonomous Ground Robot for Near-Real-Time Disease Scouting in the Vineyard","authors":"Ertai Liu,&nbsp;Kaitlin M. Gold,&nbsp;Lance Cadle-Davidson,&nbsp;Kathleen Kanaley,&nbsp;David Combs,&nbsp;Yu Jiang","doi":"10.1002/rob.70049","DOIUrl":"https://doi.org/10.1002/rob.70049","url":null,"abstract":"<div>\u0000 \u0000 <p>The grape and wine industry suffers substantial losses annually due to diseases like downy mildew and grapevine leafroll-associated virus 3. Effective control of these diseases hinges on precise and timely diagnosis, which is often hindered by the shortage of highly skilled disease scouts. This highlights the urgent need for alternative, scalable solutions. We introduce PhytoPatholoBot (PPB), a fully autonomous ground robot equipped with a custom imaging system and onboard analysis pipeline for near-real-time disease detection and severity quantification, enabling rapid disease assessments in vineyards. The imaging system uses active illumination to enhance image quality and consistency, addressing a key challenge in ensuring the generalizability of analysis models. The analysis pipeline incorporates a disease mapping near-real-time model, a custom segmentation model designed for deployment on low-power edge computing devices, allowing near-real-time inference. PPB was deployed in both research and commercial vineyards for field-based disease scouting. Experimental results demonstrated that its disease detection and severity quantification performance was comparable to those of experienced human scouts and advanced offline computer vision models, while maintaining high computational efficiency and low-power consumption suited to field robots. 
PPB's ability to map disease progression over the growing season and manage multiple disease types in previously unseen vineyards highlights its potential to advance agricultural research and improve vineyard disease management practices.</p>\u0000 </div>","PeriodicalId":192,"journal":{"name":"Journal of Field Robotics","volume":"43 1","pages":"442-453"},"PeriodicalIF":5.2,"publicationDate":"2025-08-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145779596","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Back Cover Image, Volume 42, Number 6, September 2025
IF 5.2 | CAS Zone 2, Computer Science | Q2 ROBOTICS | Pub Date: 2025-08-20 | DOI: 10.1002/rob.70074
SaiXuan Chen, SaiHu Mu, GuanWu Jiang, Abdelaziz Omar, Zina Zhu, Fuzhou Niu

The cover image is based on the article Kinematic modeling of a 7-DOF tendonlike-driven robot based on optimization and deep learning by Niu Fuzhou et al., 10.1002/rob.22544.

{"title":"Back Cover Image, Volume 42, Number 6, September 2025","authors":"SaiXuan Chen,&nbsp;SaiHu Mu,&nbsp;GuanWu Jiang,&nbsp;Abdelaziz Omar,&nbsp;Zina Zhu,&nbsp;Fuzhou Niu","doi":"10.1002/rob.70074","DOIUrl":"https://doi.org/10.1002/rob.70074","url":null,"abstract":"<p>The cover image is based on the article <i>Kinematic modeling of a 7-DOF tendonlike-driven robot based on optimization and deep learning</i> by Niu Fuzhou et al., 10.1002/rob.22544.\u0000\u0000 <figure>\u0000 <div><picture>\u0000 <source></source></picture><p></p>\u0000 </div>\u0000 </figure></p>","PeriodicalId":192,"journal":{"name":"Journal of Field Robotics","volume":"42 6","pages":""},"PeriodicalIF":5.2,"publicationDate":"2025-08-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/rob.70074","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145101884","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Inside Front Cover Image, Volume 42, Number 6, September 2025
IF 5.2 | CAS Zone 2, Computer Science | Q2 ROBOTICS | Pub Date: 2025-08-20 | DOI: 10.1002/rob.70072
Qingxiang Wu, Yu'ao Wang, Yu Fu, Tong Yang, Yongchun Fang, Ning Sun

The cover image is based on the article Design and kinematic modeling of wrist-inspired joints for restricted operating spaces by Ning Sun et al., 10.1002/rob.22552.

{"title":"Inside Front Cover Image, Volume 42, Number 6, September 2025","authors":"Qingxiang Wu,&nbsp;Yu'ao Wang,&nbsp;Yu Fu,&nbsp;Tong Yang,&nbsp;Yongchun Fang,&nbsp;Ning Sun","doi":"10.1002/rob.70072","DOIUrl":"https://doi.org/10.1002/rob.70072","url":null,"abstract":"<p>The cover image is based on the article <i>Design and kinematic modeling of wrist-inspired joints for restricted operating spaces</i> by Ning Sun et al., 10.1002/rob.22552.\u0000\u0000 <figure>\u0000 <div><picture>\u0000 <source></source></picture><p></p>\u0000 </div>\u0000 </figure></p>","PeriodicalId":192,"journal":{"name":"Journal of Field Robotics","volume":"42 6","pages":""},"PeriodicalIF":5.2,"publicationDate":"2025-08-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/rob.70072","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145101926","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Cover Image, Volume 42, Number 6, September 2025
IF 5.2 CAS Tier 2 Computer Science Q2 ROBOTICS Pub Date: 2025-08-20 DOI: 10.1002/rob.70042
Yan Huang, Jiawei Zhang, Ran Yu, Shoujie Li, Wenbo Ding

The cover image is based on the article SimLiquid: A simulation-based liquid perception pipeline for robot liquid manipulation by Wenbo Ding et al., 10.1002/rob.22548.

Citations: 0
Inside Back Cover Image, Volume 42, Number 6, September 2025
IF 5.2 CAS Tier 2 Computer Science Q2 ROBOTICS Pub Date: 2025-08-20 DOI: 10.1002/rob.70073
Zhenliang Zheng, Chao Wang, Xiaoli Hu, Lun Zhang, Wenchao Zhang, Yongyuan Xu, Pengfei Liu, Xufang Pang, Tin Lun Lam, Ning Ding

The cover image is based on the article Developing a climbing robot for stay cable maintenance with security and rescue mechanisms by Ning Ding et al., 10.1002/rob.22519.

Citations: 0