
Latest Articles from the Journal of Field Robotics

A simulation-assisted point cloud segmentation neural network for human–robot interaction applications
IF 4.2 · CAS Tier 2 (Computer Science) · Q2 ROBOTICS · Pub Date: 2024-07-01 · DOI: 10.1002/rob.22385
Jingxin Lin, Kaifan Zhong, Tao Gong, Xianmin Zhang, Nianfeng Wang

With the advancement of industrial automation, the frequency of human–robot interaction (HRI) has significantly increased, making it paramount to ensure human safety throughout this process. This paper proposes a simulation-assisted neural network for point cloud segmentation in HRI, specifically distinguishing humans from various surrounding objects. During HRI, readily accessible prior information, such as the positions of background objects and the robot's posture, can be used to generate a simulated point cloud and assist in point cloud segmentation. The simulation-assisted neural network takes the simulated and actual point clouds as dual inputs. A simulation-assisted edge convolution module in the network combines features from the actual and simulated point clouds, updating the features of the actual point cloud to incorporate simulation information. Experiments on point cloud segmentation in industrial environments verify the efficacy of the proposed method.
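The pairing step at the heart of this idea, matching each actual point with its nearest simulated counterpart so the two feature sets can be combined, can be sketched in a few lines of framework-free Python (an illustration of the pairing step only, not the paper's network; the helper name is hypothetical):

```python
import math

def pair_with_simulation(actual_pts, simulated_pts):
    """For each actual point, find its nearest simulated neighbor and
    return the concatenated (actual, simulated) coordinate feature.
    Brute force for clarity; a k-d tree would be used at scale."""
    fused = []
    for p in actual_pts:
        nearest = min(simulated_pts, key=lambda q: math.dist(p, q))
        fused.append(p + nearest)  # 6-D feature: xyz_actual + xyz_simulated
    return fused

actual = [(0.0, 0.0, 0.1), (1.0, 0.2, 0.0)]      # e.g., points observed in the scene
simulated = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]   # rendered from known poses
features = pair_with_simulation(actual, simulated)
print(features[0])  # (0.0, 0.0, 0.1, 0.0, 0.0, 0.0)
```

In the paper this combination happens inside a learned edge-convolution module; the sketch only shows how prior simulation information can be attached point-by-point to a real scan.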

Journal of Field Robotics, vol. 41, no. 8, pp. 2689–2704.
Citations: 0
Three-dimensional kinematics-based real-time localization method using two robots
IF 4.2 · CAS Tier 2 (Computer Science) · Q2 ROBOTICS · Pub Date: 2024-07-01 · DOI: 10.1002/rob.22383
Guy Elmakis, Matan Coronel, David Zarrouk

This paper presents a precise two-robot collaboration method for three-dimensional (3D) self-localization relying on a single rotating camera and onboard accelerometers used to measure the tilt of the robots. This method allows for localization in global positioning system-denied environments, in the presence of magnetic interference, or in relatively (or totally) dark, unstructured, unmarked locations. At each step, one robot moves forward while the other remains stationary. The tilt angles of the robots obtained from the accelerometers, together with the rotational angle of the turret obtained from video analysis, make it possible to continuously calculate the location of each robot. We describe the hardware setup used for the experiments and provide a detailed description of the algorithm, which fuses the data obtained by the accelerometers and cameras and runs in real time on onboard microcomputers. Finally, we present 2D and 3D experimental results, which show that the system achieves 2% accuracy for the total traveled distance (see Supporting Information S1: video).
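The per-step geometry, converting a step length, an accelerometer pitch, and a turret azimuth into a 3D displacement, can be sketched as follows (illustrative geometry only, assuming a straight-line step; this is not the authors' full fusion algorithm):

```python
import math

def advance(position, step_len, pitch_deg, azimuth_deg):
    """Advance the moving robot's 3D position by one step of length
    step_len, given its pitch (from the onboard accelerometer) and the
    heading implied by the observing robot's turret azimuth."""
    pitch = math.radians(pitch_deg)
    azimuth = math.radians(azimuth_deg)
    x, y, z = position
    horizontal = step_len * math.cos(pitch)  # projection onto the ground plane
    return (x + horizontal * math.cos(azimuth),
            y + horizontal * math.sin(azimuth),
            z + step_len * math.sin(pitch))

# One 1 m step at azimuth 90 degrees up a 30 degree incline.
pos = advance((0.0, 0.0, 0.0), step_len=1.0, pitch_deg=30.0, azimuth_deg=90.0)
```

Alternating which robot moves and repeating this update is what lets the pair dead-reckon in 3D without GPS or a magnetometer.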

Journal of Field Robotics, vol. 41, no. 8, pp. 2676–2688. Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1002/rob.22383
Citations: 0
Soft crawling caterpillar driven by electrohydrodynamic pumps
IF 4.2 · CAS Tier 2 (Computer Science) · Q2 ROBOTICS · Pub Date: 2024-07-01 · DOI: 10.1002/rob.22388
Tianyu Zhao, Cheng Wang, Zhongbao Luo, Weiqi Cheng, Nan Xiang

Soft crawling robots are usually driven by bulky and complex external pneumatic or hydraulic actuators. In this work, we propose a miniaturized soft crawling caterpillar based on electrohydrodynamic (EHD) pumps. The caterpillar is mainly composed of a flexible EHD pump that provides the driving force, an artificial muscle that performs the crawling, a fluid reservoir, and several stabilizers and auxiliary feet. To achieve better crawling performance, the flow rate and pressure of the EHD pump were improved by using a curved electrode design. The electrode gap, electrode overlap length, channel height, electrode thickness, and number of electrode pairs of the EHD pump were further optimized. Compared with EHD pumps with conventional straight electrodes, our EHD pump showed a 50% enhancement in driving pressure and a 60% increase in flow rate. The bending capability of the artificial muscle was also characterized, showing a maximum bending angle of over 50°. The crawling ability of the soft caterpillar was then tested. Finally, our caterpillar offers simple fabrication, low cost, fast movement, and a small footprint, giving it robust and broad potential for practical use, especially over various terrains.

Journal of Field Robotics, vol. 41, no. 8, pp. 2705–2714.
Citations: 0
Autonomous navigation method based on RGB-D camera for a crop phenotyping robot
IF 4.2 · CAS Tier 2 (Computer Science) · Q2 ROBOTICS · Pub Date: 2024-06-30 · DOI: 10.1002/rob.22379
Meng Yang, Chenglong Huang, Zhengda Li, Yang Shao, Jinzhan Yuan, Wanneng Yang, Peng Song

Phenotyping robots have the potential to obtain crop phenotypic traits on a large scale with high throughput. Autonomous navigation technology for phenotyping robots can significantly improve the efficiency of phenotypic trait collection. This study developed an autonomous navigation method utilizing an RGB-D camera, specifically designed for phenotyping robots in field environments. The PP-LiteSeg semantic segmentation model was employed for its real-time and accurate segmentation capabilities, enabling the distinction of crop areas in images captured by the RGB-D camera. Navigation feature points were extracted from these segmented areas, with their three-dimensional coordinates determined from pixel and depth information, facilitating the computation of the angle deviation (α) and lateral deviation (d). Fuzzy controllers were designed with α and d as inputs for real-time deviation correction during the walking of the phenotyping robot. Additionally, the method includes end-of-row recognition and row spacing calculation, based on both visible and depth data, enabling automatic turning and row transition. The experimental results showed that the adopted PP-LiteSeg semantic segmentation model had a testing accuracy of 95.379% and a mean intersection over union of 90.615%. The robot's navigation demonstrated an average walking deviation of 1.33 cm, with a maximum of 3.82 cm. Additionally, the average error in row spacing measurement was 2.71 cm, while the success rate of row transition at the end of the row was 100%. These findings indicate that the proposed method provides effective support for the autonomous operation of phenotyping robots.
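A minimal sketch of a fuzzy controller of this general shape, taking the angle deviation α (degrees) and lateral deviation d (cm) as inputs and returning a steering correction, is shown below (membership ranges, rule table, and output singletons are assumptions for illustration, not the paper's tuned design):

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_correction(alpha, d):
    """Steering correction (degrees) from angle deviation alpha and lateral
    deviation d, via a 3x3 Sugeno-style rule table with singleton outputs
    and weighted-average defuzzification."""
    sets = {"neg": (-40.0, -20.0, 0.0),
            "zero": (-5.0, 0.0, 5.0),
            "pos": (0.0, 20.0, 40.0)}
    singleton = {"neg": -10.0, "zero": 0.0, "pos": 10.0}
    num = den = 0.0
    for name_a, params_a in sets.items():
        for name_d, params_d in sets.items():
            w = min(tri(alpha, *params_a), tri(d, *params_d))
            # Each rule steers against the average of the two deviations.
            num += w * -(singleton[name_a] + singleton[name_d]) / 2.0
            den += w
    return num / den if den else 0.0
```

For example, a +20° angle deviation with zero lateral deviation yields a −5° correction, while zero deviation on both inputs yields no correction.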

Journal of Field Robotics, vol. 41, no. 8, pp. 2663–2675. Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1002/rob.22379
Citations: 0
Design and movement mechanism analysis of a multiple degree of freedom bionic crocodile robot based on the characteristic of “death roll”
IF 4.2 · CAS Tier 2 (Computer Science) · Q2 ROBOTICS · Pub Date: 2024-06-21 · DOI: 10.1002/rob.22380
Chujun Liu, Jingwei Wang, Zhongyang Liu, Zejia Zhao, Guoqing Zhang

This paper introduces a multi-degree-of-freedom bionic crocodile robot designed to tackle the challenge of cleaning pollutants and debris from the surfaces of narrow, shallow rivers. The robot mimics the “death roll” motion of crocodiles, a technique used for object disintegration. First, the design incorporated a swinging tail using a multi-section oscillating guide-bar mechanism. By analyzing three-, four-, and five-section tail structures, the four-section tail was identified as the most effective, offering optimal strength and swing amplitude. The four sections of the tail reach maximum swing angles of 8.05°, 20.95°, 35.09°, and 43.84°, respectively, under a single motor's drive. Next, the robotic legs were designed with a double parallelogram mechanism, facilitating both crawling and retracting movements. In addition, the mouth employed a double-rocker mechanism for efficient closure and locking, achieving an average torque of 5.69 N m with a motor torque of 3.92 N m. Moreover, the robotic body was designed with upper and lower segment structures, and a waterproofing function was also considered. Finally, the kinematic mechanism and mechanical properties of the bionic crocodile structure were analyzed through both modeling and field tests. The results demonstrated the exceptional kinematic performance of the bionic crocodile robot, effectively replicating the authentic movement characteristics of a crocodile.

Journal of Field Robotics, vol. 41, no. 8, pp. 2650–2662.
Citations: 0
MPC-based cooperative multiagent search for multiple targets using a Bayesian framework
IF 4.2 · CAS Tier 2 (Computer Science) · Q2 ROBOTICS · Pub Date: 2024-06-19 · DOI: 10.1002/rob.22382
Hu Xiao, Rongxin Cui, Demin Xu, Yanran Li

This paper presents a multiagent cooperative search algorithm for identifying an unknown number of targets. The objective is to determine a collection of observation points and corresponding safe paths for agents, which involves balancing the detection time and the number of targets searched. A Bayesian framework is used to update the local probability density function of the targets when the agents obtain information. We utilize model predictive control and establish utility functions based on the detection probability and decrease in information entropy. A target detection algorithm is implemented to verify the target based on minimum-risk Bayesian decision-making. Then, we improve the search algorithm with the target detection algorithm. Several simulations demonstrate that compared with other existing approaches, the proposed approach can reduce the time needed to detect targets and the number of targets searched. We establish an experimental platform with three unmanned aerial vehicles. The simulation and experimental results verify the satisfactory performance of our algorithm.
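The Bayesian map update such a search performs after a negative observation can be sketched as follows (a simple grid model with assumed sensor probabilities; the paper's full framework additionally covers MPC planning, utility functions, and target verification):

```python
def bayes_update_no_detection(grid, cell, p_detect, p_false=0.0):
    """Posterior target-presence probability for `cell` after the sensor
    inspects it and reports nothing. Sensor model (assumed values):
    P(no detection | target present) = 1 - p_detect
    P(no detection | cell empty)     = 1 - p_false
    """
    updated = dict(grid)
    prior = grid[cell]
    miss = 1.0 - p_detect
    true_negative = 1.0 - p_false
    evidence = miss * prior + true_negative * (1.0 - prior)
    updated[cell] = miss * prior / evidence
    return updated

grid = {(0, 0): 0.5, (0, 1): 0.5}  # uniform prior over two cells
grid = bayes_update_no_detection(grid, (0, 0), p_detect=0.9)
print(round(grid[(0, 0)], 3))  # 0.091 -- belief shifts away from the searched cell
```

Repeating this update as agents move and observe is what drives the probability density function that the planner's utility functions are built on.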

Journal of Field Robotics, vol. 41, no. 8, pp. 2630–2649.
Citations: 0
A cable-driven underwater robotic system for delicate manipulation of marine biology samples
IF 4.2 · CAS Tier 2 (Computer Science) · Q2 ROBOTICS · Pub Date: 2024-06-17 · DOI: 10.1002/rob.22381
Mahmoud Zarebidoki, Jaspreet Singh Dhupia, Minas Liarokapis, Weiliang Xu

Underwater robotic systems have the potential to assist and complement humans in dangerous or remote environments, such as in the monitoring, sampling, or manipulation of sensitive underwater species. Here we present the design, modeling, and development of an underwater manipulator (UM) with a lightweight cable-driven structure that allows for delicate deep-sea reef sampling. The compact and lightweight design of the UM and gripper significantly decreases the coupling effect between the UM and the underwater vehicle (UV). The UM and gripper are equipped with force sensors, enabling soft and sensitive object manipulation and grasping. The accurate force exertion capabilities of the UM ensure efficient operation while localizing and approaching reef samples, such as corals and sponges. The active force control of the tendon-driven gripper ensures gentle, delicate grasping, handling, and transporting of marine samples without damaging their tissues. A complete simulation of the UM is provided for deriving the actuator and sensor specifications required for compatibility with UVs operating in a speed range of 1–4 knots. The system's performance for accurate trajectory tracking and delicate grasping of two different types of underwater species (a sponge skeleton and a Neptune's necklace seaweed) is verified using a model-free robust-adaptive position/force controller.
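A heavily simplified sketch of closed-loop grasp-force regulation on a tendon-driven gripper follows (a plain proportional update standing in for the model-free robust-adaptive position/force controller the paper uses; gains and limits are invented for illustration):

```python
def grasp_force_step(measured_force, target_force, tension,
                     gain=0.05, max_tension=2.0):
    """One proportional update of tendon tension toward a target grasp
    force, clamped so the tendon stays slack-free and bounded."""
    tension += gain * (target_force - measured_force)
    return min(max(tension, 0.0), max_tension)

# Ramp toward a 1 N grasp from an initial 0.2 N contact reading.
tension = grasp_force_step(measured_force=0.2, target_force=1.0, tension=0.0)
```

The clamp is the important design point for delicate samples: the commanded tension can never exceed a hard ceiling, regardless of sensor error.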

Journal of Field Robotics, vol. 41, no. 8, pp. 2615–2629. Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1002/rob.22381
Citations: 0
UP-GAN: Channel-spatial attention-based progressive generative adversarial network for underwater image enhancement
IF 4.2 · CAS Tier 2 (Computer Science) · Q2 ROBOTICS · Pub Date: 2024-06-12 · DOI: 10.1002/rob.22378
Ning Wang, Yanzheng Chen, Yi Wei, Tingkai Chen, Hamid Reza Karimi

Focusing on severe color deviation, low brightness, and mixed noise caused by inherent scattering and light attenuation effects within underwater environments, an underwater-attention progressive generative adversarial network (UP-GAN) is proposed for underwater image enhancement (UIE). Salient contributions are as follows: (1) By devising an underwater background light estimation module via an underwater imaging model, the degradation mechanism can be sufficiently integrated to fuse prior information, which in turn reduces the computational burden of subsequent enhancement; (2) to suppress mixed noise and enhance the foreground simultaneously, an underwater dual-attention module is created to enrich the skip connections along both channel and spatial dimensions, thereby avoiding noise amplification within the UIE; and (3) by systematically combining spatial-consistency, exposure-control, color-constancy, and color-relative-dispersion losses, the entire UP-GAN framework is optimized with multiple degradation factors taken into account. Comprehensive experiments conducted on the UIEB data set demonstrate the effectiveness and superiority of the proposed UP-GAN in both subjective and objective terms.
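The channel branch of a channel-spatial attention module can be sketched without any deep-learning framework: pool each channel to a scalar, squash it through a sigmoid, and rescale the channel (a toy stand-in; the real module uses learned weights rather than a bare sigmoid of the mean):

```python
import math

def channel_attention(feature_map):
    """Rescale each channel of a [channels][height][width] nested-list
    feature map by a sigmoid of its global average value."""
    gated = []
    for channel in feature_map:
        flat = [v for row in channel for v in row]
        weight = 1.0 / (1.0 + math.exp(-sum(flat) / len(flat)))  # sigmoid gate
        gated.append([[v * weight for v in row] for row in channel])
    return gated

fmap = [[[0.0, 0.0], [0.0, 0.0]],   # quiet channel: gate = sigmoid(0) = 0.5
        [[4.0, 4.0], [4.0, 4.0]]]   # active channel: gate = sigmoid(4) ~ 0.982
out = channel_attention(fmap)
```

A spatial branch works analogously, pooling across channels to weight each pixel location; combining both is what lets a skip connection emphasize foreground while suppressing noise.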

Journal of Field Robotics, vol. 41, no. 8, pp. 2597–2614.
Citations: 0
Mountain search and recovery: An unmanned aerial vehicle deployment case study and analysis
IF 4.2 · Computer Science (CAS Tier 2) · Q2 ROBOTICS · Pub Date: 2024-06-12 · DOI: 10.1002/rob.22376
Nathan L. Schomer, Julie A. Adams

Mountain search and rescue (MSAR) assists people in extreme, remote environments. This form of emergency response often relies on crewed aircraft to perform aerial visual search. Many MSAR teams use low-cost, consumer-grade unmanned aerial vehicles (UAVs) to augment crewed aircraft operations. These UAVs are primarily developed for aerial photography and lack many features (e.g., probability-prioritized coverage path planning) critical to supporting MSAR operations. As a result, UAVs are underutilized in MSAR. A case study of a recent mountain search and recovery scenario that did not use, but may have benefited from, UAVs is provided. An overview of the mission is augmented with a subject-matter-expert-informed analysis of how the mission may have benefited from current UAV technology. Lastly, mission-relevant requirements are presented, along with a discussion of how future UAV development can bridge the gap between state-of-the-art robotics and MSAR.
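For context, "probability-prioritized coverage path planning" means ordering the search pattern by where the missing person is most likely to be. A toy greedy planner over a probability grid — purely illustrative, not the authors' proposal or any MSAR product — could look like:

```python
def greedy_probability_coverage(prob_grid, start):
    """Visit every cell of a 2D probability grid, repeatedly moving to the
    cell with the best probability-per-distance ratio from the current cell.
    prob_grid: dict {(row, col): probability}; start: (row, col).
    Returns the visit order. Illustrative toy planner only."""
    unvisited = dict(prob_grid)
    pos, path = start, [start]
    unvisited.pop(start, None)
    while unvisited:
        # Manhattan distance as a cheap travel-cost proxy.
        def score(cell):
            d = abs(cell[0] - pos[0]) + abs(cell[1] - pos[1])
            return unvisited[cell] / (1 + d)
        pos = max(unvisited, key=score)
        path.append(pos)
        unvisited.pop(pos)
    return path

grid = {(r, c): p for (r, c), p in zip(
    [(0, 0), (0, 1), (1, 0), (1, 1)], [0.1, 0.6, 0.2, 0.1])}
path = greedy_probability_coverage(grid, (0, 0))
print(path)  # [(0, 0), (0, 1), (1, 0), (1, 1)]
```

The high-probability cell is visited first even though it is not the nearest, which is the qualitative behavior the abstract says consumer drones lack out of the box.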

Journal of Field Robotics, 41(8), 2583–2596 (2024). DOI: 10.1002/rob.22376
Citations: 0
A selective harvesting robot for cherry tomatoes: Design, development, field evaluation analysis
IF 4.2 · Computer Science (CAS Tier 2) · Q2 ROBOTICS · Pub Date: 2024-06-10 · DOI: 10.1002/rob.22377
Jiacheng Rong, Lin Hu, Hui Zhou, Guanglin Dai, Ting Yuan, Pengbo Wang

With the aging population and increasing labor costs, traditional manual harvesting methods have become less economically efficient. Consequently, fully automated harvesting of cherry tomatoes with selective harvesting robots has become a hot research topic. However, most current research focuses on individually harvesting large tomatoes, and there is less work on complete systems for harvesting cherry tomatoes in clusters. The purpose of this study is to develop a harvesting robot system capable of picking tomato clusters by cutting their fruit-bearing pedicels and to evaluate the robot prototype in real greenhouse environments. First, to enhance grasping stability, a novel end-effector was designed. It uses a cam mechanism to achieve asynchronous cutting and grasping actions with only one power source. Subsequently, a visual perception system was developed to locate the cutting points on the pedicels. This system has two parts: rough positioning of the fruits in the far-range view and accurate positioning of the pedicel cutting points in the close-range view. Furthermore, it can adaptively infer the approach pose of the end-effector from point cloud features extracted from fruit-bearing pedicels and stems. Finally, a prototype of the tomato-harvesting robot was assembled for field trials. The test results demonstrate that, for tomato clusters with unobstructed pedicels, the localization success rates for the cutting points were 88.5% and 83.7% in the two greenhouses, while the harvesting success rates reached 57.7% and 55.4%, respectively. The average cycle time to harvest a tomato cluster was 24 s. The experimental results demonstrate the developed robot's potential for commercial application, and an analysis of failure cases suggests directions for future work.
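The abstract says the approach pose is inferred from point cloud features of the pedicels and stems but does not give the method. One common geometric building block for this kind of step is fitting the pedicel's dominant axis with PCA and taking an approach direction perpendicular to it; the sketch below (synthetic data, illustrative assumption — not the authors' pipeline) shows the idea:

```python
import numpy as np

def pedicel_axis_and_approach(points):
    """Fit the dominant axis of a pedicel point cloud via PCA and derive a
    perpendicular candidate approach direction for the end-effector.
    `points` is an (N, 3) array. Toy geometric sketch only."""
    centroid = points.mean(axis=0)
    # Principal axis = eigenvector of the covariance with the largest eigenvalue.
    cov = np.cov((points - centroid).T)
    eigvals, eigvecs = np.linalg.eigh(cov)
    axis = eigvecs[:, np.argmax(eigvals)]
    # Any unit vector orthogonal to the axis is a candidate approach;
    # project a horizontal reference direction onto the axis' normal plane.
    ref = np.array([1.0, 0.0, 0.0])
    approach = ref - ref.dot(axis) * axis
    if np.linalg.norm(approach) < 1e-8:   # reference happened to be parallel
        approach = np.array([0.0, 1.0, 0.0]) - axis[1] * axis
    approach /= np.linalg.norm(approach)
    return centroid, axis, approach

# Synthetic pedicel: points along the z axis with small lateral noise.
rng = np.random.default_rng(1)
pts = np.c_[rng.normal(0, 0.002, 50), rng.normal(0, 0.002, 50),
            np.linspace(0, 0.05, 50)]
centroid, axis, approach = pedicel_axis_and_approach(pts)
print(abs(axis[2]) > 0.99, abs(axis.dot(approach)) < 1e-6)  # prints: True True
```

A real system would additionally filter the cloud, segment pedicel from stem, and check the approach vector against workspace and collision constraints.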

Journal of Field Robotics, 41(8), 2564–2582 (2024). DOI: 10.1002/rob.22377
Citations: 0