
Journal of Field Robotics: Latest Publications

Cover Image, Volume 41, Number 8, December 2024
IF 4.2, CAS Zone 2 (Computer Science), Q2 ROBOTICS. Pub Date: 2024-11-05. DOI: 10.1002/rob.22467
Guy Elmakis, Matan Coronel, David Zarrouk

The cover image is based on the Article Three-dimensional kinematics-based real-time localization method using two robots by Guy Elmakis et al., https://doi.org/10.1002/rob.22383

Citations: 0
A CIELAB fusion-based generative adversarial network for reliable sand–dust removal in open-pit mines
IF 4.2, CAS Zone 2 (Computer Science), Q2 ROBOTICS. Pub Date: 2024-07-15. DOI: 10.1002/rob.22387
Xudong Li, Chong Liu, Yangyang Sun, Wujie Li, Jingmin Li

Intelligent electric shovels are being developed for intelligent mining in open-pit mines. Complex environment detection and target recognition based on image recognition technology are prerequisites for intelligent electric shovel operation. However, open-pit mines contain large amounts of sand–dust, which lowers visibility and shifts colors during data collection, resulting in low-quality images. Images collected for environmental perception in sand–dust environments can seriously degrade the target detection and scene segmentation capabilities of intelligent electric shovels. Developing an effective image processing algorithm to solve these problems and improve the perception ability of intelligent electric shovels is therefore crucial. Deep learning methods have achieved good results in image dehazing, and these results transfer to some extent to sand–dust removal. However, deep learning relies heavily on data sets, and existing data sets concentrate on haze environments, leaving significant gaps for sand–dust images, especially in open-pit mining scenes. Another bottleneck is the limited performance of traditional sand–dust removal methods, which suffer from image distortion and blurring. To address these issues, a method for generating sand–dust image data based on atmospheric physical models and CIELAB color space features is proposed. The impact mechanism of sand–dust on images was analyzed through atmospheric physical models, and the formation of sand–dust images was divided into two parts: blurring and color deviation. We studied the blurring and color deviation effect generation theories based on atmospheric physical models and the CIELAB color space, and designed a two-stage sand–dust image generation method. We also constructed an open-pit mine sand–dust data set in a real mining environment.
Last but not least, this article takes generative adversarial networks (GANs) as the research foundation and focuses on the formation mechanism of sand–dust image effects. The CIELAB color features are fused with the discriminator of the GAN as basic priors and additional constraints to improve the discrimination effect. By combining the three feature components of the CIELAB color space and comparing algorithm performance, a feature fusion scheme is determined. The results show that the proposed method generates clear and realistic images, which helps improve the performance of target detection and scene segmentation tasks in heavy sand–dust open-pit mines.
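The two-stage degradation described above can be sketched with the standard atmospheric scattering model for the blurring stage and a color cast for the deviation stage. This is a minimal illustration, not the paper's implementation: the airlight `A`, attenuation coefficient `beta`, and cast strengths are made-up values, and the CIELAB b*-channel shift is approximated here by a yellow cast applied directly in RGB for brevity.

```python
import numpy as np

def add_sand_dust(img, A=np.array([0.8, 0.75, 0.6]), beta=1.2, depth=None):
    """Two-stage sand-dust degradation sketch.

    Stage 1 (blurring/attenuation): atmospheric scattering model
        I(x) = J(x) * t(x) + A * (1 - t(x)),  with t(x) = exp(-beta * d(x))
    Stage 2 (color deviation): a yellow cast approximating the CIELAB
    b*-channel shift. All parameter values here are illustrative.
    """
    img = img.astype(np.float64)              # HxWx3 image in [0, 1]
    h, w, _ = img.shape
    if depth is None:
        depth = np.ones((h, w))               # assume constant scene depth
    t = np.exp(-beta * depth)[..., None]      # per-pixel transmission map
    hazy = img * t + A * (1.0 - t)            # scattering model (stage 1)
    cast = np.array([1.05, 1.0, 0.85])        # boost red, suppress blue (stage 2)
    return np.clip(hazy * cast, 0.0, 1.0)

clean = np.full((4, 4, 3), 0.5)               # tiny gray test image
dusty = add_sand_dust(clean)
```

A GAN training pipeline would pair such synthesized sand–dust images with their clean counterparts as supervision.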

Citations: 0
ANN-PID based automatic braking control system for small agricultural tractors
IF 4.2, CAS Zone 2 (Computer Science), Q2 ROBOTICS. Pub Date: 2024-07-11. DOI: 10.1002/rob.22393
Nrusingh Charan Pradhan, Pramod Kumar Sahoo, Dilip Kumar Kushwaha, Dattatray G. Bhalekar, Indra Mani, Kishan Kumar, Avesh Kumar Singh, Mohit Kumar, Yash Makwana, Soumya Krishnan V., Aruna T. N.
The braking system is a crucial component of a tractor, as it ensures safe operation and control of the vehicle. The limited space in the workspace of a small tractor exposes the operator to undesirable posture and high levels of vibration during operation. A primary cause of road accidents, particularly collisions, is the tractor operator's insufficient capacity to provide the pedal force needed to engage the brake. While engaging the brake pedal, the operator adjusts the backrest support to reach the pedal under stressed conditions. In the present study, a linear actuator-assisted automatic braking system was developed for small tractors. An integrated artificial neural network proportional–integral–derivative (ANN-PID) controller-based algorithm was developed to control the position of the brake pedal based on input parameters such as terrain condition, obstacle distance, and forward speed of the tractor. The tractor was operated at four speeds (10, 15, 20, and 25 km/h) in different terrain conditions (dry compacted soil, tilled soil, and asphalt road). Performance parameters such as sensor digital output (SDO), force applied on the brake pedal (F_b), and deceleration were treated as dependent parameters. The SDO was found to be a good approximation for sensing the position of the brake pedal during braking. The optimized network topology of the developed multilayer perceptron neural network (MLPNN) was 3-6-2 for predicting the SDO and deceleration of the tractor, with coefficients of determination (R²) of 0.9953 and 0.9854 (training and testing) for SDO, and 0.9254 and 0.9096 for deceleration. The initial optimal gains of the PID controller were determined using the Ziegler–Nichols method and subsequently optimized using response surface methodology, yielding proportional, integral, and derivative coefficients of 4.8, 6.782, and 3.15, respectively. The developed integrated ANN (i.e., MLPNN- and PID-based) algorithm successfully controlled the position of the brake pedal during braking. Under all selected terrain conditions, the braking distance and slip during automatic braking increased as the forward speed rose from 10 to 25 km/h.
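The PID half of the controller described above can be sketched as a standard discrete PID loop. This is a minimal illustration under stated assumptions: the gains follow the study's reported optimized values (Kp = 4.8, Ki = 6.782, Kd = 3.15), but the ANN that predicts the pedal setpoint from terrain, obstacle distance, and speed is replaced here by a fixed target, and the sample time `dt` is an assumed value.

```python
class PID:
    """Discrete PID controller sketch for brake-pedal position control.

    Gains are the study's RSM-optimised values; the MLPNN setpoint
    predictor is out of scope here, so the setpoint is supplied directly.
    """
    def __init__(self, kp=4.8, ki=6.782, kd=3.15, dt=0.01):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, setpoint, measured):
        # classic positional PID: u = Kp*e + Ki*∫e dt + Kd*de/dt
        err = setpoint - measured
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# pedal currently at 20% stroke, ANN-predicted target (assumed) at 60%
pid = PID()
u = pid.step(setpoint=0.6, measured=0.2)   # positive output: push pedal further
```

In the actual system this output would drive the linear actuator that positions the brake pedal.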
Citations: 0
Uncertainty-aware LiDAR-based localization for outdoor mobile robots
IF 4.2, CAS Zone 2 (Computer Science), Q2 ROBOTICS. Pub Date: 2024-07-09. DOI: 10.1002/rob.22392
Geonhyeok Park, Woojin Chung

Accurate and robust localization is essential for autonomous mobile robots. Map matching based on Light Detection and Ranging (LiDAR) sensors has been widely adopted to estimate the global location of robots. However, map-matching performance can degrade when the environment changes or when sufficient features are unavailable. Indiscriminately incorporating inaccurate map-matching poses into localization can significantly decrease the reliability of pose estimation. This paper develops a robust LiDAR-based localization method built on map matching. We focus on determining appropriate weights computed from the uncertainty of map-matching poses. That uncertainty is estimated from the probability distribution over the poses, which we derive using the normal distribution transform map. A factor graph combines the map-matching pose, LiDAR-inertial odometry, and global navigation satellite system information. Experimental verification was conducted outdoors on a university campus in three scenarios, each involving changing or dynamic environments. We compared the proposed method with three LiDAR-based localization methods. The results show that robust localization performance is achieved even when map-matching poses are inaccurate in various outdoor environments. The experimental video can be found at https://youtu.be/L6p8gwxn4ak.

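The core idea, weighting each pose source by its estimated uncertainty, can be illustrated with information-form (inverse-covariance) fusion of two pose estimates. This is a simplified stand-in for the paper's factor graph, with made-up covariances: an uncertain map-matching pose contributes little weight, so the fused estimate stays close to the confident odometry.

```python
import numpy as np

def fuse_poses(mu_a, cov_a, mu_b, cov_b):
    """Inverse-covariance fusion of two Gaussian pose estimates.

    A minimal proxy for a factor-graph combination of map matching,
    LiDAR-inertial odometry, and GNSS: each source is weighted by its
    information matrix (the inverse of its covariance).
    """
    info_a, info_b = np.linalg.inv(cov_a), np.linalg.inv(cov_b)
    cov = np.linalg.inv(info_a + info_b)            # fused covariance
    mu = cov @ (info_a @ mu_a + info_b @ mu_b)      # information-weighted mean
    return mu, cov

# (x, y, yaw): confident odometry vs. uncertain map-matching pose
odo = np.array([1.0, 2.0, 0.10])
mm  = np.array([1.4, 2.3, 0.05])
mu, cov = fuse_poses(odo, np.diag([0.01] * 3), mm, np.diag([1.0] * 3))
```

With these assumed covariances the map-matching pose carries 1% of the odometry's weight, so an inaccurate match barely perturbs the fused pose.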
Citations: 0
Optimising robotic operation speed with edge computing via 5G network: Insights from selective harvesting robots
IF 4.2, CAS Zone 2 (Computer Science), Q2 ROBOTICS. Pub Date: 2024-07-05. DOI: 10.1002/rob.22384
Usman A. Zahidi, Arshad Khan, Tsvetan Zhivkov, Johann Dichtl, Dom Li, Soran Parsa, Marc Hanheide, Grzegorz Cielniak, Elizabeth I. Sklar, Simon Pearson, Amir Ghalamzan-E.

Selective harvesting by autonomous robots will be a critical enabling technology for future farming. Rising inflation and shortages of skilled labor are driving factors that can encourage user acceptance of robotic harvesting. For example, robotic strawberry harvesting requires real-time high-precision fruit localization, three-dimensional (3D) mapping, and path planning for 3D cluster manipulation. While industry and academia have developed multiple strawberry harvesting robots, none has yet achieved cost parity with human pickers. Achieving this goal requires increased picking speed (perception, control, and movement), higher accuracy, and low-cost robotic system designs. We propose the edge-server over 5G for Selective Harvesting (E5SH) system, which integrates a high-bandwidth, low-latency Fifth-Generation (5G) mobile network into a crop harvesting robotic platform and which we view as an enabler for future robotic harvesting systems. We also consider processing scale and speed in conjunction with system environmental and energy costs. A system architecture is presented and evaluated with quantitative results from a series of experiments comparing performance under different architecture choices, including image segmentation models, network infrastructure (5G vs. Wireless Fidelity), and messaging protocols such as Message Queuing Telemetry Transport and Transport Control Protocol Robot Operating System. Our results demonstrate that the E5SH system delivers a step-change peak processing speedup of more than 18-fold over a standalone embedded Nvidia Jetson Xavier NX system.

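The trade-off the experiments quantify, slower on-board inference versus network transfer plus faster edge inference, can be captured in a back-of-envelope model. This sketch is illustrative only: the bandwidth, round-trip time, and inference-time figures below are made-up placeholders, not measurements from the paper.

```python
def offload_speedup(img_mb, bw_mbps, rtt_ms, t_edge_ms, t_local_ms):
    """Model when offloading a frame to an edge server pays off.

    speedup = (local inference time) / (transfer time + edge inference time)
    All inputs are hypothetical; the paper's reported peak speedup over a
    standalone Jetson Xavier NX exceeds 18-fold.
    """
    transfer_ms = img_mb * 8.0 / bw_mbps * 1000.0 + rtt_ms  # upload + RTT
    edge_total_ms = transfer_ms + t_edge_ms
    return t_local_ms / edge_total_ms

# assumed: 2 MB frame, 200 Mbps 5G uplink, 10 ms RTT,
# 40 ms segmentation on the edge GPU vs. 1500 ms on the embedded board
s = offload_speedup(2.0, 200.0, 10.0, 40.0, 1500.0)
```

The same model shows the flip side: if local inference were already fast, transfer overhead would make offloading a net loss.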
Citations: 0
Semihierarchical reconstruction and weak-area revisiting for robotic visual seafloor mapping
IF 4.2, CAS Zone 2 (Computer Science), Q2 ROBOTICS. Pub Date: 2024-07-04. DOI: 10.1002/rob.22390
Mengkun She, Yifan Song, David Nakath, Kevin Köser

Despite the impressive results achieved by many on-land visual mapping algorithms in recent decades, transferring these methods from land to the deep sea remains a challenge due to harsh environmental conditions. Images captured by autonomous underwater vehicles equipped with high-resolution cameras and artificial illumination systems often suffer from heterogeneous illumination and quality degradation caused by attenuation and scattering, on top of the refraction of light rays. These challenges often cause on-land Simultaneous Localization and Mapping (SLAM) approaches to fail underwater, or cause Structure-from-Motion (SfM) approaches to drift or omit challenging images, leading to gaps, jumps, or weakly reconstructed areas. In this work, we present a navigation-aided hierarchical reconstruction approach to facilitate automated robotic three-dimensional reconstruction of hectares of seafloor. Our hierarchical approach combines the advantages of SLAM and global SfM, which are much more efficient than incremental SfM, while ensuring the completeness and consistency of the global map. This is achieved by identifying and revisiting problematic or weakly reconstructed areas, avoiding omitted images, and making better use of limited dive time. The proposed system has been extensively tested and evaluated during several research cruises, demonstrating its robustness and practicality in real-world conditions.

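One simple way to operationalize "weakly reconstructed areas" is a coverage grid: bin camera observations over the survey area and flag cells seen too few times as revisit targets. This is a hedged sketch of that idea, not the paper's actual criterion; the grid resolution and `min_views` threshold are illustrative assumptions.

```python
import numpy as np

def weak_cells(obs_xy, extent, res=1.0, min_views=3):
    """Flag weakly covered seafloor grid cells for revisiting.

    obs_xy: iterable of (x, y) observation footprints in meters.
    extent: ((x0, x1), (y0, y1)) survey bounds in meters.
    Returns (indices of cells with fewer than `min_views` views, counts).
    """
    (x0, x1), (y0, y1) = extent
    nx, ny = int((x1 - x0) / res), int((y1 - y0) / res)
    counts = np.zeros((nx, ny), dtype=int)
    for x, y in obs_xy:
        i, j = int((x - x0) / res), int((y - y0) / res)
        if 0 <= i < nx and 0 <= j < ny:
            counts[i, j] += 1                    # one more view of this cell
    return np.argwhere(counts < min_views), counts

# toy survey: cell (0, 0) seen four times, cell (1, 0) only once
obs = [(0.5, 0.5)] * 4 + [(1.5, 0.5)]
weak, counts = weak_cells(obs, ((0.0, 2.0), (0.0, 1.0)))
```

The flagged cells would then be queued as revisit waypoints for the remaining dive time.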
Citations: 0
Development and field evaluation of a VR/AR-based remotely controlled system for a two-wheel paddy transplanter
IF 4.2, CAS Zone 2 (Computer Science), Q2 ROBOTICS. Pub Date: 2024-07-02. DOI: 10.1002/rob.22389
Shiv Kumar Lohan, Mahesh Kumar Narang, Parmar Raghuvirsinh, Santosh Kumar, Lakhwinder Pal Singh

Operating a two-wheel paddy transplanter traditionally imposes physical strain and cognitive workload on farm workers, especially during headland turns. This study introduces a virtual reality (VR)/augmented reality (AR)-based remote-control system for a two-wheel paddy transplanter to resolve these issues. The system replaces manual controls with VR interfaces, integrating gear motors and an electronic control unit. Front- and rear-view cameras provide real-time field perception on light-emitting diode screens, displaying path trajectories via an autopilot controller and a real-time kinematic global navigation satellite system module. Human operators manipulate the machine using a hand-held remote controller while observing live camera feeds and path navigation trajectories. The study found that forward speed had to be optimized within manageable limits of 1.75–2.00 km h⁻¹ for walk-behind operation and 2.00–2.25 km h⁻¹ for remote-controlled operation. While higher speeds increased field capacity by 11.67%–12.95%, they also reduced field efficiency by 0.74%–1.17%. Additionally, analysis of the operators' physiological workload revealed significant differences between walk-behind and remotely controlled operation. Significant differences in energy expenditure rate (EER) were observed between walk-behind and remote-controlled paddy transplanting, with EER values ranging from 8.20 ± 0.80 to 27.67 ± 0.45 kJ min⁻¹ and 7.56 ± 0.55 to 9.72 ± 0.37 kJ min⁻¹, respectively (p < 0.05). Overall, the VR-based remote-control system shows promise in enhancing operational efficiency and reducing physical strain in paddy transplanting operations.

Journal of Field Robotics, 41(8), 2732–2748 (2024).
Citations: 0
Multitarget adaptive virtual fixture based on task learning for hydraulic manipulator
IF 4.2 | CAS Tier 2, Computer Science | Q2 ROBOTICS | Pub Date: 2024-07-02 | DOI: 10.1002/rob.22386
Min Cheng, Renming Li, Ruqi Ding, Bing Xu

Heavy-duty construction tasks performed by hydraulic manipulators are highly challenging due to unstructured, hazardous environments. Since many tasks have quasi-repetitive features (such as cyclic material handling or excavation), a multitarget adaptive virtual fixture (MAVF) method based on learning from teleoperated demonstration is proposed to improve task efficiency and safety by generating an online variable assistance force on the master device. First, a demonstration trajectory of picking scattered materials is learned to extract its distribution, and a nominal trajectory is generated. Then, the MAVF is established and adjusted online through a defined nonlinear variable stiffness and the position deviation from the nominal trajectory. An energy tank is introduced to regulate the stiffness so that passivity and stability are ensured. Two groups of tests, with and without time delay, were carried out to validate the proposed method against two baseline modes: operation without virtual fixture (VF) assistance and operation with a traditional weighted-adaptation VF.
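The variable-stiffness and energy-tank ideas can be sketched as follows. This is a minimal illustration under assumed laws: the tanh stiffness profile, tank capacity, and depletion rule are placeholders, not the paper's formulation.

```python
import numpy as np

class EnergyTankVF:
    """Virtual fixture whose assistance stiffness is gated by an energy tank.

    If applying the assistance force would drain the tank below a passivity
    threshold, the force is suppressed so the fixture cannot inject
    unbounded energy into the master device.
    """

    def __init__(self, k_max=200.0, tank_init=5.0, tank_min=0.5):
        self.k_max = k_max        # max assistance stiffness, N/m (assumed)
        self.tank = tank_init     # stored energy, J (assumed)
        self.tank_min = tank_min  # passivity threshold, J (assumed)

    def assistance_force(self, pos, nominal, vel, dt):
        dev = pos - nominal
        # Nonlinear variable stiffness: saturates with deviation magnitude.
        k = self.k_max * np.tanh(np.linalg.norm(dev))
        force = -k * dev
        # Energy the fixture injects this step (positive mechanical power).
        injected = max(0.0, float(np.dot(force, vel)) * dt)
        if self.tank - injected < self.tank_min:
            return np.zeros_like(force)  # tank near empty: suppress assistance
        self.tank -= injected
        return force
```

The design choice mirrors the abstract: assistance grows with deviation from the nominal trajectory, while the tank bookkeeping guarantees the fixture only injects energy it has previously stored.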

Journal of Field Robotics, 41(8), 2715–2731 (2024).
Citations: 0
A simulation-assisted point cloud segmentation neural network for human–robot interaction applications
IF 4.2 | CAS Tier 2, Computer Science | Q2 ROBOTICS | Pub Date: 2024-07-01 | DOI: 10.1002/rob.22385
Jingxin Lin, Kaifan Zhong, Tao Gong, Xianmin Zhang, Nianfeng Wang

With the advancement of industrial automation, the frequency of human–robot interaction (HRI) has increased significantly, making it paramount to ensure human safety throughout the process. This paper proposes a simulation-assisted neural network for point cloud segmentation in HRI, specifically distinguishing humans from various surrounding objects. During HRI, readily accessible prior information, such as the positions of background objects and the robot's posture, can be used to generate a simulated point cloud that assists segmentation. The simulation-assisted neural network takes simulated and actual point clouds as dual inputs. A simulation-assisted edge convolution module in the network combines features from the actual and simulated point clouds, updating the features of the actual point cloud to incorporate simulation information. Point cloud segmentation experiments in industrial environments verify the efficacy of the proposed method.
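The dual-input idea — updating each actual point's features with information from the simulated cloud — can be sketched with a nearest-neighbor pairing and an EdgeConv-style concatenation. The pairing rule, feature sizes, and the fixed random matrix standing in for a learned shared layer are all illustrative assumptions, not the paper's architecture.

```python
import numpy as np

def sim_assisted_edge_features(actual_pts, sim_pts, out_dim=16, seed=0):
    """Fuse an actual scan (N, 3) with a simulated cloud (M, 3).

    Each actual point is paired with its nearest simulated point; the
    relative edge vector is concatenated to the point's coordinates and
    passed through a stand-in for a learned shared layer.
    """
    # Pairwise distances actual -> simulated, then nearest-neighbor indices.
    d = np.linalg.norm(actual_pts[:, None, :] - sim_pts[None, :, :], axis=-1)
    nn = d.argmin(axis=1)
    edge = sim_pts[nn] - actual_pts                 # (N, 3) relative vectors
    feats = np.concatenate([actual_pts, edge], 1)   # (N, 6) fused features
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((feats.shape[1], out_dim))
    return np.maximum(feats @ W, 0.0)               # ReLU-activated (N, out_dim)
```

A point lying far from every simulated (background/robot) point produces a large edge vector, which is the kind of cue a segmentation head could use to flag it as a human rather than a known object.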

Journal of Field Robotics, 41(8), 2689–2704 (2024).
Citations: 0
Three-dimensional kinematics-based real-time localization method using two robots
IF 4.2 | CAS Tier 2, Computer Science | Q2 ROBOTICS | Pub Date: 2024-07-01 | DOI: 10.1002/rob.22383
Guy Elmakis, Matan Coronel, David Zarrouk

This paper presents a precise two-robot collaboration method for three-dimensional (3D) self-localization that relies on a single rotating camera and onboard accelerometers used to measure the tilt of the robots. The method allows localization in global positioning system-denied environments, in the presence of magnetic interference, and in relatively (or totally) dark, unstructured, unmarked locations. At each step, one robot moves forward while the other remains stationary. The tilt angles of the robots obtained from the accelerometers and the rotational angle of the turret, combined with the video analysis, make it possible to continuously calculate the location of each robot. We describe the hardware setup used for the experiments and provide a detailed description of the algorithm, which fuses the data obtained from the accelerometers and cameras and runs in real time on onboard microcomputers. Finally, we present 2D and 3D experimental results, which show that the system achieves 2% accuracy for the total traveled distance (see Supporting Information S1: video).
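The geometric core — the stationary robot fixing its partner's 3D position from a turret azimuth, a tilt-derived elevation, and a range from the video analysis — can be sketched as a spherical-to-Cartesian update. This is an illustrative reconstruction; the paper's exact angle conventions and range-extraction step are not reproduced here.

```python
import math

def locate_partner(observer_xyz, azimuth_rad, elevation_rad, range_m):
    """Position of the moving robot as seen from the stationary one.

    azimuth_rad: turret rotation angle in the horizontal plane.
    elevation_rad: vertical angle derived from the tilt sensing.
    range_m: distance recovered from the video analysis.
    """
    x0, y0, z0 = observer_xyz
    horizontal = range_m * math.cos(elevation_rad)
    return (x0 + horizontal * math.cos(azimuth_rad),
            y0 + horizontal * math.sin(azimuth_rad),
            z0 + range_m * math.sin(elevation_rad))

# The robots alternate: after each step the freshly located robot becomes
# the stationary observer for the next step, chaining the estimates.
```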

Journal of Field Robotics, 41(8), 2676–2688 (2024).
Citations: 0