Xudong Li, Chong Liu, Yangyang Sun, Wujie Li, Jingmin Li
Intelligent electric shovels are being developed for intelligent mining in open-pit mines. Complex environment detection and target recognition based on image recognition are prerequisites for intelligent electric shovel operation. However, open-pit mines contain large amounts of sand–dust, which lowers visibility and shifts colors during data collection, resulting in low-quality images. Images collected for environmental perception in sand–dust conditions can seriously degrade the target detection and scene segmentation capabilities of intelligent electric shovels, so developing an effective image processing algorithm to mitigate these problems and improve the shovels' perception ability is crucial. Deep learning methods have achieved good results in image dehazing, a problem closely related to sand–dust removal. However, deep learning depends heavily on data sets, and existing data sets concentrate on haze environments; sand–dust images, especially of open-pit mining scenes, are scarce. Another bottleneck is the limited performance of traditional sand–dust removal methods, which often introduce image distortion and blurring. To address these issues, a method for generating sand–dust image data based on atmospheric physical models and CIELAB color-space features is proposed. The mechanism by which sand–dust affects images is analyzed through atmospheric physical models, and the formation of a sand–dust image is decomposed into two parts: blurring and color deviation. We study the theory of generating blurring and color-deviation effects from atmospheric physical models and the CIELAB color space, design a two-stage sand–dust image generation method, and construct an open-pit mine sand–dust data set in a real mining environment.
Finally, this article takes the generative adversarial network (GAN) as its research foundation and focuses on the formation mechanism of sand–dust image effects. CIELAB color features are fused into the GAN discriminator as basic priors and additional constraints to improve discrimination. By combining the three feature components of the CIELAB color space and comparing algorithm performance, a feature fusion scheme is determined. The results show that the proposed method generates clear and realistic images, which helps improve the performance of target detection and scene segmentation tasks in heavy sand–dust open-pit mines.
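The CIELAB fusion rests on the standard sRGB-to-CIELAB conversion. As an illustrative sketch (this is textbook colorimetry with a D65 white point, not the authors' code, and the abstract does not specify the exact discriminator fusion layers), the three feature components can be computed as:

```python
import numpy as np

def rgb_to_cielab(rgb):
    """Convert an sRGB image (H, W, 3), values in [0, 1], to CIELAB (D65 white)."""
    rgb = np.asarray(rgb, dtype=np.float64)
    # Inverse sRGB companding (gamma correction).
    linear = np.where(rgb > 0.04045, ((rgb + 0.055) / 1.055) ** 2.4, rgb / 12.92)
    # Linear RGB -> XYZ (sRGB matrix, D65).
    m = np.array([[0.4124564, 0.3575761, 0.1804375],
                  [0.2126729, 0.7151522, 0.0721750],
                  [0.0193339, 0.1191920, 0.9503041]])
    xyz = linear @ m.T
    # Normalize by the D65 reference white.
    xyz /= np.array([0.95047, 1.0, 1.08883])
    # Nonlinear compression used by CIELAB.
    eps, kappa = 216 / 24389, 24389 / 27
    f = np.where(xyz > eps, np.cbrt(xyz), (kappa * xyz + 16) / 116)
    L = 116 * f[..., 1] - 16            # lightness (visibility/blur cue)
    a = 500 * (f[..., 0] - f[..., 1])   # green-red opponent axis
    b = 200 * (f[..., 1] - f[..., 2])   # blue-yellow opponent axis
    return np.stack([L, a, b], axis=-1)
```

A sand–dust color cast typically appears as a shift toward yellow, i.e., a positive offset on the b component, which is why the CIELAB channels are natural priors for a discriminator judging whether a generated sand–dust image looks realistic.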
A CIELAB fusion-based generative adversarial network for reliable sand–dust removal in open-pit mines. Journal of Field Robotics, 41(8), 2832–2847 (2024). doi:10.1002/rob.22387
Nrusingh Charan Pradhan, Pramod Kumar Sahoo, Dilip Kumar Kushwaha, Dattatray G. Bhalekar, Indra Mani, Kishan Kumar, Avesh Kumar Singh, Mohit Kumar, Yash Makwana, Soumya Krishnan V., Aruna T. N.
The braking system is a crucial component of a tractor, as it ensures safe operation and control of the vehicle. The limited space in the workspace of a small tractor exposes the operator to undesirable posture and high levels of vibration during operation. A primary cause of road accidents, particularly collisions, is the tractor operator's insufficient capacity to provide the pedal power needed to engage the brake pedal. While engaging the brake pedal, the operator adjusts the backrest support to facilitate access to the pedal under stressed conditions. In the present study, a linear actuator-assisted automatic braking system was developed for small tractors. An integrated artificial neural network proportional–integral–derivative (ANN-PID) controller-based algorithm was developed to control the position of the brake pedal based on input parameters such as terrain condition, obstacle distance, and forward speed of the tractor. The tractor was operated at four speeds (10, 15, 20, and 25 km/h) under three terrain conditions (dry compacted soil, tilled soil, and asphalt road). Performance parameters such as the sensor digital output (SDO), the force applied on the brake pedal (F_b), and deceleration were considered as dependent parameters. The SDO was found to be a good approximation for sensing the position of the brake pedal during braking. The optimized network topology of the developed multilayer perceptron neural network (MLPNN) was 3-6-2 for predicting the SDO and deceleration of the tractor with a coefficient of determination (R²).
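The PID half of the ANN-PID scheme can be sketched as follows. The plant model, gains, and the `track_pedal` helper are hypothetical stand-ins for illustration, not the paper's implementation; in the paper, the ANN would supply the pedal-position setpoint from terrain condition, obstacle distance, and forward speed.

```python
class PID:
    """Discrete PID controller (illustrative gains, not from the paper)."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = None

    def step(self, setpoint, measured):
        error = setpoint - measured
        self.integral += error * self.dt
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv

def track_pedal(target, steps=1000, dt=0.01, gain=1.0):
    """Drive a hypothetical linear-actuator pedal (modeled as a pure integrator:
    pedal velocity proportional to the control signal) toward the target position."""
    pid = PID(kp=8.0, ki=2.0, kd=0.1, dt=dt)
    pos = 0.0
    for _ in range(steps):
        pos += gain * pid.step(target, pos) * dt
    return pos
```

With these gains the simulated pedal settles close to any commanded position within a few seconds; tuning kp, ki, and kd against the real actuator dynamics is exactly the part the paper delegates to the integrated ANN-PID design.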
ANN-PID based automatic braking control system for small agricultural tractors. Journal of Field Robotics, 41(8), 2805–2831 (2024). doi:10.1002/rob.22393
Accurate and robust localization is essential for autonomous mobile robots. Map matching based on Light Detection and Ranging (LiDAR) sensors has been widely adopted to estimate the global location of robots. However, map-matching performance can degrade when the environment changes or when sufficient features are unavailable, and indiscriminately incorporating inaccurate map-matching poses can significantly decrease the reliability of pose estimation. This paper develops a robust LiDAR-based localization method based on map matching. We focus on determining appropriate weights computed from the uncertainty of map-matching poses, which is estimated from the probability distribution over the poses; we exploit the normal distribution transform map to derive this distribution. A factor graph is employed to combine the map-matching pose, LiDAR-inertial odometry, and global navigation satellite system information. Experimental verification was successfully conducted outdoors on a university campus in three different scenarios, each involving changing or dynamic environments, and we compared the proposed method with three LiDAR-based localization methods. The experimental results show that robust localization performance can be achieved even when map-matching poses are inaccurate in various outdoor environments. The experimental video can be found at https://youtu.be/L6p8gwxn4ak.
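One common way to turn a pose distribution into a factor weight, in the spirit described above, is to softmax the matching scores over candidate poses and weight the factor by the inverse spread of the resulting distribution. This is an illustrative sketch under assumed conventions (scalar fitness scores, higher is better), not the paper's exact formulation:

```python
import numpy as np

def map_matching_weight(poses, scores, temperature=1.0):
    """poses: (N, 3) array of (x, y, yaw) candidates; scores: matching fitness
    per candidate (higher = better). Returns the distribution mean, the
    probability-weighted covariance, and a scalar weight = 1 / trace(cov)."""
    poses = np.asarray(poses, dtype=np.float64)
    scores = np.asarray(scores, dtype=np.float64)
    p = np.exp((scores - scores.max()) / temperature)  # softmax over candidates
    p /= p.sum()
    mean = p @ poses
    centered = poses - mean
    cov = (centered * p[:, None]).T @ centered         # weighted covariance
    weight = 1.0 / np.trace(cov)                       # peaked distribution -> large weight
    return mean, cov, weight
```

A sharply peaked score distribution (a confident match) yields a small covariance and hence a large factor weight, while a flat distribution (an ambiguous match in a changed or featureless area) is down-weighted in the factor graph.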
Uncertainty-aware LiDAR-based localization for outdoor mobile robots, by Geonhyeok Park and Woojin Chung. Journal of Field Robotics, 41(8), 2790–2804 (2024). doi:10.1002/rob.22392. Open access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1002/rob.22392
Usman A. Zahidi, Arshad Khan, Tsvetan Zhivkov, Johann Dichtl, Dom Li, Soran Parsa, Marc Hanheide, Grzegorz Cielniak, Elizabeth I. Sklar, Simon Pearson, Amir Ghalamzan-E.
Selective harvesting by autonomous robots will be a critical enabling technology for future farming. Rising inflation and shortages of skilled labor are driving factors that can encourage user acceptance of robotic harvesting. For example, robotic strawberry harvesting requires real-time high-precision fruit localization, three-dimensional (3D) mapping, and path planning for 3D cluster manipulation. While industry and academia have developed multiple strawberry harvesting robots, none has yet achieved cost parity with human pickers. Achieving this goal requires increased picking speed (perception, control, and movement), higher accuracy, and low-cost robotic system designs. We propose the edge-server over 5G for Selective Harvesting (E5SH) system, which integrates a high-bandwidth, low-latency fifth-generation (5G) mobile network into a crop harvesting robotic platform and which we view as an enabler for future robotic harvesting systems. We also consider processing scale and speed in conjunction with system environmental and energy costs. A system architecture is presented and evaluated with quantitative results from a series of experiments that compare the system's performance under different architecture choices, including image segmentation models, network infrastructure (5G vs. Wireless Fidelity), and messaging protocols such as Message Queuing Telemetry Transport (MQTT) and Transport Control Protocol Robot Operating System. Our results demonstrate that the E5SH system delivers a step-change speedup in peak processing performance of more than 18-fold over a standalone Nvidia Jetson Xavier NX embedded computer.
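Whether offloading pays off can be seen from a back-of-envelope latency model: shipping a frame to an edge server adds transfer and network time but replaces slow embedded inference with fast server inference. Every number below is a hypothetical illustration, not a measured value from the paper:

```python
def frame_time_onboard(inference_s):
    """Per-frame time when segmentation runs on the embedded computer."""
    return inference_s

def frame_time_edge(image_bits, uplink_bps, network_rtt_s, server_inference_s):
    """Per-frame time when the image is shipped over the network to an edge server."""
    return image_bits / uplink_bps + network_rtt_s + server_inference_s

# Hypothetical numbers: a 4 MB image, 200 Mb/s 5G uplink, 10 ms round trip,
# 20 ms server-side inference versus 450 ms on a Jetson-class embedded device.
onboard = frame_time_onboard(0.450)
edge = frame_time_edge(4 * 8e6, 200e6, 0.010, 0.020)
speedup = onboard / edge
```

The crossover depends almost entirely on uplink bandwidth and round-trip latency, which is why a 5G link (rather than Wi-Fi) and a lightweight messaging protocol are the levers the E5SH experiments vary.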
Optimising robotic operation speed with edge computing via 5G network: Insights from selective harvesting robots. Journal of Field Robotics, 41(8), 2771–2789 (2024). doi:10.1002/rob.22384. Open access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1002/rob.22384
Mengkun She, Yifan Song, David Nakath, Kevin Köser
Despite impressive results achieved by many on-land visual mapping algorithms in recent decades, transferring these methods from land to the deep sea remains a challenge due to harsh environmental conditions. Images captured by autonomous underwater vehicles equipped with high-resolution cameras and artificial illumination systems often suffer from heterogeneous illumination and quality degradation caused by attenuation and scattering, on top of the refraction of light rays. These challenges often cause on-land Simultaneous Localization and Mapping (SLAM) approaches to fail underwater, or cause Structure-from-Motion (SfM) approaches to drift or omit challenging images, leading to gaps, jumps, or weakly reconstructed areas. In this work, we present a navigation-aided hierarchical reconstruction approach to facilitate the automated robotic three-dimensional reconstruction of hectares of seafloor. Our hierarchical approach combines the advantages of SLAM and global SfM, which are much more efficient than incremental SfM, while ensuring the completeness and consistency of the global map. This is achieved by identifying and revisiting problematic or weakly reconstructed areas, avoiding omitted images, and making better use of limited dive time. The proposed system has been extensively tested and evaluated during several research cruises, demonstrating its robustness and practicality in real-world conditions.
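The "identify and revisit weakly reconstructed areas" step can be sketched as a simple observation-count grid over the seafloor: cells seen by too few accepted images become revisit candidates. The cell size, threshold, and helper name are assumptions for illustration, not the paper's criterion:

```python
from collections import defaultdict

def weak_areas(camera_footprints, cell=1.0, min_views=3):
    """camera_footprints: iterable of (x, y) seafloor points observed by
    accepted images. Returns grid cells (as integer (i, j) indices) that were
    observed fewer than min_views times -> candidates for a revisit dive leg."""
    counts = defaultdict(int)
    for x, y in camera_footprints:
        counts[(int(x // cell), int(y // cell))] += 1
    return sorted(c for c, n in counts.items() if n < min_views)
```

In practice the revisit list would be turned into a short survey path and interleaved with the remaining dive plan, which is how limited dive time is spent where the map is weakest.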
Semihierarchical reconstruction and weak-area revisiting for robotic visual seafloor mapping. Journal of Field Robotics, 41(8), 2749–2770 (2024). doi:10.1002/rob.22390. Open access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1002/rob.22390
Operating a two-wheel paddy transplanter traditionally imposes physical strain and a high cognitive workload on farm workers, especially during headland turns. This study introduces a virtual reality (VR)/augmented reality (AR)-based remote-control system for a two-wheel paddy transplanter to resolve these issues. The system replaces manual controls with VR interfaces, integrating gear motors and an electronic control unit. Front and rear-view cameras provide real-time field perception on light-emitting diode screens, displaying path trajectories via an autopilot controller and a real-time kinematic global navigation satellite system module. Human operators manipulate the machine using a hand-held remote controller while observing live camera feeds and path navigation trajectories. The study found that forward speed required optimization within manageable limits of 1.75–2.00 km h⁻¹ for walk-behind operation and 2.00–2.25 km h⁻¹ for the remote-controlled system. While higher speeds enhanced field capacity by 11.67%–12.95%, they also resulted in 0.74%–1.17% lower field efficiency. Additionally, analysis of the operators' physiological workload revealed significant differences between walk-behind and remote-controlled operation: energy expenditure rates (EER) ranged from 8.20 ± 0.80 to 27.67 ± 0.45 kJ min⁻¹ for the walk-behind transplanter versus 7.56 ± 0.55 to 9.72 ± 0.37 kJ min⁻¹ for the remote-controlled one (p < 0.05). Overall, the VR-based remote-control system shows promise for enhancing operational efficiency and reducing physical strain in paddy transplanting operations.
Development and field evaluation of a VR/AR-based remotely controlled system for a two-wheel paddy transplanter, by Shiv Kumar Lohan, Mahesh Kumar Narang, Parmar Raghuvirsinh, Santosh Kumar, and Lakhwinder Pal Singh. Journal of Field Robotics, 41(8), 2732–2748 (2024). doi:10.1002/rob.22389
Heavy-duty construction tasks performed by hydraulic manipulators are highly challenging due to unstructured, hazardous environments. Considering that many such tasks have quasi-repetitive features (such as cyclic material handling or excavation), a multitarget adaptive virtual fixture (MAVF) method based on teleoperation-based learning from demonstration is proposed to improve task efficiency and safety by generating an online variable assistance force on the master device. First, the demonstrated trajectory of picking scattered materials is learned to extract its distribution, and a nominal trajectory is generated. Then, the MAVF is established and adjusted online through a defined nonlinear variable stiffness and the position deviation from the nominal trajectory. An energy tank is introduced to regulate the stiffness so that passivity and stability are ensured. Using operation without virtual fixture (VF) assistance and with a traditional weighted-adaptation VF as comparisons, two groups of tests, with and without time delay, were carried out to validate the proposed method.
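A minimal sketch of a deviation-dependent virtual fixture with an energy-tank guard is shown below. The Gaussian stiffness law, the parameters, and both helper names are assumptions for illustration; the abstract does not give the paper's actual stiffness law or tank dynamics:

```python
import numpy as np

def vf_force(x, x_nominal, k_max=200.0, sigma=0.05):
    """Assistance force pulling the master toward the nominal trajectory.
    Stiffness decays with deviation, so the operator can intentionally
    leave the demonstrated path without fighting a stiff spring."""
    d = np.asarray(x, dtype=float) - np.asarray(x_nominal, dtype=float)
    k = k_max * np.exp(-(np.linalg.norm(d) / sigma) ** 2)  # nonlinear variable stiffness
    return -k * d

def tank_limited_stiffness(k_desired, tank_energy, e_min=0.1):
    """Energy-tank passivity guard: permit the requested stiffness only while
    the tank holds energy above e_min; otherwise disable assistance so the
    master-slave coupling stays passive."""
    return k_desired if tank_energy > e_min else 0.0
```

Near the nominal trajectory the fixture behaves like a stiff spring; far from it the force vanishes, and the tank check prevents the time-varying stiffness from injecting energy into the teleoperation loop.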
Multitarget adaptive virtual fixture based on task learning for hydraulic manipulator, by Min Cheng, Renming Li, Ruqi Ding, and Bing Xu. Journal of Field Robotics, 41(8), 2715–2731 (2024). doi:10.1002/rob.22386
Jingxin Lin, Kaifan Zhong, Tao Gong, Xianmin Zhang, Nianfeng Wang
With the advancement of industrial automation, the frequency of human–robot interaction (HRI) has increased significantly, making it essential to ensure human safety throughout the process. This paper proposes a simulation-assisted neural network for point cloud segmentation in HRI, specifically for distinguishing humans from various surrounding objects. During HRI, readily accessible prior information, such as the positions of background objects and the robot's posture, can be used to generate a simulated point cloud that assists segmentation. The network takes the simulated and actual point clouds as dual inputs. A simulation-assisted edge convolution module combines features from the actual and simulated point clouds, updating the features of the actual point cloud to incorporate the simulation information. Experiments on point cloud segmentation in industrial environments verify the efficacy of the proposed method.
"A simulation-assisted point cloud segmentation neural network for human–robot interaction applications" by Jingxin Lin, Kaifan Zhong, Tao Gong, Xianmin Zhang, Nianfeng Wang. Journal of Field Robotics, 41(8), 2689–2704, published 1 July 2024. DOI: 10.1002/rob.22385.
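The dual-input idea can be pictured as pairing each actual point with its nearest simulated neighbor and feeding the offset between them into an edge-convolution-style feature update: points that deviate strongly from the simulated (background and robot) cloud are candidates for "human". The sketch below illustrates only that pairing; `fuse_features` and the brute-force nearest-neighbor search are assumptions, not the paper's network:

```python
def nearest(point, cloud):
    """Brute-force nearest neighbor by squared Euclidean distance."""
    return min(cloud, key=lambda q: sum((a - b) ** 2 for a, b in zip(point, q)))

def fuse_features(actual_cloud, simulated_cloud):
    """Pair every actual point with its nearest simulated point and emit
    (point, offset) tuples, the kind of edge feature a simulation-assisted
    edge convolution module could consume. Large offsets mark points the
    simulation cannot explain, such as a human in the workspace."""
    fused = []
    for p in actual_cloud:
        q = nearest(p, simulated_cloud)
        offset = tuple(a - b for a, b in zip(p, q))
        fused.append((p, offset))
    return fused
```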
Guy Elmakis, Matan Coronel, David Zarrouk
This paper presents a precise two-robot collaboration method for three-dimensional (3D) self-localization that relies on a single rotating camera and onboard accelerometers measuring the robots' tilt. The method allows localization in global positioning system-denied environments, in the presence of magnetic interference, and in relatively (or totally) dark, unstructured, unmarked locations. At each step, one robot moves forward while the other remains stationary. The tilt angles of the robots obtained from the accelerometers and the rotational angle of the turret, combined with the video analysis, make it possible to continuously calculate the location of each robot. We describe the hardware setup used for the experiments and give a detailed description of the algorithm, which fuses the accelerometer and camera data and runs in real time on onboard microcomputers. Finally, we present 2D and 3D experimental results showing that the system achieves 2% accuracy over the total traveled distance (see Supporting Information S1: video).
"Three-dimensional kinematics-based real-time localization method using two robots" by Guy Elmakis, Matan Coronel, David Zarrouk. Journal of Field Robotics, 41(8), 2676–2688, published 1 July 2024. DOI: 10.1002/rob.22383. Open access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1002/rob.22383
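The per-step position update is straightforward spherical geometry: from the stationary robot's known position, the turret azimuth, the accelerometer-derived elevation, and a camera-estimated range fix the moving robot's 3D position. The function name and the assumption that range comes directly from the video analysis are mine; the paper's fusion algorithm is not reproduced here:

```python
import math

def locate_moving_robot(stationary_pos, azimuth_deg, elevation_deg, distance):
    """3D position of the moving robot as seen from the stationary one:
    azimuth from the rotating turret, elevation from the tilt
    accelerometer, range from the camera's video analysis (all assumed)."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x0, y0, z0 = stationary_pos
    return (x0 + distance * math.cos(el) * math.cos(az),
            y0 + distance * math.cos(el) * math.sin(az),
            z0 + distance * math.sin(el))
```

Alternating which robot is stationary lets the pair leapfrog through a GPS-denied space, accumulating position step by step; the 2% total-distance accuracy reported above bounds the drift of that accumulation.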
Tianyu Zhao, Cheng Wang, Zhongbao Luo, Weiqi Cheng, Nan Xiang
Soft crawling robots are usually driven by bulky and complex external pneumatic or hydraulic actuators. In this work, we propose a miniaturized soft crawling caterpillar based on electrohydrodynamic (EHD) pumps. The caterpillar mainly comprises a flexible EHD pump that provides the driving force, an artificial muscle that performs the crawling, a fluid reservoir, and several stabilizers and auxiliary feet. To improve the caterpillar's crawling performance, the flow rate and pressure of the EHD pump were increased by using a curved electrode design; the electrode gap, electrode overlap length, channel height, electrode thickness, and number of electrode pairs were further optimized. Compared with EHD pumps with conventional straight electrodes, our pump showed a 50% increase in driving pressure and a 60% increase in flow rate. The bending capability of the artificial muscle was also characterized, showing a maximum bending angle of over 50°. The crawling ability of the caterpillar was then tested. Our caterpillar offers simple fabrication, low cost, fast movement, and a small footprint, giving it broad potential for practical use, especially over various terrains.
"Soft crawling caterpillar driven by electrohydrodynamic pumps" by Tianyu Zhao, Cheng Wang, Zhongbao Luo, Weiqi Cheng, Nan Xiang. Journal of Field Robotics, 41(8), 2705–2714, published 1 July 2024. DOI: 10.1002/rob.22388.
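The abstract characterizes the pump rather than the gait, but the caterpillar's forward motion follows the usual anchored extend/contract cycle of inchworm-style crawlers. As a rough, assumed model (the function and all parameters are illustrative, not from the paper), the average speed is simply the net stride over the cycle time:

```python
def crawl_speed(stride_m, extend_s, contract_s, anchor_slip=0.0):
    """Average speed of an inchworm-style crawl: rear feet hold while the
    EHD-driven muscle extends, front feet hold while it contracts;
    anchor_slip is the stride fraction lost to foot slippage per cycle."""
    net_stride = stride_m * (1.0 - anchor_slip)
    cycle_time = extend_s + contract_s
    return net_stride / cycle_time
```

Under this toy model, the pump improvements reported above matter because higher pressure and flow rate shorten the extend and contract phases, raising the speed for the same stride.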