Accurate and robust localization is essential for autonomous mobile robots. Map matching based on Light Detection and Ranging (LiDAR) sensors has been widely adopted to estimate the global location of robots. However, map-matching performance can degrade when the environment changes or when sufficient features are unavailable. Indiscriminately incorporating inaccurate map-matching poses into localization can significantly decrease the reliability of pose estimation. This paper develops a robust LiDAR-based localization method built on map matching. We focus on determining appropriate weights computed from the uncertainty of map-matching poses, which is estimated from the probability distribution over the poses; we exploit the normal distribution transform map to derive this distribution. A factor graph is employed to combine the map-matching pose, LiDAR-inertial odometry, and global navigation satellite system information. Experimental verification was successfully conducted outdoors on a university campus in three different scenarios, each involving changing or dynamic environments. We compared the performance of the proposed method with three LiDAR-based localization methods. The experimental results show that robust localization performance can be achieved even when map-matching poses are inaccurate in various outdoor environments. The experimental video can be found at https://youtu.be/L6p8gwxn4ak.
Geonhyeok Park & Woojin Chung (2024). Uncertainty-aware LiDAR-based localization for outdoor mobile robots. Journal of Field Robotics, 41(8), 2790–2804. https://doi.org/10.1002/rob.22392
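The core idea of weighting map-matching poses by their uncertainty can be illustrated with standard information-form Gaussian fusion. This is a minimal sketch, not the paper's factor-graph implementation: the function name and the toy covariances are assumptions, but the behavior it shows — a high-covariance map-matching pose barely shifts the fused estimate — is the effect such weighting exploits.

```python
import numpy as np

def fuse_gaussian_poses(mu_a, cov_a, mu_b, cov_b):
    """Information-form fusion of two Gaussian pose estimates.

    An uncertain estimate is down-weighted automatically: the larger
    its covariance, the less it moves the fused mean.
    """
    info_a = np.linalg.inv(cov_a)
    info_b = np.linalg.inv(cov_b)
    cov = np.linalg.inv(info_a + info_b)
    mu = cov @ (info_a @ mu_a + info_b @ mu_b)
    return mu, cov

# Odometry prediction (confident) vs. map-matching pose (uncertain).
odo_mu = np.array([1.0, 2.0])
odo_cov = np.diag([0.01, 0.01])
mm_mu = np.array([1.5, 2.5])
mm_cov = np.diag([1.0, 1.0])   # degraded map match -> large covariance

mu, cov = fuse_gaussian_poses(odo_mu, odo_cov, mm_mu, mm_cov)
# The fused estimate stays close to the confident odometry pose.
```

In a full factor graph, the same covariances would enter as the noise models attached to the corresponding pose factors.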
Usman A. Zahidi, Arshad Khan, Tsvetan Zhivkov, Johann Dichtl, Dom Li, Soran Parsa, Marc Hanheide, Grzegorz Cielniak, Elizabeth I. Sklar, Simon Pearson, Amir Ghalamzan-E.
Selective harvesting by autonomous robots will be a critical enabling technology for future farming. Rising inflation and shortages of skilled labor are driving factors that encourage user acceptance of robotic harvesting. For example, robotic strawberry harvesting requires real-time high-precision fruit localization, three-dimensional (3D) mapping, and path planning for 3D cluster manipulation. Whilst industry and academia have developed multiple strawberry harvesting robots, none has yet achieved cost parity with human pickers. Achieving this goal requires increased picking speed (perception, control, and movement), improved accuracy, and low-cost robotic system designs. We propose the edge-server over 5G for Selective Harvesting (E5SH) system, which integrates a high-bandwidth, low-latency Fifth-Generation (5G) mobile network into a crop harvesting robotic platform, which we view as an enabler for future robotic harvesting systems. We also consider processing scale and speed in conjunction with system environmental and energy costs. A system architecture is presented and evaluated with quantitative results from a series of experiments that compare the performance of the system under different architecture choices, including image segmentation models, network infrastructure (5G vs. Wireless Fidelity), and messaging protocols such as Message Queuing Telemetry Transport and Transport Control Protocol Robot Operating System. Our results demonstrate that the E5SH system delivers a step-change peak processing speedup of more than 18-fold over a standalone Nvidia Jetson Xavier NX embedded computer.
Optimising robotic operation speed with edge computing via 5G network: Insights from selective harvesting robots. Journal of Field Robotics, 41(8), 2771–2789. https://doi.org/10.1002/rob.22384
Mengkun She, Yifan Song, David Nakath, Kevin Köser
Despite impressive results achieved by many on-land visual mapping algorithms in recent decades, transferring these methods from land to the deep sea remains a challenge due to harsh environmental conditions. Images captured by autonomous underwater vehicles, equipped with high-resolution cameras and artificial illumination systems, often suffer from heterogeneous illumination and quality degradation caused by attenuation and scattering, on top of refraction of light rays. These challenges often cause on-land Simultaneous Localization and Mapping (SLAM) approaches to fail when applied underwater, and cause Structure-from-Motion (SfM) approaches to drift or omit challenging images, leading to gaps, jumps, or weakly reconstructed areas. In this work, we present a navigation-aided hierarchical reconstruction approach to facilitate the automated robotic three-dimensional reconstruction of hectares of seafloor. Our hierarchical approach combines the advantages of SLAM and global SfM, which are much more efficient than incremental SfM, while ensuring the completeness and consistency of the global map. This is achieved by identifying and revisiting problematic or weakly reconstructed areas, avoiding omitted images, and making better use of limited dive time. The proposed system has been extensively tested and evaluated during several research cruises, demonstrating its robustness and practicality in real-world conditions.
Semihierarchical reconstruction and weak-area revisiting for robotic visual seafloor mapping. Journal of Field Robotics, 41(8), 2749–2770. https://doi.org/10.1002/rob.22390
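The revisiting strategy hinges on flagging weakly reconstructed images. A minimal sketch of such a criterion, assuming per-image mean reprojection errors and feature-track counts are available; the thresholds and function name are illustrative, not the paper's:

```python
import numpy as np

def find_weak_images(mean_reproj_err, track_counts,
                     err_thresh=2.0, min_tracks=50):
    """Return indices of images that look weakly reconstructed:
    high mean reprojection error (px) or too few feature tracks.
    Flagged images become candidates for revisiting on a later pass."""
    errs = np.asarray(mean_reproj_err)
    tracks = np.asarray(track_counts)
    weak = (errs > err_thresh) | (tracks < min_tracks)
    return np.nonzero(weak)[0]

# Image 1 has high reprojection error; image 2 has too few tracks.
weak_ids = find_weak_images([0.5, 3.0, 1.0], [100, 80, 10])
```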
Operating a two-wheel paddy transplanter traditionally imposes physical strain and cognitive workload on farm workers, especially during headland turns. This study introduces a virtual reality (VR)/augmented reality (AR)-based remote-control system for a two-wheel paddy transplanter to resolve these issues. The system replaces manual controls with VR interfaces, integrating gear motors and an electronic control unit. Front and rear-view cameras provide real-time field perception on light-emitting diode screens, displaying path trajectories via an autopilot controller and a real-time kinematic global navigation satellite system module. Human operators manipulate the machine using a hand-held remote controller while observing live camera feeds and path navigation trajectories. The study found that forward speed had to be kept within manageable limits of 1.75–2.00 km h⁻¹ for walk-behind operation and 2.00–2.25 km h⁻¹ for remote-controlled operation. While higher speeds enhanced field capacity by 11.67%–12.95%, they also resulted in 0.74%–1.17% lower field efficiency. Analysis of operators' physiological workload revealed significant differences between walk-behind and remote-controlled operation: energy expenditure rates (EER) ranged from 8.20 ± 0.80 to 27.67 ± 0.45 kJ min⁻¹ and from 7.56 ± 0.55 to 9.72 ± 0.37 kJ min⁻¹, respectively (p < 0.05). Overall, the VR-based remote-control system shows promise in enhancing operational efficiency and reducing physical strain in paddy transplanting operations.
Shiv Kumar Lohan, Mahesh Kumar Narang, Parmar Raghuvirsinh, Santosh Kumar, & Lakhwinder Pal Singh (2024). Development and field evaluation of a VR/AR-based remotely controlled system for a two-wheel paddy transplanter. Journal of Field Robotics, 41(8), 2732–2748. https://doi.org/10.1002/rob.22389
Heavy-duty construction tasks performed by hydraulic manipulators are highly challenging due to unstructured, hazardous environments. Considering that many tasks have quasi-repetitive features (such as cyclic material handling or excavation), a multitarget adaptive virtual fixture (MAVF) method based on teleoperation-based learning from demonstration is proposed to improve task efficiency and safety by generating an online variable assistance force on the master device. First, the demonstration trajectory of picking scattered materials is learned to extract its distribution, and the nominal trajectory is generated. Then, the MAVF is established and adjusted online through a defined nonlinear variable stiffness and the position deviation from the nominal trajectory. An energy tank is introduced to regulate the stiffness so that passivity and stability are ensured. Taking operation without virtual fixture (VF) assistance and with a traditional weighted-adaptation VF as comparisons, two groups of tests with and without time delay were carried out to validate the proposed method.
Min Cheng, Renming Li, Ruqi Ding, & Bing Xu (2024). Multitarget adaptive virtual fixture based on task learning for hydraulic manipulator. Journal of Field Robotics, 41(8), 2715–2731. https://doi.org/10.1002/rob.22386
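A virtual fixture of this kind can be sketched as a deviation-dependent spring whose stiffness is cut off when the energy tank runs low. The stiffness law, names, and thresholds below are assumptions for illustration, not the paper's exact formulation:

```python
import numpy as np

def vf_force(x, x_nominal, k_max=100.0, d_max=0.1,
             tank_energy=1.0, e_min=0.05):
    """Assistance force pulling the master toward the nominal trajectory.

    Stiffness ramps up quadratically with the deviation and saturates
    at k_max; when the energy tank is nearly empty the fixture is
    switched off so the interaction stays passive."""
    dev = x - x_nominal
    d = float(np.linalg.norm(dev))
    if d < 1e-9 or tank_energy < e_min:
        return np.zeros_like(x)
    k = k_max * min(d / d_max, 1.0) ** 2
    return -k * dev

f_on = vf_force(np.array([0.2, 0.0]), np.zeros(2))      # fixture active
f_off = vf_force(np.array([0.2, 0.0]), np.zeros(2),
                 tank_energy=0.0)                       # tank depleted
```

In a passivity framework, the energy dissipated or injected by the fixture would also be debited from the tank at each control step; that bookkeeping is omitted here for brevity.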
Jingxin Lin, Kaifan Zhong, Tao Gong, Xianmin Zhang, Nianfeng Wang
With the advancement of industrial automation, the frequency of human–robot interaction (HRI) has significantly increased, making human safety a paramount concern throughout this process. This paper proposes a simulation-assisted neural network for point cloud segmentation in HRI, specifically distinguishing humans from various surrounding objects. During HRI, readily accessible prior information, such as the positions of background objects and the robot's posture, can be used to generate a simulated point cloud and assist in point cloud segmentation. The simulation-assisted neural network takes simulated and actual point clouds as dual inputs. A simulation-assisted edge convolution module in the network combines features from the actual and simulated point clouds, updating the features of the actual point cloud to incorporate simulation information. Experiments on point cloud segmentation in industrial environments verify the efficacy of the proposed method.
A simulation-assisted point cloud segmentation neural network for human–robot interaction applications. Journal of Field Robotics, 41(8), 2689–2704. https://doi.org/10.1002/rob.22385
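The dual-input idea can be approximated outside a learned network by attaching, to each real point, the features of its nearest simulated neighbor. This is a crude, non-learned stand-in for the simulation-assisted edge convolution module; all names and data are illustrative:

```python
import numpy as np

def fuse_with_simulation(real_feats, real_xyz, sim_feats, sim_xyz):
    """Concatenate each real point's features with those of its
    nearest simulated point, injecting prior-scene information."""
    # Pairwise squared distances, shape (n_real, n_sim).
    d2 = ((real_xyz[:, None, :] - sim_xyz[None, :, :]) ** 2).sum(-1)
    nn = d2.argmin(axis=1)                 # nearest simulated point
    return np.concatenate([real_feats, sim_feats[nn]], axis=1)

real_xyz = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
sim_xyz = np.array([[0.9, 0.0, 0.0], [0.0, 0.1, 0.0]])
real_feats = np.array([[1.0, 2.0], [3.0, 4.0]])
sim_feats = np.array([[10.0, 11.0, 12.0], [20.0, 21.0, 22.0]])
fused = fuse_with_simulation(real_feats, real_xyz, sim_feats, sim_xyz)
```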
This paper presents a precise two-robot collaboration method for three-dimensional (3D) self-localization that relies on a single rotating camera and onboard accelerometers used to measure the tilt of the robots. The method enables localization in global positioning system-denied environments, in the presence of magnetic interference, and in relatively (or totally) dark, unstructured, unmarked locations. At each step, one robot moves forward while the other remains stationary. The tilt angles of the robots obtained from the accelerometers and the rotational angle of the turret, combined with the video analysis, make it possible to continuously calculate the location of each robot. We describe the hardware setup used for the experiments and provide a detailed description of the algorithm, which fuses the data obtained by the accelerometers and cameras and runs in real time on onboard microcomputers. Finally, we present 2D and 3D experimental results, which show that the system achieves 2% accuracy for the total traveled distance (see Supporting Information S1: video).
Guy Elmakis, Matan Coronel, & David Zarrouk (2024). Three-dimensional kinematics-based real-time localization method using two robots. Journal of Field Robotics, 41(8), 2676–2688. https://doi.org/10.1002/rob.22383
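The leapfrog geometry — a stationary robot sighting the moving one — reduces to placing a point from a bearing and a range. A minimal sketch under the assumption that a range estimate is available (the paper instead recovers relative position from video analysis combined with the accelerometer tilt angles; names are illustrative):

```python
import numpy as np

def locate_partner(base_pos, yaw, azimuth, elevation, distance):
    """3D position of the moving robot as seen from the stationary one.

    yaw: stationary robot's heading; azimuth: turret rotation relative
    to that heading; elevation: tilt-compensated vertical angle."""
    bearing = yaw + azimuth
    dx = distance * np.cos(elevation) * np.cos(bearing)
    dy = distance * np.cos(elevation) * np.sin(bearing)
    dz = distance * np.sin(elevation)
    return np.asarray(base_pos, dtype=float) + np.array([dx, dy, dz])

# Partner sighted 90 degrees to the left, level, 2 m away.
p = locate_partner([0.0, 0.0, 0.0], yaw=0.0, azimuth=np.pi / 2,
                   elevation=0.0, distance=2.0)
```

Alternating which robot is stationary lets the pair chain these fixes into a continuous trajectory, at the cost of accumulating error with each leg.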
Tianyu Zhao, Cheng Wang, Zhongbao Luo, Weiqi Cheng, Nan Xiang
Soft crawling robots are usually driven by bulky and complex external pneumatic or hydraulic actuators. In this work, we propose a miniaturized soft crawling caterpillar based on electrohydrodynamic (EHD) pumps. The caterpillar is mainly composed of a flexible EHD pump that provides the driving force, an artificial muscle that performs the crawling, a fluid reservoir, and several stabilizers and auxiliary feet. To achieve better crawling performance, the flow rate and pressure of the EHD pump were improved by using a curved electrode design. The electrode gap, electrode overlap length, channel height, electrode thickness, and number of electrode pairs were further optimized. Compared with EHD pumps with conventional straight electrodes, our EHD pump showed a 50% enhancement in driving pressure and a 60% increase in flow rate. The bending capability of the artificial muscles was also characterized, showing a maximum bending angle of over 50°. The crawling ability of the caterpillar was then tested.
Finally, our caterpillar offers simple fabrication, low cost, fast movement, and a small footprint, giving it robust and broad potential for practical use, especially over various terrains.
Soft crawling caterpillar driven by electrohydrodynamic pumps. Journal of Field Robotics, 41(8), 2705–2714. https://doi.org/10.1002/rob.22388
Meng Yang, Chenglong Huang, Zhengda Li, Yang Shao, Jinzhan Yuan, Wanneng Yang, Peng Song
Phenotyping robots have the potential to obtain crop phenotypic traits on a large scale with high throughput. Autonomous navigation technology for phenotyping robots can significantly improve the efficiency of phenotypic trait collection. This study developed an autonomous navigation method utilizing an RGB-D camera, specifically designed for phenotyping robots in field environments. The PP-LiteSeg semantic segmentation model was employed for its real-time and accurate segmentation capabilities, enabling the distinction of crop areas in images captured by the RGB-D camera. Navigation feature points were extracted from these segmented areas, with their three-dimensional coordinates determined from pixel and depth information, facilitating the computation of angle deviation (α) and lateral deviation (d). Fuzzy controllers were designed with α and d as inputs for real-time deviation correction while the phenotyping robot is moving. Additionally, the method includes end-of-row recognition and row spacing calculation, based on both color and depth data, enabling automatic turning and row transition. The experimental results showed that the adopted PP-LiteSeg semantic segmentation model had a testing accuracy of 95.379% and a mean intersection over union of 90.615%. The robot's navigation demonstrated an average walking deviation of 1.33 cm, with a maximum of 3.82 cm. Additionally, the average error in row spacing measurement was 2.71 cm, while the success rate of row transition at the end of the row was 100%. These findings indicate that the proposed method provides effective support for the autonomous operation of phenotyping robots.
{"title":"Autonomous navigation method based on RGB-D camera for a crop phenotyping robot","authors":"Meng Yang, Chenglong Huang, Zhengda Li, Yang Shao, Jinzhan Yuan, Wanneng Yang, Peng Song","doi":"10.1002/rob.22379","DOIUrl":"10.1002/rob.22379","url":null,"abstract":"<p>Phenotyping robots have the potential to obtain crop phenotypic traits on a large scale with high throughput. Autonomous navigation technology for phenotyping robots can significantly improve the efficiency of phenotypic traits collection. This study developed an autonomous navigation method utilizing an RGB-D camera, specifically designed for phenotyping robots in field environments. The PP-LiteSeg semantic segmentation model was employed due to its real-time and accurate segmentation capabilities, enabling the distinction of crop areas in images captured by the RGB-D camera. Navigation feature points were extracted from these segmented areas, with their three-dimensional coordinates determined from pixel and depth information, facilitating the computation of angle deviation (<i>α</i>) and lateral deviation (<i>d</i>). Fuzzy controllers were designed with <i>α</i> and <i>d</i> as inputs for real-time deviation correction during the walking of phenotyping robot. Additionally, the method includes end-of-row recognition and row spacing calculation, based on both visible and depth data, enabling automatic turning and row transition. The experimental results showed that the adopted PP-LiteSeg semantic segmentation model had a testing accuracy of 95.379% and a mean intersection over union of 90.615%. The robot's navigation demonstrated an average walking deviation of 1.33 cm, with a maximum of 3.82 cm. Additionally, the average error in row spacing measurement was 2.71 cm, while the success rate of row transition at the end of the row was 100%. 
These findings indicate that the proposed method provides effective support for the autonomous operation of phenotyping robots.</p>","PeriodicalId":192,"journal":{"name":"Journal of Field Robotics","volume":"41 8","pages":"2663-2675"},"PeriodicalIF":4.2,"publicationDate":"2024-06-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/rob.22379","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141525510","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
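The abstract above describes computing an angle deviation (α) and lateral deviation (d) from the 3D coordinates of navigation feature points. A minimal sketch of that geometric step is given below, assuming (not stated in the paper) that the feature points are expressed as ground-plane coordinates in the robot frame, with x the lateral offset and z the forward distance, and that the row centerline is recovered by a least-squares line fit x = m·z + b, giving α = atan(m) and d = b. The function name `row_deviations` and this coordinate convention are illustrative assumptions, not the authors' implementation.

```python
import math

def row_deviations(points):
    """Estimate angle deviation alpha (degrees) and lateral deviation d (meters)
    of a robot relative to a crop-row centerline.

    points: iterable of (x, z) ground-plane coordinates of navigation feature
    points in the robot frame (x lateral, z forward). A least-squares line
    x = m*z + b is fitted; alpha = atan(m), and d = b is the lateral offset
    of the line at the robot position (z = 0).
    """
    n = len(points)
    sx = sum(p[0] for p in points)
    sz = sum(p[1] for p in points)
    szz = sum(p[1] * p[1] for p in points)
    szx = sum(p[1] * p[0] for p in points)
    denom = n * szz - sz * sz  # nonzero when points span distinct z values
    m = (n * szx - sz * sx) / denom
    b = (sx * szz - sz * szx) / denom
    return math.degrees(math.atan(m)), b

# Example: feature points lying on the line x = 0.1*z + 0.05
alpha, d = row_deviations([(0.05, 0.0), (0.15, 1.0), (0.25, 2.0)])
# alpha ≈ 5.71 degrees, d = 0.05 m
```

In the paper's pipeline these two values would then feed the fuzzy controllers as inputs for real-time steering correction.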
This paper introduces a multi-degree of freedom bionic crocodile robot designed to tackle the challenge of cleaning pollutants and debris from the surfaces of narrow, shallow rivers. The robot mimics the “death roll” motion of crocodiles, a technique used for object disintegration. First, the design incorporated a swinging tail mechanism using a multi-section oscillating guide-bar mechanism. By analyzing three-, four-, and five-section tail structures, the four-section tail was identified as the most effective structure, offering optimal strength and swing amplitude. Each section of the tail can reach maximum swing angles of 8.05°, 20.95°, 35.09°, and 43.84°, respectively, under a single motor's drive. Next, the robotic legs were designed with a double parallelogram mechanism, facilitating both crawling and retracting movements. In addition, the mouth employed a double-rocker mechanism for efficient closure and locking, achieving an average torque of 5.69 N m with a motor torque of 3.92 N m. Moreover, the robotic body was designed with upper and lower segment structures, and a waterproofing function was also considered. Finally, the kinematic mechanism and mechanical properties of the bionic crocodile structure were analyzed from the perspectives of modeling and field tests. The results demonstrated an exceptional kinematic performance of the bionic crocodile robot, effectively replicating the authentic movement characteristics of a crocodile.
{"title":"Design and movement mechanism analysis of a multiple degree of freedom bionic crocodile robot based on the characteristic of “death roll”","authors":"Chujun Liu, Jingwei Wang, Zhongyang Liu, Zejia Zhao, Guoqing Zhang","doi":"10.1002/rob.22380","DOIUrl":"10.1002/rob.22380","url":null,"abstract":"<p>This paper introduces a multi-degree of freedom bionic crocodile robot designed to tackle the challenge of cleaning pollutants and debris from the surfaces of narrow, shallow rivers. The robot mimics the “death roll” motion of crocodiles which is a technique used for object disintegration. First, the design incorporated a swinging tail mechanism using a multi-section oscillating guide-bar mechanism. By analyzing three-, four-, and five-section tail structures, the four-section tail was identified as the most effective structure, offering optimal strength and swing amplitude. Each section of the tail can reach maximum swing angles of 8.05°, 20.95°, 35.09°, and 43.84°, respectively, under a single motor's drive. Next, the robotic legs were designed with a double parallelogram mechanism, facilitating both crawling and retracting movements. In addition, the mouth employed a double-rocker mechanism for efficient closure and locking, achieving an average torque of 5.69 N m with a motor torque of 3.92 N m. Moreover, the robotic body was designed with upper and lower segment structures and waterproofing function was also considered. Besides, the kinematic mechanism and mechanical properties of the bionic crocodile structure were analyzed from the perspectives of modeling and field tests. 
The results demonstrated an exceptional kinematic performance of the bionic crocodile robot, effectively replicating the authentic movement characteristics of a crocodile.</p>","PeriodicalId":192,"journal":{"name":"Journal of Field Robotics","volume":"41 8","pages":"2650-2662"},"PeriodicalIF":4.2,"publicationDate":"2024-06-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141529118","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
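The mouth linkage above amplifies the motor torque of 3.92 N m to an average closing torque of 5.69 N m. A tiny sketch of the implied static torque amplification is given below; the helper name `mechanical_advantage` is illustrative, and the calculation assumes a quasi-static torque ratio rather than the authors' full linkage analysis.

```python
def mechanical_advantage(output_torque_nm, input_torque_nm):
    """Static torque amplification ratio of a linkage (output / input)."""
    return output_torque_nm / input_torque_nm

# From the reported figures: 5.69 N m out for 3.92 N m in
ratio = mechanical_advantage(5.69, 3.92)
# ratio ≈ 1.45, i.e. the double-rocker mechanism roughly multiplies
# the motor torque by 1.45 on average over the closing stroke
```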