
Journal of Field Robotics: Latest Publications

A New Hybrid Control Scheme for Tracking Control Problem of AUVs With System Uncertainties and External Disruptions
IF 4.2 CAS Tier 2 (Computer Science) Q2 ROBOTICS Pub Date: 2024-12-24 DOI: 10.1002/rob.22492
Km Shelly Chaudhary, Naveen Kumar

Autonomous underwater vehicles (AUVs) are highly nonlinear, coupled, uncertain, and time-varying mechatronic systems that inevitably suffer from uncertainties and environmental disturbances. This study presents an intelligent hybrid fractional-order fast terminal sliding mode controller that exploits the strengths of a model-free control approach to enhance the tracking control of AUVs. Using a nonlinear fractional-order fast terminal sliding manifold, the proposed approach integrates intelligent hybrid sliding mode control with fractional calculus to guarantee finite-time convergence of the system states and provide explicit settling-time estimates. The nonlinear dynamics of the AUV are modeled using radial basis function neural networks, while bounds on uncertainties, external disturbances, and reconstruction errors are accommodated by an adaptive compensator. A fast terminal-type sliding mode reaching law gives the controller an enhanced transient response, yielding robustness and finite-time convergence of the tracking errors. Stability of the control scheme is established using the fractional-order Barbalat's lemma and the Lyapunov technique. A numerical simulation study validates the scheme's effectiveness and shows improved trajectory tracking for AUVs over existing control schemes. By combining model-free intelligent control with fractional calculus, this hybrid technique addresses the complicated nature of AUV dynamics in unpredictable circumstances.
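As a rough illustration of the sliding-manifold construction the abstract describes, the sketch below builds a fast terminal sliding surface with an added fractional-derivative term, approximated by a truncated Grünwald-Letnikov sum. All gains, exponents, and the fractional order are illustrative placeholders, not the paper's values.

```python
# Minimal sketch, assuming a scalar tracking error e(t). The surface has the
# generic fast-terminal form  s = de + alpha*e + beta*|e|^(q/p)*sign(e)
# plus a fractional derivative D^mu e, approximated by Grunwald-Letnikov.

def gl_fractional_derivative(history, mu, dt):
    """Approximate D^mu e(t) from samples history[-1]=e(t), history[-2]=e(t-dt), ..."""
    coeff, total = 1.0, 0.0
    for j, _ in enumerate(history):
        total += coeff * history[-1 - j]
        coeff *= -(mu - j) / (j + 1)   # recursive (-1)^j * binomial(mu, j)
    return total / dt ** mu

def sliding_surface(e_hist, de, alpha=2.0, beta=1.5, q=5, p=7, mu=0.5, dt=0.01):
    e = e_hist[-1]
    sign = 1.0 if e >= 0 else -1.0
    return de + alpha * e + beta * sign * abs(e) ** (q / p) \
           + gl_fractional_derivative(e_hist, mu, dt)

errors = [0.5, 0.4, 0.3, 0.2, 0.1]     # decaying tracking-error samples
s = sliding_surface(errors, de=-10.0)  # controller drives s toward zero
```

In the full controller, a reaching law and the RBF-network compensation act on `s`; here the point is only the shape of the manifold.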

Citations: 0
Visual Inertial SLAM Based on Spatiotemporal Consistency Optimization in Diverse Environments
IF 4.2 CAS Tier 2 (Computer Science) Q2 ROBOTICS Pub Date: 2024-12-16 DOI: 10.1002/rob.22487
Huayan Pu, Jun Luo, Gang Wang, Tao Huang, Lang Wu, Dengyu Xiao, Hongliang Liu, Jun Luo

Currently, most robots equipped with vision-based simultaneous localization and mapping (SLAM) systems perform well in static environments. Practical scenarios, however, often contain dynamic objects, rendering the environment less than entirely "static." Diverse dynamic objects within the environment pose substantial challenges to the precision of visual SLAM systems. To address this challenge, we propose a real-time visual-inertial SLAM system that extensively leverages objects within the environment. First, we reject regions corresponding to dynamic objects. Geometric constraints are then applied within the stationary-object regions to refine the mask of static areas, facilitating the extraction of more stable feature points. Second, static landmarks are constructed from the static regions, and a spatiotemporal factor graph is created by combining temporal information from the Inertial Measurement Unit (IMU) with semantic information from the static landmarks. Finally, we perform a diverse set of validation experiments on the proposed system, encompassing challenging scenarios from publicly available benchmarks and the real world, and compare against state-of-the-art approaches. Our system achieved a more than 40% accuracy improvement over the baseline method on these data sets. The results demonstrate that the proposed method exhibits outstanding robustness and accuracy not only in complex dynamic environments but also in static ones.
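The first stage described above, rejecting features that land on dynamic objects, can be sketched as a simple mask filter. The box representation and coordinates are illustrative; the paper refines its static-area masks with geometric constraints rather than raw boxes.

```python
# Hedged sketch: keep only feature points outside dynamic-object regions,
# so only static-area features feed the SLAM back end. Dynamic regions are
# modeled here as axis-aligned boxes (x0, y0, x1, y1) for brevity.

def filter_static_features(features, dynamic_boxes):
    """features: [(x, y), ...]; returns the subset outside every box."""
    def in_box(pt, box):
        x, y = pt
        x0, y0, x1, y1 = box
        return x0 <= x <= x1 and y0 <= y <= y1
    return [pt for pt in features if not any(in_box(pt, b) for b in dynamic_boxes)]

feats = [(10, 10), (50, 60), (200, 120)]
boxes = [(40, 40, 100, 100)]                  # e.g. a detected pedestrian
static = filter_static_features(feats, boxes)  # (50, 60) is rejected
```

The surviving `static` points would then seed the static landmarks that enter the spatiotemporal factor graph together with IMU measurements.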

Citations: 0
A Multimodal Agile Land-Air Aircraft (AlAA) That Can Fly, Roll, and Stand
IF 4.2 CAS Tier 2 (Computer Science) Q2 ROBOTICS Pub Date: 2024-12-16 DOI: 10.1002/rob.22491
Qing Guo, Zihua Guo, Yujie Shi, Zhijie Zhou, Dexiao Ma

The multimodal land-air aircraft combines the advantages of traditional drones and unmanned ground equipment. It can cross ground obstacles such as lakes and mountains and fly quickly through the air, reaching a wider range. It can also switch to an energy-saving mode based on the surrounding environment and mission requirements, reducing energy consumption and noise while increasing endurance. Based on the idea of reusing the same structure, we have designed a multimodal agile land-air aircraft, abbreviated ALAA. ALAA has eight actuators and combines propellers, wheels, and gearboxes in different ways to achieve multiple modes of locomotion on the ground and in the air: flight mode, driving mode, and upright mode. In propeller-assisted driving mode, it can climb slopes of up to 50°. It can also combine driving and upright modes, demonstrating strong obstacle-crossing capability. In addition, ALAA reuses the same components, simplifying the transition between flight and ground movement without requiring deformation, thus enabling fast, rational mode transitions suited to complex environments. Ground modes extend ALAA's endurance: experimental results show that it can operate 21 times longer than an aerial-only system. This paper presents the overall design and mechanical architecture of ALAA, discusses the algorithm and controller design, and verifies the feasibility of the scheme through experiments with a physical prototype, showing its performance in the different modes.
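The endurance claim above follows from a simple power-budget argument: with a fixed battery, runtime scales inversely with draw. The wattages below are invented purely to reproduce the 21x ratio and are not measured ALAA values.

```python
# Hedged back-of-envelope sketch: runtime = capacity / power draw, so the
# ground/flight endurance ratio equals the flight/ground power ratio.
# All figures are hypothetical illustrations, not the paper's data.

battery_wh = 100.0
p_flight_w = 420.0   # hypothetical flight-mode draw
p_ground_w = 20.0    # hypothetical ground-mode (driving) draw

t_flight_h = battery_wh / p_flight_w
t_ground_h = battery_wh / p_ground_w
ratio = t_ground_h / t_flight_h   # 21x with these illustrative numbers
```

Any pair of draws with the same 21:1 ratio reproduces the reported endurance gain; the point is the inverse scaling, not the specific numbers.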

Citations: 0
High-Throughput Robotic Phenotyping for Quantifying Tomato Disease Severity Enabled by Synthetic Data and Domain-Adaptive Semantic Segmentation
IF 4.2 CAS Tier 2 (Computer Science) Q2 ROBOTICS Pub Date: 2024-12-13 DOI: 10.1002/rob.22490
Weilong He, Xingjian Li, Zhenghua Zhang, Yuxi Chen, Jianbo Zhang, Dilip R. Panthee, Inga Meadows, Lirong Xiang

Plant diseases cause an annual global crop loss of 20%–40%, leading to estimated economic losses of $30–50 billion. Tomatoes are susceptible to more than 200 diseases. Breeding disease-resistant cultivars is more cost-effective and environmentally sustainable than the frequent use of pesticides. Traditional breeding methods for disease resistance, which rely on direct visual observation to measure disease-related traits, are time-consuming, inaccurate, expensive, and require specific knowledge of tomato diseases. High-throughput disease phenotyping is essential to reduce labor costs, improve measurement accuracy, and expedite the release of new varieties, thereby identifying disease-resistant crops more effectively. Precision agriculture efforts have primarily focused on detecting diseases on individual tomato leaves under controlled laboratory conditions, neglecting the assessment of disease severity of the entire plant in the field. To address this, we created a synthetic data set from existing field and individual-leaf data sets, leveraging a game engine to minimize additional data labeling. We then developed a customized unsupervised domain-adaptive tomato disease segmentation algorithm that monitors the entire tomato plant and determines disease severity from the proportion of affected leaf area. The system-derived disease percentages correlate highly with manually labeled data, with a correlation coefficient of 0.91. Our research demonstrates the feasibility of using ground robots equipped with deep-learning algorithms to monitor tomato disease severity under field conditions, potentially accelerating the automation and standardization of whole-plant disease severity monitoring in tomatoes. This high-throughput disease phenotyping system can also be adapted to analyze other crops with similar foliar diseases, such as maize, soybeans, and cotton.
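The severity measure and the 0.91 agreement figure above correspond to two standard computations: the diseased fraction of leaf pixels in a segmentation mask, and a Pearson correlation against manual scores. The toy mask, class encoding, and helper names below are assumptions for illustration, not the paper's implementation.

```python
# Hedged sketch: severity = diseased leaf pixels / all leaf pixels, from a
# semantic-segmentation mask; Pearson r compares automatic vs. manual scores.
# Encoding assumed here: 0 = background, 1 = healthy leaf, 2 = diseased leaf.

def severity(mask):
    leaf = sum(v in (1, 2) for row in mask for v in row)
    sick = sum(v == 2 for row in mask for v in row)
    return sick / leaf if leaf else 0.0

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

mask = [[0, 1, 1],
        [2, 2, 1],
        [0, 0, 2]]
score = severity(mask)   # 3 diseased of 6 leaf pixels -> 0.5
```

Running `pearson` over per-plant automatic and manual severity lists is how a system-vs-human agreement like 0.91 would be reported.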

Citations: 0
Continuous Curvature Path Planning for Headland Coverage With Agricultural Robots
IF 4.2 CAS Tier 2 (Computer Science) Q2 ROBOTICS Pub Date: 2024-12-10 DOI: 10.1002/rob.22489
Gonzalo Mier, Rick Fennema, João Valente, Sytze de Bruin

We introduce a methodology for headland coverage planning for autonomous agricultural robot systems, a complex problem often overlooked in agricultural robotics. At headland corners, a robot risks crossing the field border while turning. Although this is potentially dangerous, current papers on corner turns in headlands do not tackle the issue; moreover, they produce paths with curvature discontinuities, which non-holonomic robots cannot follow. This paper presents an approach that strictly adheres to field borders during headland coverage, together with three types of continuous-curvature turn planners for convex and concave corners. The turn planners are evaluated in terms of path length and uncovered area to assess their effectiveness in headland corner navigation. Through empirical validation, including extensive tests on a coverage path planning benchmark as well as real-field experiments with an autonomous robot, the proposed approach demonstrates its practical applicability and effectiveness. In simulations, the mean covered area of the fields rose from 94.73%, using a constant offset around the field, to 97.29% using the proposed approach. Besides providing a solution to headland coverage in agricultural automation, the approach also extends the covered area on the mainland, increasing the overall productivity of the field.
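What "continuous curvature" buys a non-holonomic robot can be seen from a turn whose curvature ramps linearly with arc length (a clothoid-style profile) instead of jumping between zero and a circular arc's curvature. The profile shape, `kappa_max`, and segment lengths below are illustrative, not the paper's planner parameters.

```python
# Hedged sketch: a ramp-hold-ramp curvature profile over arc length s.
# The steering command (proportional to curvature for a car-like robot)
# changes gradually, unlike a straight-line/arc corner where it jumps.

def clothoid_turn_curvature(s, s_ramp, kappa_max):
    """Curvature at arc length s for a turn of total length 3 * s_ramp."""
    if s < s_ramp:                       # clothoid ramp-up
        return kappa_max * s / s_ramp
    if s < 2 * s_ramp:                   # circular-arc hold
        return kappa_max
    return max(0.0, kappa_max * (3 * s_ramp - s) / s_ramp)  # ramp-down

profile = [clothoid_turn_curvature(k / 10, 1.0, 0.5) for k in range(31)]
jumps = max(abs(a - b) for a, b in zip(profile, profile[1:]))
# step-to-step change stays small -> feasible steering for the robot
```

A discontinuous-curvature plan would show a jump of `kappa_max` in the same step-to-step comparison, which is exactly what a real steering actuator cannot execute instantaneously.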

Citations: 0
Digital Twin/MARS-CycleGAN: Enhancing Sim-to-Real Crop/Row Detection for MARS Phenotyping Robot Using Synthetic Images
IF 4.2 CAS Tier 2 (Computer Science) Q2 ROBOTICS Pub Date: 2024-12-10 DOI: 10.1002/rob.22473
David Liu, Zhengkun Li, Zihao Wu, Changying Li

Robotic crop phenotyping has emerged as a key technology for assessing crops' phenotypic traits at scale, which is essential for developing new crop varieties aimed at increasing productivity and adapting to the changing climate. However, developing and deploying crop phenotyping robots faces many challenges: complex and variable crop shapes that complicate robotic object detection, dynamic and unstructured environments that confound robotic control, and real-time computing and big-data management that strain robotic hardware and software. This work addresses the first challenge by proposing a novel Digital Twin (DT)/MARS-CycleGAN model for image augmentation to improve crop object detection of our Modular Agricultural Robotic System (MARS) against complex and variable backgrounds. The core idea is that, in addition to the cycle-consistency losses of the CycleGAN model, we designed and enforced a new DT/MARS loss in the deep learning model to penalize inconsistency between real crop images captured by MARS and synthesized images generated by DT/MARS-CycleGAN. The synthesized crop images therefore closely mimic real images in realism, and they are employed to fine-tune object detectors such as YOLOv8. Extensive experiments demonstrate that the new DT/MARS-CycleGAN framework significantly boosts crop/row detection performance for MARS, contributing to the field of robotic crop phenotyping. We release our code and data to the research community (https://github.com/UGA-BSAIL/DT-MARS-CycleGAN).
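The loss composition described above, standard CycleGAN cycle consistency plus an extra consistency term between real and synthesized images, can be sketched as a weighted sum. Images are flattened to pixel lists, the distance is L1, and the weights are illustrative; none of these choices are taken from the paper's implementation.

```python
# Hedged sketch of the objective shape: total = lambda_cycle * L_cycle
# + lambda_dtmars * L_dtmars, where L_dtmars penalizes real-vs-synthetic
# mismatch. Pixel lists and weights are toy placeholders.

def l1(a, b):
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def total_loss(real, reconstructed, synthesized,
               lam_cycle=10.0, lam_dtmars=5.0):
    cycle_loss = l1(real, reconstructed)    # CycleGAN cycle consistency
    dtmars_loss = l1(real, synthesized)     # added DT/MARS consistency term
    return lam_cycle * cycle_loss + lam_dtmars * dtmars_loss

real  = [0.2, 0.4, 0.6]   # real MARS image (flattened)
recon = [0.2, 0.5, 0.6]   # after a round trip through both generators
synth = [0.3, 0.4, 0.5]   # DT/MARS-CycleGAN output
loss = total_loss(real, recon, synth)
```

In the actual model these terms sit alongside the adversarial losses and are minimized over generator parameters; the sketch only shows how the new term enters the sum.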

Citations: 0
Back Cover, Volume 42, Number 1, January 2025
IF 4.2 CAS Tier 2 (Computer Science) Q2 ROBOTICS Pub Date: 2024-12-06 DOI: 10.1002/rob.22497
Qi Shao, Qixing Xia, Zhonghan Lin, Xuguang Dong, Xin An, Haoqi Zhao, Zhangyi Li, Xin-Jun Liu, Wenqiang Dong, Huichan Zhao

The cover image is based on the Article Unearthing the history with A-RHex: Leveraging articulated hexapod robots for archeological pre-exploration by Qi Shao et al., https://doi.org/10.1002/rob.22410

Citations: 0
Cover Image, Volume 42, Number 1, January 2025
IF 4.2 CAS Tier 2 (Computer Science) Q2 ROBOTICS Pub Date: 2024-12-06 DOI: 10.1002/rob.22496
Yifan Gao, Jiangpeng Shu, Zhe Xia, Yaozhi Luo

The cover image is based on the Article From muscular to dexterous: A systematic review to understand the robotic taxonomy in construction and effectiveness by Yifan Gao et al., https://doi.org/10.1002/rob.22409

Citations: 0
Cover Image, Volume 41, Number 8, December 2024
IF 4.2 CAS Tier 2 (Computer Science) Q2 ROBOTICS Pub Date: 2024-11-05 DOI: 10.1002/rob.22467
Guy Elmakis, Matan Coronel, David Zarrouk

The cover image is based on the Article Three-dimensional kinematics-based real-time localization method using two robots by Guy Elmakis et al., https://doi.org/10.1002/rob.22383

Cited by: 0
Performance-Oriented Understanding and Design of a Robotic Tadpole: Lower Energy Cost, Higher Speed
IF 4.2 CAS Tier 2 (Computer Science) Q2 ROBOTICS Pub Date : 2024-10-17 DOI: 10.1002/rob.22452
Xu Chao, Imran Hameed, David Navarro-Alarcon, Xingjian Jing

A compliant plate driven by an active joint is frequently employed as a fin to improve swimming efficiency due to its continuous and compliant kinematics. However, very few studies have focused on the performance-oriented design of multijoint mechanisms enhanced with flexible fins, particularly regarding critical design factors such as the active-joint ratio and the dimension-related stiffness distribution of the fin. To this end, we developed a robotic tadpole by integrating a multijoint mechanism with a flexible fin and conducted a comprehensive investigation of its swimming performance with different tail configurations. A dynamic model with identified hydrodynamic parameters was established to predict propulsive performance. Numerous simulations and experiments were conducted to explore the impact of the active-joint ratio and the dimension-related stiffness distribution of the fin. The results reveal that (a) tails with different active-joint ratios achieve their best performance at a small phase difference, while tails with a larger active-joint ratio tend to perform worse than those with a smaller active-joint ratio when a larger phase difference is used; (b) the optimal active-joint ratio enables the robot to achieve superior performance in terms of swimming velocity and energy efficiency; and (c) with the same surface area, a longer fin with a wide leading edge and a narrow trailing edge can achieve higher swimming speeds with lower energy consumption. This work presents novel and in-depth insights into the design of bio-inspired underwater robots with compliant propulsion mechanisms.
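The interplay of active-joint ratio and inter-joint phase difference described in the abstract can be sketched as a traveling-wave joint-angle command: the first fraction of joints is actively driven by phase-shifted sinusoids, and the remaining segments form the passive compliant fin. This is a minimal illustrative sketch; the waveform, function name, and parameters are assumptions, not the authors' actual controller.

```python
import math

def joint_angles(t, n_joints, active_ratio, amplitude, freq, phase_diff):
    """Return commanded angles (rad) for a multijoint tail at time t.

    The first round(active_ratio * n_joints) joints are actively driven
    sinusoids, each offset from its neighbor by phase_diff; the remaining
    joints model the passive compliant fin and are returned as None.
    """
    n_active = max(1, round(active_ratio * n_joints))
    angles = []
    for i in range(n_joints):
        if i < n_active:
            # traveling wave: common frequency, per-joint phase offset
            angles.append(amplitude * math.sin(2 * math.pi * freq * t + i * phase_diff))
        else:
            angles.append(None)  # passive compliant segment, not actuated
    return angles
```

Sweeping `active_ratio` and `phase_diff` over such a command pattern in a dynamic model is one plausible way to reproduce the kind of design-space study the abstract reports.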
Cited by: 0