Autonomous navigation method based on RGB-D camera for a crop phenotyping robot

Journal of Field Robotics · IF 4.2 · CAS Zone 2 (Computer Science) · Q2 (ROBOTICS) · Volume 41, Issue 8, pp. 2663-2675 · Pub Date: 2024-06-30 · DOI: 10.1002/rob.22379
Meng Yang, Chenglong Huang, Zhengda Li, Yang Shao, Jinzhan Yuan, Wanneng Yang, Peng Song
{"title":"Autonomous navigation method based on RGB-D camera for a crop phenotyping robot","authors":"Meng Yang,&nbsp;Chenglong Huang,&nbsp;Zhengda Li,&nbsp;Yang Shao,&nbsp;Jinzhan Yuan,&nbsp;Wanneng Yang,&nbsp;Peng Song","doi":"10.1002/rob.22379","DOIUrl":null,"url":null,"abstract":"<p>Phenotyping robots have the potential to obtain crop phenotypic traits on a large scale with high throughput. Autonomous navigation technology for phenotyping robots can significantly improve the efficiency of phenotypic traits collection. This study developed an autonomous navigation method utilizing an RGB-D camera, specifically designed for phenotyping robots in field environments. The PP-LiteSeg semantic segmentation model was employed due to its real-time and accurate segmentation capabilities, enabling the distinction of crop areas in images captured by the RGB-D camera. Navigation feature points were extracted from these segmented areas, with their three-dimensional coordinates determined from pixel and depth information, facilitating the computation of angle deviation (<i>α</i>) and lateral deviation (<i>d</i>). Fuzzy controllers were designed with <i>α</i> and <i>d</i> as inputs for real-time deviation correction during the walking of phenotyping robot. Additionally, the method includes end-of-row recognition and row spacing calculation, based on both visible and depth data, enabling automatic turning and row transition. The experimental results showed that the adopted PP-LiteSeg semantic segmentation model had a testing accuracy of 95.379% and a mean intersection over union of 90.615%. The robot's navigation demonstrated an average walking deviation of 1.33 cm, with a maximum of 3.82 cm. Additionally, the average error in row spacing measurement was 2.71 cm, while the success rate of row transition at the end of the row was 100%. These findings indicate that the proposed method provides effective support for the autonomous operation of phenotyping robots.</p>","PeriodicalId":192,"journal":{"name":"Journal of Field Robotics","volume":"41 8","pages":"2663-2675"},"PeriodicalIF":4.2000,"publicationDate":"2024-06-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/rob.22379","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Field Robotics","FirstCategoryId":"94","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1002/rob.22379","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"ROBOTICS","Score":null,"Total":0}
Citations: 0

Abstract

Phenotyping robots have the potential to obtain crop phenotypic traits on a large scale with high throughput. Autonomous navigation technology for phenotyping robots can significantly improve the efficiency of phenotypic trait collection. This study developed an autonomous navigation method utilizing an RGB-D camera, specifically designed for phenotyping robots in field environments. The PP-LiteSeg semantic segmentation model was employed for its real-time and accurate segmentation capabilities, enabling crop areas to be distinguished in images captured by the RGB-D camera. Navigation feature points were extracted from these segmented areas, with their three-dimensional coordinates determined from pixel and depth information, facilitating the computation of angle deviation (α) and lateral deviation (d). Fuzzy controllers were designed with α and d as inputs for real-time deviation correction during the walking of the phenotyping robot. Additionally, the method includes end-of-row recognition and row spacing calculation, based on both visible and depth data, enabling automatic turning and row transition. The experimental results showed that the adopted PP-LiteSeg semantic segmentation model had a testing accuracy of 95.379% and a mean intersection over union of 90.615%. The robot's navigation demonstrated an average walking deviation of 1.33 cm, with a maximum of 3.82 cm. Additionally, the average error in row spacing measurement was 2.71 cm, while the success rate of row transition at the end of the row was 100%. These findings indicate that the proposed method provides effective support for the autonomous operation of phenotyping robots.
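
The three-dimensional coordinates of the navigation feature points are recovered from pixel coordinates plus the aligned depth reading. Below is a minimal sketch of that back-projection step, assuming a standard pinhole camera model; the intrinsics (fx, fy, cx, cy) are hypothetical placeholders, since the abstract does not report the camera's calibration.

```python
# Minimal sketch: back-project a pixel with its depth reading into 3-D
# camera coordinates using a pinhole model. The intrinsics below are
# made-up placeholders, not values from the paper.
import numpy as np

def pixel_to_camera(u: float, v: float, depth_m: float,
                    fx: float, fy: float, cx: float, cy: float) -> np.ndarray:
    """Return the 3-D point (X, Y, Z) in the camera frame, in meters."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return np.array([x, y, depth_m])

# Example for a hypothetical 640x480 RGB-D stream.
point = pixel_to_camera(u=350, v=260, depth_m=1.8,
                        fx=570.0, fy=570.0, cx=320.0, cy=240.0)
print(point)  # -> approximately [0.0947, 0.0632, 1.8]
```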
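
Given a set of such 3-D feature points along a crop row, the angle deviation (α) and lateral deviation (d) can be obtained from a line fitted through the points in the ground plane. The sketch below is one plausible formulation, assuming a least-squares fit in (lateral x, forward z) coordinates with the robot at the origin; the paper's exact definition of the two deviations is not spelled out in the abstract.

```python
# Illustrative computation of angle deviation (alpha) and lateral
# deviation (d) from crop-row feature points projected onto the ground
# plane. Axis conventions, signs, and units are assumptions.
import numpy as np

def heading_and_lateral_deviation(points_xz: np.ndarray) -> tuple[float, float]:
    """points_xz: N x 2 array of (lateral x, forward z) positions in meters."""
    x, z = points_xz[:, 0], points_xz[:, 1]
    # Fit x = a*z + b: the crop row runs roughly along the forward z axis.
    a, b = np.polyfit(z, x, 1)
    alpha = np.degrees(np.arctan(a))  # heading error relative to the row, deg
    d = b                             # lateral offset at the robot origin (z = 0), m
    return alpha, d

pts = np.array([[0.05, 1.0], [0.08, 2.0], [0.11, 3.0]])
alpha, d = heading_and_lateral_deviation(pts)
print(alpha, d)  # -> about 1.72 deg and 0.02 m
```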
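
The fuzzy controllers take α and d as inputs and output a steering correction. The abstract does not publish membership functions, a rule base, or a defuzzification method, so the following is only an illustrative stand-in: triangular memberships, a 3x3 rule table with crisp singleton consequents, and weighted-average defuzzification.

```python
# Illustrative fuzzy steering controller with alpha (deg) and d (cm) as
# inputs. All universes, sets, and rules below are invented for this
# sketch; the paper's actual controller design is not in the abstract.
def tri(x: float, a: float, b: float, c: float) -> float:
    """Triangular membership: rises from a, peaks at b, falls to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

ALPHA_SETS = {"NL": (-10, -6, -2), "ZE": (-4, 0, 4), "PL": (2, 6, 10)}  # deg
D_SETS     = {"NL": (-6, -4, -1),  "ZE": (-2, 0, 2), "PL": (1, 4, 6)}   # cm

# (alpha label, d label) -> steering correction in deg (singleton output).
RULES = {
    ("NL", "NL"):  8, ("NL", "ZE"):  5, ("NL", "PL"):  2,
    ("ZE", "NL"):  3, ("ZE", "ZE"):  0, ("ZE", "PL"): -3,
    ("PL", "NL"): -2, ("PL", "ZE"): -5, ("PL", "PL"): -8,
}

def fuzzy_steer(alpha: float, d: float) -> float:
    """Weighted-average defuzzification over all fired rules."""
    num = den = 0.0
    for (la, ld), out in RULES.items():
        w = min(tri(alpha, *ALPHA_SETS[la]), tri(d, *D_SETS[ld]))
        num += w * out
        den += w
    return num / den if den > 0 else 0.0

print(fuzzy_steer(alpha=3.0, d=-1.5))  # -> about -1.3 deg (alpha dominates)
```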
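
End-of-row recognition and row spacing calculation combine visible and depth data. The abstract gives no algorithmic detail, so the sketch below shows one plausible reading: flag the row end when segmented crop pixels fall below a threshold in a near-field region of interest, and take row spacing as the lateral gap between the mean positions of two adjacent rows.

```python
# Hypothetical end-of-row test and row-spacing estimate; the threshold,
# ROI, and row clustering are assumptions, not the paper's method.
import numpy as np

def end_of_row(crop_mask: np.ndarray, roi_rows: slice,
               min_crop_ratio: float = 0.02) -> bool:
    """True when the crop-pixel ratio in the near-field ROI drops below threshold."""
    return float(crop_mask[roi_rows, :].mean()) < min_crop_ratio

def row_spacing(left_x: np.ndarray, right_x: np.ndarray) -> float:
    """Spacing (m) between mean lateral positions of two neighboring rows."""
    return float(np.mean(right_x) - np.mean(left_x))

mask = np.zeros((480, 640), dtype=np.float32)      # no crop pixels left ahead
print(end_of_row(mask, roi_rows=slice(300, 480)))  # -> True
print(row_spacing(np.array([-0.42, -0.38]),
                  np.array([0.36, 0.40])))         # -> 0.78 m
```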


Source journal
Journal of Field Robotics (Engineering & Technology: Robotics)
CiteScore: 15.00
Self-citation rate: 3.60%
Articles published: 80
Review time: 6 months
About the journal: The Journal of Field Robotics seeks to promote scholarly publications dealing with the fundamentals of robotics in unstructured and dynamic environments. The Journal focuses on experimental robotics and encourages publication of work that has both theoretical and practical significance.
Latest articles in this journal
Issue Information
Cover Image, Volume 41, Number 8, December 2024
Issue Information
ForzaETH Race Stack—Scaled Autonomous Head‐to‐Head Racing on Fully Commercial Off‐the‐Shelf Hardware
Research on Satellite Navigation Control of Six‐Crawler Machinery Based on Fuzzy PID Algorithm