Computers and Electronics in Agriculture: Latest Publications

Safflower picking points localization method during the full harvest period based on SBP-YOLOv8s-seg network
IF 7.7 | CAS Tier 1 (Agricultural & Forestry Sciences) | Q1 AGRICULTURE, MULTIDISCIPLINARY | Pub Date: 2024-11-25 | DOI: 10.1016/j.compag.2024.109646
He Zhang, Yun Ge, Hao Xia, Chao Sun
Visual recognition is crucial for robotic harvesting of safflower filaments in the field. However, accurate detection and localization are challenging due to complex backgrounds, occlusion by leaves and branches, and variable safflower morphology. This study proposes a safflower picking-point localization method for the full harvest period based on the SBP-YOLOv8s-seg network. The method enhances accuracy by improving the performance of the detection and segmentation network and implementing phased localization. Specifically, an SBP-YOLOv8s-seg network based on self-calibration was constructed for precise segmentation of safflower filaments and fruit balls. Additionally, the different morphological features of safflower during the full harvest period were analyzed. The segmented masks underwent Principal Component Analysis (PCA), region-of-interest (ROI) extraction, and contour fitting to extract the principal eigenvectors that describe the filaments. To address picking positions made invisible by safflower necking and occlusion, the picking points were determined using the positional relationship between filaments and fruit balls. Experimental results demonstrated that the segmentation performance of the SBP-YOLOv8s-seg network was superior to other networks, achieving improvements in mean average precision (mAP) of 5.1 %, 2.3 %, 4.1 %, and 1.3 % over YOLOv5s-seg, YOLOv6s-seg, YOLOv7s-seg, and YOLOv8s-seg, respectively. In the segmentation task, the precision, recall, and mAP of the SBP-YOLOv8s-seg network increased from 87.9 %, 79.0 %, and 84.4 % for YOLOv8s-seg to 89.1 %, 79.7 %, and 85.7 %. The accuracies for blooming and decaying safflower obtained with the proposed method were 93.0 % and 91.9 %, respectively. The overall localization accuracy of safflower picking points was 92.9 %. Field experiments showed a picking success rate of 90.7 %. This study provides a theoretical basis and data support for the future visual localization of safflower picking robots.
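As an illustration of the PCA step, the following minimal Python sketch (not the authors' implementation; the function name and synthetic mask are assumptions) computes the centroid and principal eigenvector of a binary filament mask, the kind of axis the pipeline uses when placing a picking point relative to the fruit ball:

import numpy as np

def principal_axis_of_mask(mask):
    """Centroid and principal eigenvector of a binary segmentation mask."""
    ys, xs = np.nonzero(mask)                      # pixel coordinates inside the mask
    pts = np.stack([xs, ys], axis=1).astype(float)
    centroid = pts.mean(axis=0)
    cov = np.cov((pts - centroid).T)               # 2 x 2 covariance of the pixel cloud
    eigvals, eigvecs = np.linalg.eigh(cov)         # eigenvalues in ascending order
    principal = eigvecs[:, np.argmax(eigvals)]     # direction of maximum variance
    return centroid, principal

# Synthetic elongated mask lying along the x-axis
mask = np.zeros((100, 100), dtype=np.uint8)
mask[48:52, 20:80] = 1
centroid, axis = principal_axis_of_mask(mask)
print(centroid, axis)   # centroid near (49.5, 49.5), axis close to (1, 0) up to sign

In practice the picking point would then be placed along this axis using the relative position of the filament and fruit-ball masks, as the abstract describes.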
Citations: 0
A spatial machine-learning model for predicting crop water stress index for precision irrigation of vineyards
IF 7.7 | CAS Tier 1 (Agricultural & Forestry Sciences) | Q1 AGRICULTURE, MULTIDISCIPLINARY | Pub Date: 2024-11-24 | DOI: 10.1016/j.compag.2024.109578
Aviva Peeters, Yafit Cohen, Idan Bahat, Noa Ohana-Levi, Eitan Goldshtein, Yishai Netzer, Tomás R. Tenreiro, Victor Alchanatis, Alon Ben-Gal
Optimization of water inputs is possible through precision irrigation based on prescription maps. The crop water stress index (CWSI) is an indicator of spatial and dynamic changes in plant water status that can inform irrigation management decision-making. The driving hypothesis was that in-season CWSI maps based on combined static and spatial-dynamic variables could be used to delineate irrigation management zones (MZs). A primary incentive was to minimize thermal-imaging campaigns and to complement CWSI maps between campaigns with cost-effective multi-spectral imaging campaigns producing normalized difference vegetation index (NDVI) maps. A spatial machine-learning model based on a random-forest (RF) algorithm combined with spatial statistical methods was developed to predict the spatial and temporal variability in CWSI of single vines in a vineyard. Model criteria and objectives included the reduction of sample data and input variables to a minimum without impacting prediction accuracy, consideration of only variables readily available to farmers, and accounting for spatial location and spatial processes.
The model was developed and tested on data from a ‘Cabernet Sauvignon’ vineyard in Israel over two years. Prediction of CWSI was driven by terrain parameters (slope, aspect, and topographic wetness index), soil apparent electrical conductivity (ECa), and NDVI.
Spatial models based on RF were found to support CWSI prediction. Adding a geospatial component significantly improved model performance and accuracy, particularly when raw data was represented as z-scores or when z-scores were used as weights. NDVI, followed by ECa, aspect, or slope, was the most important variable predicting CWSI in the non-spatial models. The stronger the variable importance of NDVI, the better the model performed. The weaker the effect of NDVI in predicting CWSI, the stronger the effect of terrain and soil variables. In the spatial models, based on z-transformed values or on weighted values, the most important variable in predicting CWSI was either NDVI or location.
The model, based on a limited and readily accessible number of variables, can serve as the basis for user-friendly decision support tools for precision irrigation. Additional research is needed to evaluate alternative prediction variables and to account for case studies in more geographical locations to address overfitting specific input data. Socio-economic and cost-benefit considerations should be integrated to examine whether precision irrigation management based on such models has the desired effects on water consumption and yield.
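The modelling recipe described above (a random forest on NDVI, ECa and terrain variables, with z-scored inputs and a spatial component) can be sketched as follows; the column names and synthetic data are assumptions for illustration, not the study's dataset:

import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n = 500
# Hypothetical per-vine table (synthetic values, illustrative column names)
df = pd.DataFrame({
    "ndvi":   rng.uniform(0.3, 0.9, n),
    "eca":    rng.uniform(5, 40, n),       # soil apparent electrical conductivity
    "slope":  rng.uniform(0, 15, n),
    "aspect": rng.uniform(0, 360, n),
    "twi":    rng.uniform(2, 12, n),       # topographic wetness index
    "x":      rng.uniform(0, 300, n),      # vine location (m) as the spatial component
    "y":      rng.uniform(0, 300, n),
})
df["cwsi"] = 0.9 - 0.6 * df["ndvi"] + 0.01 * df["slope"] + rng.normal(0, 0.05, n)

# z-transform the dynamic variables, mirroring the z-score variant of the model
for col in ("ndvi", "eca"):
    df[col + "_z"] = (df[col] - df[col].mean()) / df[col].std()

features = ["ndvi_z", "eca_z", "slope", "aspect", "twi", "x", "y"]
X_tr, X_te, y_tr, y_te = train_test_split(df[features], df["cwsi"], random_state=0)
rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)
print("R2:", round(r2_score(y_te, rf.predict(X_te)), 3))
print(dict(zip(features, rf.feature_importances_.round(3))))

Variable importances from such a fit are what the abstract compares when it ranks NDVI, ECa, terrain and location.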
Citations: 0
Integrating UAV, UGV and UAV-UGV collaboration in future industrialized agriculture: Analysis, opportunities and challenges
IF 7.7 | CAS Tier 1 (Agricultural & Forestry Sciences) | Q1 AGRICULTURE, MULTIDISCIPLINARY | Pub Date: 2024-11-23 | DOI: 10.1016/j.compag.2024.109631
Zhigang Ren, Han Zheng, Jian Chen, Tao Chen, Pengyang Xie, Yunzhe Xu, Jiaming Deng, Huanzhe Wang, Mingjiang Sun, Wenchi Jiao
Industrialized agriculture is the direction of future agricultural development, moving toward larger scale, diversification, unmanned operation and integration. Cooperative operation of UAVs, UGVs and UAV-UGV teams is a hot topic in intelligent agricultural multi-machine research. At present, however, most research projects have not systematically presented solutions for applying UAV, UGV and UAV-UGV collaboration in future industrialized agriculture. We therefore propose a development model of future industrialized agriculture, from which the key technologies and applications of agricultural UAV, UGV and UAV-UGV collaboration are derived. We summarize and discuss the difficulties and innovative designs involved in applying UAV, UGV and UAV-UGV collaboration technology in a future industrialized environment, and analyze the opportunities and challenges of applying this technology in future industrialized agricultural production. Finally, we identify further technologies (multi-modal sensing, embodied intelligent control, edge computing, end-edge-cloud collaborative management and control, virtual reality, augmented reality, etc.) as future research directions for UAV, UGV and UAV-UGV collaboration in industrialized agriculture.
Citations: 0
Application of AMIS-optimized vision transformer in identifying disease in Nile Tilapia
IF 7.7 | CAS Tier 1 (Agricultural & Forestry Sciences) | Q1 AGRICULTURE, MULTIDISCIPLINARY | Pub Date: 2024-11-23 | DOI: 10.1016/j.compag.2024.109676
Chutchai Kaewta, Rapeepan Pitakaso, Surajet Khonjun, Thanatkij Srichok, Peerawat Luesak, Sarayut Gonwirat, Prem Enkvetchakul, Achara Jutagate, Tuanthong Jutagate
Efficient health monitoring in Nile tilapia aquaculture is critical due to the substantial economic losses from diseases, underlining the necessity for innovative monitoring solutions. This study introduces an advanced, automated health monitoring system known as the “Automated System for Identifying Disease in Nile Tilapia (AS-ID-NT),” which incorporates a heterogeneous ensemble deep learning model using the Artificial Multiple Intelligence System (AMIS) as the decision fusion strategy (HE-DLM-AMIS). This system enhances the accuracy and efficiency of disease detection in Nile tilapia. The research utilized two specially curated video datasets, NT-1 and NT-2, each consisting of short videos lasting 3–10 s and showcasing various behaviors of Nile tilapia in controlled environments. These datasets were critical for training and validating the ensemble model. Comparative analysis reveals that the HE-DLM-AMIS embedded in AS-ID-NT achieves superior performance, with an accuracy of 92.48% in detecting health issues in tilapia. This system outperforms both single-model configurations, such as the 3D Convolutional Neural Network and Vision Transformer (ViT-large), which recorded accuracies of 84.64% and 85.7% respectively, and homogeneous ensemble models like ViT-large-Ho and ConvLSTM-Ho, which achieved accuracies of 88.49% and 86.84% respectively. AS-ID-NT provides a non-invasive, continuous, and automated solution for timely intervention, successfully identifying both healthy and unhealthy (infected and environmentally stressed) fish. This system not only demonstrates the potential of advanced AI and machine learning techniques in enhancing aquaculture management but also promotes sustainable practices and food security by maintaining healthier fish populations and supporting the economic viability of tilapia farms.
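The decision-fusion idea behind HE-DLM-AMIS (combining the class probabilities of several video models with per-model weights) can be sketched as below. The weights here are fixed by hand; in the paper they are searched by the AMIS metaheuristic, which this sketch does not reproduce:

import numpy as np

def fuse_predictions(prob_list, weights):
    """Weighted decision fusion of class-probability outputs from several models.

    prob_list: list of (n_samples, n_classes) arrays, e.g. softmax outputs of a
               3D-CNN branch, a ConvLSTM branch and a ViT branch.
    weights:   per-model fusion weights, normalised onto the simplex below.
    """
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    fused = sum(wi * p for wi, p in zip(w, prob_list))
    return fused.argmax(axis=1)                    # fused class decision per video clip

# Toy example: three models, four clips, two classes (healthy / unhealthy)
rng = np.random.default_rng(1)
probs = [rng.dirichlet([2, 2], size=4) for _ in range(3)]
print(fuse_predictions(probs, weights=[0.5, 0.3, 0.2]))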
Citations: 0
A study of soil modelling methods based on line-structured light—Preparing for the subsoiling digital twin
IF 7.7 | CAS Tier 1 (Agricultural & Forestry Sciences) | Q1 AGRICULTURE, MULTIDISCIPLINARY | Pub Date: 2024-11-23 | DOI: 10.1016/j.compag.2024.109685
Xia Li, Birong You, Xuhui Wang, Zhipeng Zhao, Tianyu Qi, Jinyou Xu
The virtual model forms the foundation for building a digital twin system; however, methods for modelling dynamically changing soil in subsoiling have not yet been studied. To provide technical guidance for constructing such a system, this study employs a line-structured light method for soil model construction. After conducting field and indoor trials, the extreme value method, grayscale centroid method, and Steger algorithm are used to extract the laser centreline. Results indicate that the extreme value method and grayscale centroid method require relatively little processing time—approximately 1.9 ms and 16 ms, respectively—with processing times being nearly the same in different environments. In contrast, the Steger algorithm requires over 300 ms. Regarding memory usage, the three methods demonstrate similar memory consumption when processing images of different environmental conditions: the extreme value method stabilizes at 86.48 MB, the grayscale centroid method at 105.72 MB, and the Steger algorithm fluctuates around 110 MB. The grayscale centroid method exhibits the best stability, making it most suitable for centreline extraction in the digital twin system. During 3D reconstruction, camera capture frequency is positively correlated with reconstruction quality, while movement speed is negatively correlated. Each image's processing time is under 1 ms, showing that the line-laser 3D reconstruction method meets the real-time requirements of the digital twin system for subsoiling.
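The grayscale centroid method that the study recommends for centreline extraction admits a compact sketch (assumptions: an 8-bit grayscale image with a roughly horizontal laser stripe and a hand-picked background threshold):

import numpy as np

def grayscale_centroid_centerline(img, threshold=30):
    """Sub-pixel laser-stripe centre per image column via the grayscale centroid:
    the intensity-weighted mean row of the pixels brighter than the threshold."""
    h, w = img.shape
    rows = np.arange(h, dtype=float)
    centers = np.full(w, np.nan)
    for col in range(w):
        weights = img[:, col].astype(float)
        weights[weights < threshold] = 0.0         # suppress background pixels
        total = weights.sum()
        if total > 0:
            centers[col] = (rows * weights).sum() / total
    return centers

# Synthetic stripe: a Gaussian band of light centred on row 60
h, w = 120, 200
row = np.arange(h, dtype=float)[:, None]
img = (255 * np.exp(-0.5 * ((row - 60.0) / 2.0) ** 2)).astype(np.uint8)
img = np.repeat(img, w, axis=1)
print(grayscale_centroid_centerline(img)[:5])      # about 60.0 for each column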
Citations: 0
A point-based method for identification and counting of tiny object insects in cotton fields
IF 7.7 | CAS Tier 1 (Agricultural & Forestry Sciences) | Q1 AGRICULTURE, MULTIDISCIPLINARY | Pub Date: 2024-11-22 | DOI: 10.1016/j.compag.2024.109648
Mingshuang Bai, Tao Chen, Jia Yuan, Gang Zhou, Jiajia Wang, Zhenhong Jia
Monitoring of crop pests in the field can be achieved by using sticky traps that capture pests. However, due to the small size and high density of the captured pests, conventional object detection methods relying on bounding boxes struggle to accurately identify and count pests, as they are highly sensitive to positional deviations. Therefore, we propose a novel point framework for multi-species insect identification and counting, termed MS-P2P, which is free from the limitations of bounding boxes. Specifically, we employ the lightweight object detection network YOLOv7-tiny for feature extraction and incorporate a lightweight attention detection head (LAHead) for point coordinate regression and insect classification. The LAHead enhances the model’s sensitivity to subtle insect features in complex environments. Additionally, we utilize point proposal prediction and the Hungarian matching algorithm to achieve one-to-one matching of optimal prediction points for targets, which simplifies post-processing significantly. Finally, we introduce SmoothL1 Loss and Focal Loss to address the issues of matching instability and class imbalance in the point estimation strategy, respectively. Extensive experiments on the self-built NSC dataset and the publicly available YST dataset have demonstrated the effectiveness of our designed MS-P2P. In particular, on our self-built dataset of 9 insect species, the overall counting metrics achieved an MAE of 18.9 and an RMSE of 28.8. The combined localization and counting metric, nAP0.5, reached 86.4%. Compared with other state-of-the-art algorithms, MS-P2P achieved the best overall results in both localization and counting metrics.
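The one-to-one point matching with the Hungarian algorithm that the abstract mentions can be illustrated with SciPy's assignment solver; the coordinates below are made up for the example:

import numpy as np
from scipy.optimize import linear_sum_assignment

def match_points(pred, gt):
    """One-to-one matching of predicted insect points to target points,
    minimising the total Euclidean distance (Hungarian algorithm)."""
    cost = np.linalg.norm(pred[:, None, :] - gt[None, :, :], axis=-1)   # (P, G) distance matrix
    rows, cols = linear_sum_assignment(cost)
    return list(zip(rows.tolist(), cols.tolist())), cost[rows, cols]

pred = np.array([[10.0, 12.0], [55.0, 40.0], [80.0, 81.0]])
gt   = np.array([[54.0, 41.0], [11.0, 11.0], [79.0, 80.0]])
pairs, dists = match_points(pred, gt)
print(pairs)            # [(0, 1), (1, 0), (2, 2)]
print(dists.round(2))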
Citations: 0
The influence of a seeding plate of the air-suction minituber precision seed-metering device on seeding quality
IF 7.7 | CAS Tier 1 (Agricultural & Forestry Sciences) | Q1 AGRICULTURE, MULTIDISCIPLINARY | Pub Date: 2024-11-22 | DOI: 10.1016/j.compag.2024.109680
Zhiming Zhao, Yining Lyu, Jinqing Lyu, Xiaoxin Zhu, Jicheng Li, Deqiu Yang
Existing seed-metering devices suffer from a low qualified index and a high multiple index in mechanized minituber seeding. In this work, a seed-metering device suitable for precision seeding of minitubers was designed to solve these problems and improve seeding efficiency. By analyzing the motion mechanism of minitubers on the seeding plate, it was determined that the diameter of the suction seeding hole, the rotation speed and tilt angle of the seeding plate, and the negative pressure value are the main factors affecting the seeding performance of the device. The steady-state airflow in the negative-pressure chamber was analyzed by computational fluid dynamics; when the diameter of the suction seeding hole is 8 mm and the rotation speed of the seeding plate is 40 r/min, the highest negative pressure is reached at the suction seeding hole. CFD-DEM coupling simulation was used to investigate the stress on minitubers and their adsorption by the suction seeding hole under different seeding-plate tilt angles and negative pressures. The coupled simulation results were verified and optimized by bench tests, and the movement of minitubers on the seeding plate was observed with a high-speed camera. Design Expert was used to optimize the test results: with a tilt angle of 20° and a negative pressure of −6000 Pa, the device achieves a multiple index below 3.5 %, a miss-seeding index of no more than 1.5 %, a qualified index above 94.5 %, and a coefficient of variation under 11 %. This work puts forward new ideas for improving the seeding quality of high-speed precision seed-metering devices and provides a new design approach for the development of seeding devices.
Citations: 0
Crop canopy volume weighted by color parameters from UAV-based RGB imagery to estimate above-ground biomass of potatoes
IF 7.7 | CAS Tier 1 (Agricultural & Forestry Sciences) | Q1 AGRICULTURE, MULTIDISCIPLINARY | Pub Date: 2024-11-22 | DOI: 10.1016/j.compag.2024.109678
Yang Liu, Fuqin Yang, Jibo Yue, Wanxue Zhu, Yiguang Fan, Jiejie Fan, Yanpeng Ma, Mingbo Bian, Riqiang Chen, Guijun Yang, Haikuan Feng
Current techniques for estimating crop aboveground biomass (AGB) across multiple growth stages mainly use optical remote sensing. However, this technology is limited by saturation of the canopy spectrum. To address this problem, this study used digital images obtained by an unmanned aerial vehicle to extract spectral and structural indicators of the crop canopy in three key potato growth stages. The color parameters (CP) of assorted color-space transformations served as the canopy spectral information, and crop height (CH), crop coverage (CC), and crop canopy volume (CCV) served as the canopy structural indicators. Based on the complementary advantages of CP and CCV, we proposed a new metric: the color-parameter-weighted crop-canopy volume (CCVCP). Results showed that CH, CCV, and CCVCP correlated more strongly with potato AGB during the multi-growth stages than did CP and CC. Among all structural indicators, the hue-weighted crop-canopy volume (CCVH) correlated most strongly with potato AGB. CH was more accurate for estimating potato AGB than CP and CC. Combining indicators (CP + CC/CH, CP + CC + CH) improved the accuracy of potato AGB estimation over the multi-growth stages. Except for the CP + CC + CH model, the other AGB estimation models were less accurate than the models based on CCV and CCVH. The AGB estimation accuracy of the univariate CCVH model (R2 = 0.65, RMSE = 281 kg/hm2, and NRMSE = 23.61 %) was comparable to that of the complex model [CP + CC + CH using random forest (RF) or multiple stepwise regression (MSR)]. Compared with CP + CC + CH using RF and MSR, the RMSE decreased by 0.35 % and increased by 4.24 %, respectively. Compared with CP, CP + CC, CP + CH, and CCV, using CCVH to estimate AGB decreased the RMSE by 10.24 %, 7.42 %, 6.36 %, and 6.33 %, respectively. Meanwhile, the performance of CCVH was verified at different stages and among varieties. Thus, this indicator can be used for monitoring potato growth to help guide field production management.
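The abstract does not give the exact weighting formula, so the sketch below is only one plausible reading of a hue-weighted crop-canopy volume: per-pixel canopy volume (height times ground-cell area) scaled by the pixel's hue. The function and toy data are illustrative assumptions, not the paper's definition of CCVH:

import numpy as np
from matplotlib.colors import rgb_to_hsv

def hue_weighted_canopy_volume(rgb, chm, gsd):
    """Plain and hue-weighted canopy volume from a UAV orthomosaic.

    rgb: (H, W, 3) image scaled to [0, 1]
    chm: (H, W) crop height model in metres (0 where there is no canopy)
    gsd: ground sampling distance in metres per pixel
    """
    hue = rgb_to_hsv(rgb)[..., 0]                  # per-pixel hue in [0, 1]
    canopy = chm > 0
    cell_area = gsd * gsd
    ccv = (chm[canopy] * cell_area).sum()                   # unweighted canopy volume
    ccv_h = (hue[canopy] * chm[canopy] * cell_area).sum()   # hue-weighted volume
    return ccv, ccv_h

# Toy 3 x 3 plot: greenish canopy 0.4 m tall on two pixels, bare soil elsewhere
rgb = np.zeros((3, 3, 3))
rgb[..., 0], rgb[..., 1], rgb[..., 2] = 0.3, 0.6, 0.2
chm = np.zeros((3, 3))
chm[1, 1] = chm[1, 2] = 0.4
print(hue_weighted_canopy_volume(rgb, chm, gsd=0.05))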
Citations: 0
Development of a pumpkin fruits pick-and-place robot using an RGB-D camera and a YOLO based object detection AI model
IF 7.7 | CAS Tier 1 (Agricultural & Forestry Sciences) | Q1 AGRICULTURE, MULTIDISCIPLINARY | Pub Date: 2024-11-22 | DOI: 10.1016/j.compag.2024.109625
Liangliang Yang, Tomoki Noguchi, Yohei Hoshino
Harvesting heavy fruits such as pumpkins is hard work for farmers, a problem aggravated by the aging of the farming population. To address this, this study develops an automatic pick-and-place robot system that alleviates labor demands in pumpkin harvesting. We propose a system capable of detecting pumpkins in the field and obtaining their three-dimensional (3D) coordinates using artificial intelligence (AI) object detection methods and an RGB-D camera, respectively. The harvesting system uses a crawler-type vehicle as the base platform, while a collaborative robot arm lifts the pumpkin fruits. A newly designed robot hand, mounted at the end of the robot arm, grasps the pumpkins. In this paper, we use various versions of YOLO (from version 2 to 8) for pumpkin fruit detection and compare the results obtained from these versions. The RGB-D camera, mounted at the root of the robot arm, captures the position of the pumpkin fruits in camera coordinates. We propose a calibration method that simply transforms this position into the robot-arm coordinate frame. In addition, we completed all the software and hardware of the pumpkin pick-and-place robot system. Field experiments were conducted in an outdoor pumpkin field and demonstrate a fruit detection accuracy exceeding 99% and a picking success rate surpassing 90%. However, fruits surrounded by excessive vines could not be successfully grasped.
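The camera-to-arm coordinate transform mentioned above amounts to applying a rigid transform obtained from calibration; a minimal sketch, with an entirely illustrative rotation and offset rather than the paper's calibration values, is:

import numpy as np

def camera_to_robot(p_cam, R, t):
    """Map a 3-D point from the RGB-D camera frame to the robot-arm base frame:
    p_robot = R @ p_cam + t, with (R, t) obtained from calibration."""
    return R @ np.asarray(p_cam, dtype=float) + np.asarray(t, dtype=float)

# Hypothetical calibration result: camera rotated 90 degrees about Z and
# offset 0.10 m along the arm's x-axis (values are illustrative only)
theta = np.pi / 2
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
t = np.array([0.10, 0.0, 0.0])

pumpkin_cam = np.array([0.25, -0.05, 0.80])    # detected fruit centre in the camera frame (m)
print(camera_to_robot(pumpkin_cam, R, t))      # grasp target in the robot base frame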
Citations: 0
Unmanned Aerial Vehicle-based Autonomous Tracking System for Invasive Flying Insects
IF 7.7 | CAS Tier 1 (Agricultural & Forestry Sciences) | Q1 AGRICULTURE, MULTIDISCIPLINARY | Pub Date: 2024-11-22 | DOI: 10.1016/j.compag.2024.109616
Jeonghyeon Pak, Bosung Kim, Chanyoung Ju, Hyoung Il Son
The Asian hornet or yellow-legged hornet, Vespa velutina nigrithorax, is a global predator of honeybees (Apis mellifera L.) that has become widespread owing to rapid climate change. Herein, we propose a localization system for tracking radio-tagged hornets and discovering hornet hives by combining unmanned aerial vehicles with a trilateration system. By leveraging the homing instinct of hornets, we systematically structured our experiments as a behavioral experiment, ground-truth experiment, and localization experiment. According to the experimental results, we successfully discovered the hives of two of the five hornets tested. Additionally, a comprehensive analysis of the experimental outcomes provided insights into hornet flight patterns and behaviors. The results of this research demonstrate the efficacy of integrating UAVs with radio telemetry for precision object tracking and ecosystem management, offering a robust tool for mitigating the impacts of invasive species on honeybee populations.
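Trilateration from ranges measured at known UAV positions can be posed as a small least-squares problem; the sketch below uses made-up anchor points and ranges and is not the authors' implementation:

import numpy as np
from scipy.optimize import least_squares

def trilaterate(anchors, distances):
    """Estimate a 2-D transmitter position from range estimates to known
    anchor points by non-linear least squares."""
    anchors = np.asarray(anchors, dtype=float)
    distances = np.asarray(distances, dtype=float)
    residual = lambda p: np.linalg.norm(anchors - p, axis=1) - distances
    return least_squares(residual, x0=anchors.mean(axis=0)).x

# Hypothetical UAV waypoints (m) and ranges to the radio-tagged hornet
anchors = [(0.0, 0.0), (60.0, 0.0), (30.0, 50.0)]
true_pos = np.array([22.0, 18.0])
ranges = [float(np.linalg.norm(np.asarray(a) - true_pos)) for a in anchors]
print(trilaterate(anchors, ranges))            # close to [22. 18.]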
Citations: 0