
Latest publications in Artificial Intelligence in Agriculture

Prediction of sugar beet yield and quality parameters using Stacked-LSTM model with pre-harvest UAV time series data and meteorological factors
IF 8.2 Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date : 2025-02-27 DOI: 10.1016/j.aiia.2025.02.004
Qing Wang, Ke Shao, Zhibo Cai, Yingpu Che, Haochong Chen, Shunfu Xiao, Ruili Wang, Yaling Liu, Baoguo Li, Yuntao Ma
Accurate pre-harvest prediction of sugar beet yield is vital for effective agricultural management and decision-making. However, traditional methods are constrained by reliance on empirical knowledge, time-consuming processes, resource intensiveness, and spatial-temporal variability in prediction accuracy. This study presented a plot-level approach that leverages UAV technology and recurrent neural networks to provide field yield predictions within the same growing season, addressing a significant gap in previous research, which has often focused on regional-scale predictions that rely on multi-year historical datasets. End-of-season yield and quality parameters were forecasted using UAV-derived time series data and meteorological factors collected at three critical growth stages, providing a timely and practical tool for farm management. Two years of data covering 185 sugar beet varieties were used to train the developed stacked Long Short-Term Memory (LSTM) model, which was compared with traditional machine learning approaches. Incorporating fresh-weight estimates of aboveground and root biomass as predictive factors significantly enhanced prediction accuracy. Optimal prediction performance was observed when utilizing data from all three growth periods, with R2 values of 0.761 (rRMSE = 7.1 %) for sugar content, 0.531 (rRMSE = 22.5 %) for root yield, and 0.478 (rRMSE = 23.4 %) for sugar yield. Furthermore, combining data from the first two growth periods showed promising results for making predictions earlier. Key predictive features identified through the Permutation Importance (PIMP) method provided insights into the main factors influencing yield. These findings underscore the potential of using UAV time-series data and recurrent neural networks for accurate pre-harvest yield prediction at the field scale, supporting timely and precise agricultural decisions.
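As a rough illustration of the stacked-LSTM setup described above (not the authors' implementation), the sketch below feeds a three-step sequence — one feature vector per growth stage, combining hypothetical UAV-derived biomass estimates and meteorological variables — into two stacked LSTM layers and regresses the three end-of-season targets. Feature dimensions and hyperparameters are assumptions.

```python
import torch
import torch.nn as nn

class StackedLSTMRegressor(nn.Module):
    """Two stacked LSTM layers over per-growth-stage features, followed by a
    regression head for three targets (sugar content, root yield, sugar yield).
    Input size and hidden size are illustrative assumptions."""
    def __init__(self, n_features=16, hidden=64, n_targets=3):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_features, hidden_size=hidden,
                            num_layers=2, batch_first=True)  # "stacked" = num_layers=2
        self.head = nn.Linear(hidden, n_targets)

    def forward(self, x):             # x: (batch, growth stages, n_features)
        out, _ = self.lstm(x)         # out: (batch, stages, hidden)
        return self.head(out[:, -1])  # regress from the last time step

# Dummy batch: 8 plots, 3 growth stages, 16 features per stage.
x = torch.randn(8, 3, 16)
y = torch.randn(8, 3)
model = StackedLSTMRegressor()
loss = nn.MSELoss()(model(x), y)
loss.backward()
print(loss.item())
```

The earlier-prediction variant mentioned in the abstract would simply pass a sequence of length two (the first two growth stages) instead of three.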
Citations: 0
Deep learning-based classification, detection, and segmentation of tomato leaf diseases: A state-of-the-art review
IF 8.2 Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date : 2025-02-20 DOI: 10.1016/j.aiia.2025.02.006
Aritra Das, Fahad Pathan, Jamin Rahman Jim, Md Mohsin Kabir, M.F. Mridha
The early identification and treatment of tomato leaf diseases are crucial for optimizing plant productivity, efficiency, and quality. Misdiagnosis by farmers poses the risk of inadequate treatments, harming both tomato plants and agroecosystems. Precise disease diagnosis is therefore essential, and misdiagnoses must be detected and corrected swiftly for early identification to succeed. Tropical regions are ideal for tomato plants, but they bring inherent concerns, such as weather-related problems. Plant diseases are a major cause of financial losses in crop production, and the slow detection times of conventional approaches are insufficient for the timely detection of tomato diseases. Deep learning has emerged as a promising avenue for early disease identification. This study comprehensively analyzed techniques for classifying and detecting tomato leaf diseases and evaluated their strengths and weaknesses. The study delves into various diagnostic procedures, including image pre-processing, localization, and segmentation. In conclusion, applying deep learning algorithms holds great promise for enhancing the accuracy and efficiency of tomato leaf disease diagnosis by offering faster and more effective results.
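For context on the kind of classification pipelines this review surveys, here is a minimal transfer-learning sketch (generic, not taken from the review): a pretrained ResNet-18 is re-headed for a hypothetical set of tomato leaf disease classes. The class count and the `tomato_leaves` folder layout are assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models, transforms, datasets

NUM_CLASSES = 10  # assumed number of tomato leaf disease classes

# Standard ImageNet preprocessing for a pretrained backbone.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

# Hypothetical folder layout: tomato_leaves/<class_name>/*.jpg
dataset = datasets.ImageFolder("tomato_leaves", transform=preprocess)
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)  # replace the classifier head

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:          # one illustrative epoch
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```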
Citations: 0
Using UAV-based multispectral images and CGS-YOLO algorithm to distinguish maize seeding from weed
IF 8.2 Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date : 2025-02-17 DOI: 10.1016/j.aiia.2025.02.007
Boyi Tang, Jingping Zhou, Chunjiang Zhao, Yuchun Pan, Yao Lu, Chang Liu, Kai Ma, Xuguang Sun, Ruifang Zhang, Xiaohe Gu
Accurate recognition of maize seedlings at the plot scale under weed disturbance is crucial for early seedling replenishment and weed removal. Currently, UAV-based maize seedling recognition depends primarily on RGB images. The main purpose of this study is to compare the performance of multispectral and RGB images from an unmanned aerial vehicle (UAV) for maize seedling recognition using deep learning algorithms. Additionally, we aim to assess how different levels of weed coverage disturb the recognition of maize seedlings. Firstly, principal component analysis (PCA) was used to transform the multispectral images. Secondly, by introducing the CARAFE sampling operator and a small target detection layer (SLAY), we extracted the contextual information of each pixel to retain weak features in the maize seedling image. Thirdly, the global attention mechanism (GAM) was employed to capture the features of maize seedlings using the dual attention mechanism of spatial and channel information. On this basis, the CGS-YOLO algorithm was constructed. Finally, we compared the performance of the improved algorithm with a series of deep learning algorithms, including YOLO v3, v5, v6 and v8. The results show that after PCA transformation, the recognition mAP of maize seedlings reaches 82.6 %, a 3.1-percentage-point improvement over RGB images. Compared with YOLOv8, YOLOv6, YOLOv5, and YOLOv3, the CGS-YOLO algorithm improves mAP by 3.8, 4.2, 4.5 and 6.6 percentage points, respectively. As weed coverage increases, the recognition of maize seedlings gradually degrades. When weed coverage exceeds 70 %, the mAP difference becomes significant, but CGS-YOLO still maintains a recognition mAP of 72 %. Therefore, for maize seedling recognition, UAV-based multispectral images perform better than RGB images. Applying the CGS-YOLO deep learning algorithm to UAV multispectral images proves beneficial for recognizing maize seedlings under weed disturbance.
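The PCA step mentioned above — compressing the multispectral bands into a 3-channel composite that a YOLO-style detector can ingest — can be sketched as follows. Band count and tile size are placeholders, and this is not the authors' code.

```python
import numpy as np
from sklearn.decomposition import PCA

def multispectral_to_pca_rgb(cube, n_components=3):
    """Project an (H, W, B) multispectral cube onto its first principal
    components and rescale each component to 0-255 for a 3-channel image."""
    h, w, b = cube.shape
    flat = cube.reshape(-1, b).astype(np.float32)          # pixels x bands
    comps = PCA(n_components=n_components).fit_transform(flat)
    comps = comps.reshape(h, w, n_components)
    lo = comps.min(axis=(0, 1), keepdims=True)
    hi = comps.max(axis=(0, 1), keepdims=True)
    return ((comps - lo) / (hi - lo + 1e-8) * 255).astype(np.uint8)

# Dummy 5-band UAV tile, 256 x 256 pixels.
tile = np.random.rand(256, 256, 5)
pca_image = multispectral_to_pca_rgb(tile)
print(pca_image.shape)  # (256, 256, 3) -- ready for an RGB-style detector
```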
Citations: 0
Addressing computation resource exhaustion associated with deep learning training of three-dimensional hyperspectral images using multiclass weed classification
IF 8.2 Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date : 2025-02-11 DOI: 10.1016/j.aiia.2025.02.005
Billy G. Ram, Kirk Howatt, Joseph Mettler, Xin Sun
Addressing the computational bottleneck of training deep learning models on high-resolution, three-dimensional images, this study introduces an optimized approach combining distributed learning (parallelism), image resolution, and data augmentation. We propose analysis methodologies that help train deep learning (DL) models on proximal hyperspectral images, demonstrating superior performance in eight-class crop (canola, field pea, sugarbeet and flax) and weed (redroot pigweed, resistant kochia, waterhemp and ragweed) classification. State-of-the-art model architectures (ResNet-50, VGG-16, DenseNet, EfficientNet) were evaluated alongside a ResNet-50-inspired Hyper-Residual Convolutional Neural Network model. Our findings reveal that an image resolution of 100x100x54 maximizes accuracy while maintaining computational efficiency, surpassing the performance of 150x150x54 and 50x50x54 resolution images. By employing data parallelism, we overcome system memory limitations and achieve exceptional classification results, with test accuracies and F1-scores reaching 0.96 and 0.97, respectively. This research highlights the potential of residual-based networks for analyzing hyperspectral images and offers valuable insights into optimizing deep learning models in resource-constrained environments. The research presents detailed training pipelines for deep learning models that utilize large (> 4k) hyperspectral training samples, including background, without any data preprocessing. This approach enables the training of deep learning models directly on raw hyperspectral data.
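A minimal sketch of the data-parallel idea discussed above, assuming 54-band hyperspectral chips resized to 100x100: a small CNN with a 54-channel input is wrapped in torch.nn.DataParallel so each available GPU processes a slice of the batch. The architecture is illustrative, not the paper's Hyper-Residual network.

```python
import torch
import torch.nn as nn

class HyperspectralCNN(nn.Module):
    """Toy classifier for 54-band, 100x100 hyperspectral chips (8 classes)."""
    def __init__(self, bands=54, n_classes=8):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(bands, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(128, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

device = "cuda" if torch.cuda.is_available() else "cpu"
model = HyperspectralCNN().to(device)
if torch.cuda.device_count() > 1:
    # Split each batch across GPUs; gradients are gathered automatically.
    model = nn.DataParallel(model)

x = torch.randn(16, 54, 100, 100, device=device)   # batch of hyperspectral chips
y = torch.randint(0, 8, (16,), device=device)
loss = nn.CrossEntropyLoss()(model(x), y)
loss.backward()
print(loss.item())
```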
Citations: 0
Advancing precision agriculture: A comparative analysis of YOLOv8 for multi-class weed detection in cotton cultivation
IF 8.2 Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date : 2025-02-11 DOI: 10.1016/j.aiia.2025.01.013
Ameer Tamoor Khan, Signe Marie Jensen, Abdul Rehman Khan
Effective weed management plays a critical role in enhancing the productivity and sustainability of cotton cultivation. The rapid emergence of herbicide-resistant weeds has underscored the need for innovative solutions to address the challenges associated with precise weed detection. This paper investigates the potential of YOLOv8, the latest advancement in the YOLO family of object detectors, for multi-class weed detection in U.S. cotton fields. Leveraging the CottonWeedDet12 dataset, which includes diverse weed species captured under varying environmental conditions, this study provides a comprehensive evaluation of YOLOv8's performance. A comparative analysis with earlier YOLO variants reveals substantial improvements in detection accuracy, as evidenced by higher mean Average Precision (mAP) scores. These findings highlight YOLOv8's superior capability to generalize across complex field scenarios, making it a promising candidate for real-time applications in precision agriculture. The enhanced architecture of YOLOv8, featuring anchor-free detection, an advanced Feature Pyramid Network (FPN), and an optimized loss function, enables accurate detection even under challenging conditions. This research emphasizes the importance of machine vision technologies in modern agriculture, particularly for minimizing herbicide reliance and promoting sustainable farming practices. The results not only validate YOLOv8's efficacy in multi-class weed detection but also pave the way for its integration into autonomous agricultural systems, thereby contributing to the broader goals of precision agriculture and ecological sustainability.
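For reference, training YOLOv8 on a weed-detection dataset typically follows the pattern sketched below with the Ultralytics API. The dataset YAML path, image path, and hyperparameters are placeholders, not the configuration used in the paper.

```python
from ultralytics import YOLO

# Start from COCO-pretrained YOLOv8-small weights.
model = YOLO("yolov8s.pt")

# 'cottonweed12.yaml' is a hypothetical dataset file listing the 12 weed
# classes plus train/val image paths in the standard Ultralytics format.
model.train(data="cottonweed12.yaml", epochs=100, imgsz=640, batch=16)

# Evaluate mAP on the validation split and run inference on one image.
metrics = model.val()
results = model.predict("field_image.jpg", conf=0.25)
print(metrics.box.map50, len(results[0].boxes))
```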
Citations: 0
Precision agriculture technologies for soil site-specific nutrient management: A comprehensive review
IF 8.2 Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date : 2025-02-11 DOI: 10.1016/j.aiia.2025.02.001
Niharika Vullaganti, Billy G. Ram, Xin Sun
Amidst the growing food demands of an increasing population, agricultural intensification frequently depends on excessive chemical and fertilizer applications. While this approach initially boosts crop yields, it undermines long-term sustainability through soil degradation and compromised food quality. Thus, prioritizing soil health while enhancing crop production is essential for sustainable food production. Site-Specific Nutrient Management (SSNM) emerges as a critical strategy to increase crop production, maintain soil health, and reduce environmental pollution. Despite its potential, the application of SSNM technologies remains limited in farmers' fields due to existing research gaps. This review critically analyzes and presents research conducted in SSNM over the past 11 years (2013–2024), identifying gaps and future research directions. A comprehensive study of 97 relevant research publications reveals several key findings: a) electrochemical sensing and spectroscopy are the two most widely explored areas in SSNM research; b) despite the numerous technologies in SSNM, each has its own limitations, so no single technology is ideal; c) the selection of models and preprocessing techniques significantly impacts nutrient prediction accuracy; d) no single sensor or sensor combination can predict all soil properties, as suitability is highly attribute-specific. This review provides researchers, technical personnel in precision agriculture, and farmers with detailed insights into SSNM research, its implementation, limitations, challenges, and future research directions.
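To make the "model and preprocessing choice" point concrete, here is a small, generic sketch of a spectroscopy-style nutrient regression: Savitzky-Golay smoothing followed by partial least squares regression, evaluated with and without the preprocessing step. The spectra and the nutrient target are synthetic; nothing here reproduces any study reviewed above.

```python
import numpy as np
from scipy.signal import savgol_filter
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_samples, n_bands = 200, 150

# Synthetic reflectance spectra and a nutrient value loosely tied to two bands.
spectra = rng.normal(size=(n_samples, n_bands)).cumsum(axis=1)
nutrient = spectra[:, 40] - 0.5 * spectra[:, 90] + rng.normal(scale=0.5, size=n_samples)

def cv_r2(X, y):
    """Cross-validated R2 of a 10-component PLS regression."""
    return cross_val_score(PLSRegression(n_components=10), X, y,
                           cv=5, scoring="r2").mean()

raw_r2 = cv_r2(spectra, nutrient)
smoothed = savgol_filter(spectra, window_length=11, polyorder=2, axis=1)
smooth_r2 = cv_r2(smoothed, nutrient)
print(f"R2 raw: {raw_r2:.3f}  R2 with Savitzky-Golay smoothing: {smooth_r2:.3f}")
```

Swapping the regressor (e.g., for a tree ensemble) or the filter parameters is exactly the kind of model/preprocessing choice whose impact the review flags.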
Citations: 0
An efficient strawberry segmentation model based on Mask R-CNN and TensorRT
IF 8.2 Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date : 2025-02-03 DOI: 10.1016/j.aiia.2025.01.008
Anthony Crespo, Claudia Moncada, Fabricio Crespo, Manuel Eugenio Morocho-Cayamcela
Currently, artificial intelligence (AI), particularly computer vision (CV), has numerous applications in agriculture. In this field, the production and consumption of strawberries have grown considerably in recent years, which makes meeting the growing demand a challenge that producers must face. However, one of the main problems in cultivating this fruit is the high cost and long duration of picking. In response, automatic harvesting has emerged as an option to address this difficulty, and fruit instance segmentation plays a crucial role in these systems. Fruit segmentation refers to the identification and separation of individual fruits within a crop, allowing a more efficient and accurate harvesting process. Although deep learning (DL) techniques have shown potential for this task, the complexity of the models makes them difficult to deploy in real-time systems. For this reason, a model that performs adequately in real time while also offering good precision is of great interest. With this motivation, this work presents an efficient Mask R-CNN model for instance segmentation of strawberry fruits. The efficiency of the model is assessed in terms of the number of frames per second (FPS) it can process, its size in megabytes (MB), and its mean average precision (mAP). Two approaches are provided: the first consists of training the model with the Detectron2 library, while the second focuses on training the model with the NVIDIA TAO Toolkit. In both cases, NVIDIA TensorRT is used to optimize the models. The results show that the best Mask R-CNN model, without optimization, achieves 83.45 mAP at 4 FPS with a size of 351 MB, which, after TensorRT optimization, reaches 83.17 mAP at 25.46 FPS with a size of only 48.2 MB. This makes it a suitable model for implementation in real-time systems.
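The Detectron2 route mentioned in the abstract usually follows the pattern sketched below: load the Mask R-CNN R50-FPN baseline from the model zoo, point it at a registered strawberry dataset, and train. The dataset paths, class count, and solver settings are placeholders, and a GPU environment with Detectron2 installed is assumed; this is not the authors' configuration.

```python
import os
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.engine import DefaultTrainer
from detectron2.data.datasets import register_coco_instances

# Hypothetical COCO-format annotations for a strawberry dataset.
register_coco_instances("strawberry_train", {}, "train/annotations.json", "train/images")

cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file(
    "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"))
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url(
    "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml")  # COCO-pretrained weights
cfg.DATASETS.TRAIN = ("strawberry_train",)
cfg.DATASETS.TEST = ()
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 1      # single "strawberry" class (assumption)
cfg.SOLVER.IMS_PER_BATCH = 4
cfg.SOLVER.BASE_LR = 2.5e-4
cfg.SOLVER.MAX_ITER = 3000
cfg.OUTPUT_DIR = "./output_strawberry"
os.makedirs(cfg.OUTPUT_DIR, exist_ok=True)

trainer = DefaultTrainer(cfg)
trainer.resume_or_load(resume=False)
trainer.train()
```

The trained weights would then be exported and optimized with TensorRT for the real-time deployment the paper targets.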
Citations: 0
Efficient one-stage detection of shrimp larvae in complex aquaculture scenarios
IF 8.2 Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date : 2025-01-27 DOI: 10.1016/j.aiia.2025.01.009
Guoxu Zhang, Tianyi Liao, Yingyi Chen, Ping Zhong, Zhencai Shen, Daoliang Li
The swift evolution of deep learning has greatly benefited the field of intensive aquaculture. Specifically, deep learning-based shrimp larvae detection has offered important technical assistance for counting shrimp larvae and recognizing abnormal behaviors. However, the transparent bodies and small sizes of shrimp larvae, combined with complex scenarios due to variations in light intensity and water turbidity, make it challenging for current detection methods to achieve high accuracy. In addition, deep learning-based object detection demands substantial computing power and storage space, which restricts its application on edge devices. This paper proposes an efficient one-stage shrimp larvae detection method, FAMDet, specifically designed for complex scenarios in intensive aquaculture. First, unlike ordinary detection methods, it exploits an efficient FasterNet backbone, constructed with partial convolution, to extract effective multi-scale shrimp larvae features. Meanwhile, we construct an adaptively bi-directional fusion neck to integrate high-level semantic information and low-level detail information of shrimp larvae in a manner that sufficiently merges features and further mitigates noise interference. Finally, a decoupled detection head equipped with MPDIoU is used for precise bounding box regression of shrimp larvae. We collected images of shrimp larvae from multiple scenarios and labeled 108,365 targets for experiments. Compared with ordinary detection methods (Faster RCNN, SSD, RetinaNet, CenterNet, FCOS, DETR, and YOLOX_s), FAMDet obtained considerable advantages in accuracy, speed, and complexity. Compared with the outstanding one-stage method YOLOv8s, it improved accuracy while reducing parameters by 57 %, FLOPs by 37 %, inference latency per image on CPU by 22 %, and storage overhead by 56 %. Furthermore, FAMDet still outperformed multiple lightweight methods (EfficientDet, RT-DETR, GhostNetV2, EfficientFormerV2, EfficientViT, and MobileNetV4). In addition, we conducted experiments on the public dataset (VOC 07 + 12) to further verify the effectiveness of FAMDet. Consequently, the proposed method can effectively alleviate the limitations faced by resource-constrained devices and achieve superior shrimp larvae detection results.
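The partial-convolution building block behind the FasterNet backbone referenced above can be sketched roughly as below: only a fraction of the input channels pass through the 3x3 convolution while the rest are carried through untouched, which is what cuts FLOPs and parameters. This is a generic illustration, not FAMDet's implementation; the channel split ratio is an assumption.

```python
import torch
import torch.nn as nn

class PartialConv(nn.Module):
    """Convolve only the first `dim // ratio` channels and pass the rest
    through unchanged -- roughly the PConv idea used by FasterNet-style backbones."""
    def __init__(self, dim, ratio=4):
        super().__init__()
        self.conv_dim = dim // ratio                 # channels that get convolved
        self.conv = nn.Conv2d(self.conv_dim, self.conv_dim,
                              kernel_size=3, padding=1, bias=False)

    def forward(self, x):
        x_conv, x_id = torch.split(x, [self.conv_dim, x.size(1) - self.conv_dim], dim=1)
        return torch.cat([self.conv(x_conv), x_id], dim=1)

x = torch.randn(2, 64, 80, 80)
print(PartialConv(64)(x).shape)  # torch.Size([2, 64, 80, 80])
```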
Citations: 0
Automatic body condition scoring system for dairy cows in group state based on improved YOLOv5 and video analysis
IF 8.2 Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date : 2025-01-27 DOI: 10.1016/j.aiia.2025.01.010
Jingwen Li, Pengbo Zeng, Shuai Yue, Zhiyang Zheng, Lifeng Qin, Huaibo Song
This study proposes an automated body condition scoring system for dairy cows based on an improved YOLOv5, used to assess the body condition distribution of a herd, which significantly impacts herd productivity and feeding management. A dataset was created by capturing images of the cows' hindquarters with an image sensor at the entrance of the milking hall. The system enhances feature extraction by introducing dual path networks and convolutional block attention modules, and improves efficiency by replacing some modules of the standard YOLOv5s with depthwise separable convolutions to reduce parameters. Furthermore, the system employs an automatic detection and segmentation algorithm to segment individual cows and acquire their body condition from video. Subsequently, the system computes the body condition distribution of cows in a group state. The experimental findings demonstrate that the proposed model outperforms the original YOLOv5 network with higher accuracy and fewer computations and parameters. The precision, recall, and mean average precision of the model are 94.3 %, 92.5 %, and 91.8 %, respectively. The algorithm achieved an overall detection rate of 94.2 % for individual cow segmentation and body condition acquisition in the video, with a body condition scoring accuracy of 92.5 % among accurately detected cows and an overall body condition scoring accuracy of 87.1 % across the 10 video tests.
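The parameter-saving substitution described above — swapping a standard convolution for a depthwise separable one — looks roughly like the sketch below; channel counts are illustrative, and the comparison simply prints the parameter counts of the two variants.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """3x3 depthwise conv (one filter per channel) followed by a 1x1 pointwise
    conv: the usual lightweight replacement for a standard 3x3 convolution."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, 3, padding=1, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1, bias=False)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

def n_params(m):
    return sum(p.numel() for p in m.parameters())

standard = nn.Conv2d(128, 256, 3, padding=1, bias=False)
separable = DepthwiseSeparableConv(128, 256)
print(n_params(standard), n_params(separable))  # 294912 vs 33920 parameters
```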
Citations: 0
Identifying key factors influencing maize stalk lodging resistance through wind tunnel simulations with machine learning algorithms
IF 8.2 Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date : 2025-01-13 DOI: 10.1016/j.aiia.2025.01.007
Guanmin Huang, Ying Zhang, Shenghao Gu, Weiliang Wen, Xianju Lu, Xinyu Guo
Climate change has intensified maize stalk lodging, severely impacting global maize production. While numerous traits influence stalk lodging resistance, their relative importance remains unclear, hindering breeding efforts. This study introduces an approach combining wind tunnel testing with machine learning algorithms to quantitatively evaluate stalk lodging resistance traits. Through extensive field experiments and a literature review, we identified and measured 74 phenotypic traits encompassing plant morphology, biomass, and anatomical characteristics in maize plants. Correlation analysis revealed a median linear correlation coefficient of 0.497 among these traits, with 15.1 % of correlations exceeding 0.8. Principal component analysis showed that the first five components explained 90 % of the total variance, indicating significant trait interactions. Through feature engineering and gradient boosting regression, we developed a high-precision wind speed-ear displacement prediction model (R2 = 0.93) and identified 29 key traits critical for stalk lodging resistance. Sensitivity analysis revealed plant height as the most influential factor (sensitivity coefficient: −3.87), followed by traits of the 7th internode, including epidermis layer thickness (0.62), pith area (−0.60), and lignin content (0.35). Our methodological framework not only provides quantitative insights into maize stalk lodging resistance mechanisms but also establishes a systematic approach for trait evaluation. The findings offer practical guidance for breeding programs focused on enhancing stalk lodging resistance and yield stability under climate change conditions, with potential applications in agronomic practice optimization and breeding strategy development.
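As a generic sketch of the modeling step described above (not the study's code or data), a gradient boosting regressor maps trait vectors to ear displacement and permutation importance then ranks the traits. The synthetic data and trait names are placeholders.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n_samples, n_traits = 300, 8
trait_names = [f"trait_{i}" for i in range(n_traits)]  # placeholders for measured traits

X = rng.normal(size=(n_samples, n_traits))
# Synthetic "ear displacement" dominated by two traits plus noise.
y = 3.0 * X[:, 0] - 1.5 * X[:, 3] + rng.normal(scale=0.3, size=n_samples)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor(n_estimators=300, learning_rate=0.05,
                                  max_depth=3, random_state=0)
model.fit(X_train, y_train)
print("R2 on held-out data:", round(model.score(X_test, y_test), 3))

# Rank traits by how much shuffling each one degrades the model.
imp = permutation_importance(model, X_test, y_test, n_repeats=20, random_state=0)
for i in np.argsort(imp.importances_mean)[::-1]:
    print(f"{trait_names[i]}: {imp.importances_mean[i]:.3f}")
```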
Citations: 0