
Artificial Intelligence in Agriculture: Latest Publications

Enhancing maize LAI estimation accuracy using unmanned aerial vehicle remote sensing and deep learning techniques
IF 8.2 Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date: 2025-09-01 Epub Date: 2025-04-25 DOI: 10.1016/j.aiia.2025.04.008
Zhen Chen , Weiguang Zhai , Qian Cheng
The leaf area index (LAI) is crucial for precision agriculture management, and UAV remote sensing technology has been widely applied to estimate it. Although spectral features are widely used for LAI estimation, their performance is often constrained in complex agricultural scenarios by interference from soil background reflectance, variations in lighting conditions, and vegetation heterogeneity. This study therefore evaluates the potential of multi-source feature fusion and convolutional neural networks (CNN) for estimating maize LAI. To achieve this goal, field experiments on maize were conducted in Xinxiang City and Xuzhou City, China. Spectral features, texture features, and crop height were extracted from the multi-spectral remote sensing data to construct a multi-source feature dataset, and maize LAI estimation models were then developed using multiple linear regression, gradient boosting decision tree, and CNN. The results showed that: (1) Multi-source feature fusion, which integrates spectral features, texture features, and crop height, achieved the highest accuracy in LAI estimation, with R² ranging from 0.70 to 0.83, RMSE from 0.44 to 0.60, and rRMSE from 10.79 % to 14.57 %. The fusion also adapted well to different growth environments: in Xinxiang, R² ranged from 0.76 to 0.88, RMSE from 0.35 to 0.50, and rRMSE from 8.73 % to 12.40 %; in Xuzhou, R² ranged from 0.60 to 0.83, RMSE from 0.46 to 0.71, and rRMSE from 10.96 % to 17.11 %. (2) The CNN model outperformed traditional machine learning algorithms in most cases. Moreover, the combination of spectral features, texture features, and crop height with the CNN model achieved the highest accuracy overall, with R² from 0.83 to 0.88, RMSE from 0.35 to 0.46, and rRMSE from 8.73 % to 10.96 %.
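The three accuracy figures quoted throughout the abstract (R², RMSE, rRMSE) can be reproduced from paired observed and estimated LAI values. A minimal plain-Python sketch (the function name and the rRMSE normalization by the observed mean are assumptions, not details from the paper):

```python
import math

def lai_metrics(observed, estimated):
    """Return (R^2, RMSE, rRMSE %) for paired LAI values."""
    n = len(observed)
    mean_obs = sum(observed) / n
    ss_res = sum((o - e) ** 2 for o, e in zip(observed, estimated))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    r2 = 1.0 - ss_res / ss_tot        # coefficient of determination
    rmse = math.sqrt(ss_res / n)      # root mean square error
    rrmse = 100.0 * rmse / mean_obs   # RMSE relative to the observed mean
    return r2, rmse, rrmse
```

Normalizing RMSE by the mean observation is one common convention for rRMSE; the abstract does not spell out which normalization the authors used.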
Artificial Intelligence in Agriculture, vol. 15, no. 3, pp. 482–495.
Citations: 0
End-to-end deep fusion of hyperspectral imaging and computer vision techniques for rapid detection of wheat seed quality
IF 8.2 Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date: 2025-09-01 Epub Date: 2025-02-13 DOI: 10.1016/j.aiia.2025.02.003
Tingting Zhang , Jing Li , Jinpeng Tong , Yihu Song , Li Wang , Renye Wu , Xuan Wei , Yuanyuan Song , Rensen Zeng
Seeds are essential to the agri-food industry. However, their quality is vulnerable to biotic and abiotic stresses during production and storage, leading to various types of deterioration. Real-time monitoring and pre-sowing screening offer substantial potential for improved storage management, field performance, and flour quality. This study investigated diverse deterioration patterns in wheat seeds by analyzing 1000 high-quality and 1098 deteriorated seeds encompassing mold, aging, mechanical damage, insect damage, and internal insect infestation. Hyperspectral imaging (HSI) and computer vision (CV) were employed to capture surface data from both the embryo (EM) and endosperm (EN). Internal seed quality was further assessed using scanning electron microscopy, dissection, and standard germination tests. Both conventional machine learning algorithms and deep convolutional neural networks (DCNN) were employed to develop discriminative models using independent datasets. Results revealed that each data source contributed valuable information for seed quality assessment (validation set accuracy: 65.1–89.2 %), with the integration of HSI and CV showing considerable promise. A comparison of early and late fusion strategies led to the development of an end-to-end deep fusion model. The decision fusion-based DCNN model, integrating HSI-EM, HSI-EN, CV-EM, and CV-EN data, achieved the highest accuracy in both training (94.3 %) and validation (93.8 %) sets. Applying this model to seed lot screening increased the proportion of high-quality seeds from 47.7 % to 93.4 %. These findings were further supported by external samples and visualizations. The proposed end-to-end decision fusion DCNN model simplifies the training process compared to traditional two-stage fusion methods. This study presents a potentially efficient alternative for rapid, individual kernel quality detection and control during wheat production.
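The decision-fusion step described above combines the outputs of the four single-source models (HSI-EM, HSI-EN, CV-EM, CV-EN) into one verdict per seed. A hedged sketch of the simplest such rule, class-probability averaging (the abstract does not specify the exact fusion operator the authors used):

```python
def fuse_decisions(branch_probs):
    """Average per-class probabilities across model branches, then argmax.

    branch_probs: list of probability vectors, one per branch
    (e.g. HSI-EM, HSI-EN, CV-EM, CV-EN), all over the same classes.
    """
    n_branches = len(branch_probs)
    n_classes = len(branch_probs[0])
    fused = [sum(p[c] for p in branch_probs) / n_branches
             for c in range(n_classes)]
    best = max(range(n_classes), key=lambda c: fused[c])
    return best, fused

# illustrative two-class case: 0 = high-quality, 1 = deteriorated
probs = [[0.6, 0.4], [0.2, 0.8], [0.7, 0.3], [0.9, 0.1]]
label, fused = fuse_decisions(probs)  # label 0, fused [0.6, 0.4]
```

Averaging is robust when one branch is uncertain; weighted or learned fusion rules are natural refinements.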
Artificial Intelligence in Agriculture, vol. 15, no. 3, pp. 537–549.
Citations: 0
Mapping of soil sampling sites using terrain and hydrological attributes
IF 8.2 Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date: 2025-09-01 Epub Date: 2025-04-25 DOI: 10.1016/j.aiia.2025.04.007
Tan-Hanh Pham , Kristopher Osterloh , Kim-Doang Nguyen
Efficient soil sampling is essential for effective soil management and research on soil health. Traditional site selection methods are labor-intensive and fail to capture soil variability comprehensively. This study introduces a deep learning-based tool that automates soil sampling site selection using spectral images. The proposed framework consists of two key components: an extractor and a predictor. The extractor, based on a convolutional neural network (CNN), derives features from spectral images, while the predictor employs self-attention mechanisms to assess feature importance and generate prediction maps. The model is designed to process multiple spectral images and address the class imbalance in soil segmentation.
The model was trained on a soil dataset from 20 fields in eastern South Dakota, collected via drone-mounted LiDAR with high-precision GPS. Evaluation on a test set achieved a mean intersection over union (mIoU) of 69.46 % and a mean Dice coefficient (mDc) of 80.35 %, demonstrating strong segmentation performance. The results highlight the model's effectiveness in automating soil sampling site selection, providing an advanced tool for producers and soil scientists. Compared to existing state-of-the-art methods, the proposed approach improves accuracy and efficiency, optimizing soil sampling processes and enhancing soil research.
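The two segmentation scores reported (mIoU and mean Dice coefficient) are standard metrics; a small self-contained sketch of how they are computed from flattened label maps (variable names are illustrative):

```python
def miou_and_dice(y_true, y_pred, n_classes):
    """Mean IoU and mean Dice over classes present in either mask."""
    ious, dices = [], []
    for c in range(n_classes):
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        if tp + fp + fn == 0:
            continue  # class absent from both masks: skip it
        ious.append(tp / (tp + fp + fn))          # intersection over union
        dices.append(2 * tp / (2 * tp + fp + fn)) # Dice coefficient
    return sum(ious) / len(ious), sum(dices) / len(dices)
```

Dice weights the intersection twice, so it is always at least as large as IoU for the same prediction, consistent with the paper's 80.35 % mDc against 69.46 % mIoU.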
Artificial Intelligence in Agriculture, vol. 15, no. 3, pp. 470–481.
Citations: 0
AI-driven aquaculture: A review of technological innovations and their sustainable impacts
IF 8.2 Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date: 2025-09-01 Epub Date: 2025-02-06 DOI: 10.1016/j.aiia.2025.01.012
Hang Yang , Qi Feng , Shibin Xia , Zhenbin Wu , Yi Zhang
The integration of artificial intelligence (AI) in aquaculture has been identified as a transformative force, enhancing various operational aspects from water quality management to genetic optimization. This review provides a comprehensive synthesis of recent advancements in AI applications within the aquaculture sector, underscoring the significant enhancements in production efficiency and environmental sustainability. Key AI-driven improvements, such as predictive analytics for disease management and optimized feeding protocols, are highlighted, demonstrating their contributions to reducing waste and improving biomass outputs. However, challenges remain in terms of data quality, system integration, and the socio-economic impacts of technological adoption across diverse aquacultural environments. This review also addresses the gaps in current research, particularly the lack of robust, scalable AI models and frameworks that can be universally applied. Future directions are discussed, emphasizing the need for interdisciplinary research and development to fully leverage AI potential in aquaculture. This study not only maps the current landscape of AI applications but also serves as a call for continued innovation and strategic collaborations to overcome existing barriers and realize the full benefits of AI in aquaculture.
Artificial Intelligence in Agriculture, vol. 15, no. 3, pp. 508–525.
Citations: 0
Addressing computation resource exhaustion associated with deep learning training of three-dimensional hyperspectral images using multiclass weed classification
IF 8.2 Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date: 2025-06-01 Epub Date: 2025-02-11 DOI: 10.1016/j.aiia.2025.02.005
Billy G. Ram , Kirk Howatt , Joseph Mettler , Xin Sun
Addressing the computational bottleneck of training deep learning models on high-resolution, three-dimensional images, this study introduces an optimized approach combining distributed learning (parallelism), image resolution, and data augmentation. We propose analysis methodologies that help train deep learning (DL) models on proximal hyperspectral images, demonstrating superior performance in eight-class crop (canola, field pea, sugarbeet and flax) and weed (redroot pigweed, resistant kochia, waterhemp and ragweed) classification. State-of-the-art architectures (ResNet-50, VGG-16, DenseNet, EfficientNet) were compared against a ResNet-50-inspired Hyper-Residual Convolutional Neural Network model. Our findings reveal that an image resolution of 100x100x54 maximizes accuracy while maintaining computational efficiency, surpassing the performance of 150x150x54 and 50x50x54 resolution images. By employing data parallelism, we overcome system memory limitations and achieve exceptional classification results, with test accuracies and F1-scores reaching 0.96 and 0.97, respectively. This research highlights the potential of residual-based networks for analyzing hyperspectral images and offers valuable insights into optimizing deep learning models in resource-constrained environments. It presents detailed training pipelines for deep learning models that utilize large (> 4k) sets of hyperspectral training samples, including background, without any data preprocessing. This approach enables the training of deep learning models directly on raw hyperspectral data.
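The memory argument behind the 100x100x54 choice is simple arithmetic: a dense hyperspectral cube grows quadratically with spatial resolution at a fixed band count. A quick sketch (float32 storage is an assumption, not a figure from the paper):

```python
def cube_megabytes(height, width, bands, bytes_per_value=4):
    """Size of one dense float32 hyperspectral cube, in MB (10**6 bytes)."""
    return height * width * bands * bytes_per_value / 1e6

small = cube_megabytes(50, 50, 54)     # 0.54 MB per sample
chosen = cube_megabytes(100, 100, 54)  # 2.16 MB per sample
large = cube_megabytes(150, 150, 54)   # 4.86 MB, 2.25x the chosen size
```

At over 4k training samples, the 2.25x jump from 100x100 to 150x150 is the kind of multiplier that exhausts accelerator memory, which is what the data-parallelism strategy works around.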
Artificial Intelligence in Agriculture, vol. 15, no. 2, pp. 131–146.
Citations: 0
Digitalizing greenhouse trials: An automated approach for efficient and objective assessment of plant damage using deep learning
IF 8.2 Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date: 2025-06-01 Epub Date: 2025-03-17 DOI: 10.1016/j.aiia.2025.03.001
Laura Gómez-Zamanillo , Arantza Bereciartúa-Pérez , Artzai Picón , Liliana Parra , Marian Oldenbuerger , Ramón Navarra-Mestre , Christian Klukas , Till Eggers , Jone Echazarra
Image-based and, more recently, deep learning-based systems have provided good results in several applications. Greenhouse trials are a key part of the process of developing and testing new herbicides, analyzing the response of each species to different products and doses in a controlled way. In all trials, damage to the plants is assessed daily by visual evaluation by experts, which is a time-consuming process that lacks repeatability. Greenhouse trials therefore require new digital tools to reduce this time-consuming process and to give experts more objective and repeatable methods for establishing the damage in the plants.
To this end, a novel method is proposed, composed of an initial segmentation of the plant species followed by a multibranch convolutional neural network that estimates the damage level. In this way, we overcome the need for costly pixelwise manual segmentation of damage symptoms and instead make use of the global damage estimation values provided by the experts.
The algorithm has been deployed under real greenhouse trial conditions in a pilot study at BASF in Germany and tested on four species (GLXMA, TRZAW, ECHCG, AMARE). The results show mean absolute error (MAE) values for the estimated PDCU ranging from 5.20 for AMARE to 8.07 for ECHCG, with correlation values (R²) higher than 0.85 in all situations, and up to 0.92 for AMARE. These results surpass the inter-rater variability of human experts, demonstrating that the proposed automated method is appropriate for automatically assessing greenhouse damage trials.
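The two per-species figures reported, MAE and R², can be computed from paired expert PDCU scores and model estimates. A minimal sketch, using squared Pearson correlation for R² (one common reading of "correlation values (R2)"; the abstract does not state the exact formula used):

```python
def damage_metrics(expert, predicted):
    """Return (MAE, squared Pearson correlation) for paired damage scores."""
    n = len(expert)
    mae = sum(abs(e - p) for e, p in zip(expert, predicted)) / n
    mean_e = sum(expert) / n
    mean_p = sum(predicted) / n
    cov = sum((e - mean_e) * (p - mean_p) for e, p in zip(expert, predicted))
    var_e = sum((e - mean_e) ** 2 for e in expert)
    var_p = sum((p - mean_p) ** 2 for p in predicted)
    r2 = cov * cov / (var_e * var_p)  # squared Pearson correlation
    return mae, r2
```

Since PDCU is a percentage-style damage score, an MAE of 5 to 8 points alongside R² above 0.85 is the regime the paper reports.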
Artificial Intelligence in Agriculture, vol. 15, no. 2, pp. 280–295.
Citations: 0
Using UAV-based multispectral images and CGS-YOLO algorithm to distinguish maize seeding from weed
IF 8.2 Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date: 2025-06-01 Epub Date: 2025-02-17 DOI: 10.1016/j.aiia.2025.02.007
Boyi Tang , Jingping Zhou , Chunjiang Zhao , Yuchun Pan , Yao Lu , Chang Liu , Kai Ma , Xuguang Sun , Ruifang Zhang , Xiaohe Gu
Accurate recognition of maize seedlings at the plot scale under weed disturbance is crucial for early seedling replenishment and weed removal. Currently, UAV-based maize seedling recognition relies primarily on RGB images. The main purpose of this study is to compare the performance of multispectral and RGB images from an unmanned aerial vehicle (UAV) for maize seedling recognition using deep learning algorithms. Additionally, we assess the effect of different levels of weed coverage on the recognition of maize seedlings. Firstly, principal component analysis (PCA) was used to transform the multispectral images. Secondly, by introducing the CARAFE upsampling operator and a small-target detection layer (SLAY), we extracted the contextual information of each pixel to retain weak features in the maize seedling images. Thirdly, a global attention mechanism (GAM) was employed to capture the features of maize seedlings through the dual attention of spatial and channel information. The resulting algorithm is CGS-YOLO. Finally, we compared the improved algorithm with a series of deep learning algorithms, including YOLO v3, v5, v6 and v8. The results show that after PCA transformation, the recognition mAP for maize seedlings reaches 82.6 %, a 3.1-percentage-point improvement over RGB images. Compared with YOLOv8, YOLOv6, YOLOv5, and YOLOv3, CGS-YOLO improves mAP by 3.8, 4.2, 4.5 and 6.6 percentage points, respectively. As weed coverage increases, recognition of maize seedlings gradually degrades; when weed coverage exceeds 70 %, the mAP difference becomes significant, but CGS-YOLO still maintains a recognition mAP of 72 %. Therefore, in maize seedling recognition, UAV-based multispectral images perform better than RGB images.
The application of CGS-YOLO deep learning algorithm with UAV multi-spectral images proves beneficial in the recognition of maize seedlings under weed disturbance.
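The PCA step described above projects each pixel's multispectral band vector onto the directions of maximal variance before detection. A minimal pure-Python sketch of that projection for a handful of pixels (the band values, helper names and power-iteration approach are illustrative assumptions, not the paper's implementation, which operates on full UAV images):

```python
import math

def mean_center(pixels):
    """Subtract the per-band mean from each pixel's band vector."""
    n, bands = len(pixels), len(pixels[0])
    means = [sum(p[b] for p in pixels) / n for b in range(bands)]
    return [[p[b] - means[b] for b in range(bands)] for p in pixels]

def covariance(centered):
    """Covariance matrix of mean-centered band vectors."""
    n, bands = len(centered), len(centered[0])
    return [[sum(p[i] * p[j] for p in centered) / (n - 1)
             for j in range(bands)] for i in range(bands)]

def first_component(cov, iters=200):
    """Leading eigenvector of the covariance matrix via power iteration."""
    v = [1.0] * len(cov)
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(len(v))) for i in range(len(v))]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    return v

# Illustrative 5-band reflectance values for six pixels (not real UAV data):
# high NIR (last two bands) mimics vegetation, low NIR mimics soil.
pixels = [[0.05, 0.08, 0.10, 0.42, 0.38],
          [0.06, 0.09, 0.12, 0.45, 0.40],
          [0.10, 0.14, 0.18, 0.20, 0.22],
          [0.04, 0.07, 0.09, 0.48, 0.41],
          [0.11, 0.15, 0.19, 0.18, 0.21],
          [0.05, 0.08, 0.11, 0.44, 0.39]]
centered = mean_center(pixels)
pc1 = first_component(covariance(centered))
scores = [sum(c * w for c, w in zip(p, pc1)) for p in centered]
# Vegetation-like and soil-like pixels land on opposite sides of zero on PC1
print([round(s, 3) for s in scores])
```

The sign of the component is arbitrary, but spectrally similar pixels receive similar PC1 scores, which is what makes the transformed channels useful inputs for the detector.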
Artificial Intelligence in Agriculture, Volume 15, Issue 2, Pages 162-181.
Deep learning-based classification, detection, and segmentation of tomato leaf diseases: A state-of-the-art review 基于深度学习的番茄叶病分类、检测和分割:最新进展综述
IF 8.2 Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date : 2025-06-01 Epub Date: 2025-02-20 DOI: 10.1016/j.aiia.2025.02.006
Aritra Das , Fahad Pathan , Jamin Rahman Jim , Md Mohsin Kabir , M.F. Mridha
The early identification and treatment of tomato leaf diseases are crucial for optimizing plant productivity, efficiency and quality. Misdiagnosis by farmers risks inadequate treatment, harming both tomato plants and agroecosystems. Precise disease diagnosis is therefore essential, and misdiagnoses must be corrected swiftly and accurately to enable early identification. Tropical regions are ideal for tomato cultivation, but they bring inherent concerns such as weather-related problems. Plant diseases cause substantial financial losses in crop production, and the slow detection cycles of conventional approaches are insufficient for the timely detection of tomato diseases. Deep learning has emerged as a promising avenue for early disease identification. This study comprehensively analyzed techniques for classifying and detecting tomato leaf diseases and evaluated their strengths and weaknesses. It delves into the main diagnostic procedures, including image pre-processing, localization and segmentation. In conclusion, applying deep learning algorithms holds great promise for enhancing the accuracy and efficiency of tomato leaf disease diagnosis by offering faster and more effective results.
Artificial Intelligence in Agriculture, Volume 15, Issue 2, Pages 192-220.
Prediction of sugar beet yield and quality parameters using Stacked-LSTM model with pre-harvest UAV time series data and meteorological factors 基于采收前无人机时间序列数据和气象因子的叠置- lstm模型预测甜菜产量和品质参数
IF 8.2 Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date : 2025-06-01 Epub Date: 2025-02-27 DOI: 10.1016/j.aiia.2025.02.004
Qing Wang , Ke Shao , Zhibo Cai , Yingpu Che , Haochong Chen , Shunfu Xiao , Ruili Wang , Yaling Liu , Baoguo Li , Yuntao Ma
Accurate pre-harvest prediction of sugar beet yield is vital for effective agricultural management and decision-making. However, traditional methods are constrained by reliance on empirical knowledge, time-consuming processes, resource intensiveness, and spatial-temporal variability in prediction accuracy. This study presented a plot-level approach that leverages UAV technology and recurrent neural networks to provide field yield predictions within the same growing season, addressing a significant gap in previous research, which often focused on regional-scale predictions relying on multi-year historical datasets. End-of-season yield and quality parameters were forecasted using UAV-derived time series data and meteorological factors collected at three critical growth stages, providing a timely and practical tool for farm management. Two years of data covering 185 sugar beet varieties were used to train a stacked Long Short-Term Memory (LSTM) model, which was compared with traditional machine learning approaches. Incorporating fresh-weight estimates of aboveground and root biomass as predictive factors significantly enhanced prediction accuracy. Optimal prediction performance was observed when utilizing data from all three growth periods, with R2 values of 0.761 (rRMSE = 7.1 %) for sugar content, 0.531 (rRMSE = 22.5 %) for root yield, and 0.478 (rRMSE = 23.4 %) for sugar yield. Furthermore, combining data from the first two growth periods showed promising results for making predictions earlier. Key predictive features identified through the Permutation Importance (PIMP) method provided insights into the main factors influencing yield. These findings underscore the potential of using UAV time-series data and recurrent neural networks for accurate pre-harvest yield prediction at the field scale, supporting timely and precise agricultural decisions.
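The rRMSE values quoted above normalize the RMSE by the mean of the observed values, so errors can be compared across targets with different units. A minimal sketch of how the two metrics relate (function names and sample values are illustrative, not the study's data):

```python
import math

def rmse(observed, predicted):
    """Root mean squared error between paired observations and predictions."""
    n = len(observed)
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(observed, predicted)) / n)

def rrmse(observed, predicted):
    """Relative RMSE: RMSE expressed as a percentage of the observed mean."""
    mean_obs = sum(observed) / len(observed)
    return 100.0 * rmse(observed, predicted) / mean_obs

# Illustrative sugar-content observations vs. model predictions (% sugar)
obs = [16.0, 17.5, 18.2, 15.8, 17.1]
pred = [15.5, 17.9, 17.6, 16.4, 16.8]
print(f"RMSE  = {rmse(obs, pred):.3f}")
print(f"rRMSE = {rrmse(obs, pred):.2f} %")
```

Because rRMSE is scale-free, the study can report a 7.1 % error for sugar content alongside a 22.5 % error for root yield even though the raw RMSE units differ.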
Artificial Intelligence in Agriculture, Volume 15, Issue 2, Pages 252-265.
Research on an orchard row centreline multipoint autonomous navigation method based on LiDAR 基于激光雷达的果园排中线多点自主导航方法研究
IF 8.2 Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date : 2025-06-01 Epub Date: 2024-12-19 DOI: 10.1016/j.aiia.2024.12.003
Chen Zhenyu , Dou Hanjie , Gao Yuanyuan , Zhai Changyuan , Wang Xiu , Zou Wei
Orchard intelligent equipment must perform autonomous navigation tasks along fruit tree row centrelines and headlands according to established operational requirements. Because the tree canopy obstructs satellite signals, the accuracy and stability of GNSS-based autonomous navigation systems are limited. This paper presents a multipoint autonomous navigation method with orchard row centreline navigation capability that integrates light detection and ranging (LiDAR) and inertial measurement unit (IMU) data. The method first constructs a three-dimensional (3D) point cloud map of the orchard via the LIO_SAM algorithm, and a 3D point cloud-to-two-dimensional (2D) grid map algorithm is designed. This algorithm retains trunk position information from the point cloud based on tree trunk features to obtain a 2D grid map for orchard navigation, and navigation point coordinates are calculated from the trunk positions. A multipoint navigation method was designed in which the system automatically determines whether the previous navigation point has been reached and sequentially issues navigation point coordinates, enabling autonomous navigation along row centrelines and headlands during orchard operations. Row centreline navigation tests and headland turning tests were conducted, and the performance of 16-line and 32-line LiDAR with this method was compared. The results reveal that the multipoint navigation method achieves movement along orchard row centrelines and autonomous turning. The 32-line LiDAR achieved an average absolute lateral deviation of 1.83 cm, a standard deviation of 1.60 cm, and a maximum deviation of 10.30 cm at a 3-m navigation point interval, indicating greater precision. However, its turning time was longer, increasing by 8.11 % and 6.13 % for the two turning methods compared with the 16-line LiDAR. These results support further research on autonomous navigation technology for intelligent orchard equipment.
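The 3D-to-2D step described above keeps only trunk-height returns and collapses them onto a planar occupancy grid. A minimal sketch of that projection (the height band, cell size and point values are illustrative assumptions, not the paper's parameters):

```python
def points_to_grid(points, cell_size=0.2, trunk_band=(0.2, 1.0)):
    """Project 3D points whose height z falls inside the trunk band onto a
    2D occupancy grid, returned as a set of (col, row) cell indices."""
    lo, hi = trunk_band
    occupied = set()
    for x, y, z in points:
        if lo <= z <= hi:                  # keep only trunk-height returns
            col = int(x // cell_size)
            row = int(y // cell_size)
            occupied.add((col, row))
    return occupied

# Illustrative returns: two trunks plus canopy points above the trunk band
cloud = [(1.05, 2.05, 0.5), (1.10, 2.10, 0.8),   # trunk A
         (4.05, 2.05, 0.4), (4.10, 2.10, 0.9),   # trunk B
         (2.50, 2.00, 2.5), (3.00, 2.10, 3.0)]   # canopy (filtered out)
grid = points_to_grid(cloud)
print(sorted(grid))  # → [(5, 10), (20, 10)] — one occupied cell per trunk
```

Clustering the occupied cells then yields one trunk position per tree, from which the row centreline and the navigation point coordinates can be derived.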
Artificial Intelligence in Agriculture, Volume 15, Issue 2, Pages 221-231.