
Latest articles from Artificial Intelligence in Agriculture

TGFN-SD: A text-guided multimodal fusion network for swine disease diagnosis
IF 8.2 Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date: 2025-03-14 DOI: 10.1016/j.aiia.2025.03.002
Gan Yang, Qifeng Li, Chunjiang Zhao, Chaoyuan Wang, Hua Yan, Rui Meng, Yu Liu, Ligen Yu
China is the world's largest producer of pigs, but traditional manual prevention, treatment, and diagnosis methods cannot satisfy the demands of the current intensive production environment. Existing computer-aided diagnosis (CAD) systems for pigs are dominated by expert systems, which cannot be widely applied because knowledge collection and maintenance are difficult, and most of them ignore the effect of multimodal information. This study proposed a swine disease diagnosis model, the Text-Guided Fusion Network-Swine Diagnosis (TGFN-SD) model, which integrates text case reports and disease images. The model fuses the differing and complementary information in the multimodal representations of diseases through a text-guided transformer module, so that text case reports can carry the semantic information of disease images for disease identification. It also alleviates the phenotypic overlap caused by similar diseases by combining supervised and self-supervised learning. Experimental results show that TGFN-SD achieved satisfactory performance on a constructed swine disease image-and-text dataset (SDT6K) covering six disease classes, with an accuracy of 94.48 % and an F1-score of 94.4 %. Accuracy and F1-score increased by 8.35 % and 7.24 % over the unimodal setting, and by 2.02 % and 1.63 % over the best baseline model under multimodal fusion. Additionally, interpretability analysis revealed that the model's focus areas are consistent with the habits and rules of veterinary clinical diagnosis of pigs, indicating the effectiveness of the proposed model and providing new ideas and perspectives for the study of swine disease CAD.
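The core idea — letting the text representation attend over image features so the fused vector carries both modalities — can be sketched with a toy cross-attention step. Everything here (function names, scalar embeddings, the additive fusion) is illustrative, not the actual TGFN-SD architecture:

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def text_guided_fusion(text_vec, image_patches):
    """Toy cross-attention: the text embedding queries image-patch
    embeddings, and the fused vector is text plus the attention-weighted
    image context. Shapes and names are assumptions for illustration."""
    # Dot-product attention scores between text query and each patch.
    scores = [sum(t * p for t, p in zip(text_vec, patch)) for patch in image_patches]
    weights = softmax(scores)
    # Attention-weighted sum of patch embeddings.
    context = [sum(w * patch[i] for w, patch in zip(weights, image_patches))
               for i in range(len(text_vec))]
    return [t + c for t, c in zip(text_vec, context)]

text = [1.0, 0.0]                      # toy case-report embedding
patches = [[1.0, 0.0], [0.0, 1.0]]     # toy image-patch embeddings
fused = text_guided_fusion(text, patches)
```

The attention weights favor the patch aligned with the text query, so the fused vector is dominated by the text-consistent image evidence — the "text guides image" behavior the abstract describes.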
Artificial Intelligence in Agriculture, Volume 15, Issue 2, Pages 266-279.
Citations: 0
A review of the application prospects of cloud-edge-end collaborative technology in freshwater aquaculture
IF 8.2 Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date: 2025-03-04 DOI: 10.1016/j.aiia.2025.02.008
Jihao Wang, Xiaochan Wang, Yinyan Shi, Haihui Yang, Bo Jia, Xiaolei Zhang, Lebin Lin
This paper reviews the application and potential of cloud-edge-end collaborative (CEEC) technology in the field of freshwater aquaculture, a rapidly developing sector driven by the growing global demand for aquatic products. The sustainable development of freshwater aquaculture has become a critical challenge due to issues such as water pollution and inefficient resource utilization in traditional farming methods. In response to these challenges, the integration of smart technologies has emerged as a promising solution to improve both efficiency and sustainability. Cloud computing and edge computing, when combined, form the backbone of CEEC technology, offering an innovative approach that can significantly enhance aquaculture practices. By leveraging the strengths of both technologies, CEEC enables efficient data processing through cloud infrastructure and real-time responsiveness via edge computing, making it a compelling solution for modern aquaculture. This review explores the key applications of CEEC in areas such as environmental monitoring, intelligent feeding systems, health management, and product traceability. The ability of CEEC technology to optimize the aquaculture environment, enhance product quality, and boost overall farming efficiency highlights its potential to become a mainstream solution in the industry. Furthermore, the paper discusses the limitations and challenges that need to be addressed in order to fully realize the potential of CEEC in freshwater aquaculture. In conclusion, this paper provides researchers and practitioners with valuable insights into the current state of CEEC technology in aquaculture, offering suggestions for future development and optimization to further enhance its contributions to the sustainable growth of freshwater aquaculture.
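The division of labor the review describes — edge nodes responding in real time while the cloud aggregates across the farm — can be illustrated with a minimal sketch. The dissolved-oxygen example, thresholds, and field names are assumptions for illustration, not taken from the review:

```python
def edge_summarize(readings, low, high):
    """Edge-side step: reduce a raw sensor stream to a compact summary
    plus immediate out-of-range alerts, so only a fraction of the data
    travels to the cloud."""
    alerts = [r for r in readings if r < low or r > high]
    summary = {
        "n": len(readings),
        "mean": sum(readings) / len(readings),
        "min": min(readings),
        "max": max(readings),
    }
    return summary, alerts

def cloud_aggregate(summaries):
    """Cloud-side step: combine per-pond summaries into a farm-level view
    (weighted by the number of readings each summary covers)."""
    total = sum(s["n"] for s in summaries)
    mean = sum(s["mean"] * s["n"] for s in summaries) / total
    return {"ponds": len(summaries), "readings": total, "mean": mean}

# Hypothetical dissolved-oxygen readings (mg/L) from two ponds.
s1, a1 = edge_summarize([6.8, 7.1, 3.9, 7.0], low=4.0, high=10.0)
s2, a2 = edge_summarize([6.5, 6.6], low=4.0, high=10.0)
farm = cloud_aggregate([s1, s2])
```

The low reading triggers an edge alert without a cloud round trip, while the cloud only ever sees the compact summaries — the latency/bandwidth trade-off that motivates CEEC.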
Artificial Intelligence in Agriculture, Volume 15, Issue 2, Pages 232-251.
Citations: 0
Prediction of sugar beet yield and quality parameters using Stacked-LSTM model with pre-harvest UAV time series data and meteorological factors
IF 8.2 Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date: 2025-02-27 DOI: 10.1016/j.aiia.2025.02.004
Qing Wang, Ke Shao, Zhibo Cai, Yingpu Che, Haochong Chen, Shunfu Xiao, Ruili Wang, Yaling Liu, Baoguo Li, Yuntao Ma
Accurate pre-harvest prediction of sugar beet yield is vital for effective agricultural management and decision-making. However, traditional methods are constrained by reliance on empirical knowledge, time-consuming processes, resource intensiveness, and spatial-temporal variability in prediction accuracy. This study presented a plot-level approach that leverages UAV technology and recurrent neural networks to provide field yield predictions within the same growing season, addressing a significant gap in previous research, which often focuses on regional-scale predictions that rely on multi-year historical datasets. End-of-season yield and quality parameters were forecasted using UAV-derived time series data and meteorological factors collected at three critical growth stages, providing a timely and practical tool for farm management. Two years of data covering 185 sugar beet varieties were used to train a stacked Long Short-Term Memory (LSTM) model, which was compared with traditional machine learning approaches. Incorporating fresh-weight estimates of aboveground and root biomass as predictive factors significantly enhanced prediction accuracy. Prediction performance was best when data from all three growth periods were used, with R2 values of 0.761 (rRMSE = 7.1 %) for sugar content, 0.531 (rRMSE = 22.5 %) for root yield, and 0.478 (rRMSE = 23.4 %) for sugar yield. Furthermore, combining data from the first two growth periods shows promise for making predictions earlier. Key predictive features identified through the Permutation Importance (PIMP) method provided insights into the main factors influencing yield. These findings underscore the potential of UAV time-series data and recurrent neural networks for accurate pre-harvest yield prediction at the field scale, supporting timely and precise agricultural decisions.
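The stacking idea — each LSTM layer consuming the hidden-state sequence emitted by the layer below, with the top layer's final state available to a regression head — can be sketched with a deliberately scalar-weight LSTM cell. The weights and toy growth-stage sequence are illustrative, not the paper's trained model:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h, c, w):
    """One LSTM cell step on scalar inputs; `w` is a dict of scalar
    parameters, a deliberately tiny stand-in for a real weight matrix."""
    i = sigmoid(w["wi"] * x + w["ui"] * h + w["bi"])    # input gate
    f = sigmoid(w["wf"] * x + w["uf"] * h + w["bf"])    # forget gate
    o = sigmoid(w["wo"] * x + w["uo"] * h + w["bo"])    # output gate
    g = math.tanh(w["wg"] * x + w["ug"] * h + w["bg"])  # candidate state
    c_new = f * c + i * g
    h_new = o * math.tanh(c_new)
    return h_new, c_new

def stacked_lstm(sequence, weights_per_layer):
    """Run a stack of LSTM layers: layer k consumes the hidden-state
    sequence of layer k-1; the final hidden state of the top layer is
    what a yield-regression head would consume."""
    seq = sequence
    for w in weights_per_layer:
        h = c = 0.0
        outputs = []
        for x in seq:
            h, c = lstm_step(x, h, c, w)
            outputs.append(h)
        seq = outputs
    return seq[-1]

# Toy per-growth-stage inputs through a two-layer stack.
w = {k: 0.5 for k in ["wi", "ui", "bi", "wf", "uf", "bf",
                      "wo", "uo", "bo", "wg", "ug", "bg"]}
y = stacked_lstm([0.2, 0.4, 0.6], [w, w])
```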
Artificial Intelligence in Agriculture, Volume 15, Issue 2, Pages 252-265.
Citations: 0
Deep learning-based classification, detection, and segmentation of tomato leaf diseases: A state-of-the-art review
IF 8.2 Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date: 2025-02-20 DOI: 10.1016/j.aiia.2025.02.006
Aritra Das, Fahad Pathan, Jamin Rahman Jim, Md Mohsin Kabir, M.F. Mridha
The early identification and treatment of tomato leaf diseases are crucial for optimizing plant productivity, efficiency, and quality. Misdiagnosis by farmers risks inadequate treatments, harming both tomato plants and agroecosystems. Precise disease diagnosis is essential, and misdiagnoses must be corrected swiftly and accurately to enable early identification. Tropical regions are ideal for tomato plants, but there are inherent concerns, such as weather-related problems. Plant diseases cause substantial financial losses in crop production, and the slow detection periods of conventional approaches are insufficient for timely detection of tomato diseases. Deep learning has emerged as a promising avenue for early disease identification. This study comprehensively analyzed techniques for classifying and detecting tomato leaf diseases and evaluated their strengths and weaknesses. The study delves into various diagnostic procedures, including image pre-processing, localization, and segmentation. In conclusion, applying deep learning algorithms holds great promise for enhancing the accuracy and efficiency of tomato leaf disease diagnosis by offering faster and more effective results.
Artificial Intelligence in Agriculture, Volume 15, Issue 2, Pages 192-220.
Citations: 0
Using UAV-based multispectral images and CGS-YOLO algorithm to distinguish maize seeding from weed
IF 8.2 Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date: 2025-02-17 DOI: 10.1016/j.aiia.2025.02.007
Boyi Tang, Jingping Zhou, Chunjiang Zhao, Yuchun Pan, Yao Lu, Chang Liu, Kai Ma, Xuguang Sun, Ruifang Zhang, Xiaohe Gu
Accurate recognition of maize seedlings at the plot scale under weed disturbance is crucial for early seedling replenishment and weed removal. Currently, UAV-based maize seedling recognition depends primarily on RGB images. The main purpose of this study is to compare the performance of multispectral images and RGB images from an unmanned aerial vehicle (UAV) for maize seedling recognition using deep learning algorithms. Additionally, we assess the effect of different levels of weed coverage on maize seedling recognition. Firstly, principal component analysis (PCA) was used in multispectral image transformation. Secondly, by introducing the CARAFE sampling operator and a small target detection layer (SLAY), we extracted the contextual information of each pixel to retain weak features in maize seedling images. Thirdly, a global attention mechanism (GAM) was employed to capture the features of maize seedlings through the dual attention mechanism of spatial and channel information. On this basis, the CGS-YOLO algorithm was constructed. Finally, we compared the performance of the improved algorithm with a series of deep learning algorithms, including YOLO v3, v5, v6 and v8. The results show that after PCA transformation, the recognition mAP of maize seedlings reaches 82.6 %, an improvement of 3.1 percentage points over RGB images. Compared with YOLOv8, YOLOv6, YOLOv5, and YOLOv3, the CGS-YOLO algorithm improves mAP by 3.8, 4.2, 4.5 and 6.6 percentage points, respectively. As weed coverage increases, the recognition of maize seedlings gradually degrades. When weed coverage exceeds 70 %, the mAP difference becomes significant, but CGS-YOLO still maintains a recognition mAP of 72 %. Therefore, in maize seedling recognition, UAV-based multispectral images perform better than RGB images. The application of the CGS-YOLO deep learning algorithm with UAV multispectral images proves beneficial for recognizing maize seedlings under weed disturbance.
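The PCA step applied to the multispectral bands can be illustrated with a generic power-iteration sketch that extracts the leading principal component of per-pixel band vectors. The toy pixels and iteration count are assumptions for illustration, not the paper's exact transformation:

```python
def first_principal_component(pixels, iters=100):
    """Power iteration on the sample covariance matrix to get the leading
    principal component of multispectral pixel vectors (a generic PCA
    sketch, not the paper's pipeline)."""
    n = len(pixels)
    d = len(pixels[0])
    mean = [sum(p[j] for p in pixels) / n for j in range(d)]
    centered = [[p[j] - mean[j] for j in range(d)] for p in pixels]
    # Sample covariance matrix of the centered band values.
    cov = [[sum(row[a] * row[b] for row in centered) / (n - 1)
            for b in range(d)] for a in range(d)]
    v = [1.0] * d
    for _ in range(iters):
        # Multiply by cov, then renormalize; converges to the top eigenvector.
        v = [sum(cov[a][b] * v[b] for b in range(d)) for a in range(d)]
        norm = sum(x * x for x in v) ** 0.5
        v = [x / norm for x in v]
    return v

# Toy 3-band pixels whose variance is concentrated in band 0.
pixels = [[0.0, 0.1, 0.0], [1.0, 0.1, 0.0], [2.0, 0.1, 0.1], [3.0, 0.2, 0.0]]
pc1 = first_principal_component(pixels)
```

Projecting each pixel onto `pc1` compresses the correlated bands into a single high-variance channel, which is the role the PCA transformation plays before detection.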
Artificial Intelligence in Agriculture, Volume 15, Issue 2, Pages 162-181.
Citations: 0
Stereo vision based broccoli recognition and attitude estimation method for field harvesting
IF 8.2 Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date: 2025-02-13 DOI: 10.1016/j.aiia.2025.02.002
Zhenni He, Fahui Yuan, Yansuo Zhou, Bingbo Cui, Yong He, Yufei Liu
At present, automatic broccoli harvesting in the field still faces several challenges. Broccoli is difficult to segment in real time against complex field backgrounds, and tilt-growing heads are hard for a robot's end-effector to pick. In this research, an improved YOLOv8n-seg model, named YOLO-Broccoli-Seg, was proposed for broccoli recognition. By adding a triplet attention module to the YOLOv8-Seg model, the feature fusion capability of the algorithm is significantly improved. The mean average precision values mAP50 (Mask), mAP95 (Mask), mAP50 (Bounding Box, Bbox) and mAP95 (Bbox) of YOLO-Broccoli-Seg are 0.973, 0.683, 0.973 and 0.748, respectively. Precision (P) improved the most, with an increment of 8.7 %. In addition, an attitude estimation method based on three-dimensional point clouds is proposed. When the tilt angle of broccoli is between −30° and 30°, the R2 between the estimated and true values is 0.934, indicating that the method represents the growth attitude of broccoli well. This research provides rich broccoli information and a technical basis for automated broccoli picking.
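The attitude-estimation idea — recovering a growth-axis tilt angle from a 3-D point cloud — can be sketched with a simplified stand-in: take the vector from the cloud's lowest point (stem base) to its centroid and measure its angle from vertical. This is an illustrative simplification, not the paper's actual method:

```python
import math

def tilt_angle_deg(points):
    """Estimate a growth-axis tilt angle from a 3-D point cloud
    of (x, y, z) tuples, with z as the vertical axis."""
    base = min(points, key=lambda p: p[2])          # lowest point = stem base
    n = len(points)
    cx = sum(p[0] for p in points) / n
    cy = sum(p[1] for p in points) / n
    cz = sum(p[2] for p in points) / n
    ax, ay, az = cx - base[0], cy - base[1], cz - base[2]
    length = math.sqrt(ax * ax + ay * ay + az * az)
    # Angle between the base-to-centroid axis and the vertical unit vector,
    # with the cosine clamped against floating-point overshoot.
    cos_theta = max(-1.0, min(1.0, az / length))
    return math.degrees(math.acos(cos_theta))

upright = [(0, 0, 0), (0, 0, 1), (0, 0, 2)]   # vertical stalk: tilt 0°
tilted = [(0, 0, 0), (1, 0, 1), (2, 0, 2)]    # 45° lean in the x-z plane
```

An end-effector planner could then use this angle to decide the approach direction for tilt-growing heads.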
Artificial Intelligence in Agriculture, Volume 15, Issue 3, Pages 526-536.
Citations: 0
End-to-end deep fusion of hyperspectral imaging and computer vision techniques for rapid detection of wheat seed quality
IF 8.2 Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date: 2025-02-13 DOI: 10.1016/j.aiia.2025.02.003
Tingting Zhang, Jing Li, Jinpeng Tong, Yihu Song, Li Wang, Renye Wu, Xuan Wei, Yuanyuan Song, Rensen Zeng
Seeds are essential to the agri-food industry. However, their quality is vulnerable to biotic and abiotic stresses during production and storage, leading to various types of deterioration. Real-time monitoring and pre-sowing screening offer substantial potential for improved storage management, field performance, and flour quality. This study investigated diverse deterioration patterns in wheat seeds by analyzing 1000 high-quality and 1098 deteriorated seeds encompassing mold, aging, mechanical damage, insect damage, and internal insect infestation. Hyperspectral imaging (HSI) and computer vision (CV) were employed to capture surface data from both the embryo (EM) and endosperm (EN). Internal seed quality was further assessed using scanning electron microscopy, dissection, and standard germination tests. Both conventional machine learning algorithms and deep convolutional neural networks (DCNN) were employed to develop discriminative models using independent datasets. Results revealed that each data source contributed valuable information for seed quality assessment (validation set accuracy: 65.1–89.2 %), with the integration of HSI and CV showing considerable promise. A comparison of early and late fusion strategies led to the development of an end-to-end deep fusion model. The decision fusion-based DCNN model, integrating HSI-EM, HSI-EN, CV-EM, and CV-EN data, achieved the highest accuracy in both training (94.3 %) and validation (93.8 %) sets. Applying this model to seed lot screening increased the proportion of high-quality seeds from 47.7 % to 93.4 %. These findings were further supported by external samples and visualizations. The proposed end-to-end decision fusion DCNN model simplifies the training process compared to traditional two-stage fusion methods. This study presents a potentially efficient alternative for rapid, individual kernel quality detection and control during wheat production.
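The decision-fusion stage — combining class probabilities from the four single-modality branches (HSI-EM, HSI-EN, CV-EM, CV-EN) into one verdict — can be sketched as a simple probability average. The branch outputs below are hypothetical numbers for illustration, and averaging is only one common decision-fusion rule:

```python
def decision_fusion(branch_probs):
    """Decision-level fusion: average the class-probability vectors from
    several single-modality branches and pick the argmax class."""
    n_classes = len(branch_probs[0])
    fused = [sum(p[i] for p in branch_probs) / len(branch_probs)
             for i in range(n_classes)]
    return fused, max(range(n_classes), key=lambda i: fused[i])

# Hypothetical branch outputs over two classes:
# index 0 = high-quality, index 1 = deteriorated.
branches = [
    [0.9, 0.1],  # HSI, embryo side
    [0.6, 0.4],  # HSI, endosperm side
    [0.7, 0.3],  # CV, embryo side
    [0.4, 0.6],  # CV, endosperm side
]
fused, label = decision_fusion(branches)
```

Because each branch sees a different surface and sensor, disagreements (like the CV endosperm branch above) are outvoted rather than discarded, which is the appeal of fusing at the decision level.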
"End-to-end deep fusion of hyperspectral imaging and computer vision techniques for rapid detection of wheat seed quality." DOI: 10.1016/j.aiia.2025.02.003. Artificial Intelligence in Agriculture, Volume 15, Issue 3, Pages 537–549. Publication Date: 2025-02-13.
Citations: 0
Addressing computation resource exhaustion associated with deep learning training of three-dimensional hyperspectral images using multiclass weed classification 利用多类杂草分类解决三维高光谱图像深度学习训练相关的计算资源耗尽问题
IF 8.2 Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date : 2025-02-11 DOI: 10.1016/j.aiia.2025.02.005
Billy G. Ram , Kirk Howatt , Joseph Mettler , Xin Sun
Addressing the computational bottleneck of training deep learning models on high-resolution, three-dimensional images, this study introduces an optimized approach combining distributed learning (parallelism), image resolution, and data augmentation. We propose analysis methodologies that help train deep learning (DL) models on proximal hyperspectral images, demonstrating superior performance in eight-class crop (canola, field pea, sugarbeet, and flax) and weed (redroot pigweed, resistant kochia, waterhemp, and ragweed) classification. State-of-the-art model architectures (ResNet-50, VGG-16, DenseNet, EfficientNet) were compared against a ResNet-50-inspired Hyper-Residual Convolutional Neural Network model. Our findings reveal that an image resolution of 100x100x54 maximizes accuracy while maintaining computational efficiency, surpassing the performance of 150x150x54 and 50x50x54 resolution images. By employing data parallelism, we overcome system memory limitations and achieve exceptional classification results, with test accuracies and F1-scores reaching 0.96 and 0.97, respectively. This research highlights the potential of residual-based networks for analyzing hyperspectral images and offers valuable insights into optimizing deep learning models in resource-constrained environments. The research presents detailed training pipelines for deep learning models that utilize large (> 4k) hyperspectral training samples, including background pixels and requiring no data preprocessing. This approach enables the training of deep learning models directly on raw hyperspectral data.
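Why resolution dominates the memory budget can be sketched with back-of-the-envelope arithmetic over the three cube sizes the study compares (illustrative only; actual GPU usage is also driven by activations, gradients, and optimizer state, not just the input tensors):

```python
# Per-sample memory cost of a float32 hyperspectral cube at the three
# resolutions compared in the study (54 spectral bands each).
BYTES_PER_FLOAT32 = 4

def cube_megabytes(height, width, bands):
    """Size of one float32 hyperspectral cube in megabytes."""
    return height * width * bands * BYTES_PER_FLOAT32 / 1e6

for side in (50, 100, 150):
    mb = cube_megabytes(side, side, 54)
    print(f"{side}x{side}x54 -> {mb:.2f} MB per sample, "
          f"{mb * 256:.0f} MB for a 256-sample batch")
```

Going from 100x100x54 to 150x150x54 multiplies the input footprint by 2.25 for every sample in the batch, which is why pairing the mid resolution with data parallelism across devices relieves the system-memory pressure the abstract describes.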
Artificial Intelligence in Agriculture, Volume 15, Issue 2, Pages 131–146. Open access.
Citations: 0
Advancing precision agriculture: A comparative analysis of YOLOv8 for multi-class weed detection in cotton cultivation 推进精准农业:YOLOv8在棉花种植多类别杂草检测中的比较分析
IF 8.2 Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date : 2025-02-11 DOI: 10.1016/j.aiia.2025.01.013
Ameer Tamoor Khan , Signe Marie Jensen , Abdul Rehman Khan
Effective weed management plays a critical role in enhancing the productivity and sustainability of cotton cultivation. The rapid emergence of herbicide-resistant weeds has underscored the need for innovative solutions to address the challenges associated with precise weed detection. This paper investigates the potential of YOLOv8, the latest advancement in the YOLO family of object detectors, for multi-class weed detection in U.S. cotton fields. Leveraging the CottonWeedDet12 dataset, which includes diverse weed species captured under varying environmental conditions, this study provides a comprehensive evaluation of YOLOv8's performance. A comparative analysis with earlier YOLO variants reveals substantial improvements in detection accuracy, as evidenced by higher mean Average Precision (mAP) scores. These findings highlight YOLOv8's superior capability to generalize across complex field scenarios, making it a promising candidate for real-time applications in precision agriculture. The enhanced architecture of YOLOv8, featuring anchor-free detection, an advanced Feature Pyramid Network (FPN), and an optimized loss function, enables accurate detection even under challenging conditions. This research emphasizes the importance of machine vision technologies in modern agriculture, particularly for minimizing herbicide reliance and promoting sustainable farming practices. The results not only validate YOLOv8's efficacy in multi-class weed detection but also pave the way for its integration into autonomous agricultural systems, thereby contributing to the broader goals of precision agriculture and ecological sustainability.
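The mean Average Precision (mAP) scores cited above rest on the intersection-over-union (IoU) test between predicted and ground-truth boxes: a predicted weed box counts as a true positive only when its IoU with a ground-truth box meets the threshold (0.5 under the common mAP@0.5 convention). A minimal sketch of that test — the generic convention, not the paper's evaluation code:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# Hypothetical prediction vs. ground truth for one weed instance.
pred = (10, 10, 50, 50)
truth = (12, 8, 48, 52)
print(iou(pred, truth))  # well above the 0.5 threshold -> true positive
```

Averaging the resulting per-class average precisions over all twelve CottonWeedDet12 categories yields the mAP figures used to compare YOLOv8 against earlier variants.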
Artificial Intelligence in Agriculture, Volume 15, Issue 2, Pages 182–191. Open access.
Citations: 0
Precision agriculture technologies for soil site-specific nutrient management: A comprehensive review 土壤定点养分管理的精准农业技术:综述
IF 8.2 Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date : 2025-02-11 DOI: 10.1016/j.aiia.2025.02.001
Niharika Vullaganti, Billy G. Ram, Xin Sun
Amidst the growing food demands of an increasing population, agricultural intensification frequently depends on excessive chemical and fertilizer applications. While this approach initially boosts crop yields, it undermines long-term sustainability through soil degradation and compromised food quality. Thus, prioritizing soil health while enhancing crop production is essential for sustainable food production. Site-Specific Nutrient Management (SSNM) emerges as a critical strategy to increase crop production, maintain soil health, and reduce environmental pollution. Despite its potential, the application of SSNM technologies remains limited in farmers' fields due to existing research gaps. This review critically analyzes and presents research conducted on SSNM in the past 11 years (2013–2024), identifying gaps and future research directions. A comprehensive study of 97 relevant research publications reveals several key findings: a) electrochemical sensing and spectroscopy are the two most widely explored areas in SSNM research; b) despite numerous technologies in SSNM, each has its own limitations, preventing any single technology from being ideal; c) the selection of models and preprocessing techniques significantly impacts nutrient prediction accuracy; d) no single sensor or sensor combination can predict all soil properties, as suitability is highly attribute-specific. This review provides researchers, technical personnel in precision agriculture, and farmers with detailed insights into SSNM research, its implementation, limitations, challenges, and future research directions.
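Finding (c) — that preprocessing choice strongly affects nutrient prediction — can be illustrated with standard normal variate (SNV) correction, a common scatter-correction step applied to soil reflectance spectra before regression. The spectra below are toy values, not data from the review:

```python
import numpy as np

def snv(spectra):
    """Row-wise SNV: centre each spectrum and scale it to unit variance,
    removing multiplicative scatter differences between samples."""
    mean = spectra.mean(axis=1, keepdims=True)
    std = spectra.std(axis=1, keepdims=True)
    return (spectra - mean) / std

# Two toy spectra with the same shape but different offset/scale,
# as might arise from particle-size scatter in soil samples.
spectra = np.array([[0.2, 0.4, 0.6, 0.8],
                    [1.0, 1.2, 1.4, 1.6]])
corrected = snv(spectra)
print(corrected)  # both rows become identical after correction
```

After SNV the two spectra collapse onto the same curve, so a downstream nutrient-regression model sees chemistry-driven variation rather than scatter artefacts — one concrete way preprocessing changes prediction accuracy.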
Artificial Intelligence in Agriculture, Volume 15, Issue 2, Pages 147–161. Open access.
Citations: 0