
Latest Articles in Artificial Intelligence in Agriculture

Digitalizing greenhouse trials: An automated approach for efficient and objective assessment of plant damage using deep learning
IF 8.2 Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date : 2025-03-17 DOI: 10.1016/j.aiia.2025.03.001
Laura Gómez-Zamanillo , Arantza Bereciartúa-Pérez , Artzai Picón , Liliana Parra , Marian Oldenbuerger , Ramón Navarra-Mestre , Christian Klukas , Till Eggers , Jone Echazarra
Image-based and, more recently, deep learning-based systems have delivered good results in several applications. Greenhouse trials are a key part of developing and testing new herbicides, allowing the response of each species to different products and doses to be analyzed under controlled conditions. In all trials, plant damage is assessed daily through visual evaluation by experts, a time-consuming process that lacks repeatability. Greenhouse trials therefore require new digital tools that reduce assessment time and give experts more objective, repeatable methods for quantifying plant damage.
To this end, a novel method is proposed, composed of an initial segmentation of the plant species followed by a multibranch convolutional neural network that estimates the damage level. In this way, we avoid the costly and impractical pixelwise manual annotation of damage symptoms and instead make use of the global damage scores provided by the experts.
The algorithm was deployed under real greenhouse trial conditions in a pilot study at BASF in Germany and tested on four species (GLXMA, TRZAW, ECHCG, AMARE). The results show mean absolute error (MAE) values ranging from 5.20 for AMARE to 8.07 for ECHCG for the estimation of the PDCU value, with correlation values (R²) higher than 0.85 in all cases, and up to 0.92 for AMARE. These results surpass the inter-rater variability of human experts, demonstrating that the proposed automated method is suitable for automatically assessing greenhouse damage trials.
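The MAE and R² figures reported above can be computed from paired expert and model scores; a minimal sketch (the score arrays below are invented illustrative values, not data from the study):

```python
import numpy as np

def mae(y_true, y_pred):
    """Mean absolute error between expert damage scores and model estimates."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.mean(np.abs(y_true - y_pred)))

def r2(y_true, y_pred):
    """Coefficient of determination (R^2)."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return float(1.0 - ss_res / ss_tot)

# Hypothetical expert scores (0-100 damage scale) vs. model estimates
expert = [10, 35, 60, 80, 95]
model = [12, 30, 58, 85, 90]
print(mae(expert, model))  # → 3.8
print(r2(expert, model))
```

A low MAE with high R² on held-out plants is what the paper compares against the inter-rater variability of human experts.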
Citations: 0
TGFN-SD: A text-guided multimodal fusion network for swine disease diagnosis
IF 8.2 Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date : 2025-03-14 DOI: 10.1016/j.aiia.2025.03.002
Gan Yang , Qifeng Li , Chunjiang Zhao , Chaoyuan Wang , Hua Yan , Rui Meng , Yu Liu , Ligen Yu
China is the world's largest producer of pigs, but traditional manual prevention, treatment, and diagnosis methods cannot meet the demands of today's intensive production environments. Existing computer-aided diagnosis (CAD) systems for pigs are dominated by expert systems, which are difficult to apply widely because collecting and maintaining knowledge is laborious, and most of them ignore the value of multimodal information. This study proposes a swine disease diagnosis model, the Text-Guided Fusion Network-Swine Diagnosis (TGFN-SD) model, which integrates text case reports and disease images. The model fuses the differences and complementary information in the multimodal representation of diseases through a text-guided transformer module, so that text case reports carry the semantic information of disease images for disease identification. Moreover, it alleviates the phenotypic overlap among similar diseases by combining supervised and self-supervised learning. Experimental results show that TGFN-SD achieves satisfactory performance on a constructed swine disease image and text dataset (SDT6K) covering six disease classes, with an accuracy of 94.48 % and an F1-score of 94.4 %. Accuracy and F1-score increased by 8.35 % and 7.24 % over the unimodal setting and by 2.02 % and 1.63 % over the best baseline model under multimodal fusion. Additionally, interpretability analysis revealed that the model's focus areas are consistent with the habits and rules of veterinary clinical diagnosis of pigs, indicating the effectiveness of the proposed model and providing new ideas and perspectives for the study of swine disease CAD.
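The paper's fusion is done by a text-guided transformer; as a much simpler stand-in, multimodal fusion can be illustrated by concatenating image and text embeddings before a linear classification head (all dimensions, weights, and embeddings below are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

def fuse(img_feat, txt_feat):
    """Concatenation fusion of modality embeddings (a simplified stand-in
    for the paper's text-guided transformer module)."""
    return np.concatenate([img_feat, txt_feat])

def classify(fused, W, b):
    """Linear scoring head over the fused vector; argmax picks the class."""
    logits = W @ fused + b
    return int(np.argmax(logits))

img = rng.normal(size=128)      # hypothetical image embedding
txt = rng.normal(size=64)       # hypothetical case-report embedding
W = rng.normal(size=(6, 192))   # six disease classes, as in SDT6K
b = np.zeros(6)
pred = classify(fuse(img, txt), W, b)
print(pred)
```

The gains the paper reports over the unimodal setting come from richer fusion than this concatenation, but the input/output shape of the problem is the same.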
Citations: 0
A review of the application prospects of cloud-edge-end collaborative technology in freshwater aquaculture
IF 8.2 Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date : 2025-03-04 DOI: 10.1016/j.aiia.2025.02.008
Jihao Wang , Xiaochan Wang , Yinyan Shi , Haihui Yang , Bo Jia , Xiaolei Zhang , Lebin Lin
This paper reviews the application and potential of cloud-edge-end collaborative (CEEC) technology in the field of freshwater aquaculture, a rapidly developing sector driven by the growing global demand for aquatic products. The sustainable development of freshwater aquaculture has become a critical challenge due to issues such as water pollution and inefficient resource utilization in traditional farming methods. In response to these challenges, the integration of smart technologies has emerged as a promising solution to improve both efficiency and sustainability. Cloud computing and edge computing, when combined, form the backbone of CEEC technology, offering an innovative approach that can significantly enhance aquaculture practices. By leveraging the strengths of both technologies, CEEC enables efficient data processing through cloud infrastructure and real-time responsiveness via edge computing, making it a compelling solution for modern aquaculture. This review explores the key applications of CEEC in areas such as environmental monitoring, intelligent feeding systems, health management, and product traceability. The ability of CEEC technology to optimize the aquaculture environment, enhance product quality, and boost overall farming efficiency highlights its potential to become a mainstream solution in the industry. Furthermore, the paper discusses the limitations and challenges that need to be addressed in order to fully realize the potential of CEEC in freshwater aquaculture. In conclusion, this paper provides researchers and practitioners with valuable insights into the current state of CEEC technology in aquaculture, offering suggestions for future development and optimization to further enhance its contributions to the sustainable growth of freshwater aquaculture.
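The cloud-edge-end division of labor the review describes can be illustrated with a toy monitoring pipeline: end devices sample, the edge reacts locally and forwards only anomalies, and the cloud aggregates. All layer functions, thresholds, and sensor readings below are invented for illustration:

```python
def end_device_read():
    """'End' layer: raw sensor samples (e.g., dissolved oxygen in mg/L)."""
    return [7.1, 6.9, 4.2, 7.0, 3.8]

def edge_filter(samples, low=5.0):
    """'Edge' layer: respond in real time, forward only anomalies upstream,
    reducing the volume of data sent to the cloud."""
    return [s for s in samples if s < low]

def cloud_analyze(anomalies):
    """'Cloud' layer: aggregate analytics over the forwarded anomalies."""
    if not anomalies:
        return "normal"
    return f"alert: {len(anomalies)} low-oxygen readings, min={min(anomalies)}"

print(cloud_analyze(edge_filter(end_device_read())))
```

The design choice mirrored here is the one the review emphasizes: latency-sensitive decisions stay at the edge, while compute-heavy aggregation runs in the cloud.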
Citations: 0
Prediction of sugar beet yield and quality parameters using Stacked-LSTM model with pre-harvest UAV time series data and meteorological factors
IF 8.2 Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date : 2025-02-27 DOI: 10.1016/j.aiia.2025.02.004
Qing Wang , Ke Shao , Zhibo Cai , Yingpu Che , Haochong Chen , Shunfu Xiao , Ruili Wang , Yaling Liu , Baoguo Li , Yuntao Ma
Accurate pre-harvest prediction of sugar beet yield is vital for effective agricultural management and decision-making. However, traditional methods are constrained by reliance on empirical knowledge, time-consuming processes, resource intensiveness, and spatial-temporal variability in prediction accuracy. This study presents a plot-level approach that leverages UAV technology and recurrent neural networks to provide field yield predictions within the same growing season, addressing a significant gap in previous research, which has often focused on regional-scale predictions relying on multi-year historical datasets. End-of-season yield and quality parameters were forecasted using UAV-derived time series data and meteorological factors collected at three critical growth stages, providing a timely and practical tool for farm management. Two years of data covering 185 sugar beet varieties were used to train a stacked Long Short-Term Memory (LSTM) model, which was compared with traditional machine learning approaches. Incorporating fresh-weight estimates of aboveground and root biomass as predictive factors significantly enhanced prediction accuracy. Prediction performance was best when data from all three growth periods were used, with R² values of 0.761 (rRMSE = 7.1 %) for sugar content, 0.531 (rRMSE = 22.5 %) for root yield, and 0.478 (rRMSE = 23.4 %) for sugar yield. Furthermore, combining data from the first two growth periods shows promise for making predictions earlier. Key predictive features identified through the Permutation Importance (PIMP) method provided insights into the main factors influencing yield. These findings underscore the potential of UAV time-series data and recurrent neural networks for accurate pre-harvest yield prediction at the field scale, supporting timely and precise agricultural decisions.
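The rRMSE metric used above is, under one common definition, the RMSE normalized by the mean observed value and expressed as a percentage; a minimal sketch (the observations and predictions below are made-up values, not the study's data):

```python
import numpy as np

def rrmse(y_true, y_pred):
    """Relative RMSE (%): RMSE divided by the mean observed value.
    Assumes the common RMSE/mean normalization; other variants exist."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    return float(100.0 * rmse / y_true.mean())

# Hypothetical sugar-content observations (%) vs. model predictions
obs = [16.0, 17.5, 15.2, 18.1]
pred = [15.5, 17.0, 16.0, 18.5]
print(round(rrmse(obs, pred), 2))
```

Normalizing by the mean makes errors comparable across targets with very different scales, such as sugar content versus root yield.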
Citations: 0
Deep learning-based classification, detection, and segmentation of tomato leaf diseases: A state-of-the-art review
IF 8.2 Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date : 2025-02-20 DOI: 10.1016/j.aiia.2025.02.006
Aritra Das , Fahad Pathan , Jamin Rahman Jim , Md Mohsin Kabir , M.F. Mridha
The early identification and treatment of tomato leaf diseases are crucial for optimizing plant productivity, efficiency, and quality. Misdiagnosis by farmers risks inadequate treatment, harming both tomato plants and agroecosystems. Precise disease diagnosis is therefore essential, and misdiagnoses must be caught swiftly for early identification to succeed. Tropical regions are ideal for tomato plants, but they bring inherent concerns, such as weather-related problems. Plant diseases cause large financial losses in crop production, and the slow detection times of conventional approaches are insufficient for the timely detection of tomato diseases. Deep learning has emerged as a promising avenue for early disease identification. This study comprehensively analyzed techniques for classifying and detecting tomato leaf diseases and evaluated their strengths and weaknesses, delving into various diagnostic procedures, including image pre-processing, localization, and segmentation. In conclusion, applying deep learning algorithms holds great promise for enhancing the accuracy and efficiency of tomato leaf disease diagnosis by offering faster and more effective results.
Citations: 0
Using UAV-based multispectral images and CGS-YOLO algorithm to distinguish maize seeding from weed
IF 8.2 Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date : 2025-02-17 DOI: 10.1016/j.aiia.2025.02.007
Boyi Tang , Jingping Zhou , Chunjiang Zhao , Yuchun Pan , Yao Lu , Chang Liu , Kai Ma , Xuguang Sun , Ruifang Zhang , Xiaohe Gu
Accurate recognition of maize seedlings at the plot scale under weed disturbance is crucial for early seedling replenishment and weed removal. Currently, UAV-based maize seedling recognition depends primarily on RGB images. The main purpose of this study is to compare the performance of unmanned aerial vehicle (UAV) multispectral images and RGB images for maize seedling recognition using deep learning algorithms. Additionally, we assess how different levels of weed coverage disturb maize seedling recognition. Firstly, principal component analysis (PCA) was used for multispectral image transformation. Secondly, by introducing the CARAFE upsampling operator and a small target detection layer (SLAY), we extracted the contextual information of each pixel to retain weak features in the maize seedling images. Thirdly, a global attention mechanism (GAM) was employed to capture maize seedling features using the dual attention of spatial and channel information. Together these components form the CGS-YOLO algorithm. Finally, we compared the improved algorithm with a series of deep learning algorithms, including YOLO v3, v5, v6 and v8. The results show that after PCA transformation, the recognition mAP of maize seedlings reaches 82.6 %, a 3.1-percentage-point improvement over RGB images. Compared with YOLOv8, YOLOv6, YOLOv5, and YOLOv3, CGS-YOLO improves mAP by 3.8, 4.2, 4.5 and 6.6 percentage points, respectively. As weed coverage increases, recognition of maize seedlings gradually degrades; when weed coverage exceeds 70 %, the mAP difference becomes significant, but CGS-YOLO still maintains a recognition mAP of 72 %. Therefore, for maize seedling recognition, UAV-based multispectral images perform better than RGB images.
The application of the CGS-YOLO deep learning algorithm with UAV multispectral images proves beneficial for recognizing maize seedlings under weed disturbance.
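The PCA band transformation used as the first step can be sketched as projecting each pixel's band vector onto the leading principal components of the band covariance (the cube dimensions below are toy values, not the study's sensor configuration):

```python
import numpy as np

def pca_bands(cube, k=3):
    """Project an (H, W, B) multispectral cube onto its k leading
    principal components along the band axis."""
    h, w, b = cube.shape
    x = cube.reshape(-1, b).astype(float)
    x -= x.mean(axis=0)                      # center each band
    cov = np.cov(x, rowvar=False)            # B x B band covariance
    vals, vecs = np.linalg.eigh(cov)         # ascending eigenvalues
    order = np.argsort(vals)[::-1][:k]       # top-k components
    return (x @ vecs[:, order]).reshape(h, w, k)

rng = np.random.default_rng(1)
cube = rng.normal(size=(8, 8, 5))  # toy 5-band image
out = pca_bands(cube, k=3)
print(out.shape)  # → (8, 8, 3)
```

Reducing the band stack to a few decorrelated components is one way to feed multispectral data into detectors that, like the YOLO family, expect a small fixed number of input channels.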
Citations: 0
Addressing computation resource exhaustion associated with deep learning training of three-dimensional hyperspectral images using multiclass weed classification
IF 8.2 Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date : 2025-02-11 DOI: 10.1016/j.aiia.2025.02.005
Billy G. Ram , Kirk Howatt , Joseph Mettler , Xin Sun
Addressing the computational bottleneck of training deep learning models on high-resolution, three-dimensional images, this study introduces an optimized approach combining distributed learning (parallelism), image resolution, and data augmentation. We propose analysis methodologies that help train deep learning (DL) models on proximal hyperspectral images, demonstrating superior performance in eight-class crop (canola, field pea, sugarbeet and flax) and weed (redroot pigweed, resistant kochia, waterhemp and ragweed) classification. State-of-the-art architectures (ResNet-50, VGG-16, DenseNet, EfficientNet) are compared with a ResNet-50-inspired Hyper-Residual Convolutional Neural Network. Our findings reveal that an image resolution of 100x100x54 maximizes accuracy while maintaining computational efficiency, surpassing the performance of 150x150x54 and 50x50x54 resolutions. By employing data parallelism, we overcome system memory limitations and achieve exceptional classification results, with test accuracy and F1-score reaching 0.96 and 0.97, respectively. This research highlights the potential of residual-based networks for analyzing hyperspectral images and offers valuable insights into optimizing deep learning models in resource-constrained environments. It presents detailed training pipelines for deep learning models that utilize large (> 4k) hyperspectral training samples, including background, without any data preprocessing. This approach enables the training of deep learning models directly on raw hyperspectral data.
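The data-parallel idea used to sidestep memory limits can be sketched as sharding a batch of hyperspectral samples across devices, each of which would compute gradients on its shard before synchronization; NumPy arrays stand in for device memory here, and the batch size and device count are illustrative:

```python
import numpy as np

def shard_batch(batch, n_devices):
    """Data parallelism: split a batch of samples across devices so no
    single device must hold the full batch in memory."""
    return np.array_split(batch, n_devices)

# 16 toy samples at the study's best-performing 100x100x54 resolution
batch = np.zeros((16, 100, 100, 54), dtype=np.float32)
shards = shard_batch(batch, 4)
print([s.shape[0] for s in shards])  # → [4, 4, 4, 4]
```

Each shard here is a quarter of the batch at the full 100x100x54 cube resolution, so per-device memory scales down linearly with the number of devices.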
{"title":"Addressing computation resource exhaustion associated with deep learning training of three-dimensional hyperspectral images using multiclass weed classification","authors":"Billy G. Ram ,&nbsp;Kirk Howatt ,&nbsp;Joseph Mettler ,&nbsp;Xin Sun","doi":"10.1016/j.aiia.2025.02.005","DOIUrl":"10.1016/j.aiia.2025.02.005","url":null,"abstract":"<div><div>Addressing the computational bottleneck of training deep learning models on high-resolution, three-dimensional images, this study introduces an optimized approach, combining distributed learning (parallelism), image resolution, and data augmentation. We propose analysis methodologies that help train deep learning (DL) models on proximal hyperspectral images, demonstrating superior performance in eight-class crop (canola, field pea, sugarbeet and flax) and weed (redroot pigweed, resistant kochia, waterhemp and ragweed) classification. Utilizing state-of-the-art model architectures (ResNet-50, VGG-16, DenseNet, EfficientNet) in comparison with ResNet-50 inspired Hyper-Residual Convolutional Neural Network model. Our findings reveal that an image resolution of 100x100x54 maximizes accuracy while maintaining computational efficiency, surpassing the performance of 150x150x54 and 50x50x54 resolution images. By employing data parallelism, we overcome system memory limitations and achieve exceptional classification results, with test accuracies and F1-scores reaching 0.96 and 0.97, respectively. This research highlights the potential of residual-based networks for analyzing hyperspectral images. It offers valuable insights into optimizing deep learning models in resource-constrained environments. The research presents detailed training pipelines for deep learning models that utilize large (&gt; 4k) hyperspectral training samples, including background and without any data preprocessing. 
This approach enables the training of deep learning models directly on raw hyperspectral data.</div></div>","PeriodicalId":52814,"journal":{"name":"Artificial Intelligence in Agriculture","volume":"15 2","pages":"Pages 131-146"},"PeriodicalIF":8.2,"publicationDate":"2025-02-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143487682","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
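The data-parallel strategy the abstract describes — sharding a batch of 100x100x54 cubes across devices and averaging the resulting gradients so no single device must hold the whole batch — can be sketched as follows. This is a minimal NumPy illustration with a toy linear "model"; the function and variable names are hypothetical and not from the paper's pipeline:

```python
import numpy as np

# Minimal sketch of synchronous data parallelism (hypothetical names, toy model).
# Each sample is a 100x100x54 hyperspectral cube; the global batch is split into
# shards, one per "device", each shard yields a local gradient, and the gradients
# are averaged -- mathematically the same as one full-batch backward pass, but
# each device only ever holds batch_size / n_devices cubes in memory.

def local_gradient(shard, w):
    # Toy stand-in for a backward pass: gradient of mean ||x @ w||^2 w.r.t. w.
    x = shard.reshape(shard.shape[0], -1)        # (b, 100*100*54)
    return 2.0 * x.T @ (x @ w) / x.shape[0]

def data_parallel_gradient(batch, w, n_devices):
    shards = np.array_split(batch, n_devices, axis=0)
    # "All-reduce": average the per-device gradients.
    return np.mean([local_gradient(s, w) for s in shards], axis=0)

rng = np.random.default_rng(0)
batch = rng.random((8, 100, 100, 54), dtype=np.float32)
w = rng.standard_normal((100 * 100 * 54, 1)).astype(np.float32) * 0.01
g = data_parallel_gradient(batch, w, n_devices=4)
```

With equal-sized shards, the averaged shard gradients equal the full-batch gradient, which is why the sharded run trains the same model while fitting in per-device memory.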
Citations: 0
Advancing precision agriculture: A comparative analysis of YOLOv8 for multi-class weed detection in cotton cultivation
IF 8.2 Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date : 2025-02-11 DOI: 10.1016/j.aiia.2025.01.013
Ameer Tamoor Khan , Signe Marie Jensen , Abdul Rehman Khan
Effective weed management plays a critical role in enhancing the productivity and sustainability of cotton cultivation. The rapid emergence of herbicide-resistant weeds has underscored the need for innovative solutions to address the challenges associated with precise weed detection. This paper investigates the potential of YOLOv8, the latest advancement in the YOLO family of object detectors, for multi-class weed detection in U.S. cotton fields. Leveraging the CottonWeedDet12 dataset, which includes diverse weed species captured under varying environmental conditions, this study provides a comprehensive evaluation of YOLOv8's performance. A comparative analysis with earlier YOLO variants reveals substantial improvements in detection accuracy, as evidenced by higher mean Average Precision (mAP) scores. These findings highlight YOLOv8's superior capability to generalize across complex field scenarios, making it a promising candidate for real-time applications in precision agriculture. The enhanced architecture of YOLOv8, featuring anchor-free detection, an advanced Feature Pyramid Network (FPN), and an optimized loss function, enables accurate detection even under challenging conditions. This research emphasizes the importance of machine vision technologies in modern agriculture, particularly for minimizing herbicide reliance and promoting sustainable farming practices. The results not only validate YOLOv8's efficacy in multi-class weed detection but also pave the way for its integration into autonomous agricultural systems, thereby contributing to the broader goals of precision agriculture and ecological sustainability.
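Mean Average Precision (mAP), the headline metric in the comparison above, is the per-class average precision (AP) over IoU-matched, confidence-ranked detections, averaged across classes. A minimal NumPy sketch of those two ingredients (hypothetical helper names, not the authors' evaluation code):

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def average_precision(scores, is_tp, n_gt):
    """VOC-style all-point AP: rank detections by confidence, take the area
    under the monotone precision-recall envelope."""
    order = np.argsort(-np.asarray(scores, dtype=float))
    tp = np.asarray(is_tp, dtype=float)[order]
    cum_tp = np.cumsum(tp)
    recall = cum_tp / n_gt
    precision = cum_tp / np.arange(1, len(tp) + 1)
    mrec = np.concatenate(([0.0], recall))
    mpre = np.concatenate(([1.0], precision))
    for i in range(len(mpre) - 2, -1, -1):   # precision envelope
        mpre[i] = max(mpre[i], mpre[i + 1])
    return float(np.sum(np.diff(mrec) * mpre[1:]))
```

A detection counts as a true positive when its IoU with an unmatched ground-truth box exceeds a threshold (commonly 0.5); mAP is then the mean of `average_precision` over all classes.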
Citations: 0
Precision agriculture technologies for soil site-specific nutrient management: A comprehensive review
IF 8.2 Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date : 2025-02-11 DOI: 10.1016/j.aiia.2025.02.001
Niharika Vullaganti, Billy G. Ram, Xin Sun
Amidst the growing food demands of an increasing population, agricultural intensification frequently depends on excessive chemical and fertilizer applications. While this approach initially boosts crop yields, it undermines long-term sustainability through soil degradation and compromised food quality. Thus, prioritizing soil health while enhancing crop production is essential for sustainable food production. Site-Specific Nutrient Management (SSNM) emerges as a critical strategy to increase crop production, maintain soil health, and reduce environmental pollution. Despite its potential, the application of SSNM technologies remains limited in farmers' fields due to existing research gaps. This review critically analyzes research conducted in SSNM over the past 11 years (2013–2024), identifying gaps and future research directions. A comprehensive study of 97 relevant research publications reveals several key findings: a) electrochemical sensing and spectroscopy are the two most widely explored areas in SSNM research; b) despite the numerous technologies in SSNM, each has its own limitations, preventing any single technology from being ideal; c) the selection of models and preprocessing techniques significantly impacts nutrient prediction accuracy; d) no single sensor or sensor combination can predict all soil properties, as suitability is highly attribute-specific. This review provides researchers, technical personnel in precision agriculture, and farmers with detailed insights into SSNM research, its implementation, limitations, challenges, and future research directions.
Citations: 0
An efficient strawberry segmentation model based on Mask R-CNN and TensorRT
IF 8.2 Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date : 2025-02-03 DOI: 10.1016/j.aiia.2025.01.008
Anthony Crespo , Claudia Moncada , Fabricio Crespo , Manuel Eugenio Morocho-Cayamcela
Currently, artificial intelligence (AI), particularly computer vision (CV), has numerous applications in agriculture. In this field, the production and consumption of strawberries have grown considerably in recent years, which makes meeting the growing demand a challenge that producers must face. However, one of the main problems in the cultivation of this fruit is the high cost and long duration of picking. In response, automatic harvesting has emerged as an option to address this difficulty, and fruit instance segmentation plays a crucial role in such systems. Fruit segmentation involves the identification and separation of individual fruits within a crop, allowing a more efficient and accurate harvesting process. Although deep learning (DL) techniques have shown potential for this task, the complexity of the models makes them difficult to implement in real-time systems. For this reason, a model that performs adequately in real time while maintaining good precision is of great interest. With this motivation, this work presents an efficient Mask R-CNN model for instance segmentation of strawberry fruits. The efficiency of the model is assessed by the number of frames per second (FPS) it can process, its size in megabytes (MB), and its mean average precision (mAP). Two approaches are provided: the first consists of training the model using the Detectron2 library, while the second focuses on training the model using the NVIDIA TAO Toolkit. In both cases, NVIDIA TensorRT is used to optimize the models. The results show that the best Mask R-CNN model, without optimization, achieves 83.45 mAP at 4 FPS with a size of 351 MB; after the TensorRT optimization, it achieves 83.17 mAP at 25.46 FPS with a size of only 48.2 MB, making it a suitable model for implementation in real-time systems.
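The efficiency comparison above rests on three measurements: mAP, model size, and FPS. The throughput part can be measured with a small harness like the following generic sketch, where `infer` stands in for any model's forward pass (it is not the paper's code); warmup calls are excluded so one-time costs such as engine initialization after a TensorRT optimization do not skew the rate:

```python
import time

def benchmark_fps(infer, frames, warmup=3):
    """Estimate inference throughput in frames per second for callable `infer`.

    The first `warmup` calls are not timed, so lazy initialization and cache
    effects do not distort the steady-state rate.
    """
    for frame in frames[:warmup]:
        infer(frame)
    start = time.perf_counter()
    for frame in frames:
        infer(frame)
    elapsed = time.perf_counter() - start
    return len(frames) / elapsed

# Usage with a dummy "model" that sleeps 10 ms per frame (an assumption for
# illustration): the measured rate approaches, but cannot exceed, 100 FPS.
fps = benchmark_fps(lambda frame: time.sleep(0.01), frames=list(range(20)))
```

Averaging over many frames, rather than timing a single call, smooths out scheduler jitter and gives a more stable FPS figure.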
Citations: 0