
Latest Publications in Computers and Electronics in Agriculture

Insect-YOLO: A new method of crop insect detection
IF 7.7 CAS Tier 1 (Agriculture & Forestry Science) Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date: 2025-02-08 DOI: 10.1016/j.compag.2025.110085
Nan Wang , Shaowen Fu , Qiong Rao , Guiyou Zhang , Mingquan Ding
Pest monitoring and reporting systems have gained widespread adoption for automating the surveillance of field-dwelling pests, serving as a viable alternative to labor-intensive and time-consuming manual inspection. Nevertheless, the heterogeneous spectrum and variable sizes of crop pests, coupled with the need to manage the cost of camera lenses in practical agricultural scenarios, result in low-resolution images, which significantly complicate pest identification. Our research addresses the detection of insects in low-resolution images: we collected a large dataset of low-resolution images of common pests from agricultural fields, with resolutions ranging from 8 to 12 million pixels, and developed the Insect-YOLO model on this dataset. Tailored for capturing pests on diverse crops, Insect-YOLO combines streamlined parameters, fast detection speed, and high accuracy. Enhanced by the Convolutional Block Attention Module (CBAM), it systematically extracts complex pest features and integrates multi-scale information to optimize feature representation. In comparative evaluations against YOLO v5, v7, v8, RetinaNet, and Faster R-CNN, Insect-YOLO achieved a mean Average Precision at IoU 0.5 (mAP50) of 93.8%, highlighting its superiority in pest detection. In addition, linear regression analysis of computer-detected versus manually counted insect numbers revealed a strong correlation, underscoring the efficacy of our method. Ultimately, the pest detection algorithm was integrated into the “Remote Pest Monitoring and Analysis System” of the Agricultural IoT Monitoring Platform. This integration enables accurate and efficient detection of diverse pests from real-time, low-resolution field images and constitutes a critical component of a comprehensive pest monitoring system, serving as a foundation for pest prediction and intelligent monitoring technologies.
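The abstract credits the Convolutional Block Attention Module (CBAM) with refining the extracted pest features. For readers unfamiliar with the module, a minimal PyTorch sketch follows; the reduction ratio and spatial kernel size are common defaults, not the authors' exact configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CBAM(nn.Module):
    """Channel attention followed by spatial attention (Woo et al., 2018)."""
    def __init__(self, channels: int, reduction: int = 16, kernel_size: int = 7):
        super().__init__()
        # Shared MLP applied to global average- and max-pooled descriptors.
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1, bias=False),
        )
        # 2-channel (avg, max) map -> 1-channel spatial attention map.
        self.spatial = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        ca = torch.sigmoid(self.mlp(F.adaptive_avg_pool2d(x, 1)) +
                           self.mlp(F.adaptive_max_pool2d(x, 1)))
        x = x * ca                                   # channel refinement
        s = torch.cat([x.mean(1, keepdim=True),
                       x.max(1, keepdim=True).values], dim=1)
        return x * torch.sigmoid(self.spatial(s))    # spatial refinement
```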
{"title":"Insect-YOLO: A new method of crop insect detection","authors":"Nan Wang ,&nbsp;Shaowen Fu ,&nbsp;Qiong Rao ,&nbsp;Guiyou Zhang ,&nbsp;Mingquan Ding","doi":"10.1016/j.compag.2025.110085","DOIUrl":"10.1016/j.compag.2025.110085","url":null,"abstract":"<div><div>The utilization of the pest monitoring and reporting system has gained widespread adoption for automating the surveillance of field-dwelling pests, serving as a viable alternative to the labor-intensive and time-consuming manual inspection methods. Nevertheless, the heterogeneous spectrum and variable sizes of crop pests, coupled with the imperative to manage costs associated with camera lenses employed in practical agricultural scenarios, result in low-resolution images. This low resolution significantly amplifies the intricacy of pest identification. Our research is dedicated to the detection of insects in low-resolution images, we collected a large dataset of low-resolution images of common pests from agricultural fields, with resolutions ranging from 8 to 12 million pixels, and deployed the Insect-YOLO model based on this dataset. Tailored for capturing pests on diverse crops, Insect-YOLO boasts streamlined parameters, swift detection speeds, and exceptional accuracy. Enhanced by the Convolutional Block Attention Module (CBAM), it systematically extracts complex pest features, integrating multi-scale information to optimize feature representation. In comparative evaluations against YOLO v5, v7, v8, RetinaNet, and Faster R-CNN, Insect-YOLO demonstrated exceptional performance, achieving a mean Average Precision at IoU 0.5 (mAP<sub>50</sub>) of 93.8%, highlighting its superiority in pest detection. Simultaneously, linear regression analysis was performed to assess the correlation between the computer-detected and manually counted insect numbers, revealing a strong correlation that underscores the efficacy of our method. Ultimately, the pest detection algorithm was integrated into the “Remote Pest Monitoring and Analysis System” of the Agricultural IoT Monitoring Platform. This integration enables high accuracy and efficiency in detecting diverse pests from real-time, low-resolution field images and constitutes a critical component of a comprehensive pest monitoring system, serving as a foundation for pest prediction and intelligent monitoring technologies.</div></div>","PeriodicalId":50627,"journal":{"name":"Computers and Electronics in Agriculture","volume":"232 ","pages":"Article 110085"},"PeriodicalIF":7.7,"publicationDate":"2025-02-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143350104","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A data augmentation method for computer vision task with feature conversion between class
IF 7.7 CAS Tier 1 (Agriculture & Forestry Science) Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date: 2025-02-08 DOI: 10.1016/j.compag.2025.109909
Jiewen Lin , Gui Hu , Jian Chen
Agricultural samples are unbalanced, complex, and scarce, which is the main factor restricting the popularization and application of agricultural computer vision. This paper proposes a between-class feature conversion method for data augmentation in computer vision tasks. We make contributions in three aspects: 1) Proposing an attention-based optimization of the CycleGAN generator: through an efficient convolutional block attention module (ECBAM), the generator network structure of CycleGAN is improved to learn the feature transformation from “healthy leaves” to “fake diseased leaves”. 2) Proposing a label assignment method based on proportionally assigned receptive fields to realize the label replacement from “healthy leaves” to “fake diseased leaves”. 3) Augmenting the original data by a factor of n through oversampling (see the sketch after this paragraph). The experimental results show that the improved CycleGAN proposed in this paper can effectively generate “fake diseased leaves”: the Inception Score (IS) is 2.3 ± 0.14, the Fréchet Inception Distance (FID) is 41.49, and the Kernel Inception Distance (KID) is 0.025. We verified the feasibility of the method for classification, object detection, and semantic segmentation tasks. When using the improved CycleGAN for data augmentation, the accuracy of ResNet152 improved by 1.71%. We further verified the effectiveness of the improved CycleGAN and receptive field object assignment (RFOA) methods for data augmentation. In the object detection task, with t = 0.75 and n = 1, the mAP reaches 78.97%. In the semantic segmentation task, with t = 0.50 and 0.75 and n = 2, the mIOU reaches 81.41%.
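As a concrete reading of step 3, the sketch below oversamples each healthy image n times through a trained healthy-to-diseased CycleGAN generator. Because the generator is deterministic, a random jitter is added so the n variants differ; the jitter, paths, image size, and [-1, 1] value range are all assumptions for illustration.

```python
import torch
from pathlib import Path
from PIL import Image
from torchvision import transforms

@torch.no_grad()
def oversample(generator: torch.nn.Module, healthy_dir: str, out_dir: str, n: int = 2):
    """Synthesize n "fake diseased" variants per healthy leaf image."""
    jitter = transforms.Compose([                  # makes the n variants differ
        transforms.RandomHorizontalFlip(),
        transforms.RandomRotation(10),
    ])
    to_tensor = transforms.Compose([transforms.Resize((256, 256)), transforms.ToTensor()])
    to_image = transforms.ToPILImage()
    Path(out_dir).mkdir(parents=True, exist_ok=True)
    generator.eval()
    for path in sorted(Path(healthy_dir).glob("*.jpg")):
        img = Image.open(path).convert("RGB")
        for k in range(n):
            x = to_tensor(jitter(img)).unsqueeze(0) * 2 - 1  # assume generator expects [-1, 1]
            fake = (generator(x).squeeze(0) + 1) / 2         # back to [0, 1]
            # The synthetic image inherits the "diseased" class label downstream.
            to_image(fake.clamp(0, 1)).save(Path(out_dir) / f"{path.stem}_fake{k}.jpg")
```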
{"title":"A data augmentation method for computer vision task with feature conversion between class","authors":"Jiewen Lin ,&nbsp;Gui Hu ,&nbsp;Jian Chen","doi":"10.1016/j.compag.2025.109909","DOIUrl":"10.1016/j.compag.2025.109909","url":null,"abstract":"<div><div>Agricultural samples are unbalanced, complex, and scarce, which is the main factor restricting the popularization and application of agricultural computer vision. This paper proposes a feature conversion between classes method for data augmentation of computer vision tasks. We make contributions in the following three aspects: 1) Proposing an optimization method of attention mechanism to optimize the generator of CycleGAN. Through the module: efficient convolutional block attention model (ECBAM), the generator network structure of CycleGAN is improved to learn the feature transformation from “healthy leaves” to “fake diseased leaves”. 2) An label assignment method based on proportionally assigned receptive field is proposed to realize the label replacement from “healthy leaves” to “fake diseased leaves”. 3) Enhanced the original data by a factor of n <span><math><mrow><mo>×</mo></mrow></math></span> oversampling. The experimental results show that the improved CycleGAN proposed in this paper can effectively generate “fake diseased leaves”, the Inception Score (IS) is 2.3 ± 0.14, the Fréchet Inception Distance (FID) is 41.49, and the Kernel Inception Distance (KID) is 0.025. We have verified the feasibility of the method for classification, object detection, and semantic segmentation tasks. When using the improved CycleGAN for data augmentation, the accuracy of ResNet152 has been improved by 1.71 %. We further verified the effectiveness of improved CycleGAN and reactive field object assignment(RFOA) methods for data augmentation. By testing in the object detection task, when t = 0.75, and n = 1, the mAP reaches 78.97 %. By testing in a semantic segmentation task, when t = 0.50&amp;0.75, and n = 2, the mIOU reaches 81.41 %.</div></div>","PeriodicalId":50627,"journal":{"name":"Computers and Electronics in Agriculture","volume":"231 ","pages":"Article 109909"},"PeriodicalIF":7.7,"publicationDate":"2025-02-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143349807","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
PP-YOLO: Deep learning based detection model to detect apple and cherry trees in orchard based on Histogram and Wavelet preprocessing techniques
IF 7.7 CAS Tier 1 (Agriculture & Forestry Science) Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date: 2025-02-08 DOI: 10.1016/j.compag.2025.110052
Cemalettin Akdoğan , Tolga Özer , Yüksel Oğuz
The number of technological systems such as unmanned aerial vehicles (UAVs) and artificial intelligence (AI) used in agriculture is growing with the development of today's technology, and spraying and fertilization are expected to be carried out by AI-based drones, which requires suitable AI models. This study used the YOLOv5, YOLOv8, and YOLOv9 algorithms to detect cherry and apple trees in agricultural areas. The dataset used to build the YOLO models was collected with a DJI Mavic UAV, and data augmentation was applied to increase the number of images; the dataset comprises 2000 images of cherry trees and 1600 images of apple trees. Two approaches were proposed: Unprogressed YOLO (UP-YOLO) and Progressed and Preprocessed YOLO (PP-YOLO). UP-YOLO trains the YOLO models on the original images. In the proposed PP-YOLO method, the image dimensions are reconfigured relative to the classical YOLO pipeline, and a spatial attention module (SAM) improves detection performance by highlighting leaf color, leaf structure, and branch texture, which helps reduce the rate of undetected objects. Additionally, PP-YOLO applies Histogram Equalization (HE) and Wavelet Transform (WT) image preprocessing to enhance tree branch, leaf, and ground transitions and to remove noise in the UAV images. While UP-YOLO obtained an F1 score of 94.3% and mAP50 of 96.9%, the YOLOv8m model with WT applied to the images in PP-YOLO obtained an F1 score of 95.8% and mAP50 of 98.3%; PP-YOLO thus exceeds UP-YOLO by 1.5% in F1 score and 1.4% in mAP50. The preprocessing techniques increased the F1 score by 0.9%, and the SAM module increased it by a further 0.6%. The developed deep learning model was highly accurate for cherry and apple tree detection.
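The two preprocessing branches are standard image operations; a minimal sketch with OpenCV and PyWavelets follows. HE is applied to the luminance channel only, and the wavelet family, decomposition level, and threshold are illustrative assumptions, not the paper's settings.

```python
import cv2
import numpy as np
import pywt

def histogram_equalize(bgr: np.ndarray) -> np.ndarray:
    """Equalize luminance only, preserving chrominance."""
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
    ycrcb[..., 0] = cv2.equalizeHist(ycrcb[..., 0])
    return cv2.cvtColor(ycrcb, cv2.COLOR_YCrCb2BGR)

def wavelet_denoise(bgr: np.ndarray, wavelet: str = "db4", level: int = 2,
                    thresh: float = 10.0) -> np.ndarray:
    """Soft-threshold the detail coefficients of each channel."""
    h, w = bgr.shape[:2]
    out = np.empty_like(bgr)
    for c in range(3):
        coeffs = pywt.wavedec2(bgr[..., c].astype(np.float32), wavelet, level=level)
        coeffs = [coeffs[0]] + [
            tuple(pywt.threshold(d, thresh, mode="soft") for d in det)
            for det in coeffs[1:]
        ]
        rec = pywt.waverec2(coeffs, wavelet)
        out[..., c] = np.clip(rec[:h, :w], 0, 255).astype(np.uint8)
    return out
```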
{"title":"PP-YOLO: Deep learning based detection model to detect apple and cherry trees in orchard based on Histogram and Wavelet preprocessing techniques","authors":"Cemalettin Akdoğan ,&nbsp;Tolga Özer ,&nbsp;Yüksel Oğuz","doi":"10.1016/j.compag.2025.110052","DOIUrl":"10.1016/j.compag.2025.110052","url":null,"abstract":"<div><div>The number of technological systems such as unmanned aerial vehicles (UAVs) and artificial intelligence (AI) used in agricultural areas is increasing with the development of today’s technology. Therefore, it is predicted that spraying and fertilization processes will be carried out using AI-based drones. The implementation of these processes requires AI models. This study used the YOLOv5, YOLOv8, and YOLOv9 algorithms to detect cherry and apple trees in agricultural areas. The dataset used to build the YOLO model was generated using the DJI Mavic UAV. A data augmentation method was applied to the data to increase the number of images in the dataset. The dataset includes 2000 images of cherry trees and 1600 images of apple trees. Two approaches, Unprogressed YOLO (UP-YOLO) and Progressed and Preprocessed YOLO (PP-YOLO), were proposed in this study. UP-YOLO provides training for the YOLO models. In the proposed PP-YOLO method, the dimensions of the images are configured compared to the classical YOLO model. A spatial attention module (SAM), improves the model’s detection performance by highlighting the leaf color, leaf structure and branch texture of trees. This helps to reduce the rate of undetected objects. Additionally, PP-YOLO enhances the model’s performance by applying Histogram Equalization (HE) and Wavelet Transform (WT) image preprocessing techniques to the images. HE and WT pre-processing techniques were used to enhance the tree branch, leaf, and ground transitions and remove noise in the UAV images. While an F1 score of 94.3 % and mAP50 of 96.9 % were obtained with UP-YOLO, the YOLOv8m model with WT applied to the images in PP-YOLO obtained an F1 score of 95.8 % and mAP50 of 98.3 %. The results show that the F1 score and mAP50 of the PP-YOLO reach 1.5 % and 1.4 % higher than UP-YOLO, respectively. It was observed that preprocessing techniques increased the F1 score by 0.9 % and the SAM module by 0.6 % during the application of the proposed method. The developed deep learning model was highly accurate for cherry and apple tree detection.</div></div>","PeriodicalId":50627,"journal":{"name":"Computers and Electronics in Agriculture","volume":"232 ","pages":"Article 110052"},"PeriodicalIF":7.7,"publicationDate":"2025-02-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143350103","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Branch segmentation and phenotype extraction of apple trees based on improved Laplace algorithm
IF 7.7 CAS Tier 1 (Agriculture & Forestry Science) Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date: 2025-02-07 DOI: 10.1016/j.compag.2025.109998
Long Li , Wei Fu , Bin Zhang , Yuqi Yang , Yun Ge , Congju Shen
Phenotypic traits of crops reflect their physiological characteristics and provide a theoretical basis for predicting their growth. 3D point clouds render structure directly and accurately and have been widely used in phenotype extraction, especially with the help of accurate segmentation techniques. However, the inherently discrete nature of point clouds makes accurate organ segmentation an ongoing challenge in the field. In this study, we propose a tree phenotype acquisition method based on point cloud registration and skeleton segmentation. First, the Convex Hull-indexed Gaussian Mixture Model (CH-GMM) is employed to register the ground and aerial point cloud data. Then, a Laplace multi-scale adaptive algorithm (LMSA) is proposed to obtain the crop skeleton structure, from which four phenotypic parameters are extracted for fruit trees: plant height, crown width, branching number, and initial branching height. In addition, the relationship between crown width and the number of branches is explored, where branches include initial, secondary, and tertiary branches. The results show that the proposed CH-GMM algorithm has a rotation error of less than 1.01°, a translation error of less than 10 mm, and a success rate of more than 95%. The average precision, average recall, average F1 score, and average overall accuracy of the LMSA are 93.7%, 96.2%, 92.6%, and 95.3%, respectively. Finally, this study found polynomial and exponential relationships between the number of bifurcations and the crown size of fruit trees. These results may provide new ideas for fruit tree phenotype acquisition and phenotype management.
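To make two of the four extracted parameters concrete, the NumPy sketch below reads plant height and crown width off a registered tree point cloud (an N x 3 array in metres, z-up); these bounding-extent definitions and the crown fraction are illustrative assumptions, not the LMSA procedure.

```python
import numpy as np

def plant_height(points: np.ndarray) -> float:
    """Vertical extent of the cloud (z-up, metres)."""
    return float(points[:, 2].max() - points[:, 2].min())

def crown_width(points: np.ndarray, crown_fraction: float = 0.5) -> float:
    """Largest horizontal extent of the upper (crown) part of the tree."""
    z = points[:, 2]
    z0 = z.min() + crown_fraction * (z.max() - z.min())  # crown starts here
    crown = points[z >= z0]
    extent = crown[:, :2].max(axis=0) - crown[:, :2].min(axis=0)
    return float(extent.max())
```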
{"title":"Branch segmentation and phenotype extraction of apple trees based on improved Laplace algorithm","authors":"Long Li ,&nbsp;Wei Fu ,&nbsp;Bin Zhang ,&nbsp;Yuqi Yang ,&nbsp;Yun Ge ,&nbsp;Congju Shen","doi":"10.1016/j.compag.2025.109998","DOIUrl":"10.1016/j.compag.2025.109998","url":null,"abstract":"<div><div>Phenotypic traits of crops reflect their physiological characteristics and provide a theoretical basis for predicting their growth. The 3D point cloud has a direct and accurate rendering ability, which has been widely used in phenotype extraction, especially with the help of accurate segmentation techniques. However, the inherent discrete nature of point clouds makes accurate organ segmentation an ongoing challenge in the field. In this study, we propose a tree phenotype acquisition method based on point cloud registration and skeleton segmentation. First, the Convex Hull-indexed Gaussian Mixture Model (CH-GMM) is employed to register the ground and aerial point cloud data. Then, a Laplace-multi-scale adaptive algorithm (LMSA) was proposed to obtain the crop skeleton structure, on the basis of which four phenotypic parameters, namely, plant height, crown width, branching number, and initial branching height, were extracted for fruit trees. In addition, the relationship between crown width and the number of branches was explored, where branches included initial, secondary, and tertiary branches. The results show that the proposed CH-GMM algorithm has a rotation error of less than 1.01°, a translation error of less than 10 mm, and a success rate of more than 95 %. The average precision, average recall, average F1 score, and average overall accuracy of the LMSA are 93.7 %, 96.2 %, 92.6 %, and 95.3 %, respectively. Finally, this study found a polynomial and exponential relationship between the number of bifurcations and crown size of fruit trees. The results of this study may provide new ideas for fruit tree phenotype acquisition and phenotype management.</div></div>","PeriodicalId":50627,"journal":{"name":"Computers and Electronics in Agriculture","volume":"232 ","pages":"Article 109998"},"PeriodicalIF":7.7,"publicationDate":"2025-02-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143295931","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Improved two-stage deep learning algorithm and lightweight YOLOv5n for classifying cottonseed damage
IF 7.7 CAS Tier 1 (Agriculture & Forestry Science) Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date: 2025-02-07 DOI: 10.1016/j.compag.2025.110042
Weilong He , Fan Wu , Lori Unruh Snyder , Jean Cheng , Evelynn Wilcox , Lirong Xiang
With a rich historical background, the US cotton industry consistently maintains its position as one of the leading global producers. Because cottonseed quality correlates directly with germination rate, non-destructive testing to identify defects in cottonseeds is important for optimizing yield performance. In this study, we propose an objective method for detecting cottonseed defects that classifies cottonseeds into four categories (Normal, Pinhole, Damage, and Very Damaged) and fourteen subcategories (N, R, C, RH, EH, CH, R Cut, C Cut, RV, CV, RH Expose, EH Expose, CH Expose, and V). Leveraging our customized cottonseed image dataset, we introduce a cottonseed defect detection and classification method based on a lightweight YOLOv5n model enhanced with a Swin Transformer and an improved two-stage deep learning classification model. For cottonseed detection, our method achieves a 30.11% reduction in model size and a 7.7% increase in mAP50:95 compared to YOLOv5n. For individual cottonseed image classification, the accuracy, precision, recall, and F1 score of our two-stage deep learning model are 97.34%, 97.7%, 97.3%, and 97.3%, respectively. The gradient-weighted class activation mapping (Grad-CAM) algorithm was then used to visually explain the model's classification mechanism. Moreover, our algorithm outperforms six commonly used classification algorithms (ResNet-18, ResNet-50, AlexNet, GoogleNet, VGG-16, and VGG-19), achieving a notable 1.65% increase in accuracy over the best-performing algorithm among them. We then compared its performance with four state-of-the-art (SOTA) cottonseed damage classification methods. The findings demonstrate the potential of this design to advance the development of non-destructive seed damage detection.
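Grad-CAM, used above to explain the classifier, is a generic technique; a minimal PyTorch sketch follows. The model and target convolutional layer are placeholders (any CNN works the same way), and the hook API assumes PyTorch >= 1.8.

```python
import torch
import torch.nn.functional as F

def grad_cam(model, x, target_layer, class_idx=None):
    """Return a (H, W) heatmap in [0, 1] for input x of shape (1, 3, H, W)."""
    feats, grads = [], []
    h1 = target_layer.register_forward_hook(lambda m, i, o: feats.append(o))
    h2 = target_layer.register_full_backward_hook(lambda m, gi, go: grads.append(go[0]))
    try:
        logits = model(x)
        idx = class_idx if class_idx is not None else int(logits.argmax(dim=1))
        model.zero_grad()
        logits[0, idx].backward()
        weights = grads[0].mean(dim=(2, 3), keepdim=True)  # GAP of gradients
        cam = F.relu((weights * feats[0]).sum(dim=1))      # weighted feature sum
        cam = F.interpolate(cam.unsqueeze(1), size=x.shape[2:], mode="bilinear")
        cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
        return cam.squeeze()
    finally:
        h1.remove(); h2.remove()
```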
{"title":"Improved two-stage deep learning algorithm and lightweight YOLOv5n for classifying cottonseed damage","authors":"Weilong He ,&nbsp;Fan Wu ,&nbsp;Lori Unruh Snyder ,&nbsp;Jean Cheng ,&nbsp;Evelynn Wilcox ,&nbsp;Lirong Xiang","doi":"10.1016/j.compag.2025.110042","DOIUrl":"10.1016/j.compag.2025.110042","url":null,"abstract":"<div><div>With a rich historical background, the US cotton industry consistently maintains its position as one of the leading global producers. Due to the direct correlation between cottonseed quality and germination rate, conducting non-destructive testing to identify defects in cottonseeds becomes important to optimize yield performance. In this study, we propose an objective method for detecting cottonseed defects which classifies cottonseeds into four categories (Normal, Pinhole, Damage, and Very Damaged) and fourteen subcategories (N, R, C, RH, EH, CH, R Cut, C Cut, RV, CV, RH Expose, EH Expose, CH Expose, and V). Leveraging our customized cottonseed image dataset, we introduce a cottonseed defect detection and classification method based on a lightweight YOLOv5n model enhanced with Swin Transformer and an improved two-stage deep learning classification model. For cottonseed detection, our method achieves a 30.11 % reduction in model size and a 7.7 % increase in <span><math><mrow><msub><mrow><mi>m</mi><mi>A</mi><mi>P</mi></mrow><mrow><mn>50</mn><mo>:</mo><mn>95</mn></mrow></msub></mrow></math></span> compared to YOLOv5n. For individual cottonseed image classification, the accuracy, precision, recall, and <span><math><mrow><msub><mi>F</mi><mn>1</mn></msub></mrow></math></span> scores of our two-stage deep learning model are 97.34 %, 97.7 %, 97.3 %, and 97.3 %, respectively. The gradient-weighted class activation mapping (Grad-CAM) algorithm was then used to visually explain the model’s classification mechanism. Moreover, our algorithm demonstrates superior performance compared to six commonly used classification algorithms, including ResNet-18, ResNet-50, AlexNet, GoogleNet, VGG-16, and VGG-19, achieving a notable 1.65 % increase in accuracy over the best-performing algorithm among them. We then compared its performance with four state-of-the-art (SOTA) cottonseed damage classification methods. The findings demonstrate the potential for this design to advance the development of non-destructive seed damage detection.</div></div>","PeriodicalId":50627,"journal":{"name":"Computers and Electronics in Agriculture","volume":"232 ","pages":"Article 110042"},"PeriodicalIF":7.7,"publicationDate":"2025-02-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143349538","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Early detection of rice blast using UAV hyperspectral imagery and multi-scale integrator selection attention transformer network (MS-STNet)
IF 7.7 CAS Tier 1 (Agriculture & Forestry Science) Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date: 2025-02-07 DOI: 10.1016/j.compag.2025.110007
Tan Liu , Yuan Qi , Fan Yang , Xiaoyun Yi , Songlin Guo , Peiyan Wu , Qingyun Yuan , Tongyu Xu
Rice blast is one of the most destructive diseases of rice leaves, seriously affecting rice production and quality. An accurate and rapid large-scale disease detection method is essential for rice production management. This study employed unmanned aerial vehicle (UAV) hyperspectral remote sensing technology for continuous observation of rice blast in the field, combining advanced deep-learning techniques with UAV data. First, the sensitivity and importance of canopy reflectance and texture features for disease monitoring were assessed. Given the limitations of single texture features, rice blast texture indices (RBTIs) were constructed from multiple texture features. Second, based on characteristic wavelengths, RBTIs, and their combinations, an effective transformer-based rice blast detection framework, the multi-scale integrator selection attention transformer network (MS-STNet), was proposed. By incorporating a multi-scale integrator and adopting a multi-scale, multi-pooling strategy that considers the interactions between different layers, the model's ability to capture fine-grained information was enhanced. A top-k selection mechanism was introduced to generate corresponding attention masks, preserving the most contributive feature combinations while maintaining the global structural information of the input. The results demonstrated that the MS-STNet model could adequately learn significant features at different scales, showing excellent accuracy and strong spatial adaptability in both field experiments. Compared with single texture features, the model using RBTIs as inputs demonstrated superior classification performance, with a maximum increase in overall accuracy (OA) of 4.27%. Furthermore, the model combining spectral features and RBTIs outperformed models built using only spectral features or only RBTIs, with a maximum OA of 96.98% and Kappa of 96.22%. Overall, the feature-combination method can improve early-stage rice blast classification accuracy. The study results can provide a valuable reference for accurately monitoring rice blast using UAV hyperspectral imagery.
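The top-k selection mechanism described above can be sketched generically: keep the k largest attention scores per query and mask the rest before the softmax. The shapes and the value of k below are illustrative assumptions, not the MS-STNet configuration.

```python
import torch

def topk_attention(q: torch.Tensor, k: torch.Tensor, v: torch.Tensor,
                   top_k: int = 8) -> torch.Tensor:
    """Scaled dot-product attention keeping only the top_k keys per query.

    q, k, v: (batch, heads, seq, dim); assumes top_k <= seq.
    """
    scores = q @ k.transpose(-2, -1) / (q.shape[-1] ** 0.5)
    kth = scores.topk(top_k, dim=-1).values[..., -1:]    # k-th largest per query
    scores = scores.masked_fill(scores < kth, float("-inf"))  # drop the rest
    return torch.softmax(scores, dim=-1) @ v
```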
{"title":"Early detection of rice blast using UAV hyperspectral imagery and multi-scale integrator selection attention transformer network (MS-STNet)","authors":"Tan Liu ,&nbsp;Yuan Qi ,&nbsp;Fan Yang ,&nbsp;Xiaoyun Yi ,&nbsp;Songlin Guo ,&nbsp;Peiyan Wu ,&nbsp;Qingyun Yuan ,&nbsp;Tongyu Xu","doi":"10.1016/j.compag.2025.110007","DOIUrl":"10.1016/j.compag.2025.110007","url":null,"abstract":"<div><div>Rice blast is one of the most destructive diseases of rice leaves, seriously affecting rice production and quality. An accurate and rapid large-scale disease detection method is essential for rice production management. This study employed unmanned aerial vehicle (UAV) hyperspectral remote sensing technology for continuous observation of rice blast in the field. Advanced deep-learning techniques were utilized and combined with UAV data to detect rice blast. Firstly, the sensitivity and importance of canopy reflectance and texture features in disease monitoring were assessed. Considering the limitations of single texture features, the rice blast texture indices (RBTIs) were constructed by multiple texture features. Secondly, based on characteristic wavelengths, RBTIs, and their combinations, an effective rice blast detection framework based on the transformer network, multi-scale integrator selection attention transformer network (MS-STNet) model, was proposed. By incorporating multi-scale integrator and adopting a multi-scale and multi-pooling strategy that considered the interactions between different layers, the ability of the model to capture fine-grained information was enhanced. The top-k selection mechanism was introduced to generate corresponding attention masks, preserving the most contributive feature combinations while maintaining the global structural information of the input. The results demonstrated that the MS-STNet model could adequately learn significant features at different scales, demonstrating excellent accuracy and strong spatial adaptability in both field experiments. Compared with single texture features, the model using RBTIs as inputs demonstrated superior classification performance, with a maximum increase in overall accuracy (OA) of 4.27%. Furthermore, the model constructed by combining spectral features and RBTIs outperformed models built using only spectral features or RBTIs, with a maximum OA of 96.98% and Kappa of 96.22%. Overall, the feature-based combination method can improve the early phases of rice blast classification accuracy. The study results can provide valuable reference for accurately monitoring rice blast using UAV hyperspectral imagery.</div></div>","PeriodicalId":50627,"journal":{"name":"Computers and Electronics in Agriculture","volume":"231 ","pages":"Article 110007"},"PeriodicalIF":7.7,"publicationDate":"2025-02-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143348647","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Sizing optimisation under irradiance uncertainty of irrigation systems powered by off-grid solar panels
IF 7.7 CAS Tier 1 (Agriculture & Forestry Science) Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date: 2025-02-07 DOI: 10.1016/j.compag.2025.110034
F.J. Navarro-González , J. Manzano , M.A. Pardo
Sizing a photovoltaic installation is crucial for decision-makers, researchers, and practitioners managing pressurised irrigation networks powered by solar panels. Off-grid photovoltaic installations offer energy efficiency, lower operating costs, environmental benefits, and economic profitability. Network managers must strategically account for the energy limitations of solar installations when irrigating; moreover, they face the challenge of synchronising energy production with the energy consumption of the pumping equipment that supplies water to crops. We propose a technique to optimise the sizing of photovoltaic installations so as to maximise the energy consumed by the pumps, thereby meeting the water demands of crops while considering the uncertainties associated with non-clear-sky conditions. This approach enhances the management of installations and can schedule the opening and closing of hydrants and irrigation intakes to supply water to crops efficiently. Finally, a real case study in the University of Alicante irrigation network was conducted for two scenarios: the first calculates irradiance and the electrical production curve using a theoretical model (very close to reality at latitudes like that of Alicante, Spain), while the second uses real data obtained from a nearby meteorological station. In both cases, non-clear-sky conditions are considered to establish a relationship between the probability of clear sky (α=1) and the minimum number of PV modules (537 or 509 for the theoretical model and the real data, respectively). For days without direct normal irradiance (α=0), the minimum numbers of modules are 2145 and 991. Practitioners and decision-makers must find a compromise that meets water demands while minimising the size of the installation.
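A minimal sketch of the underlying sizing logic: find the smallest module count whose expected daily output, weighted by the clear-sky probability α, still covers the pumping demand. All numeric values below are illustrative assumptions, not figures from the study.

```python
import math

def min_modules(daily_demand_kwh: float,
                module_clear_sky_kwh: float,
                module_diffuse_kwh: float,
                alpha: float) -> int:
    """Smallest module count whose expected daily output covers the demand."""
    # Expected per-module energy: clear-sky yield with probability alpha,
    # diffuse-only yield otherwise.
    expected = alpha * module_clear_sky_kwh + (1.0 - alpha) * module_diffuse_kwh
    return math.ceil(daily_demand_kwh / expected)

# Illustrative numbers only: 1200 kWh/day pumping demand, 2.3 kWh/module on
# clear days, 0.6 kWh/module under fully diffuse skies.
print(min_modules(1200, 2.3, 0.6, alpha=1.0))  # sizing for guaranteed clear sky
print(min_modules(1200, 2.3, 0.6, alpha=0.0))  # worst case: no direct irradiance
```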
{"title":"Sizing optimisation under irradiance uncertainty of irrigation systems powered by off-grid solar panels","authors":"F.J. Navarro-González ,&nbsp;J. Manzano ,&nbsp;M.A. Pardo","doi":"10.1016/j.compag.2025.110034","DOIUrl":"10.1016/j.compag.2025.110034","url":null,"abstract":"<div><div>Sizing a photovoltaic installation is crucial for decision-makers, researchers and practitioners managing pressurised irrigation networks powered by solar panels. Photovoltaic off-grid installations offer energy efficiency, lower operation costs, environmental benefits and economic profitability. Network managers must strategically account for the energy limitations of solar installations when irrigating. Moreover, the manager has the challenge of synchronising energy production with the energy consumption of the pumping equipment that supplies water to crops. We propose a technique to optimise the sizing of photovoltaic installations to maximise energy consumption in pumps, thereby meeting the water demands of crops while considering the uncertainties associated with non-clear sky conditions. This approach enhances the management of installations and can schedule the opening and closing of hydrants and irrigation intakes to supply water to crops efficiently. Finally, a real case study in the University of Alicante irrigation network was conducted for two scenarios. The first is to calculate irradiance and the electrical production curve using a theoretical model (very close to reality in latitudes like Alicante, Spain). Real data obtained from a nearby meteorological station is used for the second scenario. In both cases, non-clear sky conditions are considered to establish a relationship between the probability of clear sky (<span><math><mi>α</mi></math></span>=1) and the minimum number of PV modules (537 or 509 for the theoretical model and real data, respectively). For days without direct normal irradiance (<span><math><mi>α</mi></math></span>=0), the minimum number of modules is 2145 and 991. Practitioners or decision-makers must find a compromise that meets water demands while minimising the size of the installation.</div></div>","PeriodicalId":50627,"journal":{"name":"Computers and Electronics in Agriculture","volume":"232 ","pages":"Article 110034"},"PeriodicalIF":7.7,"publicationDate":"2025-02-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143349539","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
In-field blueberry fruit phenotyping with a MARS-PhenoBot and customized BerryNet
IF 7.7 CAS Tier 1 (Agriculture & Forestry Science) Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date: 2025-02-07 DOI: 10.1016/j.compag.2025.110057
Zhengkun Li , Rui Xu , Changying Li , Patricio Munoz , Fumiomi Takeda , Bruno Leme
Accurate blueberry fruit phenotyping, including yield, fruit maturity, and cluster compactness, is crucial for optimizing crop breeding and management practices. Recent advances in machine vision and deep learning have shown promising potential to automate phenotyping and replace manual sampling. This paper presents a robotic blueberry phenotyping system, called MARS-PhenoBot, that collects data in the field and measures fruit-related phenotypic traits such as fruit number, maturity, and compactness. Our workflow comprises four components: a robotic multi-view imaging system for high-throughput data collection, a vision foundation model (the Segment Anything Model, SAM) for mask-free data labeling, a customized BerryNet deep learning model for detecting blueberry clusters and segmenting fruit, and a post-processing module for estimating yield, maturity, and cluster compactness. BerryNet detects fruit clusters and segments individual berries by integrating low-level pyramid features, rapid partial convolutional blocks, and BiFPN feature fusion; it outperformed other networks, achieving a mean average precision (mAP50) of 54.9% in cluster detection and 85.8% in fruit segmentation with fewer parameters and lower computational requirements. We evaluated the phenotypic traits derived from our methods against ground truth on 26 individual blueberry plants across 17 genotypes. The results demonstrated that both the fruit count and the cluster count extracted from images were strongly correlated with yield. Integrating multi-view fruit counts enhanced yield estimation accuracy, achieving a Mean Absolute Percentage Error (MAPE) of 23.1% and a highest R2 value of 0.73, while maturity-level estimates closely aligned with manual calculations, exhibiting a Mean Absolute Error (MAE) of approximately 5%. Furthermore, two metrics related to fruit compactness were introduced, cluster compactness and fruit distance, which could help breeders assess machine and hand harvestability across genotypes. Finally, we evaluated the proposed robotic blueberry fruit phenotyping pipeline on eleven blueberry genotypes, demonstrating its potential to distinguish high-yield, early-maturity, and loose-clustering cultivars. Our methodology provides a promising solution for automated in-field blueberry fruit phenotyping, potentially replacing labor-intensive manual sampling, and could advance blueberry breeding programs, precision management, and mechanical/robotic harvesting.
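The yield evaluation above boils down to a per-plant regression of measured yield on image-derived fruit counts, scored with R2 and MAPE; a minimal NumPy sketch follows. The counts and yields shown are invented placeholders, not the study's data.

```python
import numpy as np

def fit_and_score(fruit_counts: np.ndarray, yields: np.ndarray):
    """Least-squares line yield = a * count + b, plus R^2 and MAPE (%)."""
    a, b = np.polyfit(fruit_counts, yields, deg=1)
    pred = a * fruit_counts + b
    ss_res = float(np.sum((yields - pred) ** 2))
    ss_tot = float(np.sum((yields - yields.mean()) ** 2))
    r2 = 1.0 - ss_res / ss_tot
    mape = float(np.mean(np.abs((yields - pred) / yields))) * 100.0
    return a, b, r2, mape

# Placeholder per-plant data; real counts/yields would come from the pipeline.
counts = np.array([120, 340, 210, 415, 180, 290], dtype=float)
yields = np.array([0.9, 2.6, 1.5, 3.3, 1.2, 2.2])  # kg per plant (illustrative)
a, b, r2, mape = fit_and_score(counts, yields)
print(f"R2 = {r2:.2f}, MAPE = {mape:.1f} %")
```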
{"title":"In-field blueberry fruit phenotyping with a MARS-PhenoBot and customized BerryNet","authors":"Zhengkun Li ,&nbsp;Rui Xu ,&nbsp;Changying Li ,&nbsp;Patricio Munoz ,&nbsp;Fumiomi Takeda ,&nbsp;Bruno Leme","doi":"10.1016/j.compag.2025.110057","DOIUrl":"10.1016/j.compag.2025.110057","url":null,"abstract":"<div><div>Accurate blueberry fruit phenotyping, including yield, fruit maturity, and cluster compactness, is crucial for optimizing crop breeding and management practices. Recent advances in machine vision and deep learning have shown promising potential to automate phenotyping and replace manual sampling. This paper presented a robotic blueberry phenotyping system, called MARS-Phenobot, that collects data in the field and measures fruit-related phenotypic traits such as fruit number, maturity, and compactness. Our workflow comprised four components: a robotic multi-view imaging system for high-throughput data collection, a vision foundation model (Segment Anything Model, SAM) for mask-free data labeling, a customized BerryNet deep learning model for detecting blueberry clusters and segmenting fruit, as well as a post-processing module for estimating yield, maturity, and cluster compactness. A customized deep learning model, BerryNet, was designed for detecting fruit clusters and segmenting individual berries by integrating low-level pyramid features, rapid partial convolutional blocks, and BiFPN feature fusion. It outperformed other networks and achieved mean average precision (mAP50) of 54.9 % in cluster detection and 85.8 % in fruit segmentation with fewer parameters and fewer computation requirements. We evaluated the phenotypic traits derived from our methods and the ground truth on 26 individual blueberry plants across 17 genotypes. The results demonstrated that both the fruit count and cluster count extracted from images were strongly correlated with the yield. Integrating multi-view fruit counts enhanced yield estimation accuracy, achieving a Mean Absolute Percentage Error (MAPE) of 23.1 % and the highest R<sup>2</sup> value of 0.73, while maturity level estimations closely aligned with manual calculations, exhibiting a Mean Absolute Error (MAE) of approximately 5 %. Furthermore, two metrics related to fruit compactness were introduced, including cluster compactness and fruit distance, which could be useful for breeders to assess the machine and hand harvestability across genotypes. Finally, we evaluated the proposed robotic blueberry fruit phenotyping pipeline on eleven blueberry genotypes, proving the potential to distinguish the high-yield, early-maturity, and loose-clustering cultivars. Our methodology provides a promising solution for automated in-field blueberry fruit phenotyping, potentially replacing labor-intensive manual sampling. 
Furthermore, this approach could advance blueberry breeding programs, precision management, and mechanical/robotic harvesting.</div></div>","PeriodicalId":50627,"journal":{"name":"Computers and Electronics in Agriculture","volume":"232 ","pages":"Article 110057"},"PeriodicalIF":7.7,"publicationDate":"2025-02-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143348546","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Towards high throughput in-field detection and quantification of wheat foliar diseases using deep learning
IF 7.7 CAS Tier 1 (Agriculture & Forestry Science) Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date: 2025-02-06 DOI: 10.1016/j.compag.2024.109854
Radek Zenkl , Bruce A. McDonald , Achim Walter , Jonas Anderegg
Reliable, quantitative information on the presence and severity of crop diseases is essential for site-specific crop management and resistance breeding. Successfully analyzing leaves under naturally variable lighting, with multiple co-occurring disorders, and across phenological stages is a critical step towards high-throughput disease assessments directly in the field.
Here, we present a dataset comprising 422 high resolution images of flattened leaves captured under variable outdoor lighting with polygon annotations of leaves, leaf necrosis and insect damage as well as point annotations of Septoria tritici blotch (STB) fruiting bodies (pycnidia) and rust pustules.
Based on this dataset, we demonstrate the capability of deep learning for keypoint detection of pycnidia (F1=0.76) and rust pustules (F1=0.77) combined with semantic segmentation of leaves (IoU=0.96), leaf necrosis (IoU=0.77) and insect damage (IoU=0.69) to reliably detect and quantify the presence of STB, leaf rusts, and insect damage on symptom level under natural outdoor conditions. An analysis of intra- and inter-annotator agreement on selected images demonstrated that the proposed method achieved a performance close to that of annotators in the majority of the scenarios.
We validated the generalization capabilities of the proposed method by testing it on images of unstructured canopies acquired directly in the field, without manual interaction with single leaves. This enables significantly higher throughput and automated data acquisition, which is critical to harness the full potential of image-based disease assessments. Model predictions were in good agreement with visual assessments of in-focus regions in these images, despite new challenges such as variable leaf orientation and more complex lighting. This underscores the in-principle feasibility of diagnosing and quantifying the severity of foliar diseases under field conditions using the proposed imaging setup and image processing methods.
By demonstrating the ability to diagnose and quantify the severity of multiple diseases in highly complex field scenarios, we lay the groundwork for high-throughput in-field assessments of foliar diseases that can support resistance breeding and the implementation of core principles of precision agriculture.
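The metrics reported above come in two families: mask IoU for the segmentation targets and a distance-thresholded F1 for pycnidia and pustule keypoints. A minimal NumPy sketch follows; the greedy matching and the 10-pixel radius are assumptions, not the paper's exact protocol.

```python
import numpy as np

def mask_iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """IoU of two boolean masks of identical shape."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return float(inter / union) if union else 1.0

def keypoint_f1(pred_pts, gt_pts, radius: float = 10.0) -> float:
    """F1 with greedy nearest-neighbour matching within a pixel radius."""
    pred = np.asarray(pred_pts, dtype=float)
    gt = np.asarray(gt_pts, dtype=float)
    matched = np.zeros(len(gt), dtype=bool)
    tp = 0
    for p in pred:
        if len(gt) == 0:
            break
        d = np.linalg.norm(gt - p, axis=1)
        j = int(np.argmin(np.where(matched, np.inf, d)))  # nearest unmatched GT
        if not matched[j] and d[j] <= radius:
            matched[j] = True
            tp += 1
    prec = tp / len(pred) if len(pred) else 1.0
    rec = tp / len(gt) if len(gt) else 1.0
    return 2 * prec * rec / (prec + rec) if (prec + rec) else 0.0
```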
{"title":"Towards high throughput in-field detection and quantification of wheat foliar diseases using deep learning","authors":"Radek Zenkl ,&nbsp;Bruce A. McDonald ,&nbsp;Achim Walter ,&nbsp;Jonas Anderegg","doi":"10.1016/j.compag.2024.109854","DOIUrl":"10.1016/j.compag.2024.109854","url":null,"abstract":"<div><div>Reliable, quantitative information on the presence and severity of crop diseases is essential for site-specific crop management and resistance breeding. Successful analysis of leaves under naturally variable lighting, presenting multiple disorders, and across phenological stages is a critical step towards high-throughput disease assessments directly in the field.</div><div>Here, we present a dataset comprising 422 high resolution images of flattened leaves captured under variable outdoor lighting with polygon annotations of leaves, leaf necrosis and insect damage as well as point annotations of Septoria tritici blotch (STB) fruiting bodies (pycnidia) and rust pustules.</div><div>Based on this dataset, we demonstrate the capability of deep learning for keypoint detection of pycnidia (<span><math><mrow><mi>F</mi><mn>1</mn><mo>=</mo><mn>0</mn><mo>.</mo><mn>76</mn></mrow></math></span>) and rust pustules (<span><math><mrow><mi>F</mi><mn>1</mn><mo>=</mo><mn>0</mn><mo>.</mo><mn>77</mn></mrow></math></span>) combined with semantic segmentation of leaves (<span><math><mrow><mi>I</mi><mi>o</mi><mi>U</mi><mo>=</mo><mn>0</mn><mo>.</mo><mn>96</mn></mrow></math></span>), leaf necrosis (<span><math><mrow><mi>I</mi><mi>o</mi><mi>U</mi><mo>=</mo><mn>0</mn><mo>.</mo><mn>77</mn></mrow></math></span>) and insect damage (<span><math><mrow><mi>I</mi><mi>o</mi><mi>U</mi><mo>=</mo><mn>0</mn><mo>.</mo><mn>69</mn></mrow></math></span>) to reliably detect and quantify the presence of STB, leaf rusts, and insect damage on symptom level under natural outdoor conditions. An analysis of intra- and inter-annotator agreement on selected images demonstrated that the proposed method achieved a performance close to that of annotators in the majority of the scenarios.</div><div>We validated the generalization capabilities of the proposed method by testing it on images of unstructured canopies acquired directly in the field and without manual interaction with single leaves. This enables significantly higher throughput and automated data acquisition, which is critical to harness the full potential of image-based disease assessments. Model predictions were in good agreement with visual assessments of in-focus regions in these images, despite the presence of new challenges such as variable orientation of leaves and more complex lighting. 
This underscores the principle feasibility of diagnosing and quantifying the severity of foliar diseases under field conditions using the proposed imaging setup and image processing methods.</div><div>By demonstrating the ability to diagnose and quantify the severity of multiple diseases in highly complex field scenarios, we lay the groundwork for high-throughput in-field assessments of foliar diseases that can support resistance breeding and the implementation of core principles of precision agriculture.</div></div>","PeriodicalId":50627,"journal":{"name":"Computers and Electronics in Agriculture","volume":"232 ","pages":"Article 109854"},"PeriodicalIF":7.7,"publicationDate":"2025-02-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143295932","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Adaptive LiDAR odometry and mapping for autonomous agricultural mobile robots in unmanned farms
IF 7.7 CAS Tier 1 (Agriculture & Forestry Science) Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date: 2025-02-06 DOI: 10.1016/j.compag.2025.110023
Hanzhe Teng, Yipeng Wang, Dimitrios Chatziparaschis, Konstantinos Karydis
Unmanned and intelligent agricultural systems are crucial for enhancing agricultural efficiency and for helping mitigate the effects of labor shortages. However, unlike urban environments, agricultural fields impose distinct challenges on autonomous robotic systems, such as the unstructured and dynamic nature of the environment, the rough and uneven terrain, and the resulting non-smooth robot motion. To address these challenges, this work introduces an adaptive LiDAR odometry and mapping framework tailored for autonomous agricultural mobile robots operating in complex agricultural environments. The proposed framework consists of a robust LiDAR odometry algorithm based on dense Generalized-ICP scan matching, and an adaptive mapping module that considers motion stability and point cloud consistency for selective map updates. The key design principle of this framework is to prioritize the incremental consistency of the map by rejecting motion-distorted points and sparse dynamic objects, which in turn yields high odometry accuracy from scan matching against the map. The effectiveness of the proposed method is validated via extensive evaluation against state-of-the-art methods on field datasets collected in real-world agricultural environments featuring various planting types, terrain types, and robot motion profiles. Results demonstrate that our method achieves accurate odometry estimation and mapping consistently and robustly across diverse agricultural settings, whereas other methods are sensitive to abrupt robot motion and accumulated drift in unstructured environments. Further, the computational efficiency of our method is competitive with other methods. The source code of the developed method and the associated field dataset are publicly available at https://github.com/UCR-Robotics/AG-LOAM.
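The scan-to-map Generalized-ICP step at the core of the odometry can be sketched with Open3D, which exposes Generalized-ICP in its legacy registration pipeline (version >= 0.14 assumed); the voxel size, correspondence distance, and initial guess below are illustrative assumptions rather than the AG-LOAM settings.

```python
import numpy as np
import open3d as o3d

def register_scan(scan: o3d.geometry.PointCloud,
                  local_map: o3d.geometry.PointCloud,
                  init: np.ndarray,
                  voxel: float = 0.2) -> np.ndarray:
    """Estimate the 4x4 scan-to-map pose by Generalized-ICP."""
    src = scan.voxel_down_sample(voxel)
    tgt = local_map.voxel_down_sample(voxel)
    # GICP models each point with a local covariance (plane-like disks).
    for cloud in (src, tgt):
        cloud.estimate_covariances(o3d.geometry.KDTreeSearchParamKNN(knn=20))
    result = o3d.pipelines.registration.registration_generalized_icp(
        src, tgt,
        max_correspondence_distance=1.0,
        init=init,  # e.g., previous pose composed with a constant-velocity guess
        estimation_method=o3d.pipelines.registration
            .TransformationEstimationForGeneralizedICP())
    return result.transformation
```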
{"title":"Adaptive LiDAR odometry and mapping for autonomous agricultural mobile robots in unmanned farms","authors":"Hanzhe Teng,&nbsp;Yipeng Wang,&nbsp;Dimitrios Chatziparaschis,&nbsp;Konstantinos Karydis","doi":"10.1016/j.compag.2025.110023","DOIUrl":"10.1016/j.compag.2025.110023","url":null,"abstract":"<div><div>Unmanned and intelligent agricultural systems are crucial for enhancing agricultural efficiency and for helping mitigate the effect of labor shortage. However, unlike urban environments, agricultural fields impose distinct and unique challenges on autonomous robotic systems, such as the unstructured and dynamic nature of the environment, the rough and uneven terrain, and the resulting non-smooth robot motion. To address these challenges, this work introduces an adaptive LiDAR odometry and mapping framework tailored for autonomous agricultural mobile robots operating in complex agricultural environments. The proposed framework consists of a robust LiDAR odometry algorithm based on dense Generalized-ICP scan matching, and an adaptive mapping module that considers motion stability and point cloud consistency for selective map updates. The key design principle of this framework is to prioritize the incremental consistency of the map by rejecting motion-distorted points and sparse dynamic objects, which in turn leads to high accuracy in odometry estimated from scan matching against the map. The effectiveness of the proposed method is validated via extensive evaluation against state-of-the-art methods on field datasets collected in real-world agricultural environments featuring various planting types, terrain types, and robot motion profiles. Results demonstrate that our method can achieve accurate odometry estimation and mapping results consistently and robustly across diverse agricultural settings, whereas other methods are sensitive to abrupt robot motion and accumulated drift in unstructured environments. Further, the computational efficiency of our method is competitive compared with other methods. The source code of the developed method and the associated field dataset are publicly available at <span><span>https://github.com/UCR-Robotics/AG-LOAM</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":50627,"journal":{"name":"Computers and Electronics in Agriculture","volume":"232 ","pages":"Article 110023"},"PeriodicalIF":7.7,"publicationDate":"2025-02-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143352802","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0