
Latest publications in Computers and Electronics in Agriculture

Peanut yield prediction using remote sensing and machine learning approaches based on phenological characteristics
IF 7.7, CAS Tier 1 (Agricultural and Forestry Sciences), Q1 AGRICULTURE, MULTIDISCIPLINARY. Pub Date: 2025-02-20. DOI: 10.1016/j.compag.2025.110084
Xuehui Hou , Junyong Zhang , Xiubin Luo , Shiwei Zeng , Yan Lu , Qinggang Wei , Jia Liu , Wenjie Feng , Qiaoyu Li
Yield prediction of root-fruit crops before harvest is significant for implementing precise field management. However, unlike crops such as wheat and corn, predicting the yield of root-fruit crops non-destructively is challenging because their edible parts are underground. Remote sensing offers a potential solution, yet studies predicting peanut yield through remote sensing are rare, and most rely on specific vegetation indices such as the normalized difference vegetation index (NDVI), limiting model accuracy when other phenological parameters influencing peanut yield formation are not considered. In this study, 355 peanut yield samples were collected from two distinct cultivation patterns in 2022, 2023, and 2024, and two modeling methods, linear regression and random forest, were employed to develop prediction models. Considering the contributions of early-stage material accumulation and late-stage material transfer to peanut yield, the results showed that incorporating multiple phenological parameters into peanut yield prediction models enhances accuracy beyond models relying solely on early-growth-stage vegetation indices such as maximum NDVI. The random forest algorithm proved particularly effective for summer peanuts: R² reached a high of 0.8201, while the lowest MAE and RMSE values were 0.2878 and 0.4048 t/ha, respectively. These findings contribute to remote sensing-based yield prediction for root-fruit crops and further refine precision management practices in the cultivation of crops such as peanuts.
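The accuracy figures reported here (R², MAE, RMSE) follow from standard definitions; a minimal sketch of computing them from observed and predicted yields (the function name and data are illustrative, not from the paper):

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """Return R^2, MAE, and RMSE for a set of yield predictions (t/ha)."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    residuals = y_true - y_pred
    ss_res = np.sum(residuals ** 2)                     # residual sum of squares
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)      # total sum of squares
    r2 = 1.0 - ss_res / ss_tot
    mae = np.mean(np.abs(residuals))
    rmse = np.sqrt(np.mean(residuals ** 2))
    return r2, mae, rmse

# Perfect predictions give R^2 = 1 and zero MAE/RMSE.
r2, mae, rmse = regression_metrics([3.0, 4.0, 5.0], [3.0, 4.0, 5.0])
```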
Citations: 0
Lightweight lotus phenotype recognition based on MobileNetV2-SE with reliable pseudo-labels
IF 7.7, CAS Tier 1 (Agricultural and Forestry Sciences), Q1 AGRICULTURE, MULTIDISCIPLINARY. Pub Date: 2025-02-19. DOI: 10.1016/j.compag.2025.110080
Peisen Yuan , Zixin Chen , Qijiang Jin , Yingchun Xu , Huanliang Xu
Due to the wide variety of lotus species and the need for phenotypic categorization, traditional recognition is limited by manual observation and measurement of lotus phenotypes. This paper proposes a lotus species recognition technique based on MobileNetV2-SE with reliable pseudo-labels. An image dataset containing 94 different lotus species was constructed, and various data augmentation techniques were employed. In MobileNetV2-SE, the classical MobileNetV2 network is improved by embedding the SE (Squeeze-and-Excitation) module, and the pseudo-labelling technique of semi-supervised learning is adopted to improve classification performance by generating high-quality labelled data. Test results show that the model achieves an accuracy of 98.11% for lotus phenotype classification, with precision, recall, and F1 reaching 98.45%, 98.47%, and 98.40%, respectively, at 2.41×10⁶ parameters and 3.41×10⁸ FLOPs, significantly better than other networks. This paper provides an effective solution for the automatic identification of lotus varieties and a reference for other plant variety identification tasks.
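The SE module embedded here recalibrates channels in two steps: a "squeeze" (global average pooling) followed by an "excitation" gate (a small bottleneck network with a sigmoid). A numpy sketch of the mechanism, with illustrative weights `w1`/`w2` standing in for the learned FC layers:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def se_block(feature_map, w1, w2):
    """Squeeze-and-Excitation on a (C, H, W) feature map.

    w1: (C//r, C) reduction weights; w2: (C, C//r) expansion weights,
    where r is the reduction ratio of the bottleneck.
    """
    squeeze = feature_map.mean(axis=(1, 2))              # squeeze: global average pool -> (C,)
    gate = sigmoid(w2 @ np.maximum(w1 @ squeeze, 0.0))   # excitation: FC -> ReLU -> FC -> sigmoid
    return feature_map * gate[:, None, None]             # channel-wise rescaling

# With zero weights the gate is sigmoid(0) = 0.5, so every channel is halved.
out = se_block(np.ones((4, 2, 2)), np.zeros((2, 4)), np.zeros((4, 2)))
```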
Citations: 0
Global maize yield responses to essential climate variables: Assessment using atmospheric reanalysis and future climate scenarios
IF 7.7, CAS Tier 1 (Agricultural and Forestry Sciences), Q1 AGRICULTURE, MULTIDISCIPLINARY. Pub Date: 2025-02-19. DOI: 10.1016/j.compag.2025.110140
Zhi-Wei Zhao , Pei Leng , Xiao-Jing Han , Guo-Fei Shang
Maize is recognized as one of the four major crops in the world and plays an important role in global agri-food systems. Understanding how maize yield responds to climate change is therefore essential for addressing the challenges of exponential population growth and food security. In this study, maize yield data from 77 countries covering 1982 to 2016 and seven essential climate variables (ECVs) were collected to assess the effects of climate change on maize yield variation. Potential ECVs were first divided into three groups closely related to crop growth: energy availability (net surface solar radiation, air temperature, and land surface temperature), water availability (soil moisture and precipitation), and exchange efficiency (relative humidity and wind speed). Correlation analysis was conducted to determine the best ECVs for further investigation. A generalized additive model (GAM) was then used to express yield as a function of the ECVs in each country. Specifically, first-order differences of maize yield and the ECVs were used in data processing to minimize the influence of other factors such as crop management and cultivars. Finally, the performance of the proposed approach was compared with that of the widely used multiple regression method. The results indicate that: (1) a global average of 46% of maize yield variability can be explained by ECV variability, though significant discrepancies exist among countries; (2) over 73% of countries are dominated by more than two groups of ECVs; and (3) GAM outperforms the traditional multiple regression method in more than 80% of the investigated countries. This study offers a fresh perspective for investigating maize yield responses to climate change.
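First-order differencing, as used in the data processing above, strips slowly varying trends (cultivar improvement, management gains) so year-to-year yield anomalies can be related to ECV anomalies. A sketch with synthetic data (the series values are illustrative):

```python
import numpy as np

def first_difference(series):
    """Year-over-year change, which removes slowly varying trends
    (e.g. cultivar improvement) from an annual series."""
    series = np.asarray(series, dtype=float)
    return series[1:] - series[:-1]

def pearson_r(x, y):
    """Pearson correlation between two equal-length series."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    y = np.asarray(y, dtype=float) - np.mean(y)
    return float(x @ y / np.sqrt((x @ x) * (y @ y)))

# A yield series = linear trend + climate signal; differencing exposes the signal,
# since the trend contributes only a constant to each year-over-year change.
climate = np.array([0.0, 1.0, -1.0, 2.0, 0.5])
trend = 0.1 * np.arange(5)
yield_t = trend + climate
r = pearson_r(first_difference(yield_t), first_difference(climate))
```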
Citations: 0
Thresholding and continuous wavelet transform (CWT) analysis of Ground Penetrating Radar (GPR) data for estimation of potato biomass
IF 7.7, CAS Tier 1 (Agricultural and Forestry Sciences), Q1 AGRICULTURE, MULTIDISCIPLINARY. Pub Date: 2025-02-19. DOI: 10.1016/j.compag.2025.110114
Henry Ruiz-Guzman , Tyler Adams , Afolabi Agbona , Matthew Wolfe , Mark Everett , Jean-Francois Chamberland , Dirk B. Hays
Potato (Solanum tuberosum) is widely recognized as the leading vegetable crop in the United States, with millions of tons produced annually. Despite many advancements in cultivars, crop production still suffers from meager progress in the assessment of early maturity. One potential solution is Ground-Penetrating Radar (GPR), a near-surface geophysical tool recently applied in agriculture to assess root systems by detecting dielectric variations in sub-surface soil layers with electromagnetic waves emitted into the ground. This study assesses GPR's capability as a non-destructive proximal-sensing technique for quantifying potato tuber biomass by measuring changes in the reflected GPR signal. Two methods, thresholding analysis and continuous wavelet transform (CWT), were employed to extract features from GPR responses for predicting tuber biomass, using a dataset collected in a controlled sandbox system. Thresholding analysis of the interpolated amplitude values yielded significant results, predicting tuber biomass with r = 0.82 and R² = 0.64 using multiple linear regression. CWT was somewhat less successful, yet still significant, with r = 0.6 and R² = 0.32. These results indicate that GPR technology is suitable as a decision-support tool for potato breeders seeking to monitor tuber growth.
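A thresholding feature of the kind described summarizes how much of a reflected trace exceeds an amplitude cutoff; a minimal sketch (the cutoff value and trace are illustrative, not the paper's calibration):

```python
import numpy as np

def threshold_feature(trace, threshold):
    """Fraction of samples in a GPR amplitude trace whose magnitude exceeds
    a cutoff; larger buried targets tend to reflect more energy, pushing
    more samples above the threshold."""
    trace = np.abs(np.asarray(trace, dtype=float))
    return float(np.count_nonzero(trace > threshold)) / trace.size

# 3 of 4 samples exceed 0.4 in magnitude -> feature value 0.75
f = threshold_feature([0.1, 0.5, 0.9, -0.8], 0.4)
```

Features like this, computed per trace or per grid cell, can then feed a multiple linear regression against measured tuber biomass.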
Citations: 0
SDYOLO-Tracker: An efficient multi-fish hypoxic behavior recognition and tracking method
IF 7.7, CAS Tier 1 (Agricultural and Forestry Sciences), Q1 AGRICULTURE, MULTIDISCIPLINARY. Pub Date: 2025-02-19. DOI: 10.1016/j.compag.2025.110079
Jiaxuan Yu , Guangxu Wang , Xin Li , Zhuangzhuang Du , Wenkai Xu , Muhammad Akhter , Daoliang Li
Real-time monitoring and tracking of fish in aquaculture, with timely detection of hypoxic behavior, play a crucial role in advancing intelligent aquaculture. This study proposes an effective multi-fish behavior identification and tracking method (SDYOLO-Tracker) to address the lag in hypoxia behavior monitoring across different fish species, aiming to enable early detection of abnormal hypoxic behavior. To enhance small-object detection, the YOLOv8n model is improved with SPD-Conv and D-LKA modules, and the improved model (SDYOLOv8) is integrated with the ByteTrack multi-object tracking (MOT) algorithm. A complementary 3D motion prediction strategy effectively addresses issues such as sudden fish motion and disappearance, yielding accurate multi-fish identification and tracking. Experimental results show that MOTA, HOTA, IDR, and IDF1 improved by 3.32%, 3.53%, 6.68%, and 5.36%, respectively, compared with the original YOLOv8n model. Moreover, in the tested video, the number of ID switches was reduced by 18.75%, significantly improving multi-fish tracking accuracy without compromising model speed. In comparative experiments with other MOT algorithms, the method achieved the highest IDR and IDF1 at 45.03 frames per second (FPS), demonstrating the best tracking stability and the fastest processing speed. In addition, SDYOLO-Tracker supports qualitative and quantitative analyses of fish behavior under varying dissolved oxygen concentrations, reflecting changes in locomotion, average velocity, and maximum instantaneous velocity. These movement indexes allow the hypoxia thresholds of different fish species to be determined, providing a novel approach for studying hypoxia indicators in fish.
In conclusion, this research holds both theoretical and practical significance for the study of early hypoxia behavior in fish.
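Movement indexes such as average and maximum instantaneous velocity fall out directly of a tracked trajectory; a sketch (frame interval and coordinates are illustrative, not from the paper):

```python
import numpy as np

def velocity_stats(track, dt):
    """Average and maximum instantaneous speed from an (N, 2) positional
    track sampled every dt seconds (e.g. one tracked fish's centroid)."""
    track = np.asarray(track, dtype=float)
    # Per-frame displacement magnitudes divided by the frame interval.
    speeds = np.linalg.norm(np.diff(track, axis=0), axis=1) / dt
    return float(speeds.mean()), float(speeds.max())

# A fish moving 5 px in one frame then holding still, at dt = 1 s.
mean_v, max_v = velocity_stats([[0, 0], [3, 4], [3, 4]], dt=1.0)
```

A sustained drop in such indexes across many tracked fish is the kind of signal that can flag hypoxic behavior early.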
Citations: 0
JuDifformer: Multimodal fusion model with transformer and diffusion for jujube disease detection
IF 7.7, CAS Tier 1 (Agricultural and Forestry Sciences), Q1 AGRICULTURE, MULTIDISCIPLINARY. Pub Date: 2025-02-18. DOI: 10.1016/j.compag.2025.110008
Lexin Zhang , Yang Zhao , Chengcheng Zhou , Jiahe Zhang, Yuhan Yan, Tailai Chen, Chunli Lv
This paper proposes a deep learning model based on multimodal data fusion for detecting jujube tree diseases in desert environments. Because of the complex lighting and environmental conditions in desert areas, existing disease detection methods face significant limitations in feature extraction and accuracy. By fusing image and sensor data, this study designs a feature extraction mechanism that combines transformer and diffusion modules to precisely capture disease features. Experimental results demonstrate that the proposed model outperforms mainstream object detection models and state-of-the-art methods across multiple metrics, achieving an accuracy of 0.90, precision of 0.93, recall of 0.89, and mAP of 0.91, significantly higher than the comparison models. Compared with DETR, YOLOv10, EfficientDet, and others, the proposed method not only converges faster but also reaches superior final performance, effectively handling low-light and high-dust scenarios while maintaining high detection accuracy and robustness. These findings confirm the model's potential for improving disease monitoring efficiency in large-scale agricultural applications. Future work will further optimize the model's real-time performance and lightweight design to adapt to more real-world scenarios.
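Detection metrics such as the mAP quoted here rest on intersection-over-union (IoU) matching between predicted and ground-truth boxes; a minimal IoU sketch (the (x1, y1, x2, y2) box format is an assumption for illustration):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    # Clamp to zero so disjoint boxes yield no intersection area.
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# Two 2x2 boxes offset by (1, 1) overlap in a 1x1 square: IoU = 1 / (4 + 4 - 1)
v = iou((0, 0, 2, 2), (1, 1, 3, 3))
```

A prediction typically counts as a true positive when its IoU with a ground-truth box exceeds a threshold (commonly 0.5), from which precision, recall, and mAP follow.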
Citations: 0
Portable vibrational spectroscopy instruments and chemometrics for the classification of cotton fibers according the length (UHM)
IF 7.7, CAS Tier 1 (Agricultural and Forestry Sciences), Q1 AGRICULTURE, MULTIDISCIPLINARY. Pub Date: 2025-02-18. DOI: 10.1016/j.compag.2025.110100
Darlei Gutierrez Dantas Bernardo Oliveria , Maria Fernanda Pimentel , Everaldo Paulo de Medeiros , Simone da Silva Simões
In this study, novel methods using portable NIR and Raman spectroscopy instruments combined with multivariate classification were developed to classify cotton fibers according to their length. The Upper Half Mean (UHM) length is considered a quality parameter by the cotton fiber market and is traditionally determined with a high-volume instrument (HVI) system, which entails high installation costs and labor-intensive analyses. As UHM correlates with cellulose polymerization, it can be determined through vibrational spectroscopy techniques such as near-infrared (NIR) and Raman. These technologies offer low cost, ease of handling, and rapid data acquisition, making them suitable for field use. This study aimed to develop a method and demonstrate the feasibility of using portable NIR and Raman spectrometers coupled with pattern recognition (PR) methods for routine analysis of cotton fibers, serving as a proof of concept for practical application in the industry. A total of 142 samples of cotton fibers from cotton improvement experiments conducted by the Brazilian Agricultural Research Corporation (EMBRAPA) were employed. Two classification approaches based on cotton lint length and the related economic value were employed: the first differentiated short (SM) from long (LF) fibers, while the second further classified long fibers into internal classes (L, VL, and EL). Overall, methods using the portable Raman spectrometer achieved 100% accuracy regardless of the PR technique used, while methods based on the NIR spectrometer reached 100% accuracy depending on the PR method and variable selection employed. The use of GLSW reduced the model by one latent variable. In conclusion, portable NIR and Raman spectrometers combined with PR methods emerge as an innovative and viable technology for classifying cotton fibers by length.
Citations: 0
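As a toy illustration of the pattern-recognition step, the sketch below classifies synthetic one-dimensional "spectra" by nearest class mean. This is not the authors' chemometric pipeline (their actual PR methods and GLSW preprocessing are not reproduced here); the band positions, noise level, and reuse of the SM/LF labels are purely illustrative assumptions.

```python
import math
import random

def make_spectrum(band_shift, n_points=64, noise=0.02, rng=None):
    """Synthesize a toy 'spectrum': one Gaussian band whose centre
    shifts with the fibre-length class, plus additive noise."""
    rng = rng or random
    centre = n_points / 2 + band_shift
    return [math.exp(-((i - centre) ** 2) / 50.0) + rng.gauss(0.0, noise)
            for i in range(n_points)]

def class_means(samples):
    """Mean spectrum per class label."""
    sums, counts = {}, {}
    for label, spec in samples:
        counts[label] = counts.get(label, 0) + 1
        acc = sums.setdefault(label, [0.0] * len(spec))
        for i, v in enumerate(spec):
            acc[i] += v
    return {label: [v / counts[label] for v in acc]
            for label, acc in sums.items()}

def classify(spec, means):
    """Nearest-class-mean assignment (a minimal PR rule)."""
    def sqdist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(means, key=lambda label: sqdist(spec, means[label]))

rng = random.Random(0)
train = ([("SM", make_spectrum(-4, rng=rng)) for _ in range(20)]
         + [("LF", make_spectrum(+4, rng=rng)) for _ in range(20)])
means = class_means(train)
test = ([("SM", make_spectrum(-4, rng=rng)) for _ in range(10)]
        + [("LF", make_spectrum(+4, rng=rng)) for _ in range(10)])
accuracy = sum(classify(spec, means) == label for label, spec in test) / len(test)
```

With the two classes well separated in band position, the nearest-mean rule separates them cleanly; real NIR/Raman data would call for proper preprocessing and a validated classifier such as PLS-DA.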
A composite sliding mode controller with extended disturbance observer for 4WSS agricultural robots in unstructured farmlands
IF 7.7 1区 农林科学 Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date : 2025-02-18 DOI: 10.1016/j.compag.2025.110069
Yafei Zhang, Yue Shen, Hui Liu, Siwei He, Zohaib Khan
Autonomous agricultural robots have gained increasing attention in recent years, as they hold great potential for a wide range of applications in agriculture. However, accurately tracking a specified path is challenging for these robots due to wheel slip disturbances arising from unstructured farmlands characterized by uneven, undulating, and slippery terrain. In this paper, an extended-disturbance-observer-based sliding mode controller (EDO-SMC) is proposed for Four-Wheel Self-Steering (4WSS) agricultural robots subject to lateral and longitudinal wheel slip. First, the novel differential steering structure of the 4WSS robot is introduced. To take slipping effects into account, an improved kinematic model that explicitly integrates the unknown slip disturbances is developed. An extended disturbance observer is then introduced to estimate the slip disturbances and their rates of change, facilitating timely compensation for these time-varying disturbances. To enhance practical applicability in agriculture, an improved sliding surface is designed to mitigate the excessive control effort that observer-induced overcompensation would otherwise cause under initial conditions. Furthermore, a rigorous Lyapunov stability analysis of the proposed composite control strategy is conducted. Finally, the proposed composite controller is validated through co-simulations and field tests, meeting the control accuracy and robustness requirements of agricultural robot operations in unstructured farmlands.
Computers and Electronics in Agriculture, Volume 232, Article 110069.
Citations: 0
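The EDO-SMC idea above (estimate the slip disturbance and its rate with an extended observer, then cancel the estimate inside a sliding-mode law) can be sketched in one dimension. All gains, the sinusoidal disturbance, the boundary-layer smoothing, and the Euler integration below are illustrative assumptions, not values from the paper:

```python
import math

def simulate(T=20.0, dt=0.001):
    """1-D sketch of EDO-SMC: error dynamics x_dot = u + d(t), where d is
    an unknown time-varying 'slip' disturbance. An extended observer
    estimates x, d, and the rate of d; the SMC law cancels d-hat."""
    x = 1.0                           # initial tracking error
    xh, dh, rh = 0.0, 0.0, 0.0        # observer estimates of x, d, d-rate
    l1, l2, l3 = 60.0, 900.0, 4000.0  # observer gains (poles at -10, -10, -40)
    k, eta, phi = 2.0, 0.5, 0.05      # SMC gain, switching gain, boundary layer
    for step in range(int(T / dt)):
        t = step * dt
        d = 0.3 * math.sin(0.5 * t)                 # unknown slip disturbance
        s = x                                       # sliding surface (reference = 0)
        u = -dh - k * s - eta * math.tanh(s / phi)  # disturbance-compensated SMC law
        x += (u + d) * dt                           # plant (Euler step)
        e_o = x - xh                                # observer innovation
        xh += (u + dh + l1 * e_o) * dt              # extended disturbance observer:
        dh += (rh + l2 * e_o) * dt                  #   d-hat driven by its rate estimate
        rh += l3 * e_o * dt                         #   rate-of-d estimate
    return x, dh, 0.3 * math.sin(0.5 * T)

final_err, d_hat, d_true = simulate()
```

Because the observer also estimates the disturbance rate, the time-varying slip is tracked with small lag, and the sliding-mode term only has to absorb the residual estimation error, which is the motivation for the composite design described in the abstract.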
Dense object detection based canopy characteristics encoding for precise spraying in peach orchards
IF 7.7 1区 农林科学 Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date : 2025-02-18 DOI: 10.1016/j.compag.2025.110097
Shengli Xu , Siqi Zheng , Rahul Rai
Accurate and precise spraying in orchards is paramount for optimized agricultural practices, ensuring efficient pesticide utilization, minimized environmental impact, and enhanced crop yield by targeting specific areas with the right amount of treatment. The asymmetrical distribution of foliage and flowers in peach orchards poses a formidable challenge to achieving precise spray accuracy, impeding the uniform application of treatments and compromising the overall efficacy of pest and disease control measures. In response, this paper introduces a novel deep neural network that maps an RGB image and its corresponding depth data to a density map of peach flowers or foliage. The model consists of three components: (1) two ResNet-50 backbones that extract contextual features from the RGB image and depth features from the depth data at multiple scales and levels; (2) an optimized depth-enhanced module that effectively fuses the distinct features extracted from the two input streams; and (3) a two-stage decoder that aggregates the high-level cross-modal features to regress a coarse density map and then integrates it with the low-level cross-modal features for the final density-map prediction. To evaluate the model, we collected 493 frames (206,095 instances) of peach flowers and 475 frames (350,833 instances) of foliage from peach orchards using our sprayer prototype equipped with stereo cameras. The proposed method outperforms state-of-the-art models on our datasets, demonstrating its efficacy in encoding canopy characteristics as flower and foliage density maps for blossom and cover sprays. It attains significant computational efficiency, running at 20 FPS, and high accuracy, with a WMAPE of 12.11% for peach flowers and 13.37% for leaves.
Computers and Electronics in Agriculture, Volume 232, Article 110097.
Citations: 0
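The WMAPE figures quoted above are computed as the total absolute counting error divided by the total true count, which is why it is preferred over per-frame MAPE when many frames are nearly empty. A minimal sketch (the per-frame counts below are invented, not taken from the paper's dataset):

```python
def wmape(y_true, y_pred):
    """Weighted MAPE: total absolute error over total true count.
    Unlike per-frame MAPE, it does not blow up on near-empty frames."""
    num = sum(abs(t - p) for t, p in zip(y_true, y_pred))
    den = sum(abs(t) for t in y_true)
    return num / den

# invented per-frame flower counts: ground truth vs. density-map sums
truth = [420, 115, 0, 380]
pred = [400, 130, 5, 350]
err = wmape(truth, pred)  # (20 + 15 + 5 + 30) / 915
```

Note the third frame has a true count of zero; a plain per-frame MAPE would be undefined there, while WMAPE simply folds its 5-count error into the aggregate.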
ASGP-IDet: Temporal behaviour localisation of beef cattle in untrimmed surveillance videos
IF 7.7 1区 农林科学 Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date : 2025-02-18 DOI: 10.1016/j.compag.2025.110059
Yamin Han , Jie Wu , Qi Zhang , Xilong Feng , Yang Xu , Taoping Zhang , Bowen Wang , Hongming Zhang
An accurate analysis of beef cattle behaviour provides valuable information about important characteristics such as health status and fertility. Recent studies have utilised computer vision technologies to recognise beef cattle behaviour in trimmed videos containing a single behaviour. However, these methods ignore the fact that surveillance videos in real farm circumstances are usually untrimmed and contain multiple behaviour instances and background scenes, which limits their applicability. To address this issue, we propose a temporal behaviour localisation method using aggregate scalable-granularity perception instance detection (ASGP-IDet) to localise beef cattle behaviours in untrimmed videos. It provides semantic information such as “when does a specific behaviour start and end?” and “how long does a specific behaviour last?”. To this end, a feature pyramid with ASGP blocks was designed to aggregate information across different temporal granularities. A trident head was then employed to achieve precise behaviour boundary predictions, and a classification head was used to predict the behaviour category of each instance. Finally, a novel centre–start–end instant offset loss (CSEIO Loss) is proposed to correct offsets at the start, end, and temporal centre of behaviours. Experiments on the newly collected Cattle Temporal Action dataset demonstrated that ASGP-IDet outperformed other state-of-the-art approaches: it achieved mAP scores of 93.93%, 93.74%, 93.22%, 92.29%, and 87.46% at tIoU thresholds of 0.3 to 0.7 in steps of 0.1 (an average mAP of 92.13%), with an average processing time of 92.9 ms per video.
Computers and Electronics in Agriculture, Volume 232, Article 110059.
Citations: 0
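The tIoU thresholds at which those mAP scores are reported measure the temporal overlap between a predicted behaviour instance and its ground-truth segment. A minimal sketch with made-up timestamps (the segment values are hypothetical, not from the Cattle Temporal Action dataset):

```python
def t_iou(a, b):
    """Temporal IoU between two (start_s, end_s) behaviour instances."""
    inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
    union = (a[1] - a[0]) + (b[1] - b[0]) - inter
    return inter / union if union > 0 else 0.0

# a hypothetical predicted instance vs. its ground-truth segment
pred, gt = (12.0, 30.0), (10.0, 28.0)
iou = t_iou(pred, gt)  # overlap 16 s, union 20 s -> 0.8
thresholds = [0.3, 0.4, 0.5, 0.6, 0.7]
matched = [thr for thr in thresholds if iou >= thr]  # true positive at every threshold
```

A prediction counts as a true positive at a given threshold only when its tIoU with a ground-truth instance reaches that threshold; sweeping the threshold from 0.3 to 0.7 yields the five per-threshold mAP values, whose mean is the reported average mAP.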