Pub Date: 2024-11-26 | DOI: 10.1016/j.compag.2024.109679
Wanyuan Huang, Haolin Wang, Wei Dai, Ming Zhang, Dezhi Ren, Wei Wang
An innovative residual film recycling machine for the plough layer (RFRMPL) is proposed to address the difficulty of picking up residual film, in particular the tendency to miss fine film fragments. In this study, the soil throwing device is designed and optimized, since its soil throwing efficiency is essential to the residual film separation efficiency of the RFRMPL. With soil throwing efficiency as the evaluation index, a mechanical simulation model of the throwing device is built in Rocky using the Discrete Element Method (DEM), according to the structure and working principle of the device. The optimal combination of working parameters is obtained through theoretical calculations and single- and multi-factor simulation tests. The results show that the optimal rotation speed of the rotary tilling mechanism, speed of the soil elevating mechanism, and distance between the two mechanisms are 200 rpm, 320 rpm, and 130 mm, respectively. A field validation test based on these optimal parameters gives a soil throwing efficiency of 87.45 %. The error between the field result and the simulation result (90.42 %) is 3.4 %, which supports the validity of the simulation model. The model can provide a theoretical reference for the design and optimization of the RFRMPL.
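As a quick arithmetic check on the reported agreement, the 3.4 % figure corresponds to the deviation of the simulated efficiency from the field-measured one:

```python
# Simulated vs. field-measured soil throwing efficiency from the abstract.
simulated = 90.42  # % (DEM/Rocky simulation)
measured = 87.45   # % (field validation test)

# Expressing the deviation relative to the field measurement reproduces ~3.4 %.
relative_error = abs(simulated - measured) / measured * 100
print(f"{relative_error:.1f} %")
```

Relative to the simulated value instead, the error would be about 3.3 %, so the reported 3.4 % evidently takes the field measurement as the reference.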
Title: Study on the throwing device of residual film recycling machine for the plough layer. Computers and Electronics in Agriculture, Volume 227, Article 109679.
Pub Date: 2024-11-25 | DOI: 10.1016/j.compag.2024.109646
He Zhang, Yun Ge, Hao Xia, Chao Sun
Visual recognition is crucial for robotic harvesting of safflower filaments in the field. However, accurate detection and localization are challenging due to complex backgrounds, occlusion by leaves and branches, and variable safflower morphology. This study proposes a safflower picking point localization method for the full harvest period based on the SBP-YOLOv8s-seg network. The method improves accuracy by strengthening the detection and segmentation network and implementing phased localization. Specifically, the SBP-YOLOv8s-seg network, based on self-calibration, was constructed for precise segmentation of safflower filaments and fruit balls. Additionally, the different morphological features of safflower during the full harvest period were analyzed. The segmented masks underwent Principal Component Analysis (PCA), region of interest (ROI) extraction, and contour fitting to extract the principal eigenvectors that encode filament orientation. To handle picking positions made invisible by the occlusion of safflower necking, the picking points were determined from the positional relationship between filaments and fruit balls. Experimental results demonstrated that the segmentation performance of the SBP-YOLOv8s-seg network was superior to other networks, improving mean average precision (mAP) over YOLOv5s-seg, YOLOv6s-seg, YOLOv7s-seg, and YOLOv8s-seg by 5.1 %, 2.3 %, 4.1 %, and 1.3 %, respectively. The precision, recall, and mAP of the SBP-YOLOv8s-seg network in the segmentation task increased from the 87.9 %, 79 %, and 84.4 % of YOLOv8s-seg to 89.1 %, 79.7 %, and 85.7 %. The localization accuracies for blooming and decaying safflower were 93.0 % and 91.9 %, respectively, and the overall localization accuracy of picking points was 92.9 %. Field experiments showed a picking success rate of 90.7 %. This study provides a theoretical basis and data support for the visual localization of future safflower-picking robots.
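The PCA step described above — extracting a filament's principal axis from its segmented mask — can be sketched with plain NumPy; the mask pixel coordinates here are invented for illustration:

```python
import numpy as np

# Hypothetical (row, col) pixel coordinates of a segmented filament mask;
# in practice these would come from the SBP-YOLOv8s-seg instance mask.
pts = np.array([[10, 12], [11, 14], [12, 16], [13, 18], [14, 20]], dtype=float)

# Centre the points and eigendecompose the 2x2 covariance matrix.
centred = pts - pts.mean(axis=0)
eigvals, eigvecs = np.linalg.eigh(np.cov(centred.T))

# The eigenvector with the largest eigenvalue is the filament's main axis,
# along which a picking point can be related to the fruit-ball position.
main_axis = eigvecs[:, np.argmax(eigvals)]
print(main_axis)
```

For these collinear points the principal axis comes out proportional to (1, 2), i.e. the direction along which the mask pixels actually lie.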
Title: Safflower picking points localization method during the full harvest period based on SBP-YOLOv8s-seg network. Computers and Electronics in Agriculture, Volume 227, Article 109646.
Optimization of water inputs is possible through precision irrigation based on prescription maps. The crop water stress index (CWSI) is an indicator of spatial and dynamic changes in plant water status that can support irrigation management decision-making. The driving hypothesis was that in-season CWSI maps based on combined static and spatial-dynamic variables could be used to delineate irrigation management zones (MZs). A primary incentive was to minimize thermal-imaging campaigns and to complement CWSI maps between campaigns with cost-effective multi-spectral imaging campaigns producing normalized difference vegetation index (NDVI) maps. A spatial machine-learning model based on a random forest (RF) algorithm combined with spatial statistical methods was developed to predict the spatial and temporal variability in the CWSI of single vines in a vineyard. Model criteria and objectives included reducing sample data and input variables to a minimum without impacting prediction accuracy, considering only variables readily available to farmers, and accounting for spatial location and spatial processes.
The model was developed and tested on data from a ‘Cabernet Sauvignon’ vineyard in Israel over two years. Prediction of CWSI was driven by terrain parameters (slope, aspect, and topographic wetness index), soil apparent electrical conductivity (ECa), and NDVI.
Spatial models based on RF were found to support CWSI prediction. Adding a geospatial component significantly improved model performance and accuracy, particularly when raw data was represented as z-scores or when z-scores were used as weights. NDVI, followed by ECa, aspect, or slope, was the most important variable predicting CWSI in the non-spatial models. The stronger the variable importance of NDVI, the better the model performed. The weaker the effect of NDVI in predicting CWSI, the stronger the effect of terrain and soil variables. In the spatial models, based on z-transformed values or on weighted values, the most important variable in predicting CWSI was either NDVI or location.
The model, based on a limited and readily accessible number of variables, can serve as the basis for user-friendly decision support tools for precision irrigation. Additional research is needed to evaluate alternative prediction variables and to cover case studies in more geographical locations, so as to avoid overfitting to specific input data. Socio-economic and cost-benefit considerations should be integrated to examine whether precision irrigation management based on such models has the desired effects on water consumption and yield.
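The z-score representation credited with improving the spatial models amounts to standardizing each per-vine variable before it is fed to the RF (as raw inputs or as weights). A minimal sketch, with invented values for three illustrative variables:

```python
import numpy as np

# Illustrative per-vine inputs: NDVI, soil ECa (mS/m), slope (degrees).
# All values are made up for demonstration.
features = np.array([
    [0.61, 12.3, 2.1],
    [0.55, 14.8, 2.4],
    [0.70,  9.9, 1.8],
    [0.48, 16.2, 2.9],
])

# z-score each variable so differently scaled inputs become comparable
# before being used as model features or spatial weights.
z = (features - features.mean(axis=0)) / features.std(axis=0)
print(z.round(2))
```

After standardization every column has zero mean and unit variance, which is what lets a spatial weighting scheme treat NDVI, ECa, and terrain variables on an equal footing.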
Title: A spatial machine-learning model for predicting crop water stress index for precision irrigation of vineyards. Aviva Peeters, Yafit Cohen, Idan Bahat, Noa Ohana-Levi, Eitan Goldshtein, Yishai Netzer, Tomás R. Tenreiro, Victor Alchanatis, Alon Ben-Gal. Pub Date: 2024-11-24 | DOI: 10.1016/j.compag.2024.109578. Computers and Electronics in Agriculture, Volume 227, Article 109578.
Pub Date: 2024-11-23 | DOI: 10.1016/j.compag.2024.109631
Zhigang Ren, Han Zheng, Jian Chen, Tao Chen, Pengyang Xie, Yunzhe Xu, Jiaming Deng, Huanzhe Wang, Mingjiang Sun, Wenchi Jiao
Industrialized agriculture is the direction of future agricultural development, evolving toward scale, diversification, unmanned operation, and integration. The cooperative operation of UAVs, UGVs, and UAV-UGV teams is a hot topic in intelligent agricultural multi-machine research. However, most research projects to date have not systematically proposed solutions for collaborative UAV, UGV, and UAV-UGV applications in future industrialized agriculture. We therefore propose a development model for future industrialized agriculture, from which we derive the key technologies and applications of agricultural UAV, UGV, and UAV-UGV collaboration. We summarize and discuss the difficulties and innovative designs involved in applying this collaboration technology in future industrialized environments, and analyze its opportunities and challenges in the context of future industrialized agricultural production. Finally, we identify further technologies (multi-modal sensing, embodied intelligent control, edge computing, end-edge-cloud collaborative management and control, virtual reality, augmented reality, etc.) as future research directions for the application of UAV, UGV, and UAV-UGV collaboration in industrialized agriculture.
Title: Integrating UAV, UGV and UAV-UGV collaboration in future industrialized agriculture: Analysis, opportunities and challenges. Computers and Electronics in Agriculture, Volume 227, Article 109631.
Efficient health monitoring in Nile tilapia aquaculture is critical due to the substantial economic losses from diseases, underlining the necessity for innovative monitoring solutions. This study introduces an advanced, automated health monitoring system known as the “Automated System for Identifying Disease in Nile Tilapia (AS-ID-NT),” which incorporates a heterogeneous ensemble deep learning model using the Artificial Multiple Intelligence System (AMIS) as the decision fusion strategy (HE-DLM-AMIS). This system enhances the accuracy and efficiency of disease detection in Nile tilapia. The research utilized two specially curated video datasets, NT-1 and NT-2, each consisting of short videos lasting between 3 and 10 s, showcasing various behaviors of Nile tilapia in controlled environments. These datasets were critical for training and validating the ensemble model. Comparative analysis reveals that the HE-DLM-AMIS embedded in AS-ID-NT achieves superior performance, with an accuracy of 92.48 % in detecting health issues in tilapia. This system outperforms both single-model configurations, such as the 3D Convolutional Neural Network and Vision Transformer (ViT-large), which recorded accuracies of 84.64 % and 85.7 % respectively, and homogeneous ensemble models like ViT-large-Ho and ConvLSTM-Ho, which achieved accuracies of 88.49 % and 86.84 % respectively. AS-ID-NT provides a non-invasive, continuous, and automated solution for timely intervention, successfully identifying both healthy and unhealthy (infected and environmentally stressed) fish. This system not only demonstrates the potential of advanced AI and machine learning techniques in enhancing aquaculture management but also promotes sustainable practices and food security by maintaining healthier fish populations and supporting the economic viability of tilapia farms.
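The decision-fusion idea behind HE-DLM-AMIS — combining class probabilities from heterogeneous base models under searched fusion weights — can be illustrated with a fixed, hypothetical weight vector (AMIS itself optimizes these weights; all numbers below are invented):

```python
# Class probabilities (healthy, infected, stressed) from three hypothetical
# base models for one video clip.
p_models = [
    [0.70, 0.20, 0.10],  # e.g. a 3D CNN
    [0.55, 0.35, 0.10],  # e.g. ViT-large
    [0.60, 0.25, 0.15],  # e.g. ConvLSTM
]

# AMIS would search for these fusion weights; here a plausible vector is fixed.
weights = [0.40, 0.35, 0.25]

# Weighted soft-voting: fuse per-class probabilities, then take the argmax.
fused = [sum(w * p[c] for w, p in zip(weights, p_models)) for c in range(3)]
predicted = max(range(3), key=lambda c: fused[c])
print(predicted)
```

Because the weights sum to one, the fused vector remains a probability distribution; the ensemble can thus overrule a single model that is confidently wrong.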
Title: Application of AMIS-optimized vision transformer in identifying disease in Nile Tilapia. Chutchai Kaewta, Rapeepan Pitakaso, Surajet Khonjun, Thanatkij Srichok, Peerawat Luesak, Sarayut Gonwirat, Prem Enkvetchakul, Achara Jutagate, Tuanthong Jutagate. Pub Date: 2024-11-23 | DOI: 10.1016/j.compag.2024.109676. Computers and Electronics in Agriculture, Volume 227, Article 109676.
Pub Date: 2024-11-23 | DOI: 10.1016/j.compag.2024.109685
Xia Li, Birong You, Xuhui Wang, Zhipeng Zhao, Tianyu Qi, Jinyou Xu
The virtual model forms the foundation for building a digital twin system; however, methods for modelling the dynamically changing soil in subsoiling have not yet been studied. To provide technical guidance for constructing such a system, this study employs a line-structured light method for soil model construction. After conducting field and indoor trials, the extreme value method, grayscale centroid method, and Steger algorithm are used to extract the laser centreline. Results indicate that the extreme value method and grayscale centroid method require relatively little processing time — approximately 1.9 ms and 16 ms, respectively — with processing times being nearly identical across environments. In contrast, the Steger algorithm requires over 300 ms. Regarding memory usage, the three methods show similar consumption when processing images from different environments: the extreme value method stabilizes at 86.48 MB, the grayscale centroid method at 105.72 MB, and the Steger algorithm fluctuates around 110 MB. The grayscale centroid method exhibits the best stability, making it the most suitable for centreline extraction in the digital twin system. During 3D reconstruction, camera capture frequency correlates positively with reconstruction quality, while movement speed correlates negatively. Each image’s processing time is under 1 ms, showing that the line-laser 3D reconstruction method meets the real-time requirements of the digital twin system for subsoiling.
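The grayscale centroid method that proved most suitable here computes, for each image column, the intensity-weighted mean row index, yielding a sub-pixel estimate of the laser stripe centre. A minimal sketch with an invented intensity profile:

```python
# Pixel intensities down one column of a hypothetical laser-stripe image:
# the bright peak around index 4 is where the laser line crosses this column.
column = [0, 0, 10, 80, 255, 90, 12, 0]

# Grayscale centroid: intensity-weighted mean row index = sub-pixel centre.
centre = sum(i * v for i, v in enumerate(column)) / sum(column)
print(round(centre, 3))
```

For this profile the centre falls at row ≈ 4.031, slightly off the brightest pixel because the flanking intensities are asymmetric — exactly the sub-pixel refinement the method provides.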
Title: A study of soil modelling methods based on line-structured light — Preparing for the subsoiling digital twin. Computers and Electronics in Agriculture, Volume 227, Article 109685.
Pub Date: 2024-11-22 | DOI: 10.1016/j.compag.2024.109648
Mingshuang Bai, Tao Chen, Jia Yuan, Gang Zhou, Jiajia Wang, Zhenhong Jia
Monitoring of crop pests in the field can be achieved by using sticky traps that capture pests. However, due to the small size and high density of the captured pests, conventional object detection methods relying on bounding boxes struggle to accurately identify and count pests, as they are highly sensitive to positional deviations. Therefore, we propose a novel point-based framework for multi-species insect identification and counting, termed MS-P2P, which is free from the limitations of bounding boxes. Specifically, we employ the lightweight object detection network YOLOv7-tiny for feature extraction and incorporate a lightweight attention detection head (LAHead) for point coordinate regression and insect classification. The LAHead enhances the model’s sensitivity to subtle insect features in complex environments. Additionally, we utilize point proposal prediction and the Hungarian matching algorithm to achieve one-to-one matching of optimal prediction points to targets, which significantly simplifies post-processing. Finally, we introduce SmoothL1 Loss and Focal Loss to address matching instability and class imbalance in the point estimation strategy, respectively. Extensive experiments on the self-built NSC dataset and the publicly available YST dataset demonstrate the effectiveness of MS-P2P. In particular, on our self-built dataset of 9 insect species, the overall counting metrics reached an MAE of 18.9 and an RMSE of 28.8, and the combined localization and counting metric, nAP0.5, reached 86.4 %. Compared with other state-of-the-art algorithms, MS-P2P achieved the best overall results in both localization and counting metrics.
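The one-to-one matching of prediction points to targets can be illustrated by brute-forcing the minimum-cost assignment on a toy example (practical implementations use the Hungarian algorithm proper, e.g. SciPy's `linear_sum_assignment`; all coordinates below are invented):

```python
from itertools import permutations
import math

# Hypothetical predicted insect points and ground-truth points, as (x, y).
preds = [(10, 10), (52, 48), (30, 75)]
gts = [(50, 50), (31, 74), (11, 12)]

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

# Brute-force the one-to-one assignment minimising total point distance:
# assignment[i] = j means prediction i is matched to ground truth j.
assignment = min(
    permutations(range(len(gts))),
    key=lambda p: sum(dist(preds[i], gts[j]) for i, j in enumerate(p)),
)
print(assignment)
```

Here the optimal assignment is (2, 0, 1): each prediction pairs with its nearest ground truth, and no box-overlap heuristics or NMS thresholds are needed — which is precisely the appeal of the point framework for dense, tiny targets.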
Title: A point-based method for identification and counting of tiny object insects in cotton fields. Computers and Electronics in Agriculture, Volume 227, Article 109648.
Pub Date : 2024-11-22DOI: 10.1016/j.compag.2024.109680
Zhiming Zhao , Yining Lyu , Jinqing Lyu , Xiaoxin Zhu , Jicheng Li , Deqiu Yang
The existing seed-metering device suffers from a low qualified index and a high multiple index in mechanized minituber seeding. In this work, a seed-metering device suitable for precision seeding of minitubers was designed to solve these problems and improve seeding efficiency. By analyzing the motion mechanism of the minituber on the seeding plate, it was determined that the diameter of the suction seeding hole, the rotation speed and tilt angle of the seeding plate, and the negative pressure value are the main factors affecting the seeding performance of the seed-metering device. The steady-state airflow in the negative pressure chamber was analyzed by computational fluid dynamics. When the diameter of the suction seeding hole is 8 mm and the rotation speed of the seeding plate is 40 r/min, the highest negative pressure value is reached at the suction seeding hole. The CFD-DEM coupling simulation method was used to investigate the stress on the minituber and the adsorption effect of the suction seeding hole under different tilt angles of the seeding plate and negative pressures. The coupling simulation results were verified and optimized by bench tests, and the movement of the minituber on the seeding plate was observed with a high-speed camera. Design Expert was used to optimize the test results, and it was found that when the tilt angle is 20° and the negative pressure is −6000 Pa, the seed-metering device achieves a multiple index below 3.5 %, a miss-seeding index of no more than 1.5 %, a qualified index above 94.5 %, and a coefficient of variation under 11 %. This work puts forward new ideas for improving the seeding quality of high-speed precision seed-metering devices and provides a new design approach for the development of seeding devices.
{"title":"The influence of a seeding plate of the air-suction minituber precision seed-metering device on seeding quality","authors":"Zhiming Zhao , Yining Lyu , Jinqing Lyu , Xiaoxin Zhu , Jicheng Li , Deqiu Yang","doi":"10.1016/j.compag.2024.109680","DOIUrl":"10.1016/j.compag.2024.109680","url":null,"abstract":"<div><div>The existing seed-metering device suffers from a low qualified index and a high multiple index in mechanized minituber seeding. In this work, a seed-metering device suitable for precision seeding of minitubers was designed to solve these problems and improve seeding efficiency. By analyzing the motion mechanism of the minituber on the seeding plate, it was determined that the diameter of the suction seeding hole, the rotation speed and tilt angle of the seeding plate, and the negative pressure value are the main factors affecting the seeding performance of the seed-metering device. The steady-state airflow in the negative pressure chamber was analyzed by computational fluid dynamics. When the diameter of the suction seeding hole is 8 mm and the rotation speed of the seeding plate is 40 r/min, the highest negative pressure value is reached at the suction seeding hole. The CFD-DEM coupling simulation method was used to investigate the stress on the minituber and the adsorption effect of the suction seeding hole under different tilt angles of the seeding plate and negative pressures. The coupling simulation results were verified and optimized by bench tests, and the movement of the minituber on the seeding plate was observed with a high-speed camera. Design Expert was used to optimize the test results, and it was found that when the tilt angle is 20° and the negative pressure is −6000 Pa, the seed-metering device achieves a multiple index below 3.5 %, a miss-seeding index of no more than 1.5 %, a qualified index above 94.5 %, and a coefficient of variation under 11 %. 
This work puts forward new ideas for improving the seeding quality of high-speed precision seed-metering devices and provides a new design approach for the development of seeding devices.</div></div>","PeriodicalId":50627,"journal":{"name":"Computers and Electronics in Agriculture","volume":"227 ","pages":"Article 109680"},"PeriodicalIF":7.7,"publicationDate":"2024-11-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142699880","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
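As a rough plausibility check on the reported operating point, the static holding force of a suction seeding hole is F = ΔP·A, with A the hole's cross-sectional area. A minimal sketch (the 5 g comparison mass for a minituber is our own assumption, not a figure from the paper):

```python
import math

def suction_force(hole_diameter_m: float, pressure_drop_pa: float) -> float:
    """Static holding force of a suction seeding hole: F = dP * A,
    where A = pi * d^2 / 4 is the cross-sectional area of the hole."""
    area = math.pi * hole_diameter_m ** 2 / 4.0
    return pressure_drop_pa * area

force = suction_force(0.008, 6000.0)   # 8 mm hole at |-6000| Pa -> ~0.30 N
weight = 0.005 * 9.81                  # assumed 5 g minituber -> ~0.049 N
```

Under this assumption the suction force comfortably exceeds the seed weight, consistent with the device holding a minituber against the tilted seeding plate until release.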
Pub Date : 2024-11-22DOI: 10.1016/j.compag.2024.109625
Liangliang Yang, Tomoki Noguchi, Yohei Hoshino
Harvesting heavy fruits such as pumpkins is hard work for farmers, a burden aggravated by the aging of the farming population. To solve this problem, this study aims to develop an automatic pick-and-place robot system that alleviates labor demands in pumpkin harvesting. We proposed a system capable of detecting pumpkins in the field and obtaining their three-dimensional (3D) coordinates using artificial intelligence (AI) object detection methods and an RGB-D camera, respectively. The harvesting system incorporates a crawler-type vehicle as the base platform, while a collaborative robot arm is employed to lift the pumpkin fruits. A newly designed robot hand, mounted at the end of the robot arm, is responsible for grasping the pumpkins. In this paper, we utilized various versions of YOLO (from version 2 to 8) for pumpkin fruit detection and compared the results obtained from these different versions. The RGB-D camera, mounted at the base of the robot arm, captures the position of the pumpkin fruits in camera coordinates. We proposed a calibration method that simply transforms this position into the robot-arm coordinate frame. In addition, we completed all the software and hardware of the pumpkin pick-and-place robot system. Field experiments were conducted in an outdoor pumpkin field. The experiments demonstrate a fruit detection accuracy exceeding 99% and a picking success rate surpassing 90%. However, fruits that were surrounded by excessive vines could not be successfully grasped.
{"title":"Development of a pumpkin fruits pick-and-place robot using an RGB-D camera and a YOLO based object detection AI model","authors":"Liangliang Yang, Tomoki Noguchi, Yohei Hoshino","doi":"10.1016/j.compag.2024.109625","DOIUrl":"10.1016/j.compag.2024.109625","url":null,"abstract":"<div><div>Harvesting heavy fruits such as pumpkins is hard work for farmers, a burden aggravated by the aging of the farming population. To solve this problem, this study aims to develop an automatic pick-and-place robot system that alleviates labor demands in pumpkin harvesting. We proposed a system capable of detecting pumpkins in the field and obtaining their three-dimensional (3D) coordinates using artificial intelligence (AI) object detection methods and an RGB-D camera, respectively. The harvesting system incorporates a crawler-type vehicle as the base platform, while a collaborative robot arm is employed to lift the pumpkin fruits. A newly designed robot hand, mounted at the end of the robot arm, is responsible for grasping the pumpkins. In this paper, we utilized various versions of YOLO (from version 2 to 8) for pumpkin fruit detection and compared the results obtained from these different versions. The RGB-D camera, mounted at the base of the robot arm, captures the position of the pumpkin fruits in camera coordinates. We proposed a calibration method that simply transforms this position into the robot-arm coordinate frame. In addition, we completed all the software and hardware of the pumpkin pick-and-place robot system. Field experiments were conducted in an outdoor pumpkin field. The experiments demonstrate a fruit detection accuracy exceeding 99% and a picking success rate surpassing 90%. 
However, fruits that were surrounded by excessive vines could not be successfully grasped.</div></div>","PeriodicalId":50627,"journal":{"name":"Computers and Electronics in Agriculture","volume":"227 ","pages":"Article 109625"},"PeriodicalIF":7.7,"publicationDate":"2024-11-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142699950","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
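The camera-to-arm calibration described above amounts to applying a rigid homogeneous transform to each detected 3D point. A minimal sketch under assumed geometry (the yaw angle, offsets, and pumpkin coordinates below are invented for illustration, not the paper's calibration values):

```python
import math

def make_transform(yaw_rad, tx, ty, tz):
    """4x4 homogeneous transform: rotation about Z (yaw) plus a translation,
    mapping camera-frame points into the robot-arm base frame."""
    c, s = math.cos(yaw_rad), math.sin(yaw_rad)
    return [[c,  -s,  0.0, tx],
            [s,   c,  0.0, ty],
            [0.0, 0.0, 1.0, tz],
            [0.0, 0.0, 0.0, 1.0]]

def to_arm_frame(T, point):
    """Apply T to a 3D point given in camera coordinates."""
    x, y, z = point
    v = (x, y, z, 1.0)
    return tuple(sum(T[r][k] * v[k] for k in range(4)) for r in range(3))

# assumed calibration: camera axes aligned with the arm, offset 0.10 m in Z
T = make_transform(0.0, 0.0, 0.0, 0.10)
p_arm = to_arm_frame(T, (0.25, -0.05, 0.40))  # a detected pumpkin centre
```

In practice the rotation part would come from a full 3D calibration rather than a single yaw angle, but the mechanics of the coordinate change are the same.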
Pub Date : 2024-11-22DOI: 10.1016/j.compag.2024.109678
Yang Liu , Fuqin Yang , Jibo Yue , Wanxue Zhu , Yiguang Fan , Jiejie Fan , Yanpeng Ma , Mingbo Bian , Riqiang Chen , Guijun Yang , Haikuan Feng
Current approaches to estimating crop aboveground biomass (AGB) across multiple growth stages mainly use optical remote-sensing techniques. However, this technology is limited by saturation of the canopy spectrum. To address this problem, this study used digital images obtained by an unmanned aerial vehicle to extract the spectral and structural indicators of the crop canopy in three key potato growth stages. We took the color parameters (CP) of assorted color space transformations as the canopy spectral information, and crop height (CH), crop coverage (CC), and crop canopy volume (CCV) as the canopy structural indicators. Based on the complementary advantages of CP and CCV, we proposed a new metric: the color-parameter-weighted crop-canopy volume (CCVCP). Results showed that CH, CCV, and CCVCP correlated more strongly with potato AGB during the multi-growth stages than did CP and CC. The hue-weighted crop-canopy volume (CCVH) correlated most strongly with potato AGB among all structural indicators. CH estimated potato AGB more accurately than CP and CC. Combining indicators (CP + CC/CH, CP + CC + CH) improved the accuracy of potato AGB estimation over the multi-growth stages. Except for the CP + CC + CH model, the other AGB estimation models were less accurate than the models based on CCV and CCVH. The AGB estimation accuracy of the univariate CCVH model (R2 = 0.65, RMSE = 281 kg/hm2, and NRMSE = 23.61 %) was comparable to that of the complex models [CP + CC + CH using random forest (RF) or multiple stepwise regression (MSR)]. Compared with CP + CC + CH using RF and MSR, the RMSE of CCVH decreased by 0.35 % and increased by 4.24 %, respectively. Compared with CP, CP + CC, CP + CH, and CCV, the use of CCVH to estimate AGB decreased the RMSE by 10.24 %, 7.42 %, 6.36 %, and 6.33 %, respectively. Meanwhile, the performance of CCVH was verified at different stages and among varieties. 
Thus, this indicator can be used for monitoring potato growth to help guide field production management.
{"title":"Crop canopy volume weighted by color parameters from UAV-based RGB imagery to estimate above-ground biomass of potatoes","authors":"Yang Liu , Fuqin Yang , Jibo Yue , Wanxue Zhu , Yiguang Fan , Jiejie Fan , Yanpeng Ma , Mingbo Bian , Riqiang Chen , Guijun Yang , Haikuan Feng","doi":"10.1016/j.compag.2024.109678","DOIUrl":"10.1016/j.compag.2024.109678","url":null,"abstract":"<div><div>Current approaches to estimating crop aboveground biomass (AGB) across multiple growth stages mainly use optical remote-sensing techniques. However, this technology is limited by saturation of the canopy spectrum. To address this problem, this study used digital images obtained by an unmanned aerial vehicle to extract the spectral and structural indicators of the crop canopy in three key potato growth stages. We took the color parameters (CP) of assorted color space transformations as the canopy spectral information, and crop height (CH), crop coverage (CC), and crop canopy volume (CCV) as the canopy structural indicators. Based on the complementary advantages of CP and CCV, we proposed a new metric: the color-parameter-weighted crop-canopy volume (CCV<sub>CP</sub>). Results showed that CH, CCV, and CCV<sub>CP</sub> correlated more strongly with potato AGB during the multi-growth stages than did CP and CC. The hue-weighted crop-canopy volume (CCV<sub>H</sub>) correlated most strongly with potato AGB among all structural indicators. CH estimated potato AGB more accurately than CP and CC. Combining indicators (CP + CC/CH, CP + CC + CH) improved the accuracy of potato AGB estimation over the multi-growth stages. Except for the CP + CC + CH model, the other AGB estimation models were less accurate than the models based on CCV and CCV<sub>H</sub>. 
The AGB estimation accuracy of the univariate CCV<sub>H</sub> model (R<sup>2</sup> = 0.65, RMSE = 281 kg/hm<sup>2</sup>, and NRMSE = 23.61 %) was comparable to that of the complex models [CP + CC + CH using random forest (RF) or multiple stepwise regression (MSR)]. Compared with CP + CC + CH using RF and MSR, the RMSE of CCV<sub>H</sub> decreased by 0.35 % and increased by 4.24 %, respectively. Compared with CP, CP + CC, CP + CH, and CCV, the use of CCV<sub>H</sub> to estimate AGB decreased the RMSE by 10.24 %, 7.42 %, 6.36 %, and 6.33 %, respectively. Meanwhile, the performance of CCV<sub>H</sub> was verified at different stages and among varieties. Thus, this indicator can be used for monitoring potato growth to help guide field production management.</div></div>","PeriodicalId":50627,"journal":{"name":"Computers and Electronics in Agriculture","volume":"227 ","pages":"Article 109678"},"PeriodicalIF":7.7,"publicationDate":"2024-11-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142699903","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
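A color-parameter-weighted crop-canopy volume can be sketched as a per-cell weighted sum over the rasterised canopy. The toy below is our own simplified reading of the idea, with invented cell heights, hue weights, and cell area; the paper's exact weighting scheme may differ:

```python
def weighted_canopy_volume(cell_heights, color_weights, cell_area_m2):
    """Color-parameter-weighted crop-canopy volume (in the spirit of CCV_CP):
    each raster cell contributes cell_area * canopy_height * color_weight.
    With all weights equal to 1 this reduces to the plain CCV."""
    return sum(cell_area_m2 * h * w
               for h, w in zip(cell_heights, color_weights))

heights = [0.30, 0.25, 0.00]   # canopy height per cell (m); 0 = bare soil
weights = [0.80, 1.00, 0.50]   # assumed normalised hue weight per cell
ccv_h = weighted_canopy_volume(heights, weights, 0.01)  # 0.01 m^2 cells
```

The weighting lets a greener (or otherwise spectrally healthier) cell count for more biomass than a pale one of the same height, which is how the metric combines structural and spectral information in one number.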