
Computers and Electronics in Agriculture: Latest Publications

Morphological characteristic extraction of unopened cotton bolls using image analysis and geometric modeling methods
IF 7.7 | Tier 1 (Agricultural and Forestry Sciences) | Q1 AGRICULTURE, MULTIDISCIPLINARY | Pub Date: 2025-02-17 | DOI: 10.1016/j.compag.2025.110094
Cheng Cao , Pei Yang , Chaoyuan Tang, Fubin Liang, Jingshan Tian, Yali Zhang, Wangfeng Zhang
Extracting cotton boll phenotypic parameters from imaging data is a prerequisite for intelligently characterizing boll growth and development. However, current methods relying on manual measurements are inefficient and often inaccurate. To address this, we developed a cotton boll phenotypic parameter extraction program (CPVS), a tool designed to estimate the morphological characteristics of unopened cotton bolls from images. CPVS integrates semi-automatic data extraction with advanced algorithms to calculate length, width, volume, and surface area. Length and width estimation algorithms were developed using a custom “Fixed” image set, which links pixel dimensions to actual measurements. Volume and surface area models were based on shape classification using a custom “Random” image set, trait correlations, and measured data. Testing showed strong performance, with R2 values of 0.880 and 0.769 and root mean square error (RMSE) values of 0.173 and 0.188 for length and width, respectively. The volume model achieved an R2 of 0.91 and an RMSE of 1.76, while surface area models had R2 values of 0.76 and RMSEs of 2.37 and 2.41. These results indicate that CPVS is a robust tool, providing theoretical and practical support for efficient, accurate characterization of cotton boll morphology.
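The abstract reports the accuracy of the geometric models but not their closed forms. As a rough illustration of the idea only, the sketch below converts pixel dimensions to millimetres with an assumed calibration factor (`mm_per_px` is hypothetical, not the CPVS calibration procedure) and idealises an unopened boll as a prolate spheroid to derive volume and surface area from length and width; the paper's shape-classified models will differ.

```python
import math

def boll_size_mm(length_px, width_px, mm_per_px):
    """Convert pixel dimensions to millimetres with a calibration scale factor.
    mm_per_px is assumed to come from a reference object in the image."""
    return length_px * mm_per_px, width_px * mm_per_px

def prolate_spheroid_volume_area(length_mm, width_mm):
    """Volume (mm^3) and surface area (mm^2) of a boll idealised as a prolate
    spheroid with semi-axes a = length/2 >= b = width/2."""
    a, b = length_mm / 2.0, width_mm / 2.0
    volume = 4.0 / 3.0 * math.pi * a * b ** 2
    e = math.sqrt(max(1.0 - (b / a) ** 2, 0.0))   # eccentricity of the spheroid
    if e < 1e-9:                                  # nearly spherical boll
        return volume, 4.0 * math.pi * b ** 2
    area = 2.0 * math.pi * b ** 2 * (1.0 + (a / (b * e)) * math.asin(e))
    return volume, area

# Hypothetical example: a boll measuring 420 x 300 px at 0.1 mm per pixel.
length_mm, width_mm = boll_size_mm(420, 300, mm_per_px=0.1)
print(prolate_spheroid_volume_area(length_mm, width_mm))
```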
{"title":"Morphological characteristic extraction of unopened cotton bolls using image analysis and geometric modeling methods","authors":"Cheng Cao ,&nbsp;Pei Yang ,&nbsp;Chaoyuan Tang,&nbsp;Fubin Liang,&nbsp;Jingshan Tian,&nbsp;Yali Zhang,&nbsp;Wangfeng Zhang","doi":"10.1016/j.compag.2025.110094","DOIUrl":"10.1016/j.compag.2025.110094","url":null,"abstract":"<div><div>Extracting cotton boll phenotypic parameters from imaging data is a prerequisite for intelligently characterizing boll growth and development. However, current methods relying on manual measurements are inefficient and often inaccurate. To address this, we developed a cotton boll phenotypic parameter extraction program (CPVS), a tool designed to estimate the morphological characteristics of unopened cotton bolls from images. CPVS integrates semi-automatic data extraction with advanced algorithms to calculate length, width, volume, and surface area. Length and width estimation algorithms were developed using a custom “Fixed” image set, which links pixel dimensions to actual measurements. Volume and surface area models were based on shape classification using a custom “Random” image set, trait correlations, and measured data. Testing showed strong performance, with R<sup>2</sup> values of 0.880 and 0.769 and root mean square error (RMSE) values of 0.173 and 0.188 for length and width, respectively. The volume model achieved an R<sup>2</sup> of 0.91 and an RMSE of 1.76, while surface area models had R<sup>2</sup> values of 0.76 and RMSEs of 2.37 and 2.41. These results indicate that CPVS is a robust tool, providing theoretical and practical support for efficient, accurate characterization of cotton boll morphology.</div></div>","PeriodicalId":50627,"journal":{"name":"Computers and Electronics in Agriculture","volume":"232 ","pages":"Article 110094"},"PeriodicalIF":7.7,"publicationDate":"2025-02-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143420613","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Individual tree branch and leaf metrics extraction in dense plantation scenario through the fusion of drone and terrestrial LiDAR
IF 7.7 | Tier 1 (Agricultural and Forestry Sciences) | Q1 AGRICULTURE, MULTIDISCIPLINARY | Pub Date: 2025-02-17 | DOI: 10.1016/j.compag.2025.110070
Yupan Zhang , Yiliu Tan , Xin Xu , Hangkai You , Yuichi Onda , Takashi Gomi
In forest ecosystems, branch and leaf structures play crucial roles in hydrological and vegetative physiology. However, accurately characterizing branch and leaf structures in dense forest scenarios is challenging, limiting our understanding of how these structures affect processes such as interception loss, stemflow, and throughfall. Both terrestrial and drone LiDAR technologies have demonstrated impressive performance in providing detailed insights into forest structures from different perspectives. By leveraging the fusion of point clouds, we classified the leaves and branches of three Japanese cypress trees. The leaf-occupied voxel space was calculated by voxelization, visible branches were fitted with line segments, and the angles and lengths of invisible branches within the canopy were estimated using the tree-form coefficient. Quantitative analysis showed that the leaf-occupied voxel space averaged 0.89 ± 0.42 m³/m² at the single-tree and plot scales. Then, 82, 53, and 58 visible branches were fitted and 23, 14, and 12 invisible branches were estimated for the three trees, respectively. Destructive harvesting was conducted on a single tree to assess the accuracy of branch identification and parameter extraction at the individual branch level. The results yielded an F1-score of 0.76 for branch identification and nRMSEs of 32.14 % for branch length and 13.68 % for branch angle. Our method solves the problem of extracting the branch and leaf structures of single trees in dense forest scenarios with heavy occlusion. The reconstructed tree model can be further applied to accurately estimate tree attributes and to support canopy hydrology simulations.
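The voxelization step can be illustrated with a minimal sketch: leaf-class points are snapped to a regular grid and the occupied-voxel volume is normalised by projected crown area to give the m³/m² quantity reported above. The voxel size, the normalisation area, and the function names are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def leaf_occupied_voxel_volume(leaf_points, voxel_size=0.1):
    """Total volume (m^3) of voxels containing at least one leaf-class point.
    leaf_points: (N, 3) array of x, y, z coordinates in metres.
    The 0.1 m voxel edge is an assumed value, not taken from the paper."""
    idx = np.floor(leaf_points / voxel_size).astype(np.int64)
    n_occupied = len(np.unique(idx, axis=0))   # count distinct occupied cells
    return n_occupied * voxel_size ** 3

def voxel_space_per_unit_area(leaf_points, projected_crown_area_m2, voxel_size=0.1):
    """Leaf-occupied voxel space normalised by projected crown area (m^3/m^2)."""
    return leaf_occupied_voxel_volume(leaf_points, voxel_size) / projected_crown_area_m2
```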
{"title":"Individual tree branch and leaf metrics extraction in dense plantation scenario through the fusion of drone and terrestrial LiDAR","authors":"Yupan Zhang ,&nbsp;Yiliu Tan ,&nbsp;Xin Xu ,&nbsp;Hangkai You ,&nbsp;Yuichi Onda ,&nbsp;Takashi Gomi","doi":"10.1016/j.compag.2025.110070","DOIUrl":"10.1016/j.compag.2025.110070","url":null,"abstract":"<div><div>In forest ecosystems, branch and leaf structures play crucial roles in hydrological and vegetative physiology. However, accurately characterizing branch and leaf structures in dense forest scenarios is challenging, limiting our understanding of how branch and leaf structures affect processes such as interception loss, stemflow, and throughfall. Both terrestrial and drone LiDAR technologies have demonstrated impressive performances in providing detailed insights into forest structures from different perspectives. By leveraging the fusion of point clouds, we classified the leaf and branch of three Japanese cypress trees. Leaf points occupied voxel space was calculated using voxelization, visible branches were fitted using line segments, and the angles and lengths of the invisible branches within the canopy were estimated using the tree-form coefficient. The quantitative analysis results showed that leaf points occupied voxel space at the single-tree and plot scales average were 0.89 ± 0.42 m<sup>3</sup>/m<sup>2</sup>. Then, 82, 53, and 58 visible branches were fitted and 23, 14, and 12 invisible branches were estimated for the three trees, respectively. Destructive harvesting was conducted on a single tree to assess the accuracy of branch identification and parameter extraction at the individual branch level. The results yielded an <span><math><mrow><mi>F</mi><mn>1</mn><mo>-</mo><mi>s</mi><mi>c</mi><mi>o</mi><mi>r</mi><mi>e</mi></mrow></math></span> of 0.76 for branch identification and nRMSEs of 32.14 % for branch length and 13.68 % for branch angle, respectively. Our method solves the problem of extracting the branch and leaf structures of single trees in dense forest scenarios with heavy occlusion. The reconstructed tree model can be further applied to estimate tree attributes and canopy hydrology simulations accurately.</div></div>","PeriodicalId":50627,"journal":{"name":"Computers and Electronics in Agriculture","volume":"232 ","pages":"Article 110070"},"PeriodicalIF":7.7,"publicationDate":"2025-02-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143420554","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Estimation of potato above-ground biomass based on the VGC-AGB model and deep learning
IF 7.7 | Tier 1 (Agricultural and Forestry Sciences) | Q1 AGRICULTURE, MULTIDISCIPLINARY | Pub Date: 2025-02-17 | DOI: 10.1016/j.compag.2025.110122
Haikuan Feng , Yiguang Fan , Jibo Yue , Mingbo Bian , Yang Liu , Riqiang Chen , Yanpeng Ma , Jiejie Fan , Guijun Yang , Chunjiang Zhao
Accurate estimation of above-ground biomass (AGB) in potato plants is essential for effective monitoring of potato growth and reliable yield prediction. Remote sensing technology has emerged as a promising method for monitoring crop growth parameters due to its high throughput, non-destructive nature, and rapid acquisition of information. However, the sensitivity of remote sensing vegetation indices to crop AGB parameters declines at moderate to high crop coverage, known as the “saturation phenomenon,” which limits accurate AGB monitoring during the mid-to-late growth stages. This challenge also hinders the development of a multi-growth-cycle AGB estimation model. In this study, a novel VGC-AGB model integrated with hyperspectral remote sensing was utilized for multi-stage estimation of potato AGB. This study consists of three main components: (1) addressing the “saturation problem” encountered when using spectral indices from remote sensing to monitor crop biomass across multiple growth stages. The VGC-AGB model calculates the leaf biomass by multiplying leaf dry mass content (Cm) and leaf area index (LAI) and vertical organ biomass using the multiplication of crop density (Cd), crop height (Ch) and the crop stem and reproductive organs’ average dry mass content (Csm); (2) estimating the VGC-AGB model parameters Cm and LAI by integrating hyperspectral remote sensing data with a deep learning model; (3) comparing the performance of three methods—(i) hyperspectral + Ch, (ii) ground-measured parameters + VGC-AGB model, and (iii) hyperspectral remote sensing + VGC-AGB model—using a five-year dataset of potato above-ground biomass. Results indicate that (1) the VGC-AGB model achieved high accuracy in estimating AGB (R2 = 0.853, RMSE = 751.12 kg/ha), significantly outperforming the deep learning model based on hyperspectral + Ch data (R2 = 0.683, RMSE = 1122.03 kg/ha); (2) the combination of the VGC-AGB model and hyperspectral remote sensing provided highly accurate results in estimating AGB (R2 = 0.760, RMSE = 965.59 kg/ha), surpassing the results obtained using the hyperspectral + Ch-based method. Future research will primarily focus on streamlining the acquisition of VGC-AGB model parameters, optimizing the acquisition and processing of remote sensing data, and enhancing model validation and application. Furthermore, it is essential to conduct cross-regional validation and optimize model parameters for various crops to improve the universality and adaptability of the proposed model.
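As described in the abstract, the VGC-AGB decomposition can be written out explicitly (symbols as defined above; units follow the paper):

```latex
\mathrm{AGB} \;=\; \underbrace{C_m \cdot \mathrm{LAI}}_{\text{leaf biomass}}
\;+\; \underbrace{C_d \cdot C_h \cdot C_{sm}}_{\text{vertical-organ biomass}}
```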
{"title":"Estimation of potato above-ground biomass based on the VGC-AGB model and deep learning","authors":"Haikuan Feng ,&nbsp;Yiguang Fan ,&nbsp;Jibo Yue ,&nbsp;Mingbo Bian ,&nbsp;Yang Liu ,&nbsp;Riqiang Chen ,&nbsp;Yanpeng Ma ,&nbsp;Jiejie Fan ,&nbsp;Guijun Yang ,&nbsp;Chunjiang Zhao","doi":"10.1016/j.compag.2025.110122","DOIUrl":"10.1016/j.compag.2025.110122","url":null,"abstract":"<div><div>Accurate estimation of above-ground biomass (AGB) in potato plants is essential for effective monitoring of potato growth and reliable yield prediction. Remote sensing technology has emerged as a promising method for monitoring crop growth parameters due to its high throughput, non-destructive nature, and rapid acquisition of information. However, the sensitivity of remote sensing vegetation indices to crop AGB parameters declines at moderate to high crop coverage, known as the “saturation phenomenon,” which limits accurate AGB monitoring during the mid-to-late growth stages. This challenge also hinders the development of a multi-growth-cycle AGB estimation model. In this study, a novel VGC-AGB model integrated with hyperspectral remote sensing was utilized for multi-stage estimation of potato AGB. This study consists of three main components: (1) addressing the “saturation problem” encountered when using spectral indices from remote sensing to monitor crop biomass across multiple growth stages. The VGC-AGB model calculates the leaf biomass by multiplying leaf dry mass content (Cm) and leaf area index (LAI) and vertical organ biomass using the multiplication of crop density (Cd), crop height (Ch) and the crop stem and reproductive organs’ average dry mass content (Csm); (2) estimating the VGC-AGB model parameters Cm and LAI by integrating hyperspectral remote sensing data with a deep learning model; (3) comparing the performance of three methods—(i) hyperspectral + Ch, (ii) ground-measured parameters + VGC-AGB model, and (iii) hyperspectral remote sensing + VGC-AGB model—using a five-year dataset of potato above-ground biomass. Results indicate that (1) the VGC-AGB model achieved high accuracy in estimating AGB (<em>R</em><sup>2</sup> = 0.853, RMSE = 751.12 kg/ha), significantly outperforming the deep learning model based on hyperspectral + Ch data (<em>R</em><sup>2</sup> = 0.683, RMSE = 1122.03 kg/ha); (2) the combination of the VGC-AGB model and hyperspectral remote sensing provided highly accurate results in estimating AGB (<em>R</em><sup>2</sup> = 0.760, RMSE = 965.59 kg/ha), surpassing the results obtained using the hyperspectral + Ch-based method. Future research will primarily focus on streamlining the acquisition of VGC-AGB model parameters, optimizing the acquisition and processing of remote sensing data, and enhancing model validation and application. 
Furthermore, it is essential to conduct cross-regional validation and optimize model parameters for various crops to improve the universality and adaptability of the proposed model.</div></div>","PeriodicalId":50627,"journal":{"name":"Computers and Electronics in Agriculture","volume":"232 ","pages":"Article 110122"},"PeriodicalIF":7.7,"publicationDate":"2025-02-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143420548","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Alfalfa detection and stem count from proximal images using a combination of deep neural networks and machine learning
IF 7.7 | Tier 1 (Agricultural and Forestry Sciences) | Q1 AGRICULTURE, MULTIDISCIPLINARY | Pub Date: 2025-02-16 | DOI: 10.1016/j.compag.2025.110115
Hazhir Bahrami , Karem Chokmani , Saeid Homayouni , Viacheslav I. Adamchuk , Md Saifuzzaman , Maxime Leduc
Among various types of forages, alfalfa (Medicago sativa) is a crucial forage crop that plays a vital role in livestock nutrition and sustainable agriculture. As a result of its ability to adapt to different weather conditions and its high nitrogen-fixation capability, this crop produces high-quality forage that contains between 15 and 22 % protein. Remote sensing technologies fortunately make it possible to improve the overall prediction of forage biomass and quality prior to harvest. The recent advent of deep Convolutional Neural Networks (deep CNNs) enables researchers to utilize these powerful algorithms. This study aims to build a model to count the number of alfalfa stems from proximal images. To this end, we first utilized a deep CNN encoder-decoder to segment alfalfa and other background objects in a field, such as soil and grass. Subsequently, we employed the alfalfa cover fractions derived from the proximal images to develop and train machine learning regression models for estimating the stem count in the images. This study uses a large number of proximal images taken from many fields in four provinces of Canada over three consecutive years. A combination of real and synthetic images was used to feed the deep neural network encoder-decoder. This study gathered roughly 3447 alfalfa images, 5332 grass images, and 9241 background images for training the encoder-decoder model. With data augmentation, we prepared about 60,000 annotated images of alfalfa fields containing alfalfa, grass, and background, utilizing a pre-trained model, in less than an hour. Several convolutional neural network encoder-decoder models were also evaluated in this study. Simple U-Net, Attention U-Net (Att U-Net), and ResU-Net with attention gates were trained to detect alfalfa and differentiate it from other objects. The best Intersection over Union (IoU) values for the simple U-Net classes were 0.98, 0.93, and 0.80 for background, alfalfa, and grass, respectively. Simple U-Net with synthetic data provides promising results on unseen real images and requires only an RGB iPad image for field-specific alfalfa detection. It was also observed that simple U-Net has slightly better accuracy than Attention U-Net and Attention ResU-Net. Finally, we built regression models between the alfalfa cover fraction in the original images taken by iPad and the mean number of alfalfa stems per square foot. Random Forest (RF), Support Vector Regression (SVR), and Extreme Gradient Boosting (XGB) methods were utilized to estimate the number of stems in the images. RF was the best model for estimating the number of alfalfa stems relative to the other machine learning algorithms, with a coefficient of determination (R2) of 0.82, a root-mean-square error of 13.00, and a mean absolute error of 10.07.
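The final regression step, from image-level alfalfa cover fraction to stems per square foot, can be sketched as follows. The class coding, hyperparameters, and variable names are illustrative assumptions, and only the random-forest option is shown; the study also compared SVR and XGB.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

ALFALFA_CLASS = 1  # assumed integer label for the alfalfa class in the U-Net mask

def alfalfa_cover_fraction(mask):
    """Fraction of pixels labelled alfalfa in a segmentation mask."""
    return float(np.mean(mask == ALFALFA_CLASS))

def fit_stem_count_model(cover_fractions, stems_per_sqft):
    """Fit per-image cover fraction against field-measured stems per square foot."""
    X = np.asarray(cover_fractions, dtype=float).reshape(-1, 1)
    y = np.asarray(stems_per_sqft, dtype=float)
    model = RandomForestRegressor(n_estimators=200, random_state=0)  # hyperparameters assumed
    return model.fit(X, y)
```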
{"title":"Alfalfa detection and stem count from proximal images using a combination of deep neural networks and machine learning","authors":"Hazhir Bahrami ,&nbsp;Karem Chokmani ,&nbsp;Saeid Homayouni ,&nbsp;Viacheslav I. Adamchuk ,&nbsp;Md Saifuzzaman ,&nbsp;Maxime Leduc","doi":"10.1016/j.compag.2025.110115","DOIUrl":"10.1016/j.compag.2025.110115","url":null,"abstract":"<div><div>Among various types of forages, Alfalfa (Medicago sativa) is a crucial forage crop that plays a vital role in livestock nutrition and sustainable agriculture. As a result of its ability to adapt to different weather conditions and its high nitrogen fixation capability, this crop produces high-quality forage that contains between 15 and 22 % protein. It is fortunately possible to improve the overall prediction of forage biomass and quality prior to harvest through remote sensing technologies. The recent advent of deep Convolution Neural Networks (deep CNNs) enables researchers to utilize these incredible algorithms. This study aims to build a model to count the number of alfalfa stems from proximal images. To this end, we first utilized a deep CNN encoder-decoder to segment alfalfa and other background objects in a field, such as soil and grass. Subsequently, we employed the alfalfa cover fractions derived from the proximal images to develop and train machine learning regression models for estimating the stem count in the images. This study uses many proximal images taken from significant number of fields in four provinces of Canada over three consecutive years. A combination of real and synthetic images has been utilized to feed the deep neural network encoder-decoder. This study gathered roughly 3447 alfalfa images, 5332 grass images, and 9241 background images for training the encoder-decoder model. With data augmentation, we prepared about 60,000 annotated images of alfalfa fields containing alfalfa, grass, and background utilizing a pre-trained model in less than an hour. Several convolutional neural network encoder-decoder models have also been utilized in this study. Simple U-Net, Attention U-Net (Att U-Net), and ResU-Net with attention gates have been trained to detect alfalfa and differentiate it from other objects. The best Intersections over Union (IoU) for simple U-Net classes were 0.98, 0.93, and 0.80 for background, alfalfa and grass, respectively. Simple U-Net with synthetic data provides a promising result over unseen real images and requires an RGB iPad image for field-specific alfalfa detection. It was also observed that simple U-Net has slightly better accuracy than attention U-Net and attention ResU-Net. Finally, we built regression models between the alfalfa cover fraction in the original images taken by iPad, and the mean alfalfa stems per square foot. Random forest (RF), Support Vector Regression (SVR), and Extreme Gradient Boosting (XGB) methods have been utilized to estimate the number of stems in the images. 
RF was the best model for estimating the number of alfalfa stems relative to other machine learning algorithms, with a coefficient of determination (R<sup>2</sup>) of 0.82, root-mean-square error of 13.00, and mean absolute error of 10.07.</div></div>","PeriodicalId":50627,"journal":{"name":"Computers and Electronics in Agriculture","volume":"232 ","pages":"Article 110115"},"PeriodicalIF":7.7,"publicationDate":"2025-02-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143420502","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Weed image augmentation by ControlNet-added stable diffusion for multi-class weed detection
IF 7.7 | Tier 1 (Agricultural and Forestry Sciences) | Q1 AGRICULTURE, MULTIDISCIPLINARY | Pub Date: 2025-02-16 | DOI: 10.1016/j.compag.2025.110123
Boyang Deng, Yuzhen Lu
Robust weed recognition for vision-guided weeding relies on curating large-scale, diverse field datasets, which however are practically difficult to come by. Text-to-image generative artificial intelligence opens new avenues for synthesizing perceptually realistic images beneficial for wide-ranging computer vision tasks in precision agriculture. This study investigates the efficacy of state-of-the-art diffusion models as an image augmentation technique for synthesizing multi-class weed images towards enhanced weed detection performance. A three-season 10-weed-class dataset was created as a testbed for image generation and weed detection tasks. The ControlNet-added stable diffusion models were trained to generate weed images with broad intra-class variations of targeted weed species and diverse backgrounds to adapt to changing field conditions. The quality of generated images was assessed using metrics including the Fréchet Inception Distance (FID) and Inception Score (IS), resulting in an average FID of 0.98 and IS of 3.63. The generated weed images were selected to supplement real-world images for weed detection by YOLOv8-large. Combining the manually selected, generated images with real images yielded an overall mAP@50:95 of 88.3 % and mAP@50 of 95.0 %, representing performance gains of 1.4 % and 0.8 %, respectively, compared to the baseline model trained using only real images. It also performed competitively or comparably with modeling by combining real images with the images generated by external, traditional data augmentation techniques. The proposed automated post-generation image filtering approach still needs improvements to select high-quality images for enhanced weed detection. Both the weed dataset1 and software programs2 developed in this study have been made publicly available. Considerable research is needed to exploit more controllable diffusion models for generating high-fidelity, diverse weed images to substantially enhance weed detection in changing field conditions.
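The two image-quality metrics cited (FID and IS) can be computed with off-the-shelf implementations. The sketch below assumes the torchmetrics implementations of both metrics and uint8 image batches of shape (N, 3, H, W); it is not the authors' evaluation code.

```python
import torch
from torchmetrics.image.fid import FrechetInceptionDistance
from torchmetrics.image.inception import InceptionScore

def score_generated_weeds(real_imgs: torch.Tensor, fake_imgs: torch.Tensor):
    """Return (FID, IS mean) for generated weed images against real ones.
    Both inputs are uint8 tensors of shape (N, 3, H, W) with values in [0, 255]."""
    fid = FrechetInceptionDistance(feature=2048)
    fid.update(real_imgs, real=True)    # accumulate Inception features of real images
    fid.update(fake_imgs, real=False)   # accumulate features of generated images

    inception = InceptionScore()
    inception.update(fake_imgs)
    is_mean, _ = inception.compute()

    return fid.compute().item(), is_mean.item()
```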
{"title":"Weed image augmentation by ControlNet-added stable diffusion for multi-class weed detection","authors":"Boyang Deng,&nbsp;Yuzhen Lu","doi":"10.1016/j.compag.2025.110123","DOIUrl":"10.1016/j.compag.2025.110123","url":null,"abstract":"<div><div>Robust weed recognition for vision-guided weeding relies on curating large-scale, diverse field datasets, which however are practically difficult to come by. Text-to-image generative artificial intelligence opens new avenues for synthesizing perceptually realistic images beneficial for wide-ranging computer vision tasks in precision agriculture. This study investigates the efficacy of state-of-the-art diffusion models as an image augmentation technique for synthesizing multi-class weed images towards enhanced weed detection performance. A three-season 10-weed-class dataset was created as a testbed for image generation and weed detection tasks. The ControlNet-added stable diffusion models were trained to generate weed images with broad intra-class variations of targeted weed species and diverse backgrounds to adapt to changing field conditions. The quality of generated images was assessed using metrics including the Fréchet Inception Distance (FID) and Inception Score (IS), resulting in an average FID of 0.98 and IS of 3.63. The generated weed images were selected to supplement real-world images for weed detection by YOLOv8-large. Combining the manually selected, generated images with real images yielded an overall mAP@50:95 of 88.3 % and mAP@50 of 95.0 %, representing performance gains of 1.4 % and 0.8 %, respectively, compared to the baseline model trained using only real images. It also performed competitively or comparably with modeling by combining real images with the images generated by external, traditional data augmentation techniques. The proposed automated post-generation image filtering approach still needs improvements to select high-quality images for enhanced weed detection. Both the weed dataset<span><span><sup>1</sup></span></span> and software programs<span><span><sup>2</sup></span></span> developed in this study have been made publicly available. Considerable research is needed to exploit more controllable diffusion models for generating high-fidelity, diverse weed images to substantially enhance weed detection in changing field conditions.</div></div>","PeriodicalId":50627,"journal":{"name":"Computers and Electronics in Agriculture","volume":"232 ","pages":"Article 110123"},"PeriodicalIF":7.7,"publicationDate":"2025-02-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143420553","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Tactile Sensing & Visually-Impaired Navigation in Densely Planted Row Crops, for Precision Fertilization by Small UGVs
IF 7.7 | Tier 1 (Agricultural and Forestry Sciences) | Q1 AGRICULTURE, MULTIDISCIPLINARY | Pub Date: 2025-02-16 | DOI: 10.1016/j.compag.2025.110003
Adam M. Gronewold, Philip Mulford, Eliana Ray, Laura E. Ray
Navigating outdoor agricultural environments with cameras or ranging sensors is challenging due to sensor occlusion, lighting variability, and dense vegetation, particularly in tightly-spaced row crops. These environments present visually similar surfaces, making it difficult for vision-based systems to distinguish between rigid obstacles and flexible, traversable objects like weeds. As plant density increases, the margin of error narrows, limiting the effectiveness of traditional visual sensing. To overcome these challenges, we present a novel tactile-based perception system for autonomous navigation without any form of remote sensing. The system uses a mechanical feeler with rotary encoders to detect and map rigid obstacles, such as corn stalks, while filtering out flexible features like leaves and weeds. Through real-time classification of sensor deflections, the system achieves approximately 97 % accuracy in detecting obstacles and global positioning accuracy within 4 cm of a plant’s true location. The tactile sensor system, alongside blind-adapted path-planning (A*) and path-following (pure pursuit) algorithms, further allow an unmanned ground vehicle to autonomously navigate cornfields. Prototype sensors and the navigation method were tested in simulation, a controlled real-world environment, and a mature, unmanicured cornfield, demonstrating autonomous capabilities of > 100 m in simulated and > 30 m in real-world cornfields, prior to needing intervention. The tactile system overcomes row curvature, planting gaps, dense weeds, and canopy variability—without relying on vision or ranging sensors. With additional refinement, visual and tactile sensing modalities may be combined for more reliable obstacle detection and navigation for small robots operating in visually-occluded agricultural environments.
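Of the navigation components named above, the pure-pursuit path follower has a simple closed form. The generic sketch below uses a robot frame with x forward and y left; the lookahead point, wheelbase, and bicycle-model steering conversion are assumptions for illustration, and the paper's blind-adapted variant is not reproduced.

```python
import math

def pure_pursuit_curvature(goal_x, goal_y):
    """Curvature of the arc through a lookahead point (goal_x, goal_y) in the
    robot frame: kappa = 2*y / L^2, with L the distance to the point."""
    lookahead_sq = goal_x ** 2 + goal_y ** 2
    return 2.0 * goal_y / lookahead_sq

def steering_angle(curvature, wheelbase):
    """Ackermann steering angle for that curvature under a bicycle model."""
    return math.atan(wheelbase * curvature)

# Hypothetical example: lookahead point 1 m ahead and 0.2 m to the left.
kappa = pure_pursuit_curvature(goal_x=1.0, goal_y=0.2)
print(steering_angle(kappa, wheelbase=0.5))
```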
{"title":"Tactile Sensing & Visually-Impaired Navigation in Densely Planted Row Crops, for Precision Fertilization by Small UGVs","authors":"Adam M. Gronewold,&nbsp;Philip Mulford,&nbsp;Eliana Ray,&nbsp;Laura E. Ray","doi":"10.1016/j.compag.2025.110003","DOIUrl":"10.1016/j.compag.2025.110003","url":null,"abstract":"<div><div>Navigating outdoor agricultural environments with cameras or ranging sensors is challenging due to sensor occlusion, lighting variability, and dense vegetation, particularly in tightly-spaced row crops. These environments present visually similar surfaces, making it difficult for vision-based systems to distinguish between rigid obstacles and flexible, traversable objects like weeds. As plant density increases, the margin of error narrows, limiting the effectiveness of traditional visual sensing. To overcome these challenges, we present a novel tactile-based perception system for autonomous navigation without any form of remote sensing. The system uses a mechanical feeler with rotary encoders to detect and map rigid obstacles, such as corn stalks, while filtering out flexible features like leaves and weeds. Through real-time classification of sensor deflections, the system achieves approximately 97 % accuracy in detecting obstacles and global positioning accuracy within 4 cm of a plant’s true location. The tactile sensor system, alongside blind-adapted path-planning (A*) and path-following (pure pursuit) algorithms, further allow an unmanned ground vehicle to autonomously navigate cornfields. Prototype sensors and the navigation method were tested in simulation, a controlled real-world environment, and a mature, unmanicured cornfield, demonstrating autonomous capabilities of &gt; 100 m in simulated and &gt; 30 m in real-world cornfields, prior to needing intervention. The tactile system overcomes row curvature, planting gaps, dense weeds, and canopy variability—without relying on vision or ranging sensors. With additional refinement, visual and tactile sensing modalities may be combined for more reliable obstacle detection and navigation for small robots operating in visually-occluded agricultural environments.</div></div>","PeriodicalId":50627,"journal":{"name":"Computers and Electronics in Agriculture","volume":"231 ","pages":"Article 110003"},"PeriodicalIF":7.7,"publicationDate":"2025-02-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143420444","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Deep learning model optimization methods and performance evaluation of YOLOv8 for enhanced weed detection in soybeans
IF 7.7 | Tier 1 (Agricultural and Forestry Sciences) | Q1 AGRICULTURE, MULTIDISCIPLINARY | Pub Date: 2025-02-16 | DOI: 10.1016/j.compag.2025.110117
Estéfani Sulzbach , Ismael Scheeren , Manuel Speranza Torres Veras , Maurício Cagliari Tosin , William Augusto Ellert Kroth , Aldo Merotto Jr , Catarine Markus
The neural network YOLO (You Only Look Once) is a supervised learning technique for real-time object detection. This deep learning architecture can be a tool for weed recognition and site-specific weed management. The objective of this study was to evaluate different strategies for real-time weed detection in soybean crops by means of YOLOv8 variants. A weed-recognition dataset was created, largely consisting of 10 weed species divided into broadleaf and narrowleaf groups. Three classes were designated: soybean (Glycine max), broadleaf, and narrowleaf. An experiment conducted with the original versions of YOLOv8 (nano, small, medium, large, and extra-large) showed that the transfer learning strategy was more efficient for the two largest YOLOv8 architectures, where the mAP50 increased from 0.71 to 0.73. This study put forward four new variants: YOLOv8 (femto), YOLOv8 (atto), YOLOv8 (pico), and YOLOv8 (zepto), whose optimized model parameters led to a reduction in GFLOPs of up to 84.2 %. The YOLOv8 models (especially the nano and femto variants) showed great potential for real-time weed detection in soybeans and achieved promising results in recognizing different types of weeds in soybean crops, comparable to state-of-the-art methods. The use of the semi-supervised technique increased the mAP50 from 0.70 to 0.73. Reducing the YOLOv8 model parameters does not affect model accuracy and is a key factor in reducing complexity and processing time, as a means of enhancing real-time weed detection systems.
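For context, transfer learning with the stock YOLOv8 family typically looks like the sketch below, using the ultralytics API. The dataset file `soybean_weeds.yaml` and the training settings are placeholders, and the pruned femto/atto/pico/zepto variants proposed in the paper are not part of the stock package.

```python
from ultralytics import YOLO

# Transfer learning from COCO-pretrained weights, in contrast to training from
# scratch; "soybean_weeds.yaml" is a hypothetical dataset config listing the
# three classes used in the study (soybean, broadleaf, narrowleaf).
model = YOLO("yolov8n.pt")                                     # pretrained nano variant
model.train(data="soybean_weeds.yaml", epochs=100, imgsz=640)  # settings assumed
metrics = model.val()                                          # reports mAP50 and mAP50-95
```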
{"title":"Deep learning model optimization methods and performance evaluation of YOLOv8 for enhanced weed detection in soybeans","authors":"Estéfani Sulzbach ,&nbsp;Ismael Scheeren ,&nbsp;Manuel Speranza Torres Veras ,&nbsp;Maurício Cagliari Tosin ,&nbsp;William Augusto Ellert Kroth ,&nbsp;Aldo Merotto Jr ,&nbsp;Catarine Markus","doi":"10.1016/j.compag.2025.110117","DOIUrl":"10.1016/j.compag.2025.110117","url":null,"abstract":"<div><div>The neural network YOLO (<em>You Only Look Once</em>) is a supervised learning technique for object detection in real time. This deep learning architecture can be a tool for weed recognition and site-specific weed management. The objective of this study was to evaluate different strategies for real-time weed detection in soybean crops by means of YOLOv8 variants. A dataset for weed recognition was created which largely consisted of 10 weed species which were divided into broadleaf and narrowleaf. Three classes were designated as soybean (<em>Glycine max</em>), broadleaf, and narrowleaf. An experiment conducted with the original versions of YOLOv8 (nano, small, medium, large, and extra-large) showed that the transfer learning strategy was more efficient in the case of the two largest YOLOv8 architectures, where there was an increase in the mAP50 from 0.71 to 0.73. This study put forward four new variants: YOLOv8 (femto), YOLOv8 (atto), YOLOv8 (pico), and YOLOv8 (zepto), which had optimizing model parameters that led to a reduction in GFLOPs of up to 84.2 %. The YOLOv8 models (especially YOLOVv8 nano and femto) have shown a great potential for real-time weed detection in soybeans, and achieved promising results in recognizing different types of weeds in soybean crops, which are comparable to state-of-the-art methods. The use of the semi-supervised technique showed an increase in mAP50 from 0.70 to 0.73. Reducing the YOLOv8 model parameters does not affect the model accuracy and is a key factor in reducing complexity and processing time, as a means of enhancing real-time weed detection systems.</div></div>","PeriodicalId":50627,"journal":{"name":"Computers and Electronics in Agriculture","volume":"232 ","pages":"Article 110117"},"PeriodicalIF":7.7,"publicationDate":"2025-02-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143420473","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Hypoxia monitoring of fish in intensive aquaculture based on underwater multi-target tracking
IF 7.7 | Tier 1 (Agricultural and Forestry Sciences) | Q1 AGRICULTURE, MULTIDISCIPLINARY | Pub Date: 2025-02-16 | DOI: 10.1016/j.compag.2025.110127
Yuxiang Li, Hequn Tan, Yuxuan Deng, Dianzhuo Zhou, Ming Zhu
Monitoring hypoxia is crucial in intensive aquaculture because changes in dissolved oxygen levels directly affect fish growth and health. Machine vision provides a cost-effective and easily calibrated alternative to traditional sensors for hypoxia monitoring. However, in practical aquaculture settings, high stocking densities and turbid water can affect monitoring accuracy. To address this, a vision-based method is proposed to monitor hypoxia by analyzing the fish behavioral changes. This method incorporates a novel tracking model, OFPTrack, and a hypoxia predictor based on a long short-term memory (LSTM) network. OFPTrack employs a tracking-by-detection strategy and enhances the precision of fish behavior data capture by leveraging underwater camera imaging principles and the three-dimensional motion characteristics of fish. Furthermore, two rematching modules are introduced to resolve short-term and long-term tracking losses by utilizing multimodal data, such as fish appearance features and the spatiotemporal characteristics of trajectories. Accuracy tests show that the MOTA of OFPTrack is competitive with models like ByteTrack and BoT-SORT, while its IDsw are significantly lower, reduced by 66.3% and 41.3%, respectively. Practical applications demonstrated that the proposed method effectively and rapidly monitors fish hypoxia. The source codes and part of the dataset are available at: https://github.com/Pixel-uu/OFPTrack.
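A minimal PyTorch sketch of an LSTM classifier over per-frame behaviour features derived from the fish tracks is shown below; the feature count, layer sizes, and two-class output are assumptions for illustration, not the paper's predictor architecture.

```python
import torch
import torch.nn as nn

class HypoxiaLSTM(nn.Module):
    """Classify a window of trajectory-derived behaviour features (e.g. mean
    speed, turning rate, vertical position) as normoxic vs. hypoxic."""
    def __init__(self, n_features=6, hidden_size=64, n_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, n_classes)

    def forward(self, x):              # x: (batch, time, n_features)
        _, (h_n, _) = self.lstm(x)     # final hidden state: (1, batch, hidden)
        return self.head(h_n[-1])      # class logits: (batch, n_classes)

logits = HypoxiaLSTM()(torch.randn(8, 120, 6))   # 8 windows of 120 frames each
```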
{"title":"Hypoxia monitoring of fish in intensive aquaculture based on underwater multi-target tracking","authors":"Yuxiang Li,&nbsp;Hequn Tan,&nbsp;Yuxuan Deng,&nbsp;Dianzhuo Zhou,&nbsp;Ming Zhu","doi":"10.1016/j.compag.2025.110127","DOIUrl":"10.1016/j.compag.2025.110127","url":null,"abstract":"<div><div>Monitoring hypoxia is crucial in intensive aquaculture because changes in dissolved oxygen levels directly affect fish growth and health. Machine vision provides a cost-effective and easily calibrated alternative to traditional sensors for hypoxia monitoring. However, in practical aquaculture settings, high stocking densities and turbid water can affect monitoring accuracy. To address this, a vision-based method is proposed to monitor hypoxia by analyzing the fish behavioral changes. This method incorporates a novel tracking model, OFPTrack, and a hypoxia predictor based on a long short-term memory (LSTM) network. OFPTrack employs a tracking-by-detection strategy and enhances the precision of fish behavior data capture by leveraging underwater camera imaging principles and the three-dimensional motion characteristics of fish. Furthermore, two rematching modules are introduced to resolve short-term and long-term tracking losses by utilizing multimodal data, such as fish appearance features and the spatiotemporal characteristics of trajectories. Accuracy tests show that the MOTA of OFPTrack is competitive with models like ByteTrack and BoT-SORT, while its IDsw are significantly lower, reduced by 66.3% and 41.3%, respectively. Practical applications demonstrated that the proposed method effectively and rapidly monitors fish hypoxia. The source codes and part of the dataset are available at: https://github.com/Pixel-uu/OFPTrack.</div></div>","PeriodicalId":50627,"journal":{"name":"Computers and Electronics in Agriculture","volume":"232 ","pages":"Article 110127"},"PeriodicalIF":7.7,"publicationDate":"2025-02-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143420552","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Multi-modal image fusion of visible and infrared for precise positioning of UAVs in agricultural fields
IF 7.7 | Tier 1 (Agricultural and Forestry Sciences) | Q1 AGRICULTURE, MULTIDISCIPLINARY | Pub Date: 2025-02-16 | DOI: 10.1016/j.compag.2025.110024
Xiaodong Liu, Meibo Lv, Chenyuhao Ma, Zhe Fu, Lei Zhang
Image matching is a common method to assist drone positioning in agriculture, but it is affected by environmental changes. We propose a scene matching method based on multi-modal image fusion to enable precise positioning of unmanned aerial vehicles (UAVs). We develop a fusion network that uses a local attention mechanism for visible and infrared images, which filters out low-frequency vegetation information and improves the matching accuracy against satellite reference images. Moreover, we incorporate an interaction mechanism that adaptively enhances the lower-quality modality. Experimental results show that the proposed method reduces the average positioning error by more than 84 % compared to using a single modality and achieves an error of less than 2.5 m, enabling UAVs to perform precise positioning in agricultural environments.
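The core fusion idea, weighting visible against infrared features so that the stronger modality dominates at each location, can be illustrated with a generic gated-fusion module; this is a simplification for illustration, not the paper's local-attention network or its interaction mechanism.

```python
import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    """Per-pixel gating between visible and infrared feature maps."""
    def __init__(self, channels=64):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, feat_vis, feat_ir):                      # both: (B, C, H, W)
        w = self.gate(torch.cat([feat_vis, feat_ir], dim=1))   # weights in (0, 1)
        return w * feat_vis + (1.0 - w) * feat_ir

fused = GatedFusion(64)(torch.randn(1, 64, 32, 32), torch.randn(1, 64, 32, 32))
```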
{"title":"Multi-modal image fusion of visible and infrared for precise positioning of UAVs in agricultural fields","authors":"Xiaodong Liu,&nbsp;Meibo Lv,&nbsp;Chenyuhao Ma,&nbsp;Zhe Fu,&nbsp;Lei Zhang","doi":"10.1016/j.compag.2025.110024","DOIUrl":"10.1016/j.compag.2025.110024","url":null,"abstract":"<div><div>Image matching is a common method to assist drone positioning in agriculture, but it is affected by environmental changes. We propose a scene matching method based on Multi-modal image fusion to enable precise positioning of unmanned aerial vehicles (UAVs). We develop a fusion network that uses a local attention mechanism for visible and infrared images, which filters out low-frequency vegetation information and improves the matching accuracy using satellite images. Moreover, we incorporate an interaction mechanism that adaptively enhances the low-quality modal. Experimental results show that the proposed method reduces the average positioning error by more than 84 % compared to using a single modality, and achieves an error of less than 2.5 m. The experimental results show that our method can enable UAVs to perform precise positioning in the agricultural environment.</div></div>","PeriodicalId":50627,"journal":{"name":"Computers and Electronics in Agriculture","volume":"232 ","pages":"Article 110024"},"PeriodicalIF":7.7,"publicationDate":"2025-02-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143420550","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Rectifying the extremely weakened signals for cassava leaf disease detection
IF 7.7 | Tier 1 (Agricultural and Forestry Sciences) | Q1 AGRICULTURE, MULTIDISCIPLINARY | Pub Date: 2025-02-15 | DOI: 10.1016/j.compag.2025.110107
Jiayu Zhang , Baohua Zhang , Innocent Nyalala , Peter Mecha , Junlong Chen , Kunjie Chen , Junfeng Gao
The performance of neural networks is heavily dependent on the integrity of the feature signals. As these signals are extracted and transmitted, they tend to weaken, which can negatively affect their ability to represent and utilize semantic information, particularly in weakly supervised learning tasks. This study aims to address hidden and severely weakened signals. To address the underlying causes, the rectification block of the third stage of PR-ArsenicNetPlus (Positive Rectified ArsenicNetPlus Neural Network) was modified to include a nonlinear fitting method based on the variant Hölder inequality. This method adjusts the magnitude and angular frequency of an extremely weak signal, and its effectiveness is evaluated using Parseval’s relationship. When tested on cassava leaf disease datasets, the proposed method significantly improved the prediction accuracy in 7-fold cross-validation, achieving an accuracy of 96.18 %, a loss of 1.373, and an F1-score of 0.9618. These results outperformed those of ResNet-101, EfficientNet-B5, RepVGG-B3g4, and AlexNet.
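For reference, the standard forms of the two results the abstract invokes are given below; the paper's variant Hölder inequality and its use for rescaling weak feature signals are not reproduced here.

```latex
% Hölder's inequality (discrete form), for p, q > 1 with 1/p + 1/q = 1:
\sum_{k} |a_k b_k| \;\le\; \Bigl( \sum_{k} |a_k|^{p} \Bigr)^{1/p}
                           \Bigl( \sum_{k} |b_k|^{q} \Bigr)^{1/q}

% Parseval's relation for a length-N DFT, X[k] = \sum_{n=0}^{N-1} x[n]\, e^{-2\pi i k n / N}:
\sum_{n=0}^{N-1} |x[n]|^{2} \;=\; \frac{1}{N} \sum_{k=0}^{N-1} |X[k]|^{2}
```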
{"title":"Rectifying the extremely weakened signals for cassava leaf disease detection","authors":"Jiayu Zhang ,&nbsp;Baohua Zhang ,&nbsp;Innocent Nyalala ,&nbsp;Peter Mecha ,&nbsp;Junlong Chen ,&nbsp;Kunjie Chen ,&nbsp;Junfeng Gao","doi":"10.1016/j.compag.2025.110107","DOIUrl":"10.1016/j.compag.2025.110107","url":null,"abstract":"<div><div>The performance of neural networks is heavily dependent on the integrity of the feature signals. As these signals are extracted and transmitted, they tend to weaken, which can negatively affect their ability to represent and utilize semantic information, particularly in weakly supervised learning tasks. This study aims to address hidden and severely weakened signals. To address the underlying causes, the rectification block of the third stage of PR-ArsenicNetPlus (Positive Rectified ArsenicNetPlus Neural Network) was modified to include a nonlinear fitting method based on the variant Hölder inequality. This method adjusts the magnitude and angular frequency of an extremely weak signal, and its effectiveness is evaluated using Parseval’s relationship. When tested on cassava leaf disease datasets, the proposed method significantly improved the prediction accuracy in 7-fold cross-validation, achieving an accuracy of 96.18 %, a loss of 1.373, and an F1-score of 0.9618. These results outperformed those of ResNet-101, EfficientNet-B5, RepVGG-B3g4, and AlexNet.</div></div>","PeriodicalId":50627,"journal":{"name":"Computers and Electronics in Agriculture","volume":"232 ","pages":"Article 110107"},"PeriodicalIF":7.7,"publicationDate":"2025-02-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143420500","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0