Computers and Electronics in Agriculture: Latest Publications

Simulation soil water-salt dynamic and groundwater depth of spring maize based on SWAP model in salinized irrigation district
IF 7.7 · CAS Tier 1, Agricultural & Forestry Sciences · Q1 AGRICULTURE, MULTIDISCIPLINARY · Pub Date: 2025-02-01 · DOI: 10.1016/j.compag.2025.109992
Chengfu Yuan , Yanxin Pan , Siyuan Jing
To determine a reasonable groundwater depth under the current water-saving practices in the Hetao Irrigation District, the SWAP (Soil-Water-Atmosphere-Plant) model was calibrated and validated against field experiment data for spring maize collected in 2019 and 2020. The calibrated model was then used to simulate soil water-salt fluxes and the water-salt balance of the 0–100 cm soil layer, both under the current groundwater depth and under alternative groundwater depth scenarios. The results showed that the cumulative soil water flux in the 0–100 cm layer was 111.6 mm and 63.1 mm in the two simulation years, respectively, while the cumulative soil salt flux was −10.3 mg·cm⁻² and −11.1 mg·cm⁻². Soil salinity in the 0–100 cm layer increased by 7.7 mg·cm⁻² in 2019 and 6.9 mg·cm⁻² in 2020 over the whole growth period of spring maize, indicating a risk of secondary soil salinization at the current groundwater depth and a need to regulate groundwater depth to mitigate it. Simulations of the soil water-salt balance under different scenarios showed that an average groundwater depth of about 1.96 m was conducive to crop growth while avoiding secondary salinization, making it the appropriate depth for water-saving irrigation of spring maize in the study area. A subsurface pipe drainage system can be used to lower the average groundwater depth to 1.96 m or below, at which the risk of secondary soil salinization in the study area is slight.
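To make the balance terms above concrete, here is a minimal Python sketch (not the authors' code) of how seasonal cumulative water and salt fluxes for the 0–100 cm layer could be summed from daily SWAP output; the file name and column names are hypothetical placeholders.

```python
# A minimal sketch (not the authors' code) of summing cumulative water and
# salt fluxes for the 0-100 cm layer from daily SWAP output.
# Column names ("date", "q_bottom_mm", "salt_flux_mg_cm2") are hypothetical.
import pandas as pd

def seasonal_balance(csv_path: str, start: str, end: str) -> dict:
    """Sum daily bottom-boundary fluxes over one growth period."""
    df = pd.read_csv(csv_path, parse_dates=["date"])
    season = df[(df["date"] >= start) & (df["date"] <= end)]
    return {
        # positive = upward flux into the 0-100 cm layer (sign convention assumed)
        "cum_water_flux_mm": season["q_bottom_mm"].sum(),
        "cum_salt_flux_mg_cm2": season["salt_flux_mg_cm2"].sum(),
    }

print(seasonal_balance("swap_output_2019.csv", "2019-05-01", "2019-09-30"))
```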
{"title":"Simulation soil water-salt dynamic and groundwater depth of spring maize based on SWAP model in salinized irrigation district","authors":"Chengfu Yuan ,&nbsp;Yanxin Pan ,&nbsp;Siyuan Jing","doi":"10.1016/j.compag.2025.109992","DOIUrl":"10.1016/j.compag.2025.109992","url":null,"abstract":"<div><div>In order to explore the reasonable groundwater depth under current condition of water-saving implementation in Hetao Irrigation District, the SWAP (Soil-Water-Atmosphere-Plant) model was calibrated and validated based on field experiments data of spring maize in 2019 and 2020. The SWAP model was used to simulate soil water-salt flux and water-salt balance for 0–100 cm soil layer under current condition of groundwater depth, soil water-salt balance for 0–100 cm soil layer under different groundwater depth scenarios after model calibration and validation. The results showed that soil water flux cumulant of 0–100 cm soil layer was 111.6 mm and 63.1 mm during the two-year simulation periods under current condition of groundwater depth, respectively. Soil salt flux cumulant of 0–100 cm soil layer was −10.3 mg·cm<sup>−2</sup> and −11.1 mg·cm<sup>−2</sup> during the two-year simulation periods under current condition of groundwater depth, respectively. Soil salinity increased by 7.7 mg·cm<sup>−2</sup> and 6.9 mg·cm<sup>−2</sup> in 0–100 cm soil layer during the whole growth periods of spring maize under current condition of groundwater depth in 2019 and 2020, respectively. It had a risk of soil secondary salinization under current condition of groundwater depth in study area. It was necessary to regulate the groundwater depth to reduce soil secondary salinization. The simulation results of soil water-salt balance under different groundwater depth scenarios showed that when the average groundwater depth was about 1.96 m, it was conducive to crop growth and avoided soil secondary salinization. It was the appropriate groundwater depth under the condition of spring maize water-saving irrigation in study area. The underground pipe drainage system can be used to reduce the average groundwater depth to below 1.96 m, and the risk of soil secondary salinization is slight in study area.</div></div>","PeriodicalId":50627,"journal":{"name":"Computers and Electronics in Agriculture","volume":"231 ","pages":"Article 109992"},"PeriodicalIF":7.7,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143095787","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A digital interactive decision dashboard for crop yield trials
IF 7.7 · CAS Tier 1, Agricultural & Forestry Sciences · Q1 AGRICULTURE, MULTIDISCIPLINARY · Pub Date: 2025-02-01 · DOI: 10.1016/j.compag.2025.110037
Pedro Cisdeli , Gustavo Nocera Santiago , Carlos Hernandez , Ana Carcedo , P.V. Vara Prasad , Michael Stamm , Jane Lingenfelser , Ignacio Ciampitti
Globally, farmers face many challenges when making rapid decisions related to crop management. To serve as a decision-support tool, the outputs of research trials should therefore be communicated in near real-time (immediately after harvest), avoiding the lag between data collection and publication in printed or electronic formats. Historically, crop yield trials have provided invaluable information that helps farmers choose the best crop genotypes for their specific geographic locations. The aim of this application note is to present the development of a digital interactive decision dashboard for sharing crop yield trial data, which also functions as a data repository. The current testing dataset comprises yield trials for multiple crops in Kansas (United States, US) and winter canola trials across multiple US states. The user interface was developed in Python with the Dash framework, while data manipulation was handled with the Pandas library. The tool lets users rapidly assess year-to-year genotype yield trends, incorporating location data for informed decision-making. The user-friendly interface facilitates data input, enabling non-programmers to analyze their own data effortlessly. The database is open to expansion with more trials from around the globe, toward a comprehensive and more relevant yield data repository.
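As an illustration of the stack the note describes (Dash for the interface, Pandas for data handling), the following is a minimal sketch of such a dashboard; the CSV path and column names ("genotype", "year", "location", "yield") are placeholders, not the authors' schema.

```python
# A minimal Dash + Pandas sketch of a yield-trial dashboard (illustrative only;
# data file and column names are hypothetical, not the authors' schema).
import pandas as pd
import plotly.express as px
from dash import Dash, Input, Output, dcc, html

df = pd.read_csv("yield_trials.csv")  # one row per plot-level trial result

app = Dash(__name__)
app.layout = html.Div([
    html.H3("Crop yield trials"),
    dcc.Dropdown(sorted(df["genotype"].unique()), id="genotype"),
    dcc.Graph(id="trend"),
])

@app.callback(Output("trend", "figure"), Input("genotype", "value"))
def update(genotype):
    sub = df if genotype is None else df[df["genotype"] == genotype]
    # year-to-year yield trend, one line per trial location
    agg = sub.groupby(["year", "location"], as_index=False)["yield"].mean()
    return px.line(agg, x="year", y="yield", color="location", markers=True)

if __name__ == "__main__":
    app.run(debug=True)
```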
{"title":"A digital interactive decision dashboard for crop yield trials","authors":"Pedro Cisdeli ,&nbsp;Gustavo Nocera Santiago ,&nbsp;Carlos Hernandez ,&nbsp;Ana Carcedo ,&nbsp;P.V. Vara Prasad ,&nbsp;Michael Stamm ,&nbsp;Jane Lingenfelser ,&nbsp;Ignacio Ciampitti","doi":"10.1016/j.compag.2025.110037","DOIUrl":"10.1016/j.compag.2025.110037","url":null,"abstract":"<div><div>Globally, farmers face many challenges when taking rapid decisions related to crop management. Therefore, to serve as a decision-support tool, the outputs from research trials should be communicated near real-time (immediately after harvest) to avoid the lag time between data collection and publication in printed or electronic formats. Historically, crop yield trials provided invaluable information to farmers to help them decide the best crop genotypes based on their specific geographic locations. The aim of this application note is to highlight the development of a digital interactive decision dashboard for sharing crop yield trial data, in addition to functioning as a data repository. The current testing dataset involves yield trials for multiple crops in Kansas (within the United States, US) and winter canola across multiple US states. The development of the user interface involved Python programming with the Dash framework, while data manipulations were executed via the Pandas library. The tool empowers users to rapidly assess genotype yield trends year-to-year, incorporating location data for informed decision-making. The user-friendly interface facilitates data input, enabling non-programmers to analyze personal data effortlessly. The database is open to be expanded to include more trials around the globe, developing a comprehensive and more relevant yield data repository.</div></div>","PeriodicalId":50627,"journal":{"name":"Computers and Electronics in Agriculture","volume":"231 ","pages":"Article 110037"},"PeriodicalIF":7.7,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143095841","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Field-scale UAV-based multispectral phenomics: Leveraging machine learning, explainable AI, and hybrid feature engineering for enhancements in potato phenotyping
IF 7.7 · CAS Tier 1, Agricultural & Forestry Sciences · Q1 AGRICULTURE, MULTIDISCIPLINARY · Pub Date: 2025-02-01 · DOI: 10.1016/j.compag.2024.109746
Janez Lapajne , Andrej Vončina , Ana Vojnović , Daša Donša , Peter Dolničar , Uroš Žibrat
Fast and accurate identification of potato plant traits is essential for formulating effective cultivation strategies. The integration of spectral cameras on Unmanned Aerial Vehicles (UAVs) has shown appealing potential, enabling non-invasive investigations at large scale and providing valuable features for constructing machine learning models. Nevertheless, interpreting these features, and those derived from them, remains a challenge, limiting confident use in real-world applications. In this study, the interpretability of machine learning models is addressed by employing SHAP (SHapley Additive exPlanations) and UMAP (Uniform Manifold Approximation and Projection) to better understand the modeling process. An XGBoost model was trained on a multispectral dataset of potato plants and evaluated on several tasks: variety classification, estimation of physiological measures, and detection of early blight disease. To optimize its performance, nearly 100 vegetation indices and over 500 auto-generated features were used for training. The results indicate successful separation of plant varieties with up to 97.10% accuracy, estimation of physiological values with a maximum R² of 0.57 and rNRMSE of 0.129, and detection of early blight with an F1 score of 0.826. Furthermore, both UMAP and SHAP proved beneficial for comprehensive analysis. UMAP visual observations corresponded closely to the computed metrics, increasing confidence in variety differentiation. Concurrently, SHAP identified the most informative features for most tasks (the green, red edge, and NIR channels), aligning closely with the existing literature. This study highlights potential improvements in farming efficiency, crop yield, and sustainability, and promotes the development of interpretable machine learning models for remote sensing applications.
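Feature searches of the kind described above typically rely on the normalized difference form NDSI(i, j) = (b_i − b_j) / (b_i + b_j) computed over all band pairs. Below is a minimal sketch of such an all-pairs NDSI screen, ranked by correlation with the target trait; the toy data and the Pearson ranking criterion are illustrative assumptions, not the study's exact procedure.

```python
# A minimal sketch of an exhaustive NDSI search: for every band pair (i, j),
# compute NDSI = (b_i - b_j) / (b_i + b_j) and rank pairs by correlation with
# the target trait. X (n_samples x n_bands reflectance) and y are assumed
# inputs; this is not the authors' exact procedure.
import itertools
import numpy as np

def best_ndsi_pairs(X: np.ndarray, y: np.ndarray, top_k: int = 10):
    scores = []
    for i, j in itertools.combinations(range(X.shape[1]), 2):
        ndsi = (X[:, i] - X[:, j]) / (X[:, i] + X[:, j] + 1e-9)
        r = np.corrcoef(ndsi, y)[0, 1]  # Pearson correlation with the trait
        scores.append((abs(r), i, j))
    return sorted(scores, reverse=True)[:top_k]

rng = np.random.default_rng(0)
X, y = rng.random((120, 10)), rng.random(120)   # toy stand-in data
for r, i, j in best_ndsi_pairs(X, y, 3):
    print(f"bands ({i},{j}): |r| = {r:.3f}")
```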
{"title":"Field-scale UAV-based multispectral phenomics: Leveraging machine learning, explainable AI, and hybrid feature engineering for enhancements in potato phenotyping","authors":"Janez Lapajne ,&nbsp;Andrej Vončina ,&nbsp;Ana Vojnović ,&nbsp;Daša Donša ,&nbsp;Peter Dolničar ,&nbsp;Uroš Žibrat","doi":"10.1016/j.compag.2024.109746","DOIUrl":"10.1016/j.compag.2024.109746","url":null,"abstract":"<div><div>Fast and accurate identification of potato plant traits is essential for formulating effective cultivation strategies. The integration of spectral cameras on Unmanned Aerial Vehicles (UAVs) has demonstrated appealing potential, facilitating non-invasive investigations on a large scale by providing valuable features for construction of machine learning models. Nevertheless, interpreting these features, and those derived from them, remains a challenge, limiting confident utilization in real-world applications. In this study, the interpretability of machine learning models is addressed by employing SHAP (SHapley Additive exPlanations) and UMAP (Uniform Manifold Approximation and Projection) to better understand the modeling process. The XGBoost model was trained on a multispectral dataset of potato plants and evaluated on various tasks, i.e. variety classification, physiological measures estimation, and detection of early blight disease. To optimize its performance, nearly 100 vegetation indices and over 500 auto-generated features were utilized for training. The results indicate successful separation of plant varieties with up to 97.10% accuracy, estimation of physiological values with a maximum R<sup>2</sup> and rNRMSE of 0.57 and 0.129, respectively, and detection of early blight with an F1 score of 0.826. Furthermore, both UMAP and SHAP proved beneficial for comprehensive analysis. UMAP visual observations closely corresponded to computed metrics, enhancing confidence for variety differentiation. Concurrently, SHAP identified the most informative features – green, red edge, and NIR channels – for most tasks, aligning tightly with existing literature. This study highlights potential improvements in farming efficiency, crop yield, and sustainability, and promotes the development of interpretable machine learning models for remote sensing applications.</div></div>","PeriodicalId":50627,"journal":{"name":"Computers and Electronics in Agriculture","volume":"229 ","pages":"Article 109746"},"PeriodicalIF":7.7,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143174011","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Machine learning models based on hyperspectral imaging for pre-harvest tomato fruit quality monitoring
IF 7.7 · CAS Tier 1, Agricultural & Forestry Sciences · Q1 AGRICULTURE, MULTIDISCIPLINARY · Pub Date: 2025-02-01 · DOI: 10.1016/j.compag.2024.109788
Eitan Fass , Eldar Shlomi , Carmit Ziv , Oren Glickman , David Helman
Traditional methods for assessing tomato quality are time-consuming, expensive, and limited in scope. Here we developed a non-destructive spectral model using a handheld hyperspectral camera with 204 bands in the 400–1000 nm range, focusing on data reduction and paving the way for an economically viable device designed to assess seven key tomato quality parameters. We collected 567 fruits from five cultivars of various types, took hyperspectral images of each fruit, and analyzed the fruits for weight, firmness, total soluble solids (TSS), citric acid, ascorbic acid, lycopene, and pH. Five commonly used spectral indices, thousands of normalized difference spectral index (NDSI) combinations, a multivariable regression model (MVR), and three machine learning (ML) algorithms (random forest, RF; extreme gradient boosting, XGBoost; and artificial neural network, ANN) were employed to predict the quality parameters from as few bands as possible. Results show that ML models with bands selected via a hotspot-overlapping method significantly improved quality prediction compared with the common spectral-index approaches. Among the ML algorithms, RF gave the best results, with R² of 0.94 for weight, 0.89 for firmness, 0.79 for lycopene, 0.72 for TSS, 0.67 for pH, 0.62 for citric acid, and 0.45 for ascorbic acid; the only exception was ANN, which was slightly better for weight and lycopene (R² of 0.95 and 0.85, respectively). Overall, models with only five bands were enough to predict all seven quality parameters with performance comparable to models using a larger number of bands. Our study offers an efficient and cost-effective method for assessing pre-harvest tomato quality, benefiting farmers and the food industry as well as scientific research on fruit development and nutrition.
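As a hedged illustration of the five-band result, the following sketch fits a random forest regressor on a small band subset and reports R²; the band indices and the synthetic data are placeholders, not the study's selected bands or measurements.

```python
# A minimal sketch of fitting a random forest on a five-band subset, in the
# spirit of the five-band models reported above. Band indices and data are
# illustrative placeholders, not the study's selections.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
spectra = rng.random((567, 204))          # toy stand-in for fruit spectra
bands = [30, 80, 120, 160, 200]           # hypothetical selected bands
# toy target: a linear mix of the selected bands plus noise
tss = spectra[:, bands] @ np.ones(5) + rng.normal(0, 0.2, 567)

X_tr, X_te, y_tr, y_te = train_test_split(spectra[:, bands], tss, random_state=0)
rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)
print("R2:", r2_score(y_te, rf.predict(X_te)))
```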
{"title":"Machine learning models based on hyperspectral imaging for pre-harvest tomato fruit quality monitoring","authors":"Eitan Fass ,&nbsp;Eldar Shlomi ,&nbsp;Carmit Ziv ,&nbsp;Oren Glickman ,&nbsp;David Helman","doi":"10.1016/j.compag.2024.109788","DOIUrl":"10.1016/j.compag.2024.109788","url":null,"abstract":"<div><div>Traditional methods for assessing tomato quality are time-consuming, expensive, and limited in scope. Here we developed a non-destructive spectral-based model using a handheld hyperspectral camera with 204 bands at the 400–1000 nm range, focusing on data reduction, paving the way for an economically viable device designed to assess seven key tomato quality parameters. We collected 567 fruits from five cultivars of various types and analyzed them for weight, firmness, total soluble solids (TSS), citric acid, ascorbic acid, lycopene, and pH after taking hyperspectral images of the fruits. Five commonly used spectral indices, thousands of normalized difference spectral index (NDSI) combinations, a multivariable regression model (MVR), and three machine learning (ML) algorithms (random forest – RF, extreme gradient boosting – XGBoost, and artificial neural network – ANN) were employed to predict the quality parameters from as few bands as possible. Results show that the ML models with bands selected via a hotspot overlapping method significantly improved quality prediction compared to the common spectral index approaches. Among ML algorithms, RF stood out with the best results with R<sup>2</sup> of 0.94 for weight, 0.89 for firmness, 0.79 for lycopene, 0.72 for TSS, 0.67 for pH, 0.62 for citric acid, and 0.45 for ascorbic acid, with the only exception of ANN, which was slightly better for weight and lycopene (R<sup>2</sup> of 0.95 and 0.85, respectively). Overall, models with only five bands were enough to predict all seven quality parameters with comparable performance to models with a larger number of bands. Our study offers an efficient and cost-effective method for assessing pre-harvest tomato quality, benefiting farmers and the food industry, as well as scientific research on fruit development and nutrition.</div></div>","PeriodicalId":50627,"journal":{"name":"Computers and Electronics in Agriculture","volume":"229 ","pages":"Article 109788"},"PeriodicalIF":7.7,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143174045","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Neural network-based method for contactless estimation of carcass weight from live beef images
IF 7.7 · CAS Tier 1, Agricultural & Forestry Sciences · Q1 AGRICULTURE, MULTIDISCIPLINARY · Pub Date: 2025-02-01 · DOI: 10.1016/j.compag.2024.109830
Haoyu Zhang, Yuqi Zhang, Kai Niu, Zhiqiang He
Accurately estimating the carcass weight of live beef cattle is crucial in the breeding industry: it is essential for evaluating the quality and production capacity of beef cattle and directly impacts the economic returns of breeding farms. Although current animal husbandry research predominantly focuses on estimating live body weight, few studies explore the relationship between live images and carcass weight. Moreover, existing methods for estimating carcass weight rely on manually measured body dimensions, a process that is time-consuming, laborious, and compromises animal welfare. In this study, we propose a contactless method based on a dual-input deep neural network to estimate the carcass weight of live beef cattle, explore the impact of top-view and side-view images on the estimation results, and experimentally analyze specific scenarios encountered in practical applications to demonstrate the model's robustness. The feature extraction network employs two SE-ResNeXt-50 models to extract back features from top-view images and abdominal features from side-view images, respectively. The features extracted from both views are merged and processed through a regression network to obtain the estimated carcass weight. The proposed model was trained and tested on a dataset collected by our team, demonstrating superior performance to other typical deep learning models across four indicators (MAE, RMSE, MAPE, and R²), notably achieving an RMSE of 17.713 kg. Ablation experiments validate the contributions of the group convolution structure and the Squeeze-and-Excitation (SE) block. Overall, the method presented in this study has significant implications for evaluating animal quality and production capacity in the breeding industry.
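A minimal PyTorch sketch of this dual-input design follows: two backbones, one per view, whose pooled features are concatenated and regressed to a single weight. Plain torchvision ResNet-50 stands in here for SE-ResNeXt-50 (available, e.g., as "seresnext50_32x4d" in timm); the head layer sizes are assumptions, not the paper's architecture.

```python
# A minimal sketch of the dual-input design described above: two CNN backbones
# (torchvision ResNet-50 as a stand-in for SE-ResNeXt-50), whose pooled
# features are concatenated and regressed to a single carcass weight.
import torch
import torch.nn as nn
from torchvision.models import resnet50

class DualViewWeightNet(nn.Module):
    def __init__(self):
        super().__init__()
        def backbone():
            m = resnet50(weights=None)
            m.fc = nn.Identity()          # keep the 2048-d pooled features
            return m
        self.top, self.side = backbone(), backbone()
        self.head = nn.Sequential(nn.Linear(4096, 256), nn.ReLU(),
                                  nn.Linear(256, 1))  # sizes are assumptions

    def forward(self, top_img, side_img):
        feats = torch.cat([self.top(top_img), self.side(side_img)], dim=1)
        return self.head(feats).squeeze(1)  # predicted weight (e.g., kg)

model = DualViewWeightNet()
w = model(torch.randn(2, 3, 224, 224), torch.randn(2, 3, 224, 224))
print(w.shape)  # torch.Size([2])
```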
{"title":"Neural network-based method for contactless estimation of carcass weight from live beef images","authors":"Haoyu Zhang,&nbsp;Yuqi Zhang,&nbsp;Kai Niu,&nbsp;Zhiqiang He","doi":"10.1016/j.compag.2024.109830","DOIUrl":"10.1016/j.compag.2024.109830","url":null,"abstract":"<div><div>Accurately estimating the carcass weight of the live beef cattle is crucial in the breeding industry as it is essential for evaluating the quality and production capacity of beef cattle and directly impacts the economic benefits of breeding farms. Although current animal husbandry research predominantly focuses on estimating live body weight, few studies explore the relationship between live images and carcass weight. Additionally, existing methods for estimating carcass weight rely on manually measured body dimensions, a process that is time-consuming, laborious, and compromises animal welfare. In this study, we propose a contactless method utilizing dual-input deep neural networks to estimate the carcass weight of live beef cattle, and explore the impact of both top and side views images on the estimation results while performing experimental analyses of specific scenarios encountered in practical applications to highlight the model’s robustness. The feature extraction network employs two SE-ResNeXt-50 models to extract back features from top view images and abdominal features from side view images, respectively. By merging the extracted information from both views, the combined features are processed through a network to obtain the estimated carcass weight. The proposed model has been trained and tested on a dataset collected by our team, demonstrating superior performance compared to other typical deep learning models across four indicators: MAE, RMSE, MAPE, and R<sup>2</sup>, particularly achieving a notable RMSE of 17.713 kg. Ablation experiments are conducted to validate the contributions of the group convolution structure and the Squeeze and Excitation (SE) block. Overall, the method presented in this study bears significant implications for animal quality and production capacity evaluation in the breeding industry.</div></div>","PeriodicalId":50627,"journal":{"name":"Computers and Electronics in Agriculture","volume":"229 ","pages":"Article 109830"},"PeriodicalIF":7.7,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143174046","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Kiwifruit segmentation and identification of picking point on its stem in orchards
IF 7.7 · CAS Tier 1, Agricultural & Forestry Sciences · Q1 AGRICULTURE, MULTIDISCIPLINARY · Pub Date: 2025-02-01 · DOI: 10.1016/j.compag.2024.109748
Li Li , Kai Li , Zhi He , Hao Li , Yongjie Cui
Automated picking of kiwifruit with retained stems is crucial for extending the fruit's freshness period and ensuring its quality during storage. Achieving this goal requires accurately locating picking points based on kiwifruit stem detection. The small size of kiwifruit stems and their colour similarity to the fruit make stem detection difficult and pose a challenge for accurately identifying picking points. This study proposed DS-UNet, a method based on an improved UNet (convolutional networks for biomedical image segmentation) model, to segment kiwifruit and its stems and to identify and localise the corresponding picking points under trellis cultivation. First, to improve the UNet model, conventional convolutions are replaced by depthwise-separable convolutions in the encoding stage, and a spatial attention mechanism is added after the convolutional layer in the decoding stage, improving the model's computational efficiency and segmentation performance. Then, constraints encoding the positional relationship between a kiwifruit and its stem are used to associate each fruit with its stem and to lock onto the target stem. Finally, the centroid of the minimum bounding rectangle of the stem region is identified and used as the picking point. Experimental results demonstrate that the proposed DS-UNet instance segmentation algorithm increases mPA, mIoU, P and R values for kiwifruit and its stems by 6.76%, 10.98%, 10.10% and 12.46%, respectively, compared with the original UNet, while the inference time is shortened by 87.50%. Using the proposed method, the probability of effectively predicting the picking point was 91.65%. This study provides a solid foundation for developing the information-perception system of smart picking equipment and for the storage and fresh-keeping of kiwifruit after harvest, and it offers a reference for picking-point prediction of other fruits and vegetables with similar growth characteristics.
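The final geometric step, taking the centroid of the stem region's minimum bounding rectangle as the picking point, can be sketched with OpenCV as follows; the binary stem mask is assumed to come from the DS-UNet segmentation.

```python
# A minimal OpenCV sketch of the final geometric step described above: take a
# binary mask of the detected stem region and use the centre of its minimum
# (rotated) bounding rectangle as the picking point.
import cv2
import numpy as np

def picking_point(stem_mask: np.ndarray):
    """stem_mask: uint8 binary image, 255 where the stem was segmented."""
    contours, _ = cv2.findContours(stem_mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)      # keep the target stem
    (cx, cy), (w, h), angle = cv2.minAreaRect(largest)
    return int(round(cx)), int(round(cy))             # picking point (px)

mask = np.zeros((100, 100), np.uint8)
cv2.rectangle(mask, (40, 20), (48, 60), 255, -1)      # toy stem blob
print(picking_point(mask))                            # ~ (44, 40)
```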
{"title":"Kiwifruit segmentation and identification of picking point on its stem in orchards","authors":"Li Li ,&nbsp;Kai Li ,&nbsp;Zhi He ,&nbsp;Hao Li ,&nbsp;Yongjie Cui","doi":"10.1016/j.compag.2024.109748","DOIUrl":"10.1016/j.compag.2024.109748","url":null,"abstract":"<div><div>Automated picking of kiwifruit with retained stems is crucial for extending the fruit’s freshness period and ensuring its quality during storage. Accurately obtaining kiwifruit picking points based on kiwifruit stem detection is necessary to effectively achieve this goal. The small size and similar colour characteristics of kiwifruit stems to fruit make fruit stem detection more difficult and pose a challenge in accurately identifying picking points. This study proposed a DS-UNet method based on improved convolutional networks as a biomedical image segmentation model for the segmentation of kiwifruit and its stem, identification of picking points to segment the characteristics of kiwifruit and its stems and identification and localisation of the corresponding picking points in trellis cultivation. First, to improve convolutional networks for biomedical image segmentation (UNet) models, conventional convolution is replaced by depth-wise-separable convolution in the encoding stage. A spatial attention mechanism is added after the convolutional layer in the decoding stage, which increases the model’s computing power and segmentation efficiency. Then, constraint conditions were set to establish the relationship between the fruit stem and fruit and lock the target fruit stem by determining the positional relationship between the growth of the kiwifruit and its stems. Finally, the centroid of the minimum bounding rectangle of the kiwifruit stem characteristic area was identified and used as an effective target for fruit stem picking point. Experimental results demonstrate that the proposed DS-UNet instance segmentation algorithm can achieve increased <em>mPA</em>, <em>mIoU</em>, <em>P</em> and <em>R</em> values for kiwifruit and its stems by 6.76%, 10.98%, 10.10% and 12.46%, respectively, compared to those of the original UNet. The inference time was shortened by 87.50%. Using the proposed method, the probability of effectively predicting the picking point was 91.65%. This study provides a solid foundation for developing an information perception system for smart picking equipment and the storage and fresh-keeping of kiwifruit after harvest. This study also provides a reference for picking point prediction of other fruits and vegetables with similar growth characteristics.</div></div>","PeriodicalId":50627,"journal":{"name":"Computers and Electronics in Agriculture","volume":"229 ","pages":"Article 109748"},"PeriodicalIF":7.7,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143175120","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Wheat Fusarium head blight severity grading using generative adversarial networks and semi-supervised segmentation
IF 7.7 · CAS Tier 1, Agricultural & Forestry Sciences · Q1 AGRICULTURE, MULTIDISCIPLINARY · Pub Date: 2025-02-01 · DOI: 10.1016/j.compag.2024.109817
Guoqing Feng , Ying Gu , Cheng Wang , Dongyan Zhang , Rui Xu , Zhanwang Zhu , Bin Luo
The severity of Fusarium head blight (FHB), a highly destructive disease of wheat spikes, can be graded from RGB images. To reduce the costs of image acquisition and the annotation costs of segmentation models while achieving accurate wheat FHB severity grading, this study proposed data augmentation strategies comprising StyleGAN3, Real-ESRGAN, and different input image resolutions, together with a semi-supervised three-class segmentation model. StyleGAN3 and Real-ESRGAN, both built on generative adversarial network structures, were used for wheat spike image generation and super-resolution reconstruction, respectively. High-quality generated images were screened according to their contribution to the FID score to build more reliable datasets. In addition, a semi-supervised segmentation network based on L-U2NetP and knowledge distillation was proposed, which reduced annotation requirements by 60% while achieving three-class segmentation and severity grading of wheat spikes with FHB. This study also evaluated input images of different resolutions within the proposed method. Results indicated that medium-resolution images allowed the model to achieve a segmentation accuracy of 95.37% and a grading accuracy of 96.88% while preserving the integrity of the disease information; compared with high-resolution inputs, they also improve transmission and super-resolution reconstruction speed on the application side. Meanwhile, high-resolution images yielded a segmentation accuracy of 95.75% and a grading accuracy of 95.00%. The obtained models demonstrated strong feature extraction capabilities on heterogeneous test sets with complicated image backgrounds. The proposed method can therefore be used for image generation and detection under different resource configurations and is a reliable, flexible tool for wheat FHB severity grading.
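The knowledge-distillation component can be illustrated with a standard per-pixel soft-label loss, sketched below in PyTorch; the temperature value and the three-class setting are illustrative assumptions, not the paper's settings.

```python
# A minimal sketch of the knowledge-distillation idea mentioned above: a
# student segmentation network is trained to match a teacher's softened
# per-pixel class distribution. Temperature and weighting are illustrative.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, T: float = 2.0):
    """Per-pixel KL divergence between softened class distributions.
    Logits: (N, C, H, W); here C = 3 classes is an assumed setting."""
    log_p_student = F.log_softmax(student_logits / T, dim=1)
    p_teacher = F.softmax(teacher_logits / T, dim=1)
    # T^2 keeps gradient magnitude comparable across temperatures
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * T * T

s = torch.randn(2, 3, 64, 64, requires_grad=True)  # student logits
t = torch.randn(2, 3, 64, 64)                      # teacher logits
print(distillation_loss(s, t).item())
```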
{"title":"Wheat Fusarium head blight severity grading using generative adversarial networks and semi-supervised segmentation","authors":"Guoqing Feng ,&nbsp;Ying Gu ,&nbsp;Cheng Wang ,&nbsp;Dongyan Zhang ,&nbsp;Rui Xu ,&nbsp;Zhanwang Zhu ,&nbsp;Bin Luo","doi":"10.1016/j.compag.2024.109817","DOIUrl":"10.1016/j.compag.2024.109817","url":null,"abstract":"<div><div>The severity of Fusarium head blight (FHB), a highly destructive disease of wheat spikes, can be graded using RGB images. To reduce the various costs required for image acquisition and the annotation costs required for segmentation models and to achieve accurate wheat FHB severity grading, this study proposed data augmentation strategies comprising StyleGAN3, Real-ESRGAN, and different input image resolutions, as well as the semi-supervised three-class segmentation model. StyleGAN3 and Real-ESRGAN, which use a generative adversarial network structure, were used in wheat spike image generation and super-resolution reconstruction in this study, respectively. High-quality generated images were screened based on their contribution to the FID scores for more reliable datasets. In addition, a semi-supervised segmentation network based on L-U2NetP and knowledge distillation was proposed, which reduced the annotation requirements by 60% while achieving three-class segmentation and severity grading of wheat spikes with FHB. This study also proposed the use of images of different resolutions at the input end and compared them with the proposed method. Results indicated that medium-resolution images could assist the model in achieving segmentation accuracy of 95.37% and grading accuracy of 96.88% while ensuring the integrity of the disease information. Compared with inputting high-resolution images, it can improve the transmission and super-resolution reconstruction rate on the application side. Meanwhile, high-resolution images also assisted the model in achieving segmentation accuracy of 95.75% and grading accuracy of 95.00%. The obtained models demonstrated strong feature extraction capabilities in heterogeneous test sets with complicated image backgrounds. Therefore, the proposed method can be used for image generation and application detection under different resource configurations and is a reliable and flexible tool for wheat FHB severity grading.</div></div>","PeriodicalId":50627,"journal":{"name":"Computers and Electronics in Agriculture","volume":"229 ","pages":"Article 109817"},"PeriodicalIF":7.7,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143175178","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Cow depth image restoration method based on RGB guided network with modulation branch in the cowshed environment
IF 7.7 · CAS Tier 1, Agricultural & Forestry Sciences · Q1 AGRICULTURE, MULTIDISCIPLINARY · Pub Date: 2025-02-01 · DOI: 10.1016/j.compag.2024.109773
Yanxing Li , Xin Dai , Baisheng Dai , Peng Song , Xinjie Wang , Xinchao Chen , Yang Li , Weizheng Shen
Depth images are widely applied in smart animal husbandry. Raw depth images collected by RGB-D cameras generally contain many missing depth values, caused by light reflected from the white patches of cows' coats and by direct sunlight in the cowshed. Incomplete cow bodies in depth images hinder the use of such images in health monitoring. This study proposed a cow depth image restoration method based on an RGB-guided network with a modulation branch. First, outliers caused by light interference are removed from the depth image and the regions of the cow's body with missing depth values are identified. Second, RGB and depth features are extracted through multiple convolutions and fused in the S-C (Self-attention and Convolution attention) fusion module of the encoder. Then, a prediction head combined with a modulation branch generates a coarsely predicted depth image after deconvolution. Finally, the repaired depth image is produced by the SPN (Spatial Propagation Network) refinement module of the decoder. For dataset construction, 7260 depth images were collected on a commercial dairy farm. Because complete ground-truth depth images corresponding to the raw images with missing values were lacking, two ways of synthesizing depth images with missing regions were designed. The experimental results showed that the method improved the restoration quality of incomplete cow bodies in depth images. Compared with other depth restoration works, the proposed method achieved significantly superior performance, with RMSE = 36.32 and MAE = 12.77, and the proportion of predicted pixels within the 1.25 error threshold reached 0.999. The repaired depth images and point cloud results also showed a smoother transition between missing and restored regions. Moreover, compared with depth images containing missing regions, the repaired depth images improved the precision, recall, and F1-score of cow body condition scoring. This study can improve the usefulness of collected data and make depth images more practical for smart animal husbandry.
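The first step, outlier removal and missing-region detection, can be sketched in NumPy as follows; the valid depth range is an assumed value for a cowshed-mounted RGB-D camera, not the paper's calibration.

```python
# A minimal NumPy sketch of the pre-processing step described above: clip
# light-induced outliers out of the raw depth map and build a mask of the
# missing regions to be restored. The near/far range is an assumption.
import numpy as np

def preprocess_depth(depth_mm: np.ndarray, near=500.0, far=4000.0):
    """depth_mm: raw depth in millimetres; 0 marks sensor dropouts."""
    d = depth_mm.astype(np.float32).copy()
    outliers = (d < near) | (d > far)      # reflections / sunlight artefacts
    d[outliers] = 0.0                      # treat outliers as missing
    missing_mask = d == 0.0                # regions the network must restore
    return d, missing_mask

raw = np.random.uniform(0, 5000, (4, 6)).astype(np.float32)
depth, mask = preprocess_depth(raw)
print(mask.mean(), "fraction of pixels to restore")
```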
{"title":"Cow depth image restoration method based on RGB guided network with modulation branch in the cowshed environment","authors":"Yanxing Li ,&nbsp;Xin Dai ,&nbsp;Baisheng Dai ,&nbsp;Peng Song ,&nbsp;Xinjie Wang ,&nbsp;Xinchao Chen ,&nbsp;Yang Li ,&nbsp;Weizheng Shen","doi":"10.1016/j.compag.2024.109773","DOIUrl":"10.1016/j.compag.2024.109773","url":null,"abstract":"<div><div>Depth images were widely applied in smart animal husbandry. The raw depth images collected by the RGB-D cameras generally existed amount of missing depth values due to the light reflected from white pattern of cows and direct sunlight in the cowshed. The incomplete cows in depth images would affect the application of depth images in health monitoring. This study proposed a cow depth image restoration method based on RGB guided network with a modulation branch. Firstly, removing the outliers caused by light from the depth image and determining the depth value missing area of the cow’s body. Second, RGB and depth features were extracted through multiple convolutions and fused in the S-C (Self-attention and Convolution attention) fusion module of encoder. Then, the prediction head generated a coarsely predicted depth image after deconvolution combined with a modulation branch. Finally, the repaired depth image was generated in the SPN (Spatial Propagation Network) refinement module of the decoder. In terms of dataset construction, 7260 depth images were collected in a commercial dairy farm. To make up for lacking ground truth complete depth images corresponded to the raw depth images with missing value, two ways for generating missing depth images were designed. The experimental results shown that the method had improved restoration quality of cow’s incomplete body in depth images. By comparing with other depth restoration works, the proposed method achieved significantly superior performance on RMSE = 36.32 and MAE = 12.77, and the percentage of predicted pixels within the error range at 1.25 reached 0.999. Additionally, a smoother transition between missing and restoration regions was demonstrated in the repaired depth images and point cloud results. And compared with the depth images with missing regions, the Precision, Recall rate and F1-score of the repaired depth images were improved for cow body condition scoring. This study could improve the effectiveness of the collected data and make the depth images more practical for smart animal husbandry.</div></div>","PeriodicalId":50627,"journal":{"name":"Computers and Electronics in Agriculture","volume":"229 ","pages":"Article 109773"},"PeriodicalIF":7.7,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143175182","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A Single-Stage Navigation Path Extraction Network for agricultural robots in orchards
IF 7.7 · CAS Tier 1, Agricultural & Forestry Sciences · Q1 AGRICULTURE, MULTIDISCIPLINARY · Pub Date: 2025-02-01 · DOI: 10.1016/j.compag.2024.109687
Hui Liu, Xiao Zeng, Yue Shen, Jie Xu, Zohaib Khan
Real-time, precise extraction of navigation paths is essential for the autonomous navigation of agricultural robots. Although such robots are widely used in orchards, path extraction remains a complex, multi-stage process. To address the limitations of current vision-based algorithms, this paper proposes a novel approach: the Single-Stage Navigation Path Extraction Network (NPENet). NPENet simplifies path extraction by reducing unnecessary parameterization and redefining the road centerline as the neural network's primary prediction target, with a correspondingly tailored loss function. Using residual modules, NPENet effectively extracts navigation path features in orchard environments, and its performance is further enhanced by optimizing the network structure. A dataset of 25,720 images from various orchard scenes was used to train and test the model. Experimental results show that NPENet achieves 92.14% accuracy and 91.6% recall in road centerline detection, a detection speed of 10.1 ms per 448×448-pixel frame on a Jetson Xavier, and a parameter size of only 1.5 M. These findings show that NPENet outperforms existing visual detection and segmentation methods, providing efficient and accurate road information for mobile robots in orchard environments and offering a promising solution for autonomous navigation in agriculture.
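One plausible way to make the road centerline a direct prediction target, sketched below, is to regress one normalised x-coordinate per image row under an L1 loss; this is an illustrative formulation, not NPENet's actual tailored loss or head design.

```python
# A minimal sketch of one plausible centerline-as-target formulation: predict
# a normalised centerline x-coordinate for each image row and train with an
# L1 loss. Illustrative only; not NPENet's actual loss or architecture.
import torch
import torch.nn as nn

class CenterlineHead(nn.Module):
    def __init__(self, feat_dim=256, rows=448):
        super().__init__()
        self.fc = nn.Linear(feat_dim, rows)       # one x-coordinate per row

    def forward(self, features):
        return torch.sigmoid(self.fc(features))   # x in [0, 1] per image row

head = CenterlineHead()
feats = torch.randn(8, 256)                       # pooled backbone features
pred = head(feats)                                # (8, 448) centerline estimates
target = torch.rand(8, 448)                       # ground-truth centerline
print(nn.functional.l1_loss(pred, target).item())
```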
{"title":"A Single-Stage Navigation Path Extraction Network for agricultural robots in orchards","authors":"Hui Liu,&nbsp;Xiao Zeng,&nbsp;Yue Shen,&nbsp;Jie Xu,&nbsp;Zohaib Khan","doi":"10.1016/j.compag.2024.109687","DOIUrl":"10.1016/j.compag.2024.109687","url":null,"abstract":"<div><div>The real-time and precise extraction of navigation paths holds significant importance in ensuring the autonomous navigation of agricultural robots. Although widely used in orchards, path extraction for agricultural robots remains a complex, multi-stage process. To address the limitations of current vision-based algorithms, this paper proposes a novel approach: the Single-Stage Navigation Path Extraction Network (NPENet). NPENet simplifies the path extraction process by reducing unnecessary parameterization and redefining the road centerline as the neural network’s primary prediction target, with a corresponding tailored loss function. Utilizing residual modules, NPENet effectively extracts navigation path features in orchard environments. The model’s performance is further enhanced by optimizing the network structure. A dataset of 25,720 images from various orchard scenes was used to train and test the model. Experimental results demonstrate that NPENet achieves 92.14% accuracy in road centerline detection and 91.6% recall, with a detection speed of 10.1 ms per 448x448 pixel frame on a Jetson Xavier, and a parameter size of only 1.5 M. These findings show that NPENet outperforms existing visual detection and segmentation methods, providing efficient and accurate road information for mobile robots in orchard environments. This approach offers a promising solution for autonomous navigation in agriculture.</div></div>","PeriodicalId":50627,"journal":{"name":"Computers and Electronics in Agriculture","volume":"229 ","pages":"Article 109687"},"PeriodicalIF":7.7,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143175188","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Selective fruit harvesting prediction and 6D pose estimation based on YOLOv7 multi-parameter recognition
IF 7.7 · CAS Tier 1, Agricultural & Forestry Sciences · Q1 AGRICULTURE, MULTIDISCIPLINARY · Pub Date: 2025-02-01 · DOI: 10.1016/j.compag.2024.109815
Guorui Zhao , Shi Dong , Jian Wen , Yichen Ban , Xiaowei Zhang
For crops that must be harvested in batches according to maturity, the harvesting operation needs to select individual fruits that have fully developed and matured. The real-time performance and accuracy of fruit recognition and localization in selective harvesting robots therefore play a crucial role in harvesting efficiency. Unlike simple spatial localization, estimating the 6D pose of fruits requires more parameters, which degrades the network's real-time performance and generalization. This study therefore proposes a selective-harvest recognition and 6D pose estimation algorithm based on YOLOv7 multi-parameter recognition, using cucumber as the research object. First, the YOLOv7-hv algorithm is proposed by extending the YOLOv7 network with a keypoint recognition branch and a mask generation branch, identifying fruit targets suitable for harvesting and providing pose keypoint recognition and instance segmentation for them. Based on the multiple parameters recognized by the network, a YOLOv7-hv picking 6D pose estimation algorithm is then proposed to estimate the 6D picking pose of the fruit. On the cucumber fruit dataset captured in this study, YOLOv7-hv achieves an AP of 94.0% for target detection, an OKS of 0.882 for keypoint detection, an mIoU of 93.8% for segmentation, and an overall frame rate of 43 FPS, with a mean position error of 6.18 mm and average time of 8.2 ms for localization, and an angular error of 6.25° and average time of 8.4 ms for pose estimation. On embedded devices, the model maintains similar evaluation metrics and good real-time performance. These performance metrics indicate that the proposed method enables real-time, accurate recognition and pose estimation of fruit targets.
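The OKS figure cited above follows the standard COCO object keypoint similarity, the mean over labelled keypoints of exp(−d² / (2s²k²)), where s² is the object scale (segment area) and k is a per-keypoint constant. A minimal NumPy sketch is given below, with illustrative constants rather than the values used for cucumber pose points.

```python
# A minimal sketch of the standard COCO OKS metric: per-keypoint Gaussian
# scores exp(-d^2 / (2 s^2 k^2)) averaged over labelled keypoints. The
# per-keypoint constants k here are illustrative assumptions.
import numpy as np

def oks(pred, gt, visible, area, k):
    """pred, gt: (K, 2) keypoint coords; visible: (K,) bool;
    area: object segment area in px^2; k: (K,) per-keypoint constants."""
    d2 = np.sum((pred - gt) ** 2, axis=1)
    s2 = area                      # s^2 = object scale squared
    scores = np.exp(-d2 / (2.0 * s2 * k ** 2 + 1e-9))
    return scores[visible].mean() if visible.any() else 0.0

pred = np.array([[100.0, 50.0], [102.0, 80.0]])
gt = np.array([[98.0, 52.0], [105.0, 78.0]])
print(oks(pred, gt, np.array([True, True]), area=900.0, k=np.array([0.1, 0.1])))
```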
{"title":"Selective fruit harvesting prediction and 6D pose estimation based on YOLOv7 multi-parameter recognition","authors":"Guorui Zhao ,&nbsp;Shi Dong ,&nbsp;Jian Wen ,&nbsp;Yichen Ban ,&nbsp;Xiaowei Zhang","doi":"10.1016/j.compag.2024.109815","DOIUrl":"10.1016/j.compag.2024.109815","url":null,"abstract":"<div><div>For crops that need to be harvested in batches based on maturity, the harvesting operation needs to select individual fruits that have developed and matured for harvest. Therefore, The real-time performance and accuracy of fruit target recognition and localization tasks in selective harvesting robotic operations play a crucial role in the improvement of harvesting efficiency. Unlike the spatial localization of fruits, the estimation of the 6D pose of fruits, which requires more parameters, negatively impacts the network’s real-time and generalization. Therefore, this study proposes selective harvest recognition and a 6D pose estimation algorithm based on YOLOv7 multi-parameter recognition using cucumber as the research object. First, the YOLOV7-hv algorithm is proposed by improving the structure of the YOLOv7 network, adding the key points recognition branch and the mask generation branch to identify the suitable fruit targets for harvesting, and realizing the pose key points recognition and the instance segmentation of the suitable fruit targets. Based on the multinomial parameters recognized by the YOLOV7-hv network, the YOLOv7-hv picking 6D pose estimation algorithm is proposed to realize the estimation of fruit picking 6D pose. In the cucumber fruit datasets captured in this study, the YOLOV7-hv algorithm has AP of 94.0 % for target detection, OKS of 0.882 for key points detection, mIOU of 93.8 % for the segmentation task, Fps of 43 for the overall network,the mean position error of 6.18 mm for localization, the average time consumption of 8.2 ms for localization, the angular error of 6.25° for pose estimation and the average time consumption of 8.4 ms for pose estimation. In embedded devices, the model still maintains close evaluation metrics and good real-time performance. The various performance metrics indicate that the method proposed in this paper enables real-time and accurate recognition and pose estimation of fruit targets.</div></div>","PeriodicalId":50627,"journal":{"name":"Computers and Electronics in Agriculture","volume":"229 ","pages":"Article 109815"},"PeriodicalIF":7.7,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143175189","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0