
Latest Publications in Computers and Electronics in Agriculture

Towards reliable and damage-less robotic fragile fruit grasping: An enveloping gripper with multimodal strategy inspired by Asian elephant trunk
IF 7.7 CAS Tier 1 (Agriculture & Forestry Sciences) Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date: 2025-03-10 DOI: 10.1016/j.compag.2025.110198
Qingyu Wang , Kaixin Bai , Lei Zhang , Zhizhong Sun , Tianze Jia , Dong Hu , Qiang Li , Jianwei Zhang , Alois Knoll , Huanyu Jiang , Mingchuan Zhou , Yibin Ying
Fruit loading and packaging are labor-intensive and time-consuming steps in the postharvest industry that involve continuous pick-and-place manipulation, and we aim to replace this manual work with robotic grasping. For fragile fruit, however, the main difficulty in robotic grasping is reducing early-stage bruising while maintaining grasping reliability. In this study, we address this problem to achieve reliable and damage-less robotic grasping of fragile fruit. Inspired by the structure and feeding behavior of the Asian elephant trunk, a bionic pneumatic soft gripper was designed and a multimodal grasping strategy was proposed. Like the Asian elephant trunk, the gripper has two trapezoid air chambers controlling two individual parts: a fingertip-like process and an enveloping structure. The enveloping grasping behavior was imitated, providing a larger contact area, lower contact force, and larger pull-off force. A visuo-tactile multimodal grasping strategy was integrated into the robotic grasping system: the visual modality performs positioning and grasp-pose estimation, while the tactile modality confirms the grasp pose and provides closed-loop control of the grasping force. In experiments on the enveloping gripper, the maximum contact force and the pull-off force reached a good balance at 0.7083 N and 7.959 N, respectively. With the proposed multimodal grasping strategy, the grasping success rate increased by 4.23 % to 96.70 %. For closed-loop control of the grasping force, the average steady-state error and maximum overshoot were 0.0856 N and 26.43 %, respectively. An experiment with Spatial Frequency Domain Imaging (SFDI) demonstrated the effectiveness of the enveloping gripper in reducing early-stage bruising. To some extent, the designed enveloping gripper with the proposed multimodal strategy could achieve reliable and damage-less grasping of fragile fruit, which is promising for the fruit postharvest industry.
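The closed-loop grasping-force control described above can be sketched as a PI loop acting on a pneumatic chamber. Everything below — the gains, the first-order chamber model, and the helper names `pi_pressure_step` and `simulate_grasp` — is an illustrative assumption, not the paper's implementation; only the ≈0.7 N contact-force target comes from the abstract.

```python
# Hedged sketch: PI closed-loop grasping-force control toward a 0.7 N target.
# Gains and the first-order chamber dynamics are assumptions, not the paper's.

def pi_pressure_step(error, integral, kp=2.0, ki=1.0, dt=0.05):
    """One PI update: returns (pressure command, updated error integral)."""
    integral += error * dt
    return kp * error + ki * integral, integral

def simulate_grasp(target_n=0.7, steps=400, dt=0.05, tau=0.3):
    """Contact force relaxes toward the pressure command (time constant tau)."""
    force, integral = 0.0, 0.0
    for _ in range(steps):
        error = target_n - force
        cmd, integral = pi_pressure_step(error, integral, dt=dt)
        force += dt / tau * (cmd - force)  # simple chamber/contact dynamics
    return force

final_force = simulate_grasp()  # settles near the 0.7 N target
```

In the real system the tactile sensor supplies the measured force and the command actuates chamber pressure; the structure of the loop is the same.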
{"title":"Towards reliable and damage-less robotic fragile fruit grasping: An enveloping gripper with multimodal strategy inspired by Asian elephant trunk","authors":"Qingyu Wang ,&nbsp;Kaixin Bai ,&nbsp;Lei Zhang ,&nbsp;Zhizhong Sun ,&nbsp;Tianze Jia ,&nbsp;Dong Hu ,&nbsp;Qiang Li ,&nbsp;Jianwei Zhang ,&nbsp;Alois Knoll ,&nbsp;Huanyu Jiang ,&nbsp;Mingchuan Zhou ,&nbsp;Yibin Ying","doi":"10.1016/j.compag.2025.110198","DOIUrl":"10.1016/j.compag.2025.110198","url":null,"abstract":"<div><div>Fruit uploading and packaging are labor-intensive and time-consuming steps in postharvest industry, which involve continuous pick-and-place manipulation. In this case, we aim to replace manual working with robotic grasping. However, for robotic fragile fruit grasping, the main difficulty is to reduce early stage bruise while maintaining the grasping reliability. In this study, we aim to solve this problem and achieve reliable and damage-less robotic fragile fruit grasping. Inspired by the structure of the Asian elephant trunk with its feeding behavior, a bionic and pneumatic soft gripper was designed, and a multimodal grasping strategy was proposed. Similar with Asian elephant trunk, the gripper has two designed trapezoid air chambers to control the two individual parts, including fingertip-like process and enveloping structure. Enveloping grasping behavior was imitated with larger area of contact, less contact force, and larger pull off force. A visuo-tactile multimodal grasping strategy was integrated into the robotic grasping system. The visual modality was developed for positioning and grasp pose estimation. The tactile modality was employed for grasping pose confirmation and closed-loop grasping force control. In the experiment on the enveloping gripper, the maximum contact force and the pull off force reached a good balance and were 0.7083 N and 7.959 N, respectively. With the proposed multimodal grasping strategy, the grasping success rate increased 4.23 % to 96.70 %. 
As for closed-loop control of the grasping force, the average value for steady-state error and maximum overshoot were 0.0856 N and 26.43 %, respectively. The experiment on Spatial Frequency Domain Imaging (SFDI) demonstrated the effectiveness of our enveloping gripper in reducing the early stage bruise. To some extent, the designed enveloping gripper with the proposed multimodal strategy could achieve reliable and damage-less fragile fruit grasping, which is promising in fruit postharvest industry.</div></div>","PeriodicalId":50627,"journal":{"name":"Computers and Electronics in Agriculture","volume":"234 ","pages":"Article 110198"},"PeriodicalIF":7.7,"publicationDate":"2025-03-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143578132","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A study on the variation of knot width in Larix olgensis based on a Mixed-Effects model
IF 7.7 CAS Tier 1 (Agriculture & Forestry Sciences) Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date: 2025-03-10 DOI: 10.1016/j.compag.2025.110215
Zelin Li , Weiwei Jia , Fengri Li , Yang Zhao , Haotian Guo , Fan Wang
Knots are common internal defects in wood that significantly affect its mechanical strength and visual quality. Controlling knot size is an effective approach to improving wood quality, and knot width is a key indicator for measuring knot size. This study investigated 27 plantation-grown Larix olgensis trees from the Mengjiagang Forest Farm in Heilongjiang Province, China. Variables at both the tree and knot levels were incorporated to develop fixed-effects and mixed-effects models to simulate changes in knot width. The results showed that the mixed-effects model exhibited better fitting performance compared to the fixed-effects model. Additionally, the study evaluated the impact of four different sampling strategies on the predictive accuracy of the models. The findings indicated that the Type 2 sampling strategy, which involves selecting seven knot samples from the upper trunk, yielded the best predictive performance. The study also revealed that knot width increased with greater branch insertion height and angle, but decreased with higher height-diameter ratios, peaking at around the 10th year. These findings provide scientific evidence for optimizing pruning strategies, effectively controlling knot size, and increasing the proportion of knot-free timber, offering significant practical value.
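As a toy illustration of why the mixed-effects structure above matters, the sketch below simulates knot widths with a per-tree random intercept and shows that within-tree centering recovers the underlying slope. The covariate, coefficients, and estimator here are invented for illustration; the paper's model uses different variables and a proper mixed-effects fit.

```python
import numpy as np

# Hedged sketch: per-tree random intercepts on simulated knot widths.
# Covariate (branch insertion height) and all coefficients are invented.
rng = np.random.default_rng(0)
n_trees, knots = 27, 40
tree_effect = rng.normal(0.0, 0.5, n_trees)              # random intercepts
height = rng.uniform(2.0, 15.0, (n_trees, knots))        # insertion height, m
width = 1.0 + 0.12 * height + tree_effect[:, None] \
        + rng.normal(0.0, 0.3, (n_trees, knots))         # knot width, cm

# pooled fixed-effects-only slope
slope_pooled = np.polyfit(height.ravel(), width.ravel(), 1)[0]

# within-tree centering removes each tree's random intercept before fitting
hc = (height - height.mean(axis=1, keepdims=True)).ravel()
wc = (width - width.mean(axis=1, keepdims=True)).ravel()
slope_within = (hc @ wc) / (hc @ hc)
```

In a real analysis the random intercepts (and slopes) would be estimated jointly with a dedicated mixed-model routine rather than removed by centering; the point of the sketch is only that tree-level variation must be separated from the fixed effects.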
{"title":"A study on the variation of knot width in Larix olgensis based on a Mixed-Effects model","authors":"Zelin Li ,&nbsp;Weiwei Jia ,&nbsp;Fengri Li ,&nbsp;Yang Zhao ,&nbsp;Haotian Guo ,&nbsp;Fan Wang","doi":"10.1016/j.compag.2025.110215","DOIUrl":"10.1016/j.compag.2025.110215","url":null,"abstract":"<div><div>Knots are common internal defects in wood that significantly affect its mechanical strength and visual quality. Controlling knot size is an effective approach to improving wood quality, and knot width is a key indicator for measuring knot size. This study investigated 27 plantation-grown Larix olgensis trees from the Mengjiagang Forest Farm in Heilongjiang Province, China. Variables at both the tree and knot levels were incorporated to develop fixed-effects and mixed-effects models to simulate changes in knot width. The results showed that the mixed-effects model exhibited better fitting performance compared to the fixed-effects model. Additionally, the study evaluated the impact of four different sampling strategies on the predictive accuracy of the models. The findings indicated that the Type 2 sampling strategy, which involves selecting seven knot samples from the upper trunk, yielded the best predictive performance. The study also revealed that knot width increased with greater branch insertion height and angle, but decreased with higher height-diameter ratios, peaking at around the 10th year. 
These findings provide scientific evidence for optimizing pruning strategies, effectively controlling knot size, and increasing the proportion of knot-free timber, offering significant practical value.</div></div>","PeriodicalId":50627,"journal":{"name":"Computers and Electronics in Agriculture","volume":"234 ","pages":"Article 110215"},"PeriodicalIF":7.7,"publicationDate":"2025-03-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143592061","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Grapevine red blotch virus detection in the vineyard: Leveraging machine learning with VIS/NIR hyperspectral images for asymptomatic and symptomatic vines
IF 7.7 CAS Tier 1 (Agriculture & Forestry Sciences) Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date: 2025-03-10 DOI: 10.1016/j.compag.2025.110251
E. Laroche-Pinel , K. Singh , M. Flasco , M.L. Cooper , M. Fuchs , L. Brillante
A decade after the discovery of grapevine red blotch virus (GRBV), there is ample evidence of its detrimental impacts on grapevine physiology, grape composition, and wine production. To mitigate the spread of GRBV in vineyards, roguing is recommended as a disease management response. The imperative to identify and remove diseased vines justifies the development of autonomous scouting. In this study, nearly 700 ground-based hyperspectral images, encompassing both symptomatic and asymptomatic vine canopies, were collected in a Cabernet Franc vineyard during two growing seasons, capturing pre- and post-veraison vine development stages. Spanning 230 bands from the visible (VIS) to near-infrared (NIR) domains (510 to 900 nm, 1.7 nm width), canopy spectral signals were isolated from the background through semantic segmentation using U-Net. Simultaneously, the GRBV status of each vine was established in the laboratory through polymerase chain reaction. These two intertwined datasets were used to train various machine learning algorithms and their ensembles. In addition, strategies to reduce dataset size were explored, combining spectral binning with three different feature selection methods (Recursive Feature Elimination, Univariate Feature Selection, and an autocorrelation-based selection). Our findings revealed that hyperspectral imagery identified GRBV-infected vines with an accuracy of 75.7 % around harvest, coinciding with the peak of disease symptom expression, using only 19 bands with a 16 nm bin width. Prior to veraison, when most vines are asymptomatic, an accuracy of 74.2 % was achieved using 5 bands with a 16 nm bin width. This study substantiates the utility of hyperspectral images in the identification of GRBV-infected vines, offering a robust foundation for the development of a streamlined sensing system that holds great promise for the grape and wine industry in effectively scouting vineyards for GRBV.
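The spectral-binning step mentioned above — aggregating the 1.7 nm bands into 16 nm bins before feature selection — can be sketched as a block average. The `bin_spectra` helper and the random stand-in data below are assumptions; only the band counts and widths come from the abstract.

```python
import numpy as np

# Hedged sketch: average adjacent ~1.7 nm bands into ~16 nm bins.
def bin_spectra(spectra, band_width_nm=1.7, bin_width_nm=16.0):
    """Block-average the last axis; a trailing partial bin is dropped."""
    per_bin = int(round(bin_width_nm / band_width_nm))   # ~9 bands per bin
    n_bins = spectra.shape[-1] // per_bin
    trimmed = spectra[..., : n_bins * per_bin]
    return trimmed.reshape(*spectra.shape[:-1], n_bins, per_bin).mean(axis=-1)

# stand-in for ~700 canopy spectra x 230 bands (510-900 nm)
canopy = np.random.default_rng(1).random((700, 230))
binned = bin_spectra(canopy)   # 230 narrow bands -> 25 wide bins
```

Feature selection (e.g. Recursive Feature Elimination) would then pick the most informative bins from the binned matrix — 19 around harvest and 5 pre-veraison in the study.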
{"title":"Grapevine red blotch virus detection in the vineyard: Leveraging machine learning with VIS/NIR hyperspectral images for asymptomatic and symptomatic vines","authors":"E. Laroche-Pinel ,&nbsp;K. Singh ,&nbsp;M. Flasco ,&nbsp;M.L. Cooper ,&nbsp;M. Fuchs ,&nbsp;L. Brillante","doi":"10.1016/j.compag.2025.110251","DOIUrl":"10.1016/j.compag.2025.110251","url":null,"abstract":"<div><div>A decade after the discovery of grapevine red blotch virus (GRBV), there is ample evidence of its detrimental impacts on grapevine physiology, grape composition, and wine production. To mitigate the spread of GRBV in vineyards, roguing is recommended as a disease management response. The imperative to identify and remove diseased vines justifies the development of autonomous scouting. In this study, nearly 700 ground-based hyperspectral images, encompassing both symptomatic and asymptomatic vine canopies, were collected in a Cabernet Franc vineyard during two growing seasons, capturing pre- and post-veraison vine development stages. Spanning 230 bands from visible (VIS) to near-infrared (NIR) domains (510 to 900 nm with 1.7 nm width), canopy spectral signals were isolated from the background through semantic segmentation using U-Net. Simultaneously, the GRBV status of each vine was established in the laboratory through polymerase chain reaction. These two intertwined datasets were used for training various machine learning algorithms and their ensembles. In addition, strategies to reduce dataset size through spectral binning and testing three different feature selection methods (Recursive Feature Elimination, Univariate Feature Selection, and taking into consideration autocorrelation) were explored. Our findings revealed that hyperspectral imagery identified GRBV-infected vines with an accuracy of 75.7 % around harvest, coinciding with the peak of disease symptom expression, utilizing only 19 bands with a 16 nm bin width. 
Prior to veraison when most vines are asymptomatic, an accuracy of 74.2 % was achieved, employing 5 bands with a 16 nm bin width. This study substantiates the utility of hyperspectral images in the identification of GRBV-infected vines, offering a robust foundation for the development of a streamlined sensing system that holds great promise for the grape and wine industry in effectively scouting vineyards for GRBV.</div></div>","PeriodicalId":50627,"journal":{"name":"Computers and Electronics in Agriculture","volume":"234 ","pages":"Article 110251"},"PeriodicalIF":7.7,"publicationDate":"2025-03-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143592063","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Improved field-scale drought monitoring using MODIS and Sentinel-2 data for vegetation temperature condition index generation through a fusion framework
IF 7.7 CAS Tier 1 (Agriculture & Forestry Sciences) Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date: 2025-03-09 DOI: 10.1016/j.compag.2025.110256
Mingqi Li , Pengxin Wang , Kevin Tansey , Yuanfei Sun , Fengwei Guo , Ji Zhou
Drought has a wide range of damaging impacts, and continuous, precise time-series drought monitoring is crucial for agriculture. Most existing drought monitoring studies lack sufficient spatiotemporal resolution, making them inadequate at the field scale. Over the past decades, the Vegetation Temperature Condition Index (VTCI) derived from the Moderate Resolution Imaging Spectroradiometer (MODIS) has proven effective for drought monitoring; however, deriving VTCI from MODIS data alone is limited in spatial resolution. To address these limitations, this study combined spatiotemporal fusion techniques and machine learning to develop a novel framework for drought monitoring at both fine resolution (20 m) and a 10-day interval. The framework uses biophysical parameters calculated from Sentinel-2 data and Digital Elevation Model (DEM) data as downscaling parameters to perform spatial downscaling of Land Surface Temperature (LST). The Enhanced Spatial and Temporal Adaptive Reflectance Fusion Model (ESTARFM) was applied to fuse Sentinel-2 and MODIS data, and two fusion strategies were applied for calculating field-scale VTCI: Blend-then-Index (BI) and Index-then-Blend (IB). Results showed that both fusion strategies effectively enhanced the spatial resolution of VTCI compared to MODIS VTCI, but the BI strategy represented cropland drought conditions more effectively, showing higher consistency (Correlation Coefficient (R) > 0.83) and lower Root Mean Squared Error (RMSE < 0.05) with MODIS VTCI. In addition, the downscaled LST was consistent with MODIS LST (R > 0.77, RMSE < 1.42 K) while retaining more spatial detail. Overall, we achieved continuous time-series drought monitoring at the field scale and at 10-day intervals.
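The VTCI at the core of the framework is computed from the LST-NDVI feature space: for pixels of similar NDVI, VTCI = (LST_max − LST) / (LST_max − LST_min), where LST_max and LST_min are the warm and cold edges. The per-NDVI-bin max/min edge estimate below is a simplification of the fitted edges typically used, and the toy data are invented.

```python
import numpy as np

def vtci(ndvi, lst, n_bins=20):
    """VTCI = (LST_max - LST) / (LST_max - LST_min) within each NDVI bin;
    values near 0 mark the dry warm edge, values near 1 the wet cold edge.
    Hedged sketch: real warm/cold edges are usually fitted lines, not the
    raw per-bin extrema used here."""
    out = np.full(lst.shape, np.nan)
    edges = np.linspace(ndvi.min(), ndvi.max() + 1e-9, n_bins + 1)
    bin_idx = np.digitize(ndvi, edges) - 1
    for b in range(n_bins):
        m = bin_idx == b
        if not m.any():
            continue
        hot, cold = lst[m].max(), lst[m].min()
        out[m] = 1.0 if hot == cold else (hot - lst[m]) / (hot - cold)
    return out

# toy scene: two NDVI classes with different LST spreads
ndvi = np.array([0.2, 0.2, 0.2, 0.8, 0.8, 0.8])
lst = np.array([300.0, 310.0, 320.0, 295.0, 300.0, 305.0])
v = vtci(ndvi, lst, n_bins=2)
```

The fusion question in the paper is then simply where this computation sits: BI blends the inputs first and computes VTCI on the fused imagery, IB computes VTCI per sensor and blends the index.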
{"title":"Improved field-scale drought monitoring using MODIS and Sentinel-2 data for vegetation temperature condition index generation through a fusion framework","authors":"Mingqi Li ,&nbsp;Pengxin Wang ,&nbsp;Kevin Tansey ,&nbsp;Yuanfei Sun ,&nbsp;Fengwei Guo ,&nbsp;Ji Zhou","doi":"10.1016/j.compag.2025.110256","DOIUrl":"10.1016/j.compag.2025.110256","url":null,"abstract":"<div><div>Drought has a wide range of damaging impacts. Continuous and precise time series drought monitoring is crucial for agriculture. Most existing drought monitoring studies lack sufficient spatiotemporal resolution, making them inadequate for field-scale drought monitoring. In the past decades, Vegetation Temperature Condition Index (VTCI) derived from Moderate Resolution Imaging Spectroradiometer (MODIS) has proven effective for drought monitoring. However, only using MODIS data to derive VTCI for drought monitoring presents a limitation in spatial resolution. To address these limitations, this study combined spatiotemporal fusion techniques and machine learning to develop a novel framework for drought monitoring at both a fine resolution (20 m) and a 10-day interval. The framework includes using biophysical parameters calculated by Sentinel-2 data and Digital Elevation Model (DEM) data as downscaling parameters to perform land Surface Temperature (LST) spatial downscaling. The Enhanced Spatial and Temporal Adaptive Reflectance Fusion Model (ESTARFM) was applied to fuse Sentinel-2 and MODIS data. Two fusion strategies were applied for calculating field-scale VTCI: Blend-then-Index (BI) and Index-then-Blend (IB). Results showed that the two fusion strategies effectively enhanced the spatial resolution of VTCI compared to MODIS VTCI. However, the BI fusion strategy represents drought conditions effectively in cropland, and shows higher consistency (R &gt; 0.83) and lower RMSE (RMSE &lt; 0.05) with MODIS VTCI. 
In addition, the downscaled LST has consistency with MODIS LST (Correlation Coefficient (R) &gt; 0.77, Root Mean Squared Error (RMSE) &lt; 1.42 K) and retained more spatial details. Overall, we achieved continuous time series drought monitoring at the field scale and 10-day intervals.</div></div>","PeriodicalId":50627,"journal":{"name":"Computers and Electronics in Agriculture","volume":"234 ","pages":"Article 110256"},"PeriodicalIF":7.7,"publicationDate":"2025-03-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143578144","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Path curvature incorporated reinforcement learning method for accurate path tracking of agricultural vehicles
IF 7.7 CAS Tier 1 (Agriculture & Forestry Sciences) Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date: 2025-03-09 DOI: 10.1016/j.compag.2025.110243
Linhuan Zhang , Ruirui Zhang , Danzhu Zhang , Tongchuan Yi , Chenchen Ding , Liping Chen
Traditional path tracking control of agricultural vehicles relies heavily on precise modelling or parameter tuning, making it sensitive to changes in environmental conditions such as varying soil slip rates and uneven fields. To address these issues and achieve stable, accurate path tracking, this research presents a deep reinforcement learning-based path tracking control algorithm that incorporates path curvature. A Deep Q-Network (DQN) based on a five-layer Back Propagation (BP) neural network was constructed, yielding a lightweight and highly portable algorithm. The network's input state is augmented with the average path curvature over a set distance ahead of the vehicle, thereby enhancing path tracking precision. The convergence of the designed DQN-based path tracking control algorithm was validated in simulated and hardened-road environments, and its tracking performance was compared with the pure pursuit control (PPC) method under two different field ground conditions. On soft, flat ground, the average tracking errors on straight path segments at 6 m and 5 m intervals were 0.023 m and 0.026 m, respectively, and 0.024 m and 0.036 m on curved segments. On hard, uneven ground, the average tracking errors at 6 m and 5 m intervals were 0.029 m and 0.034 m, respectively, and 0.037 m and 0.035 m on curved segments, all outperforming the PPC method. These findings confirm that the proposed path tracking control algorithm exhibits excellent adaptability and stability and achieves precise path tracking under different road conditions and path curvatures.
{"title":"Path curvature incorporated reinforcement learning method for accurate path tracking of agricultural vehicles","authors":"Linhuan Zhang ,&nbsp;Ruirui Zhang ,&nbsp;Danzhu Zhang ,&nbsp;Tongchuan Yi ,&nbsp;Chenchen Ding ,&nbsp;Liping Chen","doi":"10.1016/j.compag.2025.110243","DOIUrl":"10.1016/j.compag.2025.110243","url":null,"abstract":"<div><div>Traditional path tracking control of agricultural vehicles greatly relay on precision modelling or parameter tuning, cause sensitive to the environment condition change such as different land slip rate and unflat field. To address those issues and to realize stable and accuracy path tracking, this research presents a deep reinforcement learning-based path tracking control algorithm that incorporates path curvature. A Deep Q-Network (DQN) based on a five-layer Back Propagation (BP) neural network was constructed, achieving a lightweight and highly portable algorithm. The network’s input state is optimized by integrating the average path curvature over a set distance ahead of the vehicle, thereby enhancing the vehicle’s path tracking precision. The convergence of the designed DQN-based path tracking control algorithm was validated in simulated and hardened road environments; in addition, its tracking performance was compared with the pure pursuit control (PPC) method under two different field ground conditions. On soft and flat ground, the average tracking errors of the vehicle on straight path segments at 6 m and 5 m intervals were 0.023 m and 0.026 m, respectively, and 0.024 m and 0.036 m on curved path segments. On hard and uneven ground, the average tracking errors at 6 m and 5 m intervals were 0.029 m and 0.034 m, respectively, and 0.037 m and 0.035 m on curved segments, all outperforming the PPC method. 
These findings confirm that the proposed path tracking control algorithm exhibits excellent adaptability and stability and achieves precise path tracking under different road conditions and path curvatures.</div></div>","PeriodicalId":50627,"journal":{"name":"Computers and Electronics in Agriculture","volume":"234 ","pages":"Article 110243"},"PeriodicalIF":7.7,"publicationDate":"2025-03-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143578143","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
The assessment of individual tree canopies using drone-based intra-canopy photogrammetry
IF 7.7 CAS Tier 1 (Agriculture & Forestry Sciences) Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date: 2025-03-09 DOI: 10.1016/j.compag.2025.110200
Lukas G. Olson , Nicholas C. Coops , Guillaume Moreau , Richard C. Hamelin , Alexis Achim
With many forests experiencing rapidly declining health, effective management requires increasingly accurate and precise tools to measure tree attributes across scales. Tree health, especially in deciduous species, is strongly correlated with crown condition, specifically crown transparency and dieback. Present-day assessment of these attributes is undertaken using ground-based visual approaches, which can be imprecise and subjective. Here we evaluate the feasibility of applying drone-based digital aerial photogrammetry (DAP) below, within, and above the tree canopy to estimate tree height, diameter at breast height, canopy transparency, and canopy spread. Video imagery was acquired across 18 deciduous trees under leaf-off and leaf-on conditions in Metro Vancouver, British Columbia, Canada, using small, lightweight first-person-view drones. Images were extracted and processed into coloured 3D point clouds using digital Structure-from-Motion Multiview-Stereo photogrammetry. Photogrammetry estimates were compared with field measurements and above-canopy drone-based aerial Light Detection and Ranging (lidar) estimates. The DAP estimates explained significant variance in the field observations and were strongly correlated with both ground-based measurements and lidar estimates, with correlations of height (DAP vs. ground: r = 0.93, RMSE = 1.54 m; DAP vs. lidar: r = 0.94), DBH (DAP vs. ground: r = 0.98, RMSE = 2.90 cm), transparency (DAP vs. ground: r = 0.66, RMSE = 12.61 %), and crown spread (DAP vs. ground: r = 0.88, RMSE = 3.35 m; DAP vs. lidar: r = 0.89). The reconstruction time for each tree using the drone footage was strongly correlated with tree size and seasonal condition, with minimal influence from crown form. 
This work suggests that first-person view drones can provide accurate information on individual tree attributes associated with tree health, offering a reliable alternative or complement to both ground-based methods and lidar for tree-level measurements in ongoing forest health assessment programs.
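Once a coloured point cloud has been reconstructed, tree-level attributes like those above reduce to geometry on the points. The sketch below computes height and maximum crown spread from a toy cloud; real pipelines first separate ground, stem, and crown points, and the helpers here are illustrative, not the study's method.

```python
import numpy as np

def tree_height(points):
    """Vertical extent of the cloud (assumes ground points are included)."""
    return float(points[:, 2].max() - points[:, 2].min())

def crown_spread(points):
    """Largest horizontal (XY) distance between any two points.
    Brute force O(n^2); fine for thinned clouds, use a convex hull
    for large ones."""
    xy = points[:, :2]
    d2 = ((xy[:, None, :] - xy[None, :, :]) ** 2).sum(axis=-1)
    return float(np.sqrt(d2.max()))

# toy cloud: a 12 m tall tree occupying a 6 m x 6 m footprint
xs = np.linspace(-3.0, 3.0, 10)
zs = np.linspace(0.0, 12.0, 5)
cloud = np.array([[x, y, z] for x in xs for y in xs for z in zs])
```

Crown transparency, by contrast, needs more than geometry — e.g. comparing the density of returns inside the crown envelope between leaf-off and leaf-on clouds.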
{"title":"The assessment of individual tree canopies using drone-based intra-canopy photogrammetry","authors":"Lukas G. Olson ,&nbsp;Nicholas C. Coops ,&nbsp;Guillaume Moreau ,&nbsp;Richard C. Hamelin ,&nbsp;Alexis Achim","doi":"10.1016/j.compag.2025.110200","DOIUrl":"10.1016/j.compag.2025.110200","url":null,"abstract":"<div><div>With many forests experiencing rapidly declining health, effective management requires increasingly accurate and precise tools to measure tree attributes across scales. Tree health, especially in deciduous species, is strongly correlated with crown condition, specifically crown transparency and dieback. Present-day assessment of these attributes is undertaken using ground-based visual approaches, which can be imprecise and subjective. Here we evaluate the feasibility of applying drone-based digital aerial photogrammetry (DAP) below, within, and above the tree canopy to estimate tree height, diameter at breast height, canopy transparency, and canopy spread. Video imagery was acquired across 18 deciduous trees under leaf-off and leaf-on conditions in Metro Vancouver, British Columbia, Canada, using small, lightweight first-person-view drones. Images were extracted and processed into coloured 3D point clouds using digital Structure-from-Motion Multiview-Stereo photogrammetry. Photogrammetry estimates were compared with field measurements and above-canopy drone-based aerial Light Detection and Ranging (lidar) estimates. The DAP estimates explained significant variance in the field observations and were strongly correlated with both ground-based measurements and lidar estimates, with correlations of height (DAP vs. ground: r = 0.93, RMSE = 1.54 m; DAP vs. lidar: r = 0.94), DBH (DAP vs. ground: r = 0.98, RMSE = 2.90 cm), transparency (DAP vs. ground: r = 0.66, RMSE = 12.61 %), and crown spread (DAP vs. ground: r = 0.88, RMSE = 3.35 m; DAP vs. lidar: r = 0.89). 
The reconstruction time for each tree using the drone footage was strongly correlated with tree size and seasonal condition, with minimal influence from crown form. This work suggests that first-person view drones can provide accurate information on individual tree attributes associated with tree health, offering a reliable alternative or complement to both ground-based methods and lidar for tree-level measurements in ongoing forest health assessment programs.</div></div>","PeriodicalId":50627,"journal":{"name":"Computers and Electronics in Agriculture","volume":"234 ","pages":"Article 110200"},"PeriodicalIF":7.7,"publicationDate":"2025-03-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143578145","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A predictive model of photosynthetic rates for eggplants: Integrating physiological and environmental parameters
IF 7.7 CAS Tier 1 (Agriculture & Forestry Sciences) Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date: 2025-03-09 DOI: 10.1016/j.compag.2025.110241
Pan Gao , Miao Lu , Yongxia Yang , Huiming Li , Shijie Tian , Jin Hu
Photosynthesis plays a pivotal role in vegetable growth. However, its intricate interplay with plant physiology and environmental factors complicates precise prediction of photosynthetic rates (Pn). Current predictive models focus primarily on environmental influences on photosynthesis, limiting their applicability to leaves with different physiological traits. To address this challenge, we introduce a novel approach that incorporates chlorophyll fluorescence (ChlF) parameters into a model for predicting Pn across diverse leaf ontogenies. Eggplant leaves were used as experimental samples. We collected 5280 Pn measurements from leaves with different ChlF parameters under controlled changes in temperature, [CO2], and light intensity. Fo (initial fluorescence) and Fv/Fm (maximum light-energy conversion efficiency of the PSII system) were selected as key ChlF indicators using the entropy method. Fo and Fv/Fm, along with temperature, [CO2], and light intensity, serve as features, while Pn serves as the label, forming a robust modeling dataset. We then proposed a Convolutional Neural Network Regression model with Input Encoding and Genetic Algorithm optimization (CNNR-IEGA), trained on these environment and fluorescence data, to develop the predictive model for eggplant Pn. The results indicate that the model exhibits excellent performance in predicting Pn. On unknown datasets, the root mean square error of the model is only 0.97 μmol·m−2·s−1, with a high coefficient of determination reaching 0.99. Compared with models established by other algorithms (including multiple nonlinear regression, support vector regression, and back propagation neural network), the proposed model demonstrates superior performance across training, testing, and validation sets. Furthermore, compared with models without ChlF parameters and those with a single ChlF parameter, the proposed model has the highest accuracy.
This demonstrates the validity of using fluorescence to characterize crop photosynthetic performance. CNNR-IEGA can serve as a basis for crop growth environment assessment, greenhouse control, and production warning, offering new theories and opportunities for the development of precision agriculture.
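The entropy method used above to select Fo and Fv/Fm ranks candidate indicators by how unevenly their values spread across samples. The sketch below implements the standard entropy-weight recipe on synthetic data; the normalisation details and the two-column example are assumptions, not the paper's exact procedure.

```python
import numpy as np

def entropy_weights(X):
    """Entropy-weight method: samples x features -> weights summing to 1.
    Features that vary more across samples get larger weights; a constant
    feature gets (near) zero weight."""
    X = np.asarray(X, dtype=float)
    X = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0) + 1e-12)
    p = (X + 1e-12) / (X + 1e-12).sum(axis=0)
    e = -(p * np.log(p)).sum(axis=0) / np.log(X.shape[0])  # entropy in [0, 1]
    d = 1.0 - e                                            # divergence degree
    return d / d.sum()

# synthetic check: a varying indicator vs. a constant one
w = entropy_weights(np.column_stack([np.linspace(1.0, 2.0, 50),
                                     np.full(50, 3.0)]))
```

Applied to a table of candidate ChlF indicators, the highest-weight columns would be the ones retained as model inputs.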
A predictive model of photosynthetic rates for eggplants: Integrating physiological and environmental parameters
Pan Gao, Miao Lu, Yongxia Yang, Huiming Li, Shijie Tian, Jin Hu
DOI: 10.1016/j.compag.2025.110241. Computers and Electronics in Agriculture, Volume 234, Article 110241.
Citations: 0
A vision-based robotic system for precision pollination of apples
IF 7.7 1区 农林科学 Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date : 2025-03-09 DOI: 10.1016/j.compag.2025.110158
Uddhav Bhattarai , Ranjan Sapkota , Safal Kshetri , Changki Mo , Matthew D. Whiting , Qin Zhang , Manoj Karkee
Global food production depends upon successful pollination, a process that relies on natural and managed pollinators. However, natural pollinators are declining due to factors such as climate change, habitat loss, and pesticide use. This paper presents an integrated robotic system for precision pollination in apples. The system consisted of a machine vision system to identify target flower clusters and estimate their positions and orientations, and a manipulator motion planning and actuation system to guide the sprayer to apply charged pollen suspension to the target flower clusters. The system was tested in the lab, followed by field evaluation in Honeycrisp and Fuji orchards. In the Honeycrisp variety, the robotic pollination system achieved a fruit set of 34.8% of sprayed flowers, with 87.5% of flower clusters having at least one fruit, when a 2 g/l pollen suspension was used. In comparison, the natural pollination technique achieved a fruit set of 43.1%, with 94.9% of clusters having at least one fruit. In Fuji apples, the robotic system with the same pollen concentration achieved lower pollination success, with 7.2% of sprayed flowers setting fruit and 20.6% of clusters having at least one fruit, compared to 33.1% and 80.6%, respectively, with natural pollination. Fruit quality analysis showed that robotically pollinated fruits were comparable to naturally pollinated fruits in terms of color, weight, diameter, firmness, soluble solids, and starch content. Additionally, the system cycle time was 6.5 s per cluster. The results showed promise for robotic pollination in apple orchards. However, further research and development are needed to improve the system and assess its suitability across diverse orchard environments and apple cultivars.
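The two outcome metrics reported above (fruit set as a percentage of sprayed flowers, and percentage of clusters with at least one fruit) can be computed from cluster-level counts as below. The per-cluster counts here are hypothetical, for illustration only.

```python
def fruit_set_metrics(fruits_per_cluster, flowers_sprayed_per_cluster):
    """Compute two pollination outcome metrics from cluster-level counts:
    fruit set as % of sprayed flowers, and % of clusters with >= 1 fruit."""
    total_fruits = sum(fruits_per_cluster)
    total_flowers = sum(flowers_sprayed_per_cluster)
    fruit_set_pct = 100.0 * total_fruits / total_flowers
    clusters_with_fruit = sum(1 for f in fruits_per_cluster if f >= 1)
    clusters_with_fruit_pct = 100.0 * clusters_with_fruit / len(fruits_per_cluster)
    return fruit_set_pct, clusters_with_fruit_pct

# Hypothetical counts for 5 sprayed clusters
fruits = [2, 0, 1, 3, 1]
flowers = [5, 4, 6, 5, 5]
fs, cw = fruit_set_metrics(fruits, flowers)
print(f"{fs:.1f}% fruit set, {cw:.1f}% clusters with fruit")
# 28.0% fruit set, 80.0% clusters with fruit
```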
Computers and Electronics in Agriculture, Volume 234, Article 110158. Open access.
Citations: 0
Laboratory and field comparison of onboard and remote sensors for canopy characterisation in vineyards
IF 7.7 1区 农林科学 Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date : 2025-03-09 DOI: 10.1016/j.compag.2025.110240
Jordi Biscamps, Francisco Garcia-Ruiz, Ramón Salcedo, Bernat Salas, Emilio Gil
Accurate canopy characterisation is crucial for the targeted application of plant protection products following the variable rate application (VRA) concept. In this study, two different canopy measurement systems were compared: ultrasonic (US) sensors and UAV-based photogrammetry. A specific device was developed to host a series of US sensors that could conduct a fully automatic canopy characterisation of two vine rows in a single pass. The results of canopy characterisation (canopy width, canopy height, leaf wall area, and tree row volume) were compared with those obtained after complete data processing of the images acquired using a multispectral camera embedded on a UAV. Results indicated no significant differences between the two systems in the main canopy parameters. Field tests indicated that US sensors offered stable canopy height readings but exhibited variability in width measurements due to factors such as ground conditions and sensor placement. Compared with UAV photogrammetry, US sensors provided comparable results for canopy height and width at a lower cost, though with less precision. Therefore, the choice between US sensors and UAVs should consider the resolution requirements, cost, and field conditions. Field data were collected from two commercial vineyards in the Penedès region close to Barcelona (Spain). Before this, laboratory tests were performed using an artificial target to achieve an accurate evaluation of the US sensors. Overall, this study highlighted the potential of ground-based sensing systems for precise and repeatable canopy measurements, contributing to improved vineyard management practices and advanced technological integration for agricultural monitoring.
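Leaf wall area (LWA) and tree row volume (TRV), the derived parameters compared above, are commonly computed per hectare from canopy height, canopy width, and row spacing. The sketch below uses one widely cited set of definitions from the dose-expression literature; the study's exact formulation is not reproduced here, and the vineyard dimensions are hypothetical.

```python
def canopy_parameters(height_m, width_m, row_spacing_m):
    """Per-hectare canopy descriptors for trellised crops.
    LWA: leaf wall area, counting both sides of the row (m2/ha).
    TRV: tree row volume, the canopy cross-section times row length (m3/ha)."""
    row_length_per_ha = 10000.0 / row_spacing_m   # metres of row per hectare
    lwa = 2.0 * height_m * row_length_per_ha
    trv = height_m * width_m * row_length_per_ha
    return lwa, trv

# Hypothetical vineyard: 1.2 m canopy height, 0.5 m width, 3 m row spacing
lwa, trv = canopy_parameters(1.2, 0.5, 3.0)
print(lwa, trv)  # 8000.0 2000.0
```

With sensor-derived height and width per row segment, the same formulas yield the segment-level values that a VRA sprayer controller would use.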
Computers and Electronics in Agriculture, Volume 234, Article 110240.
Citations: 0
Improving long-tailed pest classification using diffusion model-based data augmentation
IF 7.7 1区 农林科学 Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date : 2025-03-08 DOI: 10.1016/j.compag.2025.110244
Mengze Du , Fei Wang , Yu Wang , Kun Li , Wenhui Hou , Lu Liu , Yong He , Yuwei Wang
The long-tail problem is common in large-scale agricultural datasets, posing significant challenges to agricultural research. It often results from the prohibitively high cost of data collection, the difficulty of obtaining accurate, comprehensive data, and restricted access to diverse sources of information. The issue is especially evident in agricultural pest datasets, where the imbalance in the frequency of different pest types can severely hinder detection accuracy. To counteract this pervasive challenge, this paper introduces a robust method leveraging the power of a diffusion model to address the long-tailed problem effectively. Our method focuses on fine-tuning specialized pre-trained models to generate highly realistic pest images, providing a critical solution for balancing the dataset’s distribution. This paper also presents a visualization technique that offers a clear, intuitive representation of the long-tailed problem’s impact on the dataset. By producing high-quality synthetic images using the diffusion model, our method not only balances the uneven data distribution but also reduces the discrepancies between real and synthetic data, effectively mitigating the under-representation of tail categories. The experimental results, tested on the widely used IP102 large-scale pest dataset, confirm the superiority of our approach. The method strikes an optimal balance between sample fidelity and diversity, outperforming traditional methods in image quality. Moreover, it demonstrates remarkable performance in pest classification tasks, achieving the highest evaluation metrics and showcasing its ability to address the long-tailed problem with notable success.
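Rebalancing a long-tailed dataset with generated images requires deciding how many synthetic samples each class needs. One simple target, sketched below, is to top every class up to the head-class count; the paper's actual generation schedule may differ, and the label list here is a toy example.

```python
from collections import Counter

def synthesis_budget(labels):
    """For a long-tailed label list, return how many synthetic images a
    generative model would need to produce per class to bring every class
    up to the most frequent class's count."""
    counts = Counter(labels)
    target = max(counts.values())
    return {cls: target - n for cls, n in counts.items()}

# Toy long-tailed pest dataset: one head class, two tail classes
labels = ["aphid"] * 50 + ["weevil"] * 8 + ["thrips"] * 2
budget = synthesis_budget(labels)
print(budget)  # {'aphid': 0, 'weevil': 42, 'thrips': 48}
```

The resulting per-class budgets then drive the diffusion model's sampling loop, and the imbalance ratio (head count divided by tail count, 25:1 here) gives a single number to track before and after augmentation.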
Computers and Electronics in Agriculture, Volume 234, Article 110244.
Citations: 0