Pub Date: 2026-03-15 | Epub Date: 2026-01-20 | DOI: 10.1016/j.compag.2026.111462
Alberto Carraro , Giulia Bugin , Francesco Marinello, Maddi Aguirrebengoa, Stefano Frattini, Andrea Pezzuolo
Accurate assessment of dairy cow cleanliness is essential for ensuring animal welfare, maintaining udder health, and optimising milk production. Traditional visual inspections are subjective and often fail to distinguish dirt from natural coat patterns, especially in spotted breeds. This research investigates the applicability of a two-stage approach for automated cleanliness evaluation, consisting of (i) semantic segmentation of dirt areas on cow coats and (ii) regression from the resulting masks to numerical cleanliness scores. The first stage was implemented using the U-Net and DeepLabV3 architectures, which were trained on either RGB-only or RGB-Thermal (RGB-T) images. Incorporating thermal information significantly improved segmentation accuracy: U-Net achieved a mean Intersection over Union (mIoU) of 0.5244 on RGB-T images, compared to 0.3537 on RGB images, while DeepLabV3 on RGB-T images reached an mIoU of 0.5049. The second stage compared two regression strategies: multiple linear regression (MLR) on the number of pixels classified as dirt, and convolutional neural networks (CNNs) trained directly on the masks. CNN-based regression consistently outperformed MLR, with the best performance obtained by combining RGB-T segmentation and CNN regression (DeepLabV3 + CNN: MAPE 23.05 %; U-Net + CNN: MAPE 25.24 %). These findings support the feasibility of a two-stage RGB-T-based approach for objective cleanliness evaluation, highlighting the benefits of thermal information for segmentation and CNNs for score prediction.
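To make the two reported metrics concrete, here is a minimal sketch of how mean Intersection over Union and MAPE are typically computed; the toy masks and cleanliness scores below are illustrative, not data from the study:

```python
import numpy as np

def mean_iou(pred, target, num_classes=2):
    """Mean Intersection over Union, macro-averaged over classes."""
    ious = []
    for c in range(num_classes):
        p, t = pred == c, target == c
        inter = np.logical_and(p, t).sum()
        union = np.logical_or(p, t).sum()
        if union > 0:
            ious.append(inter / union)
    return float(np.mean(ious))

def mape(y_true, y_pred):
    """Mean Absolute Percentage Error, in percent."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.mean(np.abs((y_true - y_pred) / y_true)) * 100)

# Toy 2x2 masks: class 1 = "dirt", class 0 = clean coat
pred   = np.array([[1, 0], [1, 0]])
target = np.array([[1, 0], [0, 0]])
print(round(mean_iou(pred, target), 3))
print(round(mape([4.0, 2.0], [3.0, 2.5]), 2))
```

A lower MAPE in the second stage means the predicted cleanliness scores deviate less, in relative terms, from the annotated ones.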
{"title":"AI-driven analysis of animal cleanliness: A data-fusion model using RGB and thermal imaging","authors":"Alberto Carraro , Giulia Bugin , Francesco Marinello, Maddi Aguirrebengoa, Stefano Frattini, Andrea Pezzuolo","doi":"10.1016/j.compag.2026.111462","DOIUrl":"10.1016/j.compag.2026.111462","url":null,"abstract":"<div><div>Accurate assessment of dairy cow cleanliness is essential for ensuring animal welfare, maintaining udder health, and optimising milk production. Traditional visual inspections are subjective and often fail to distinguish dirt from natural coat patterns, especially in spotted breeds. This research investigates the applicability of a two-stage approach for automated cleanliness evaluation, consisting of (i) semantic segmentation of dirt areas on cow coats and (ii) regression from the resulting masks to numerical cleanliness scores. The first stage was implemented using the U-Net and DeepLabV3 architectures, which were trained on either RGB-only or RGB-Thermal (RGB-T) images. Incorporating thermal information significantly improved segmentation accuracy: U-Net achieved a mean Intersection over Union (mIoU) of 0.5244 on RGB-T images, compared to 0.3537 on RGB images, while DeepLabV3 on RGB-T images reached an mIoU of 0.5049. The second stage compared two regression strategies: multiple linear regression (MLR) on the number of pixels classified as dirt, and convolutional neural networks (CNNs) trained directly on the masks. CNN-based regression consistently outperformed MLR, with the best performance obtained by combining RGB-T segmentation and CNN regression (DeepLabV3 + CNN: MAPE 23.05 %; U-Net + CNN: MAPE 25.24 %). 
These findings support the feasibility of a two-stage RGB-T-based approach for objective cleanliness evaluation, highlighting the benefits of thermal information for segmentation and CNNs for score prediction.</div></div>","PeriodicalId":50627,"journal":{"name":"Computers and Electronics in Agriculture","volume":"244 ","pages":"Article 111462"},"PeriodicalIF":8.9,"publicationDate":"2026-03-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146025266","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-03-15 | Epub Date: 2026-01-19 | DOI: 10.1016/j.compag.2026.111425
Baocheng Zhou , Shaochun Ma , Wenzhi Li , Jinzhi Ma , Yansu Xie , Sha Yang
Real-time adjustment of extractor speed according to feed rate is essential to reduce impurity content and cane loss in mechanized sugarcane harvesting. In this study, an automatic control system for the sugarcane harvester extractor was developed to dynamically match extractor speed to feed rate, thereby reducing impurity content and cane loss during harvesting. An optimal control relationship between feed rate and rotational speed was established using impurity content and cane loss as indicators. A variable universe fuzzy multi-parameter adaptive PID (VUFMA-PID) control method was proposed and modeled in Simulink. Compared with conventional PID and fuzzy PID, the VUFMA-PID achieved the shortest steady-state response time (0.32 s and 0.26 s faster than PID and fuzzy PID, respectively), with both steady-state error and maximum overshoot reduced to zero. Field experiments were conducted under different orders of feed rate fluctuation, with fixed extractor speed and manual speed adjustment based on operator experience used as control groups. The results indicated that, compared with the manual and constant modes, the average power consumption of the automatic control mode was reduced by 17.44 % and 30.40 %, respectively. The average impurity content was 4.00 %, a decrease of 23.58 % and 10.71 %, and the average cane loss was 1.89 %, a decrease of 25.01 % and 28.52 %. The developed automatic control system effectively adapts to varying feed rates and significantly improves harvesting quality. It provides a feasible solution and theoretical support for intelligent control in mechanized sugarcane harvesting.
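The VUFMA-PID internals are not given in the abstract, but the conventional discrete PID baseline it is compared against can be sketched as follows; the gains, timestep, and first-order plant are all illustrative assumptions, not values from the study:

```python
class PID:
    """Minimal discrete PID controller (the conventional baseline form;
    gains and timestep here are illustrative)."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, setpoint, measured):
        err = setpoint - measured
        self.integral += err * self.dt                 # accumulate I term
        deriv = (err - self.prev_err) / self.dt        # finite-difference D term
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# Drive a crude first-order "extractor speed" plant toward 1000 rpm
pid, speed = PID(kp=0.8, ki=0.5, kd=0.05, dt=0.01), 0.0
for _ in range(2000):
    u = pid.step(1000.0, speed)
    speed += (u - speed) * 0.05   # toy first-order lag, not a harvester model
print(round(speed, 1))
```

The variable-universe fuzzy extension in the paper adapts such gains online, which is what removes the residual overshoot and steady-state error reported above.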
{"title":"Development and performance evaluation of an automatic control system for sugarcane harvester extractor","authors":"Baocheng Zhou , Shaochun Ma , Wenzhi Li , Jinzhi Ma , Yansu Xie , Sha Yang","doi":"10.1016/j.compag.2026.111425","DOIUrl":"10.1016/j.compag.2026.111425","url":null,"abstract":"<div><div>Real-time adjustment of extractor speed according to feed rate is essential to reduce impurity content and cane loss in mechanized sugarcane harvesting. An automatic control system for sugarcane harvester extractor was developed in this study aiming to achieve dynamic matching between speed and feed rate, thereby reducing impurity content and cane loss during harvesting. An optimal control strategy between feed rate and rotational speed was established using impurity content and cane loss as indicators. A variable universe fuzzy multi-parameter adaptive PID (VUFMA-PID) control method was proposed and modeled in Simulink. Compared with conventional PID and fuzzy PID, the VUFMA-PID achieved the shortest steady-state response time, 0.32 s and 0.26 s faster than PID and fuzzy PID, with both steady-state error and maximum overshoot reduced to zero. Field experiments were conducted under different feed rate fluctuation orders, with fixed extractor speed and manual adjustment speed based on operator experience used as control groups. The results indicated that, compared to manual and constant mode, the average power consumption of the automatic control mode was reduced by 17.44 % and 30.40 % respectively. The average impurity content was 4.00 %, which decreased by 23.58 % and 10.71 %. The average cane loss was 1.89 %, which decreased by 25.01 % and 28.52 %. The developed automatic control system effectively adapts to varying feed rates and significantly improves harvesting quality. 
It provides a feasible solution and theoretical support for intelligent control in mechanized sugarcane harvesting.</div></div>","PeriodicalId":50627,"journal":{"name":"Computers and Electronics in Agriculture","volume":"244 ","pages":"Article 111425"},"PeriodicalIF":8.9,"publicationDate":"2026-03-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146025087","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-03-15 | Epub Date: 2026-02-04 | DOI: 10.1016/j.compag.2026.111531
Ambra Tosto , Alejandro Morales , Niels P.R. Anten , Pieter A. Zuidema , Jochem B. Evers
Pruning affects tree functioning by removing biomass and triggering compensatory responses. Functional-structural plant (FSP) models, combining three-dimensional plant architecture with physiological processes, are suitable tools to study pruning effects. We present and evaluate the first FSP model for cocoa trees and use it to simulate the impact of pruning on young cocoa tree functioning.
We performed two experiments: a parametrization experiment, assessing branching responses to pruning treatments (heading and thinning); and an evaluation experiment measuring the pruning effects on stem radius, leaf number and crown diameter of cocoa trees.
We developed an FSP model that simulates tree growth as a result of the interaction between physiological processes, tree architecture, and pruning-induced changes in branching patterns. Bud break is simulated stochastically, based on bud position and pruning interventions, and was parameterized with field observations. The evaluation experiment was replicated in silico to evaluate model predictions and quantify the effect of pruning on tree functioning.
Our model captured the immediate effects of pruning on tree structure and partially simulated the compensatory response in leaf production observed in the experiment. In the simulations, pruning reduced total light interception. The simulated mean light interception per unit leaf area was increased in one treatment. However, this advantage was quickly lost due to induced branch production.
Our model is a novel tool to study the impact of pruning, as it explicitly simulates tree architecture and pruning-induced responses. Our results highlight the necessity of dynamic simulations to understand pruning impact.
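As a toy illustration of stochastic, position-dependent bud break of the kind the model describes (the probabilities below are invented placeholders, not the paper's fitted values):

```python
import random

def simulate_bud_break(n_buds, pruned, base_p=0.15, pruning_boost=0.35, seed=42):
    """Toy stochastic bud-break: each bud breaks with a probability that
    rises after pruning and declines with distance from the cut.
    All probabilities are illustrative, not the study's parameters."""
    rng = random.Random(seed)
    broken = []
    for pos in range(n_buds):                 # pos 0 = closest to the cut
        p = base_p
        if pruned:
            p += pruning_boost / (1 + pos)    # strongest response near the cut
        broken.append(rng.random() < p)
    return broken

unpruned = simulate_bud_break(10, pruned=False)
pruned = simulate_bud_break(10, pruned=True)
print(sum(unpruned), sum(pruned))
```

With a fixed seed, pruning can only add bud breaks relative to the unpruned run, mimicking the compensatory branch production the simulations reproduce.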
{"title":"Quantifying the impact of pruning on young cocoa trees using a functional-structural plant model","authors":"Ambra Tosto , Alejandro Morales , Niels P.R. Anten , Pieter A. Zuidema , Jochem B. Evers","doi":"10.1016/j.compag.2026.111531","DOIUrl":"10.1016/j.compag.2026.111531","url":null,"abstract":"<div><div>Pruning affects tree functioning by removing biomass and triggering compensatory responses. Functional-structural plant (FSP) models, combining three-dimensional plant architecture with physiological processes, are suitable tools to study pruning effects. We present and evaluate the first FSP model for cocoa trees and we simulate pruning impact on young cocoa tree functioning.</div><div>We performed two experiments: a parametrization experiment, assessing branching responses to pruning treatments (heading and thinning); and an evaluation experiment measuring the pruning effects on stem radius, leaf number and crown diameter of cocoa trees.</div><div>We developed an FSP model that simulates tree growth as a result of the interaction between physiological processes, tree architecture and pruning-induced changes in branching patterns. Bud break is simulated stochastically, based on bud position and pruning interventions and was parameterized with field observations. The evaluation experiment was replicated <em>in silico</em> to evaluate model predictions and quantify the effect of pruning on tree functioning.</div><div>Our model captured the immediate effects of pruning on tree structure and partially simulated the compensatory response in leaf production observed in the experiment. In the simulations, pruning reduced total light interception. The simulated mean light interception per unit leaf area was increased in one treatment. However, this advantage was quickly lost due to induced branch production.</div><div>Our model is a novel tool to study the impact of pruning, as it explicitly simulates tree architecture and pruning-induced responses. 
Our results highlight the necessity of dynamic simulations to understand pruning impact.</div></div>","PeriodicalId":50627,"journal":{"name":"Computers and Electronics in Agriculture","volume":"244 ","pages":"Article 111531"},"PeriodicalIF":8.9,"publicationDate":"2026-03-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146173940","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-03-15 | Epub Date: 2026-02-04 | DOI: 10.1016/j.compag.2026.111514
Zhiyan Liang , Luhan Wang , Hexiang Wang , Baohua Zhang , Chengliang Liu
Autonomous navigation of robots primarily relies on environment mapping, localization, path planning, and obstacle avoidance. However, when operating in large-scale and complex orchard environments over extended periods, robots often suffer from mapping drift and accumulated localization errors, posing significant challenges to perception and path planning. This study presents a multi-sensor fusion hardware platform specifically designed for agricultural orchard settings. Based on this platform, an enhanced FAST-LIO2 framework is proposed, incorporating loop closure detection and factor graph optimization to reduce point cloud matching errors and obtain a more accurate prior map. Building on the improved FAST-LIO2, a relocalization module based on the Normal Distributions Transform (NDT) point cloud matching algorithm is introduced to ensure more precise pose estimation. The 3D point cloud map is then processed using methods such as Statistical Outlier Removal (SOR) filtering and pass-through filtering before being projected into a 2D grid map. Path planning is subsequently performed using the RRT* and Timed Elastic Band (TEB) algorithms, leveraging the 2D map and real-time relocalization data. The proposed autonomous navigation system is evaluated in various orchard environments. The integration of backend optimization and relocalization significantly enhanced system performance, reducing point cloud matching errors by up to 93% in large-scale uneven terrains, with a root mean square error (RMSE) as low as 0.77 m. Moreover, the global planner RRT* and local planner TEB demonstrated the ability to generate safer and smoother trajectories. The results validate the safety and robustness of the proposed method, highlighting its promising application in autonomous navigation for orchard scenarios.
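Of the pipeline steps above, Statistical Outlier Removal is easy to sketch; the following naive NumPy version (O(n²) neighbour search, illustrative parameters) shows the idea applied before projecting the 3D map to a 2D grid:

```python
import numpy as np

def sor_filter(points, k=8, std_ratio=1.0):
    """Statistical Outlier Removal: drop points whose mean distance to
    their k nearest neighbours exceeds (global mean + std_ratio * std)."""
    diff = points[:, None, :] - points[None, :, :]   # pairwise deltas
    dists = np.linalg.norm(diff, axis=-1)
    np.fill_diagonal(dists, np.inf)                  # ignore self-distance
    knn = np.sort(dists, axis=1)[:, :k]              # k nearest per point
    mean_d = knn.mean(axis=1)
    thresh = mean_d.mean() + std_ratio * mean_d.std()
    return points[mean_d <= thresh]

rng = np.random.default_rng(0)
cloud = rng.normal(0, 0.1, size=(200, 3))            # dense toy cluster
cloud = np.vstack([cloud, [[5.0, 5.0, 5.0]]])        # one far outlier
filtered = sor_filter(cloud, k=8, std_ratio=1.0)
print(len(cloud), len(filtered))
```

Production systems would use a KD-tree neighbour search (e.g. in a point cloud library) rather than the dense pairwise matrix shown here.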
{"title":"Autonomous obstacle avoidance and path planning for mobile robots in orchard environments combining with map construction and positioning methods","authors":"Zhiyan Liang , Luhan Wang , Hexiang Wang , Baohua Zhang , Chengliang Liu","doi":"10.1016/j.compag.2026.111514","DOIUrl":"10.1016/j.compag.2026.111514","url":null,"abstract":"<div><div>Autonomous navigation of robots primarily relies on environment mapping, localization, path planning, and obstacle avoidance. However, when operating in large-scale and complex orchard environments over extended periods, robots often suffer from mapping drift and accumulated localization errors, posing significant challenges to perception and path planning. This study presents a multi-sensor fusion hardware platform specifically designed for agricultural orchard settings. Based on this platform, an enhanced FAST-LIO2 framework is proposed, incorporating loop closure detection and factor graph optimization to reduce point cloud matching errors and obtain a more accurate prior map. Building on the improved FAST-LIO2, a relocalization module based on the Normal Distributions Transform (NDT) point cloud matching algorithm is introduced to ensure more precise pose estimation. The 3D point cloud map is then processed using methods such as Statistical Outlier Removal (SOR) filtering and pass-through filtering before being projected into a 2D grid map. Path planning is subsequently performed using the RRT* and Timed Elastic Band (TEB) algorithms, leveraging the 2D map and real-time relocalization data. The proposed autonomous navigation system is evaluated in various orchard environments. The integration of backend optimization and relocalization significantly enhanced system performance, reducing point cloud matching errors by up to 93% in large-scale uneven terrains, with a root mean square error (RMSE) as low as 0.77 m. 
Moreover, the global planner RRT* and local planner TEB demonstrated the ability to generate safer and smoother trajectories. The results validate the safety and robustness of the proposed method, highlighting its promising application in autonomous navigation for orchard scenarios.</div></div>","PeriodicalId":50627,"journal":{"name":"Computers and Electronics in Agriculture","volume":"244 ","pages":"Article 111514"},"PeriodicalIF":8.9,"publicationDate":"2026-03-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146173941","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-03-15 | Epub Date: 2026-02-07 | DOI: 10.1016/j.compag.2026.111519
Drupadi Ciptaningtyas , Nadia Fitriani , Ahmad Thoriq , Lukito Hasta Pratopo , Takeo Shiina
Tomato ripening integrates color development, texture softening, respiration, and ethylene dynamics under postharvest conditions. This review consolidates mathematical and simulation models that describe quality change over time using explicit variables, parameters, and equations. We organize five model families for tomato ripening: (i) empirical sigmoids; (ii) temperature-dependent kinetics (Arrhenius/Q10, thermal-time); (iii) mechanistic ODE/PDE mass-balance; (iv) survival/time-to-event endpoints; and (v) hybrid/state-space formulations. We align observables (e.g., CIE a*, firmness, headspace gases) with estimation targets, and outline leakage-safe validation (grouped splits, external tests), uncertainty reporting, and reproducible practices. Key contributions include a practitioner-oriented Model-Choice Matrix that links objectives and data constraints to appropriate model classes, and consolidated guidance on sensitivity analysis, calibration, and transportability to support postharvest decision-making across cultivars, seasons, and packaging regimes. The result is a structured roadmap for selecting, validating, and reporting ripening models, enabling reliable deployment in postharvest operations and integration into emerging digital decision support systems.
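Family (ii) can be made concrete with a small sketch: a Q10 temperature correction driving a first-order ripeness curve. The rate constants below are illustrative placeholders, not fitted tomato parameters:

```python
import numpy as np

def q10_rate(k_ref, temp_c, t_ref=20.0, q10=2.0):
    """Temperature-adjusted rate constant via the Q10 rule:
    k(T) = k_ref * Q10 ** ((T - T_ref) / 10)."""
    return k_ref * q10 ** ((temp_c - t_ref) / 10.0)

def ripeness(t_days, temp_c, k_ref=0.25, q10=2.0):
    """First-order approach to full ripeness (score 0 -> 1):
    R(t) = 1 - exp(-k(T) * t). Parameters are illustrative."""
    return 1.0 - np.exp(-q10_rate(k_ref, temp_c, q10=q10) * t_days)

# Same fruit, 6 days of storage at 12 C (cool chain) vs 25 C (ambient)
print(round(ripeness(6, 12.0), 3), round(ripeness(6, 25.0), 3))
```

The same k(T) kernel slots into the thermal-time and ODE mass-balance families the review describes, which is why the Model-Choice Matrix treats temperature handling as a cross-cutting choice.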
{"title":"Mathematical modeling of tomato ripening: Formulations, validation, and postharvest decision support — A review","authors":"Drupadi Ciptaningtyas , Nadia Fitriani , Ahmad Thoriq , Lukito Hasta Pratopo , Takeo Shiina","doi":"10.1016/j.compag.2026.111519","DOIUrl":"10.1016/j.compag.2026.111519","url":null,"abstract":"<div><div>Tomato ripening integrates color development, texture softening, respiration, and ethylene dynamics, under postharvest conditions. This review consolidates mathematical and simulation models that describe quality change over time using explicit variables, parameters, and equations. We organize five model families for tomato ripening: (i) empirical sigmoids; (ii) temperature-dependent kinetics (Arrhenius/Q<sub>10</sub>, thermal-time); (iii) mechanistic ODE/PDE mass-balance; (iv) survival/time-to-event endpoints; and (v) hybrid/state-space formulations. We align observables (e.g., CIE a*, firmness, headspace gases), with estimation targets, and outline leakage-safe validation (grouped splits, external tests), uncertainty reporting, and reproducible practices. Key contributions include a practitioner-oriented Model-Choice Matrix that links objectives and data constraints to appropriate model classes, and consolidated guidance on sensitivity analysis, calibration and transportability to supports postharvest decision support across cultivars, seasons, and packaging regimes. 
The result is a structured roadmap for selecting, validating, and reporting ripening models to enable reliable deployment in postharvest operations and embedded into emerging digital decision support systems.</div></div>","PeriodicalId":50627,"journal":{"name":"Computers and Electronics in Agriculture","volume":"244 ","pages":"Article 111519"},"PeriodicalIF":8.9,"publicationDate":"2026-03-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146173942","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-03-15 | Epub Date: 2026-02-03 | DOI: 10.1016/j.compag.2026.111489
Martín Molina , Julio Godoy , John W. Castro , Vladimir Riffo
In an era where global apple production exceeds 80 million tons annually, ensuring high fruit quality is essential for consumer satisfaction and economic success. However, surface defects such as wounds, rot, and sunburn cause millions in losses, and the manual inspections used to detect them in packing plants are often subjective, inefficient, and costly. This study addresses important gaps in automated quality control by using deep learning to classify apple damages with high efficiency and industrial usefulness. Drawing on a review of the literature and various web repositories with information up to 2025, we constructed a novel, balanced dataset from scratch, capturing diverse real-world defects that were underrepresented in previous studies. We rigorously evaluated nine advanced convolutional neural network architectures (including VGG16/19, multiple ResNet variants, and YOLOv9c) for classifying different types of damage in apples, then optimized the top-performing ResNet101 through systematic hyperparameter tuning. Achieving 95% accuracy on unseen data for damage classification and 81% for preliminary detection, our optimized model aims to reduce waste and boost supply chain efficiency, setting a new standard for sustainable agriculture. Moving forward, this framework opens the door to multimodal integrations such as hyperspectral imaging and robotic sorting, adaptable to other fruits, transforming post-harvest processing and inspiring further innovations in AI-driven food security.
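For context on how such classifiers are scored, a minimal confusion-matrix evaluation might look like this (toy labels for three hypothetical damage classes, not the study's data):

```python
import numpy as np

def confusion_matrix(y_true, y_pred, n_classes):
    """Rows = true class, columns = predicted class."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

# Toy labels for three damage classes (e.g. 0=wound, 1=rot, 2=sunburn)
y_true = [0, 0, 1, 1, 2, 2, 2, 0]
y_pred = [0, 1, 1, 1, 2, 2, 0, 0]
cm = confusion_matrix(y_true, y_pred, 3)
accuracy = np.trace(cm) / cm.sum()                 # overall accuracy
per_class_recall = np.diag(cm) / cm.sum(axis=1)    # recall per damage class
print(cm, round(float(accuracy), 3), per_class_recall)
```

Per-class recall matters here because a balanced dataset still allows one defect type (say, sunburn) to be systematically confused with another.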
{"title":"Apple damages classification: Using the best convolutional neural network to discard low surface quality fruit in packing plants","authors":"Martín Molina , Julio Godoy , John W. Castro , Vladimir Riffo","doi":"10.1016/j.compag.2026.111489","DOIUrl":"10.1016/j.compag.2026.111489","url":null,"abstract":"<div><div>In an era where global apple production exceeds 80 million tons annually, ensuring high fruit quality is essential for consumer satisfaction and economic success. However, surface defects like wounds, rot, and sunburn cause millions in losses through manual inspections, which are often subjective, inefficient, and costly in packing plants. This study fills important gaps in automated quality control by using advanced deep learning to classify apple damages with unmatched efficiency and industrial usefulness. Through a review of the literature and various web repositories that include information up to 2025, we constructed a novel, balanced dataset from scratch, capturing diverse real-world defects that were underrepresented in previous studies. We rigorously evaluated nine advanced convolutional neural network architectures –including VGG16/19, multiple ResNet variants, and YOLOv9c for classifying different types of damage in apples– before optimizing the top-performing ResNet101 through systematic hyperparameter tuning. Achieving an impressive 95% accuracy on unseen data for damage classification and 81% for preliminary detection, our optimized model aims to reduce waste and boost supply chain efficiency, setting a new standard for sustainable agriculture. 
Moving forward, this framework opens the door to multimodal integrations such as hyperspectral imaging and robotic sorting, adaptable to other fruits, transforming post-harvest processing and inspiring further innovations in AI-driven food security.</div></div>","PeriodicalId":50627,"journal":{"name":"Computers and Electronics in Agriculture","volume":"244 ","pages":"Article 111489"},"PeriodicalIF":8.9,"publicationDate":"2026-03-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146174054","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-03-15 | Epub Date: 2026-02-03 | DOI: 10.1016/j.compag.2026.111482
Wenjun Luo , Haiyan Zhang , Limeng Xu
The timely and accurate detection of forest pests is crucial for protecting ecosystems and maintaining ecological balance, as it directly affects the efficacy of pest control measures. Although deep learning is widely used for forest pest detection, challenges remain due to the small size of pests, complex environments, and their diverse morphologies across developmental stages. Traditional detection models often underperform in these conditions. To overcome these challenges, we propose CALDS-RTDETR, an enhanced RT-DETR model designed specifically for detecting small pests in complex forest environments. We evaluated the model on a real-world dataset comprising 15 pest species. Compared to the RT-DETR-R18 baseline, CALDS-RTDETR achieved a precision of 75.5%, recall of 61.8%, mAP@0.5 of 63.8%, mAP@0.75 of 49.7%, and mAP@0.5:0.95 of 45.3%. It also attained an mAP of 8.9% on small, 31.1% on medium, and 54.4% on large targets, while maintaining a compact model size of 20.10 M parameters. These results show the model's enhanced performance in complex forest environments, demonstrating the significant potential of CALDS-RTDETR for pest monitoring and practical deployment. Future work will expand the model to include additional species and optimize it for real-world applications.
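The mAP figures above are built on per-box IoU at different thresholds; a minimal sketch of that underlying computation (toy boxes, not the dataset's annotations):

```python
def box_iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

# A prediction that counts as a hit at IoU 0.5 but a miss at IoU 0.75
gt, pred = (0, 0, 10, 10), (2, 0, 12, 10)
iou = box_iou(gt, pred)
print(round(iou, 3))
```

This is why mAP@0.5:0.95 (averaged over thresholds) is always the strictest of the three numbers reported, and why small targets, whose IoU is most sensitive to a few pixels of offset, score lowest.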
{"title":"CALDS-RTDETR: a robust forestry pest detection model for small targets in complex environments","authors":"Wenjun Luo , Haiyan Zhang , Limeng Xu","doi":"10.1016/j.compag.2026.111482","DOIUrl":"10.1016/j.compag.2026.111482","url":null,"abstract":"<div><div>The timely and accurate detection of forest pests is crucial for protecting ecosystems and maintaining ecological balance, as it directly affects the efficacy of pest control measures. Although deep learning is widely used for forest pest detection, challenges remain due to the small size of pests, complex environments, and their diverse morphologies across developmental stages. Traditional detection models often underperform in these environments. To overcome these challenges, we propose CALDS-RTDETR, an enhanced RT-DETR model designed specifically for detecting small pests in complex forest environments. We evaluated the model on a real-world dataset comprising 15 pest species. Compared to the RT-DETR-R18 baseline, CALDS-RTDETR achieved a precision of 75.5%, recall of 61.8%, mAP<sub>0.5</sub> of 63.8%, mAP<sub>0.75</sub> of 49.7%, and mAP<sub>0.5:0.9</sub><sub>5</sub> of 45.3%. It also attained an mAPs of 8.9%, mAPm of 31.1%, and mAPl of 54.4%, while maintaining a compact model size of 20.10 M parameters. These results show the model’s enhanced performance in complex forest environments, demonstrating the significant potential of CALDS-RTDETR for pest monitoring and practical deployment. 
Future work will expand the model to include additional species and optimize it for real-world applications.</div></div>","PeriodicalId":50627,"journal":{"name":"Computers and Electronics in Agriculture","volume":"244 ","pages":"Article 111482"},"PeriodicalIF":8.9,"publicationDate":"2026-03-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146174055","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-03-15 | Epub Date: 2026-01-27 | DOI: 10.1016/j.compag.2026.111486
Demin Xu , Xinguang Zhang , Michael Henke , Liang Wang , Jinyu Zhu , Fang Ji , Yuntao Ma
Light is essential for photosynthesis and directly influences crop yield. During winter and spring, limited natural light makes well-managed supplemental lighting crucial for greenhouse production. Traditional lighting design methods, which rely on manual measurements, are inefficient for optimizing light distribution and energy use. This study proposes a 3D simulation framework to optimize supplemental lighting in greenhouses. The virtual model incorporates the spectral power distribution (SPD) and propagation characteristics of light-emitting diode (LED) modules, the optical properties of greenhouse materials, and the greenhouse’s geometric structure to simulate artificial light environments. Validation of the model demonstrated high accuracy, with an R² of 0.982 and an RMSE of 14.38 μmol·m−2·s−1. Based on simulation outputs, the spatial layout of supplemental lighting modules was determined, and the hourly light integral (HLI) was used as a control variable to develop a time-segmented lighting strategy. For this study, the production performance of tomato was evaluated under four lighting treatments: HLI-driven fixed supplementary lighting (HFS), HLI-driven mobile supplementary lighting (HMS), nighttime timed supplementary lighting (TS), and only natural light (CK). The optimal lighting configuration was achieved when fixtures were positioned 1.7 m above the planting troughs. Tomato yield per plant under the HFS treatment increased by 25.2% compared to CK and by 21.6% compared to TS. While HMS showed higher energy-use efficiency and quantum yield, its yield improvement was relatively modest. Overall, HFS enhanced light energy-use efficiency and quantum yield by 5.5% and 55.3%, respectively, compared to TS. This study provides a practical decision-support tool for greenhouse lighting management, enabling data-driven optimization of light distribution and energy use.
The proposed 3D modeling framework not only improves light-thermal synergy but also offers strong scalability for different greenhouse structures and crops. By integrating physical modeling and intelligent control, it contributes to the development of sustainable and smart agricultural production systems.
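The hourly light integral (HLI) used as the control variable can be sketched as a simple integration of PPFD over time; the sampling interval and readings below are illustrative:

```python
def hourly_light_integral(ppfd_samples, interval_s):
    """Hourly light integral in mol·m−2·h−1 from PPFD samples in
    µmol·m−2·s−1 taken at a fixed interval: sum(PPFD · dt) / 1e6."""
    return sum(p * interval_s for p in ppfd_samples) / 1e6

# One hour of 1-minute PPFD readings at a constant 200 µmol·m−2·s−1
samples = [200.0] * 60
print(hourly_light_integral(samples, interval_s=60))
```

An HLI-driven controller of the kind described would compare such hourly totals of natural light against a target and switch the LED modules on only to make up the shortfall.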
{"title":"Exploring the application mode of artificial light sources in solar greenhouses based on functional-structural plant model","authors":"Demin Xu , Xinguang Zhang , Michael Henke , Liang Wang , Jinyu Zhu , Fang Ji , Yuntao Ma","doi":"10.1016/j.compag.2026.111486","DOIUrl":"10.1016/j.compag.2026.111486","url":null,"abstract":"<div><div>Light is essential for photosynthesis and directly influences crop yield. During winter and spring, limited natural light makes well-managed supplemental lighting crucial for greenhouse production. Traditional lighting design methods, which rely on manual measurements, are inefficient for optimizing light distribution and energy use. This study proposes a 3D simulation framework to optimize supplemental lighting in greenhouses. The virtual model incorporates the spectral power distribution (SPD) and propagation characteristics of light-emitting diode (LED) modules, the optical properties of greenhouse materials, and the greenhouse’s geometric structure to simulate artificial light environments. Validation of the model demonstrated high accuracy, with an R<sup>2</sup> of 0.982 and a RMSE of 14.38 μmol·m<sup>−2</sup>·s<sup>−1</sup>. Based on simulation outputs, the spatial layout of supplemental lighting modules was determined, and the hourly light integral (HLI) was used as a control variable to develop a time-segmented lighting strategy. For this study, the production performance of tomato was evaluated under four lighting treatments: HLI-driven fixed supplementary lighting (HFS), HLI-driven mobile supplementary lighting (HMS), nighttime timed supplementary lighting (TS), and only natural light (CK). The optimal lighting configuration was achieved when fixtures were positioned 1.7 m above the planting troughs. Tomato yield per plant under the HFS treatment increased by 25.2% compared to CK and by 21.6% compared to TS. While HMS showed higher energy-use efficiency and quantum yield, its yield improvement was relatively modest. 
Overall, HFS enhanced light energy-use efficiency and quantum yield by 5.5% and 55.3%, respectively, compared to TS. This study provides a practical decision-support tool for greenhouse lighting management, enabling data-driven optimization of light distribution and energy use. The proposed 3D modeling framework not only improves light-thermal synergy but also offers strong scalability for different greenhouse structures and crops. By integrating physical modeling and intelligent control, it contributes to the development of sustainable and smart agricultural production systems.</div></div>","PeriodicalId":50627,"journal":{"name":"Computers and Electronics in Agriculture","volume":"244 ","pages":"Article 111486"},"PeriodicalIF":8.9,"publicationDate":"2026-03-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146079799","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
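The HLI-driven control idea described in the greenhouse-lighting abstract above can be sketched as a small calculation: compare the hourly light integral delivered by natural light against a target HLI and size the supplemental on-time to cover the deficit. This is an illustrative sketch, not the authors' controller; the function names, the target HLI, and the lamp PPFD values are assumptions.

```python
def hourly_light_integral(ppfd_umol_m2_s, seconds=3600.0):
    """Integrate a (constant) PPFD over one hour: µmol·m⁻²·s⁻¹ -> mol·m⁻²·h⁻¹."""
    return ppfd_umol_m2_s * seconds / 1e6

def lamp_on_time(natural_ppfd, target_hli, lamp_ppfd):
    """Seconds of supplemental lighting needed this hour to reach target_hli.

    Assumes natural_ppfd is the mean natural PPFD for the hour and that the
    LED modules add a constant lamp_ppfd at canopy level (hypothetical values).
    """
    deficit = max(0.0, target_hli - hourly_light_integral(natural_ppfd))  # mol·m⁻²
    seconds = deficit * 1e6 / lamp_ppfd
    return min(seconds, 3600.0)  # cannot run longer than the hour itself
```

With a target HLI of 1.2 mol·m⁻²·h⁻¹, an hour averaging 250 µmol·m⁻²·s⁻¹ of natural light (0.9 mol·m⁻²) and lamps adding 200 µmol·m⁻²·s⁻¹ would call for 1500 s of supplementation, while a bright hour above the target would call for none.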
Pub Date : 2026-03-15Epub Date: 2026-01-27DOI: 10.1016/j.compag.2026.111474
Mengjie Liu , Yanlong Miao , Yida Li , Wenyi Sheng , Ruicheng Qiu , Minjuan Wang , Han Li , Man Zhang
<div><div>Maize leaf phenotypic parameters effectively reflect the photosynthesis and growth information of maize plants, which is crucial for breeding superior maize varieties. Current challenges include separating stems and leaves from a single maize plant and accurately measuring the phenotypic parameters of maize leaves. This study proposes a stem-leaf segmentation method based on region growing, incorporating adaptive cuboid region growing and slice region growing, alongside techniques for measuring phenotypic parameters of maize leaves. First, terrestrial laser scanning (TLS) was employed to obtain three-dimensional (3D) point cloud data of maize at the five-leaf (V5) and six-leaf (V6) stages. The point cloud data were then preprocessed to isolate single plant point clouds. Next, the maize point clouds were pre-segmented into three categories—central point clouds, partially expanded leaf point clouds, and unexpanded leaf point clouds—using center-edge segmentation, statistical filtering, and leaf classification. Adaptive cuboid region growing was applied to segment the unexpanded leaf point clouds, while slice region growing was used for partially expanded leaves, with Euclidean clustering optimizing the leaf point clouds, completing the segmentation process. Finally, various methods—including clustering counting, point-to-point distance accumulation, point-to-line distance, vector angle, point cloud triangulation, and triangle area accumulation—were utilized to automatically measure the number of maize leaves, leaf length, leaf width, leaf inclination angle, and leaf area. Compared with other point cloud stem-leaf segmentation methods based on geometric features and common 3D point cloud deep learning models (PointNet++, PointTransformer), the method proposed in this paper performs better. 
The segmentation results indicated that the Precision (<em>P</em>), Recall (<em>R</em>), and <em>F<sub>1</sub></em>-Score (<em>F<sub>1</sub></em>) for stem-leaf segmentation of all maize plants at the V5 stage exceeded 92.00%, with average values of 96.87%, 97.08%, and 96.97%, respectively. At the V6 stage, <em>P</em>, <em>R</em>, and <em>F<sub>1</sub></em> exceeded 95.00%, with averages of 97.73%, 97.01%, and 97.67%, respectively. The algorithm accurately measured the number of leaves at the V5 stage, while a small error was noted at the V6 stage, yielding a percentage error (<em>PE</em>) of 0.93%. Measurement accuracy for leaf length, width, and area at both growth stages was greater than 93.80%, 92.80%, and 89.50%, respectively. Measurement accuracy for leaf inclination angle was lower, at 82.00% and 88.02% for the V5 and V6 stages, respectively. The proposed methods for stem-leaf segmentation and measurement of leaf phenotypic parameters are fast and accurate, providing technical support for high-quality breeding and intelligent management of maize. Our maize point cloud data and source code are available at https://github.com/lmj-cau/stem-leaf-se
{"title":"A stem-leaf segmentation method of maize plant point cloud based on region growing and leaf phenotypic parameters measurement","authors":"Mengjie Liu , Yanlong Miao , Yida Li , Wenyi Sheng , Ruicheng Qiu , Minjuan Wang , Han Li , Man Zhang","doi":"10.1016/j.compag.2026.111474","DOIUrl":"10.1016/j.compag.2026.111474","url":null,"abstract":"<div><div>Maize leaf phenotypic parameters effectively reflect the photosynthesis and growth information of maize plants, which is crucial for breeding superior maize varieties. Current challenges include separating stems and leaves from a single maize plant and accurately measuring the phenotypic parameters of maize leaves. This study proposes a stem-leaf segmentation method based on region growing, incorporating adaptive cuboid region growing and slice region growing, alongside techniques for measuring phenotypic parameters of maize leaves. First, terrestrial laser scanning (TLS) was employed to obtain three-dimensional (3D) point cloud data of maize at the five-leaf (V5) and six-leaf (V6) stages. The point cloud data were then preprocessed to isolate single plant point clouds. Next, the maize point clouds were pre-segmented into three categories—central point clouds, partially expanded leaf point clouds, and unexpanded leaf point clouds—using center-edge segmentation, statistical filtering, and leaf classification. Adaptive cuboid region growing was applied to segment the unexpanded leaf point clouds, while slice region growing was used for partially expanded leaves, with Euclidean clustering optimizing the leaf point clouds, completing the segmentation process. Finally, various methods—including clustering counting, point-to-point distance accumulation, point-to-line distance, vector angle, point cloud triangulation, and triangle area accumulation—were utilized to automatically measure the number of maize leaves, leaf length, leaf width, leaf inclination angle, and leaf area. 
Compared with other point cloud stem-leaf segmentation methods based on geometric features and common 3D point cloud deep learning models (PointNet++, PointTransformer), the method proposed in this paper performs better. The segmentation results indicated that the Precision (<em>P</em>), Recall (<em>R</em>) and <em>F<sub>1</sub></em>-Score (<em>F<sub>1</sub></em>) for stem-leaf segmentation of all maize plants at the V5 stage exceeded 92.00%, with average values of 96.87%, 97.08%, and 96.97%, respectively. At the V6 stage, <em>P</em>, <em>R</em>, and <em>F<sub>1</sub></em> exceeded 95.00%, with averages of 97.73%, 97.01%, and 97.67%, respectively. The algorithm accurately measured the number of leaves at the V5 stage, while a small error was noted at the V6 stage, yielding a percentage error (<em>PE</em>) of 0.93%. Measurement accuracy for leaf length, width, and area at both growth stages was greater than 93.80%, 92.80%, and 89.50%, respectively. Measurement accuracy for leaf inclination angle was lower, at 82.00% and 88.02% for the V5 and V6 stages, respectively. The proposed methods for stem-leaf segmentation and measurement of leaf phenotypic parameters are fast and accurate, providing technical support for high-quality breeding and intelligent management of maize. 
Our maize point cloud data and source code are available at https://github.com/lmj-cau/stem-leaf-se","PeriodicalId":50627,"journal":{"name":"Computers and Electronics in Agriculture","volume":"244 ","pages":"Article 111474"},"PeriodicalIF":8.9,"publicationDate":"2026-03-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146080284","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
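The region-growing idea underlying the stem-leaf segmentation above can be illustrated with a minimal Euclidean-clustering pass over a point cloud: each cluster grows by repeatedly absorbing unlabeled points within a fixed radius of points already in it. This is a brute-force sketch (O(n²) neighbor search), not the paper's adaptive cuboid or slice variants; the function name and the radius parameter are assumptions.

```python
import numpy as np
from collections import deque

def euclidean_cluster(points, radius):
    """Greedy region growing: label points reachable via hops shorter than `radius`.

    points: (n, 3) array of xyz coordinates; returns an (n,) array of cluster ids.
    """
    points = np.asarray(points, dtype=float)
    n = len(points)
    labels = np.full(n, -1, dtype=int)
    cluster_id = 0
    for seed in range(n):
        if labels[seed] != -1:
            continue  # already absorbed by an earlier cluster
        labels[seed] = cluster_id
        queue = deque([seed])
        while queue:
            i = queue.popleft()
            dists = np.linalg.norm(points - points[i], axis=1)
            for j in np.where((dists < radius) & (labels == -1))[0]:
                labels[j] = cluster_id
                queue.append(j)
        cluster_id += 1
    return labels
```

For two well-separated leaf-like blobs, the pass yields two distinct labels; production pipelines would replace the brute-force distance scan with a k-d tree, as off-the-shelf point cloud libraries do.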
Accurate and temporally consistent multispectral observations are essential for monitoring alfalfa yield and quality, given its frequent harvest cycles and rapid regrowth. However, optical satellite imagery is often constrained by cloud cover, revisit intervals, and sensor availability. To overcome these limitations, we propose a novel Alfalfa Multimodal Generative Adversarial Network (AMGAN) designed for near-daily multispectral image reconstruction. Unlike conventional image-to-image or spatiotemporal fusion methods that overlook crop-specific characteristics, are restricted to observed timestamps, or depend heavily on dense temporal series, AMGAN leverages multisource (Landsat-8/9, Sentinel-1, PlanetScope) and multimodal (climate, geographic, temporal) information within an adversarial learning paradigm. This enables high-quality image generation from minimal inputs. Extensive experiments across five major alfalfa-producing states in the United States (2022–2024) show that AMGAN consistently surpasses four state-of-the-art (SOTA) deep learning baselines. It achieves higher reconstruction accuracy across all spectral bands, with pronounced gains in red-edge and near-infrared (NIR) regions critical for vegetation assessment. Multisource integration and multimodal cues enhance robustness, ensuring reliable performance under diverse observation scenarios. The reconstructed imagery was subsequently evaluated in alfalfa yield and quality prediction tasks. Results demonstrated high predictive accuracy for dry matter yield (DM) in the cross-validation (CV) experiment with a coefficient of determination (R<sup>2</sup>) of 0.80, and moderate correlations for selected quality traits such as crude protein (CP), non-fiber carbohydrates (NFC), and minerals, while nutritive value traits tied to complex biochemical processes remained more challenging. Overall, this study underscores the potential of multimodal adversarial learning to bridge observational gaps in alfalfa monitoring. 
The proposed framework provides a scalable, crop-specific approach for generating temporally dense imagery, supporting precision management for biomass-related and proximate quality traits, while performance for digestibility traits remains limited.
{"title":"AMGAN: A multimodal generative adversarial network for near-daily alfalfa multispectral image reconstruction","authors":"Tong Yu , Jiang Chen , Jerome H. Cherney , Zhou Zhang","doi":"10.1016/j.compag.2026.111468","DOIUrl":"10.1016/j.compag.2026.111468","url":null,"abstract":"<div><div>Accurate and temporally consistent multispectral observations are essential for monitoring alfalfa yield and quality, given its frequent harvest cycles and rapid regrowth. However, optical satellite imagery is often constrained by cloud cover, revisit intervals, and sensor availability. To overcome these limitations, we propose a novel Alfalfa Multimodal Generative Adversarial Network (AMGAN) designed for near-daily multispectral image reconstruction. Unlike conventional image-to-image or spatiotemporal fusion methods that overlook crop-specific characteristics, are restricted to observed timestamps, or depend heavily on dense temporal series, AMGAN leverages multisource (Landsat-8/9, Sentinel-1, PlanetScope) and multimodal (climate, geographic, temporal) information within an adversarial learning paradigm. This enables high-quality image generation from minimal inputs. Extensive experiments across five major alfalfa-producing states in the United States (2022–2024) show that AMGAN consistently surpasses four state-of-the-art (SOTA) deep learning baselines. It achieves higher reconstruction accuracy across all spectral bands, with pronounced gains in red-edge and near-infrared (NIR) regions critical for vegetation assessment. Multisource integration and multimodal cues enhance robustness, ensuring reliable performance under diverse observation scenarios. The reconstructed imagery was subsequently evaluated in alfalfa yield and quality prediction tasks. 
Results demonstrated high predictive accuracy for dry matter yield (DM) in the cross-validation (CV) experiment with a coefficient of determination (R<sup>2</sup>) of 0.80, and moderate correlations for selected quality traits such as crude protein (CP), non-fiber carbohydrates (NFC), and minerals, while nutritive value traits tied to complex biochemical processes remained more challenging. Overall, this study underscores the potential of multimodal adversarial learning to bridge observational gaps in alfalfa monitoring. The proposed framework provides a scalable, crop-specific approach for generating temporally dense imagery, supporting precision management for biomass-related and proximate quality traits, while performance for digestibility traits remains limited.</div></div>","PeriodicalId":50627,"journal":{"name":"Computers and Electronics in Agriculture","volume":"244 ","pages":"Article 111468"},"PeriodicalIF":8.9,"publicationDate":"2026-03-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146025260","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"农林科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
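The R<sup>2</sup> reported for AMGAN's dry-matter-yield prediction is the ordinary coefficient of determination. A minimal sketch of that metric follows; the function name is an assumption, and this is for illustration rather than the authors' evaluation code.

```python
import numpy as np

def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    ss_res = np.sum((y_true - y_pred) ** 2)          # residual sum of squares
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)   # total sum of squares
    return 1.0 - ss_res / ss_tot
```

A perfect prediction scores 1.0 and predicting the mean of the targets scores 0.0, which is why an R<sup>2</sup> of 0.80 for DM indicates substantially more skill than a mean-only baseline.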