
Latest publications from Computers and Electronics in Agriculture

Innovative photosynthesis model twinning after intelligent interpretation of complex sensor analytics
IF 8.9 CAS Tier 1 (Agricultural and Forestry Sciences) Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date: 2026-01-31 DOI: 10.1016/j.compag.2026.111496
Xiaotong Wang, Xuejiao Tong, Bingguang Han, Zhulin Li, Qingji Li, Xianmin Liu, Zhouping Sun, Nick Sigrimis, Tianlai Li
Accurate canopy photosynthesis modeling is essential for understanding and optimizing crop growth and yield in greenhouse agriculture. Current models have limited predictive capability due to inadequate responsiveness to dynamic environments and delays in parameter acquisition, making accurate predictions challenging under the complex conditions of solar greenhouses. This study aimed to develop a dynamic canopy photosynthesis model for greenhouse tomatoes, leveraging an IoT sensor network for real-time biological feedback and parameterization. By integrating real-time monitoring with dynamic feedback, the model facilitates precision management of greenhouse tomato cultivation, thereby optimizing plant growth, resource use efficiency, and yield predictability. To achieve this, a non-destructive inversion method based on a dual weighing system was developed, enabling accurate dynamic monitoring of the tomato canopy leaf area index (LAI, R² ≥ 0.94) and the photosynthetic leaf area index (LAIp, R² ≥ 0.91), continuously providing parameters for model updating (validated against destructive sampling and direct trait measurements). Building on this parameter acquisition, a dynamic canopy photosynthesis model was developed using LAIp as the core variable and integrating above-canopy radiation. A newly developed parameter, which incorporates the radiation component of transpiration, serves as a key factor for estimating photosynthesis. This approach allows accurate daily prediction and assessment of assimilated biomass. Experimental results from 2022 and 2023 showed that the LAIp model outperformed the comparison model in both accuracy and adaptability (R² = 0.87 and 0.89, NRMSE = 0.17 and 0.12, vs. R² = 0.70 and 0.80, NRMSE = 0.26 and 0.15). These results confirmed the reliability of the integrated modeling framework, which forms a closed-loop system connecting real-time plant monitoring, statistical parameter inversion, online model adaptation, and biomass feedback verification. This modeling approach provides a solid foundation for precise growth simulation, sustainable improvement of yield and quality in solar greenhouse tomatoes, and digital twin-enabled intelligent production.
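The abstract reports model fit as R² and NRMSE. As a quick reference, here is a minimal sketch of how these two validation metrics are commonly computed; the function names and data values are illustrative, not from the paper, and NRMSE is assumed to be normalized by the observation mean:

```python
import numpy as np

def r_squared(observed, predicted):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    observed, predicted = np.asarray(observed), np.asarray(predicted)
    ss_res = np.sum((observed - predicted) ** 2)
    ss_tot = np.sum((observed - observed.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

def nrmse(observed, predicted):
    """RMSE normalized by the mean of the observations (one common convention)."""
    observed, predicted = np.asarray(observed), np.asarray(predicted)
    rmse = np.sqrt(np.mean((observed - predicted) ** 2))
    return rmse / observed.mean()

# Illustrative daily assimilated-biomass values (g/m^2), not data from the paper.
obs = np.array([12.1, 14.3, 15.8, 13.9, 16.2])
pred = np.array([11.7, 14.9, 15.1, 14.4, 15.6])
print(f"R2 = {r_squared(obs, pred):.2f}, NRMSE = {nrmse(obs, pred):.2f}")
```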
Citations: 0
WeedCAM: An edge-computing camera system for multi-species weed detection in sugar beet production fields
IF 8.9 CAS Tier 1 (Agricultural and Forestry Sciences) Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date: 2026-01-31 DOI: 10.1016/j.compag.2026.111498
Zonglin Yang, Wei-Zhen Liang, Nevin Lawrence, Xin Qiao, Benjamin Riggan, Robert Harveson, Chi-En Chiang, Joseph Oboamah, Diwenitissiou Philipine Andjawo
This study introduces WeedCAM, a low-cost, near real-time, edge-computing camera system for multi-species weed detection, built on a Raspberry Pi 5 and integrated with a GPS module and LoRa board for geolocation and data transmission. A three-phase framework, comprising data acquisition, model fine-tuning, and deployment, is proposed to implement detection models on WeedCAM. A total of 5734 high-resolution (4K) images were automatically collected using WeedCAM, producing a dataset with a long-tail distribution that poses challenges for model training. To address this, Repeat Factor Sampling and Focal Loss were applied during fine-tuning. Seven object detection models were evaluated, including YOLOX-S, YOLOX-L, Faster R-CNN, Cascade R-CNN, Deformable-DETR, and DINO. Finally, WeedCAMs with embedded trained models were deployed in the field on pivot-mounted and ground-based installations, detecting weeds at 30-min intervals and transmitting results to a customized gateway via LoRa. The gateway parsed and mapped these results to our custom-designed website for visualization. Our best model, DINO-Swin/L, set the performance benchmark with a 76.0 overall mAP (IoU = 0.5) and strong per-species scores for kochia (76.4 mAP), Palmer amaranth (77.3 mAP), and volunteer corn (75.9 mAP) on 4K-resolution images. Despite this, YOLOX-L was deployed on the WeedCAM, as its efficient 8-min processing cycle represented the better trade-off between accuracy and speed. Field evaluation confirmed that WeedCAM effectively identified weed species and quantities under varying lighting conditions, camera angles, and soil moisture levels during irrigation events. These results demonstrate the practicality of deploying WeedCAM edge-computing deep learning systems for near real-time, multi-species weed detection in sugar beet fields.
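Repeat Factor Sampling, cited above as the remedy for the long-tail distribution, is usually implemented as in the LVIS paper: each category c receives a repeat factor r(c) = max(1, sqrt(t / f(c))), where f(c) is the fraction of images containing c, and each image is repeated according to its rarest category. A hedged sketch under that reading; the threshold t and the data layout are assumptions, not the authors' exact settings:

```python
import math
from collections import Counter

def repeat_factors(image_labels, t=0.001):
    """LVIS-style repeat factors for a long-tailed detection dataset.

    image_labels: list of sets, one set of category names per image.
    Returns one repeat factor per image (>= 1.0).
    """
    n_images = len(image_labels)
    # f(c): fraction of images containing at least one instance of category c.
    freq = Counter()
    for labels in image_labels:
        freq.update(labels)
    cat_rf = {c: max(1.0, math.sqrt(t / (n / n_images))) for c, n in freq.items()}
    # An image is repeated according to its rarest category.
    return [max(cat_rf[c] for c in labels) if labels else 1.0
            for labels in image_labels]

# Toy example: kochia is rare, volunteer corn is common (synthetic counts).
imgs = [{"volunteer_corn"}] * 900 + [{"kochia", "volunteer_corn"}] * 10
rf = repeat_factors(imgs, t=0.05)
print(max(rf), min(rf))  # rare-class images get the larger repeat factor
```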
Citations: 0
Detection of Potato Virus Y in plant foliage using convolutional neural network classifiers and hyperspectral imagery
IF 8.9 CAS Tier 1 (Agricultural and Forestry Sciences) Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date: 2026-01-30 DOI: 10.1016/j.compag.2026.111499
L.M. Griffel, D. Delparte
Solanum tuberosum (potato) is one of the most important global food crops in terms of both economic opportunity and food security. Potato Virus Y (Potyviridae, PVY), a detrimental plant pathogen propagated by insect vectors, negatively affects tuber yield and quality. This has forced industry stakeholders to adopt many different types of mitigation strategies, including pesticide applications, manual field scouting, and potato seed certification programs. Despite these efforts, PVY continues to disrupt industry production regions, resulting in significant economic losses due to the lack of robust diagnostic tools. Machine learning algorithms trained on remotely sensed spectral features show promise as a diagnostic tool for many plant diseases, including PVY. This study proposes a novel Convolutional Neural Network (CNN) architecture to detect canopy regions of potato plants infected with PVY, based on unmanned aerial system (UAS) hyperspectral pixel features comprising bands that match the center wavelengths of nine spectral channels captured by the European Space Agency's Sentinel-2 multispectral instrument. Accuracy and F1 metrics of 0.815 and 0.766, respectively, were achieved on test data collected over multiple growing seasons and locations. Additionally, efforts were made to identify the optimal combinations of spectral bands that are most beneficial for the CNN classifier by evaluating every possible combination of the nine spectral wavelengths in groups ranging from 3 to 9 channels. Results show that hyperspectral channels centered on 783 nm, 739 nm, and 560 nm are the most important features for the CNN architecture. Additionally, six hyperspectral features, consisting of the three previously mentioned along with 665 nm, 704 nm, and 864 nm, yielded the best results of all possible combinations, achieving accuracy and F1 score metrics of 0.833 and 0.791, respectively.
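Evaluating every combination of the nine bands in groups of 3 to 9 channels is an exhaustive search over 466 subsets. A sketch of the enumeration follows; the three unnamed band centers and the evaluation function are placeholders, not values from the paper:

```python
from itertools import combinations

# Six centers are named in the abstract; the remaining three are assumed
# Sentinel-2-like values for illustration only.
BANDS_NM = [560, 665, 704, 739, 783, 864, 490, 842, 945]

def evaluate(band_subset):
    """Placeholder: train/evaluate the CNN on this band subset.
    A real run would return the measured (accuracy, f1) pair."""
    return (0.0, 0.0)

results = {}
for k in range(3, len(BANDS_NM) + 1):            # groups of 3..9 channels
    for subset in combinations(sorted(BANDS_NM), k):
        results[subset] = evaluate(subset)

print(len(results))  # 466 = sum of C(9, k) for k = 3..9
```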
Citations: 0
Semantic segmentation–based detection of exposed soil regions in paddy fields for a floating-type puddling and leveling operation
IF 8.9 CAS Tier 1 (Agricultural and Forestry Sciences) Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date: 2026-01-30 DOI: 10.1016/j.compag.2026.111494
Zian Liang, Jun Zhou, Yongpeng Chen, Yinghua Zhang, Tamiru Tesfaye Gemechu, Lei Li, Huayu Zhou, Muhammad Aurangzaib
Integrated puddling–leveling operation is a critical step in paddy field preparation, typically conducted between plowing and rice transplanting. However, the accuracy of elevation measurements in existing automatic leveling technologies is often constrained by limited operating ranges or susceptibility to electromagnetic interference, resulting in inconsistent leveling performance. Because the water surface naturally reflects terrain undulations in paddy fields, this study proposes a semantic segmentation–based approach to detect exposed soil regions for guiding a floating-type puddling and leveling implement. To this end, a lightweight semantic segmentation model, PL_DeepLabV3+_0.8, was developed specifically for integrated puddling–leveling operation. The model combines a MobileNetV2_S backbone, a Low-Level Feature Fusion Module (LFM), and structured pruning. These components collectively enable the rapid and accurate detection of exposed soil in paddy fields under computationally constrained conditions. The PL_DeepLabV3+_0.8 model was successfully deployed in the control system of a floating-type implement, and its effectiveness was validated through field tests conducted at different operating speeds and modes. On a paddy field image dataset, PL_DeepLabV3+_0.8 achieved a mean Pixel Accuracy (mPA) of 92.23 ± 0.22%, a mean Intersection over Union (mIoU) of 84.18 ± 0.31%, and an inference speed of 7.73 frames per second (FPS), outperforming the original DeepLabV3+ model, which achieved 91.90%, 83.81%, and 0.88 FPS, respectively. In field tests at operating speeds of 1.1 m/s and 1.5 m/s, the surface flatness (standard deviation of elevation) in two paddy fields was improved from 3.61 cm and 4.07 cm to 2.11 cm and 2.42 cm, respectively. These results indicate that the deployed model not only satisfies the flatness requirement for rice transplanting (< 3 cm) but also delivers a productivity increase of 0.28 ha/h compared with conventional manual operation. Overall, this study provides a useful reference for the development of intelligent puddling and leveling technologies in paddy field preparation.
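Both metrics reported above, mPA and mIoU, derive from the class confusion matrix. A minimal numpy sketch with illustrative counts for the two-class (exposed soil vs. background) case:

```python
import numpy as np

def seg_metrics(conf):
    """mPA and mIoU from a confusion matrix (rows: ground truth, cols: prediction)."""
    tp = np.diag(conf).astype(float)
    per_class_pa = tp / conf.sum(axis=1)                           # per-class pixel accuracy
    per_class_iou = tp / (conf.sum(axis=1) + conf.sum(axis=0) - tp)
    return per_class_pa.mean(), per_class_iou.mean()

# Toy 2-class matrix: exposed soil vs. everything else (illustrative counts).
conf = np.array([[9200,  800],
                 [ 600, 9400]])
mpa, miou = seg_metrics(conf)
print(f"mPA = {mpa:.2%}, mIoU = {miou:.2%}")
```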
Citations: 0
Design and experiment of a film-breaking robot for sweet potato horizontal transplantation with plastic mulch
IF 8.9 CAS Tier 1 (Agricultural and Forestry Sciences) Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date: 2026-01-30 DOI: 10.1016/j.compag.2026.111483
Wanzhi Zhang, Yangqian Zhang, Xuyang Wang, Yulu Sun, Hongjuan Liu, Zhigang Li
Plastic film mulch cultivation technology is a crucial agronomic measure for enhancing early-spring sweet potato yields. However, prolonged film coverage can scorch seedlings beneath the mulch, adversely affecting their normal growth and subsequent yield. Therefore, timely film-breaking to guide seedling emergence is essential. Currently, manual film-breaking is the primary method. To address the high labor intensity associated with manual operations, this paper designs a sweet potato seedling film-breaking robot based on deep learning and a Delta parallel robot. First, a calibration method was proposed for scenarios where the camera field of view is separated from the Delta parallel robot’s workspace, which avoids missed detection issues caused by manipulator occlusion. Subsequently, images of sweet potato seedlings captured under various environmental conditions were selected as the data basis for the deep learning model, and the BW-YOLO sweet potato seedling detection model was constructed. This model replaces the CIoU loss function with the Wise-IoU v3 loss function and incorporates a BiFPN module into the neck network. Testing results show that the model achieved a mean Average Precision (mAP) of 96.8% and a detection speed of 76.34 FPS, demonstrating significant improvements in both detection accuracy and speed. Finally, the detection model was deployed on the sweet potato seedling film-breaking robot, and field trials were conducted. The model achieved an average recognition success rate of 90.56%, the film-breaking robot attained a film-breaking qualification rate of 84.56%, and the seedling emergence rate reached 83.74%. The proposed sweet potato film-breaking robot for flat cultivation enables unmanned operation during seedling emergence, providing a valuable reference for the design of intelligent agricultural equipment.
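For calibration when the camera's field of view does not overlap the Delta robot's workspace, one standard building block is a plane-to-plane homography from image pixels to a world frame shared with the robot, followed by a fixed translation between the two zones. A hedged OpenCV sketch under that assumption; the paper's actual procedure may differ, and all coordinates below are illustrative:

```python
import cv2
import numpy as np

# >= 4 reference points measured in both frames (illustrative values).
pixel_pts = np.array([[102, 88], [510, 92], [505, 400], [110, 395]], dtype=np.float32)
world_pts = np.array([[0, 0], [400, 0], [400, 300], [0, 300]], dtype=np.float32)  # mm

H, _ = cv2.findHomography(pixel_pts, world_pts)

def pixel_to_world(u, v):
    """Map a detected seedling's pixel location into the shared plane coordinates."""
    p = H @ np.array([u, v, 1.0])
    return p[:2] / p[2]

# The offset between the camera FOV and the robot workspace is then just a
# fixed translation added to the mapped coordinates.
x, y = pixel_to_world(300, 240)
print(x, y)
```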
Citations: 0
Development of a new Single-Tree-Row-Tracking robot navigation for intra-row weeding operations in orchards using a Machine stereo vision system and LiDAR
IF 8.9 CAS Tier 1 (Agricultural and Forestry Sciences) Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date: 2026-01-30 DOI: 10.1016/j.compag.2026.111491
Rizky Mulya Sampurno, Zifu Liu, Victor Massaki Nakaguchi, Ailian Jiang, Tofael Ahamed
Driven by the need for efficient intra-row weed management in orchards, a new robotic system is designed and proposed to operate in narrow spaces and under low-hanging branches, with minimal soil compaction and no reliance on the Global Navigation Satellite System (GNSS). To enable the tree-row following required for intra-row weeding, we introduced a vision-based framework that combines a 3D camera and a lightweight YOLOv8 instance segmentation model to detect tree trunks and extract the navigation path from a single tree row through the frontal view of the robot. The trajectory of the robot was offset by 0.8 m from the tree row, enabling a new Light Detection and Ranging (LiDAR)-triggered side-shift mechanism to target uncut weeds between trees within rows. An experimental evaluation in simulated environments demonstrated stable navigation performance, with an RMSE of 0.329 m. Furthermore, the side-shift actuation mechanism for weeding achieved 84.04% accuracy at a lower speed (0.5 m/s) and 76.85% accuracy at a faster speed (0.8 m/s); these results were due in part to the processing latency of real-time LiDAR point cloud analysis. These findings highlight the importance of optimizing computational efficiency and actuation timing for better field performance. Finally, the developed robotic system effectively integrated 3D vision, deep learning, and LiDAR-triggered actuation to perform autonomous intra-row weeding, demonstrating strong potential to improve operational efficiency in intra-row weed management in orchards.
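One way to realize the single-tree-row following described above is to fit a line through the detected trunk positions in the robot's ground frame and shift it laterally by the 0.8 m offset. A hedged sketch, not the authors' exact pipeline:

```python
import numpy as np

def row_path(trunks_xy, offset=0.8):
    """Fit y = m*x + b through trunk ground positions, then shift the line by
    `offset` metres perpendicular to the fitted row (toward the robot side)."""
    x, y = np.asarray(trunks_xy).T
    m, b = np.polyfit(x, y, 1)
    # Shifting a line y = m*x + b perpendicularly by d changes the intercept
    # by d * sqrt(1 + m^2) (distance between parallel lines).
    b_off = b + offset * np.sqrt(1.0 + m * m)
    return m, b_off

# Trunk centroids from the segmentation model, in metres (illustrative).
trunks = [(1.0, 0.02), (2.1, 0.05), (3.0, 0.01), (4.2, 0.06)]
m, b = row_path(trunks)
print(f"follow y = {m:.3f}x + {b:.3f}")
```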
Citations: 0
Deep learning driven edge inference for pest detection in potato crops using the AgriScout robot
IF 8.9 CAS Tier 1 (Agricultural and Forestry Sciences) Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date: 2026-01-29 DOI: 10.1016/j.compag.2026.111492
Yuvraj Singh Gill, Hassan Afzaal, Charanpreet Singh, Gurjit S. Randhawa, Kritikiran Angrish, Navpreet Jaura, Zarnab Qamar, Aitazaz A. Farooque
Early field-scale surveillance of Colorado potato beetle remains a persistent bottleneck for sustainable potato production because conventional scouting is labor-intensive and provides limited spatial resolution for timely intervention. Here, we present AgriScout, a battery-powered autonomous scouting robot equipped with RGB imaging, controlled lighting, and RTK-GPS geotagging for continuous row-to-row data collection. Using AgriScout, we curated a field dataset of 832 georeferenced images and manually annotated adult beetles with tight bounding boxes to support tiny-object detection under real canopy conditions. We benchmarked six YOLO object detectors (YOLOv5s, YOLOv8s, YOLOv9s, YOLOv10s, YOLOv11s, and YOLOv12s) using transfer learning, high-resolution inputs (1280 × 1280), and an augmentation strategy tailored to small targets (including mosaic, scaling, and translation). To address training variability on the modest dataset, models were evaluated across multiple random seeds (7, 42, 123, 999, and 2024) and compared using precision, recall, mAP, F1, confidence behavior, and statistical tests of between-model differences. Across runs, YOLOv11s provided the most reliable overall balance for deployment, exhibiting strong precision and robust localization performance. For edge deployment, inference throughput was measured on an NVIDIA Jetson Orin Nano across multiple export formats; TensorRT consistently delivered the highest FPS, reaching 46.5 FPS (YOLOv5s) and exceeding 40 FPS for several variants, confirming real-time feasibility under FP32 inference. Finally, YOLOv11s detections were fused with RTK-GPS coordinates to generate centimeter-level infestation maps that visualize spatial clustering of beetle activity and support hotspot-driven, targeted management. Collectively, this work demonstrates an end-to-end, robot-to-map pipeline for beetle monitoring and provides a reproducible benchmark of accuracy, stability, and edge deployability for YOLO-based pest detection in commercial potato systems.
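Fusing detections with RTK-GPS positions into an infestation map can be reduced to binning georeferenced detection counts on a grid; a minimal sketch in which the cell size, field extent, and synthetic points are illustrative:

```python
import numpy as np

def infestation_grid(detections, x_edges, y_edges):
    """detections: (easting, northing) pairs for each geotagged beetle detection.
    Returns per-cell counts suitable for a hotspot map."""
    e, n = np.asarray(detections).T
    counts, _, _ = np.histogram2d(e, n, bins=[x_edges, y_edges])
    return counts

# 1 m cells over a 50 m x 30 m block (illustrative local coordinates).
x_edges = np.arange(0, 51, 1.0)
y_edges = np.arange(0, 31, 1.0)
dets = np.random.default_rng(0).uniform([0, 0], [50, 30], size=(200, 2))
grid = infestation_grid(dets, x_edges, y_edges)
print(grid.sum(), grid.max())  # total detections, busiest cell
```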
Citations: 0
Multi-dimensional behavioral signature analysis of laying hens under heat stress: development of a behavior-based level assessment model
IF 8.9 CAS Tier 1 (Agricultural and Forestry Sciences) Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date: 2026-01-29 DOI: 10.1016/j.compag.2026.111433
Zixuan Zhou, Lihua Li, Hao Xue, Yuchen Jia, Yao Yu, Zongkui Xie, Yuhan Gu
Accurate heat stress assessment is pivotal for early prevention and safeguarding poultry welfare. However, current protocols relying on the Temperature-Humidity Index (THI) often fail to capture the true physiological thermal load of laying hens. Conversely, animal behavior serves as a direct phenotypic response to environmental stressors, offering unique insights into adaptive mechanisms. Consequently, this study proposes a Behavior-based Heat Stress Assessment (BHSA) method driven by behavioral feedback. To achieve precise, non-invasive detection and automated feature extraction of individual heat stress behaviors, we developed YOLO-SPS, an enhanced architecture based on YOLOv12. By integrating SPD-Conv modules, A2C2F-PPA structures, and a Slide Loss function, the model effectively mitigates missed and false detections associated with fine-grained features and significant postural variations. We established a behavior-environment association model under controlled conditions (20-38 °C at 60% and 80% RH), identifying six heat stress-associated behaviors quantified by skewness and occurrence intensity. K-means clustering categorized these data into five distinct patterns, which were biologically validated by significant differences in corticosterone (CORT) and Heat Shock Protein 70 (HSP70) levels across clusters (P < 0.05). Accordingly, a five-level BHSA model was established, stratifying stress into Normal, Alert, Impact, Harm, and Disaster levels. Results demonstrated that YOLO-SPS improved detection accuracy by 3.8% and inference speed by 22.1% compared to the baseline. In comparison to the traditional THI methods, the BHSA triggered Alert and Harm warnings at temperatures 2 ± 1 °C lower, enabling earlier detection. Furthermore, under extreme heat, the BHSA successfully differentiated between Harm and Disaster states. This study realizes a paradigm shift in heat stress assessment from “environment-driven” to “animal behavior-driven,” providing robust technical support for precision livestock management and early intervention strategies.
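The clustering step described above, which groups behavioral signatures (skewness and occurrence intensity per behavior) into five patterns, maps directly onto scikit-learn's KMeans. A hedged sketch with synthetic stand-in features; the feature layout is an assumption, not the paper's exact matrix:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)
# Stand-in feature matrix: one row per observation window, 12 columns =
# skewness + occurrence intensity for six behaviors (synthetic values).
X = rng.normal(size=(300, 12))

X_std = StandardScaler().fit_transform(X)   # put all features on one scale
km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(X_std)

labels = km.labels_   # cluster id per window -> candidate stress-level pattern
print(np.bincount(labels))
```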
Citations: 0
LMTRNet: Lightweight Multi-scale Temperature-Regulated Network For real-time detection of multiple species pests
IF 8.9 CAS Tier 1 (Agricultural and Forestry Sciences) Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date: 2026-01-29 DOI: 10.1016/j.compag.2026.111487
Taiyu Xu
Pests cause significant losses to global agricultural production. However, their concealment and mobility pose considerable challenges for real-time pest detection. In this paper, we propose a Lightweight Multi-scale Temperature-Regulated Network (LMTRNet) for real-time multi-pest detection. LMTRNet consists of three key components: a lightweight feature extraction network, a multi-scale fusion network (DMFN), and an adaptive temperature-modulated head (AITMH). To improve feature learning efficiency, we introduce the Adaptive Feature Sparsity Block (AFSBlock) and the Spatial-Channel Decoupled Downsampling (SCDown) module in the lightweight feature extraction network, reducing computational cost while preserving accuracy. The DMFN employs skip connections for enhanced multi-scale feature integration, while AITMH leverages a temperature-aware fusion strategy to refine feature representation. Additionally, LMTRNet utilizes an anchor-free detection head with a dynamic inner loss (DILoss) function to improve localization accuracy, particularly for small pests in cluttered environments. To address data scarcity, we propose a Synthetic Object Projection Augmentation method, enriching training diversity by projecting multiple pest species onto complex backgrounds. Experiments were conducted on a proprietary dataset and the Pest24 dataset to evaluate LMTRNet's performance. On the proprietary dataset, LMTRNet-l, with only 23.03M parameters, achieved 96.02% precision, 95.7% mAP50, and 63.33% mAP50-95. On the Pest24 dataset, it attained 77.49% precision, 70.1% mAP50, and 45.71% mAP50-95. These results demonstrate that LMTRNet achieves state-of-the-art accuracy and real-time performance, making it a robust solution for practical pest monitoring.
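The Synthetic Object Projection Augmentation described above reads like copy-paste augmentation: pest cutouts with alpha masks are composited onto background images at random positions, and bounding boxes are emitted for the pasted objects. A hedged PIL sketch under that interpretation; this is not the authors' exact method, and cutouts are assumed smaller than the background:

```python
import random
from PIL import Image

def paste_pests(background, pest_cutouts, n=5, seed=None):
    """Composite RGBA pest cutouts onto a background; return (image, boxes).

    pest_cutouts: list of RGBA PIL images whose alpha channel masks the pest.
    boxes: (x1, y1, x2, y2) for each pasted object, usable as detection labels.
    """
    rng = random.Random(seed)
    img = background.convert("RGB").copy()
    boxes = []
    for _ in range(n):
        pest = rng.choice(pest_cutouts)
        x = rng.randint(0, img.width - pest.width)
        y = rng.randint(0, img.height - pest.height)
        img.paste(pest, (x, y), mask=pest)   # alpha channel used as paste mask
        boxes.append((x, y, x + pest.width, y + pest.height))
    return img, boxes
```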
Citations: 0
Optimization of spraying quality and drift risk in unmanned aerial spraying systems (UASS) based on Multi-Gradient droplet size control
IF 8.9 CAS Tier 1 (Agricultural and Forestry Sciences) Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date: 2026-01-29 DOI: 10.1016/j.compag.2026.111481
Pengchao Chen, Jiapei Wu, Zhihao Bian, Jean Paul Douzals, Yingdong Qin, Hanbing Liu, Juan Wang, Yubin Lan
Unmanned aerial spraying systems (UASS) are widely used in agriculture; however, spray drift remains a significant barrier to their broader adoption. Conventional drift-control measures—such as nozzle optimization and adjuvants—primarily act by altering droplet size. This study introduces a dynamic droplet-size control approach for UASS equipped with centrifugal nozzles to balance spray quality and drift risk. We established the relationship between nozzle rotational speed and droplet size and developed an embedded, variable droplet-size UASS. The system utilizes differential RTK to acquire real-time UAV position data, enabling dynamic adjustment of droplet size during operation. Field trials demonstrated the system’s stability and reliability: the UASS responded promptly to ground commands for droplet size changes and accurately logged the corresponding adjustment locations. Data analysis indicated that increasing droplet size markedly reduces drift volume and droplet density in the downwind drift zone. Relative to a baseline without droplet-size control, a three-stage adjustment strategy reduced the drift ratio by 89.18%, shortened the 90% drift distance to 3.36 m, and delivered the highest drift-mitigation rate. To minimize drift while maintaining effective penetration, we propose using the very fine (VF) droplet size (82.4 μm) for the first two flight paths and increasing to 300–350 μm for the third. These findings demonstrate that dynamic droplet-size adjustment via pulse-width modulation can effectively reduce drift. External factors, such as wind and terrain, continue to be influential, underscoring the need for further research to refine and optimize drift-control strategies under diverse operating conditions.
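The recommended strategy can be expressed as a schedule from flight-path index to target droplet size, then to a nozzle speed through the fitted speed-size relation. In this hedged sketch, the inverse power-law form and its coefficients are assumptions standing in for the paper's fitted model; only the 82.4 μm and 300-350 μm targets come from the abstract:

```python
def target_droplet_um(path_index):
    """Three-stage schedule from the field recommendation: very fine (82.4 um)
    on the first two passes, 300-350 um on the third."""
    return 82.4 if path_index < 2 else 325.0

def nozzle_rpm(droplet_um, k=1.2e6, p=1.0):
    """Assumed inverse power law D = k / N**p between rotational speed N (rpm)
    and droplet size D (um); k and p are illustrative, not fitted values."""
    return (k / droplet_um) ** (1.0 / p)

for i in range(3):
    d = target_droplet_um(i)
    print(f"pass {i + 1}: target {d:.0f} um -> {nozzle_rpm(d):.0f} rpm")
```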
Citations: 0