
Computers and Electronics in Agriculture: Latest Publications

Semantic segmentation–based detection of exposed soil regions in paddy fields for a floating-type puddling and leveling operation
IF 8.9 CAS Tier 1 (Agricultural Sciences) Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date: 2026-01-30 DOI: 10.1016/j.compag.2026.111494
Zian Liang, Jun Zhou, Yongpeng Chen, Yinghua Zhang, Tamiru Tesfaye Gemechu, Lei Li, Huayu Zhou, Muhammad Aurangzaib
Integrated puddling–leveling operation is a critical step in paddy field preparation, typically conducted between plowing and rice transplanting. However, the accuracy of elevation measurements in existing automatic leveling technologies is often constrained by limited operating ranges or susceptibility to electromagnetic interference, resulting in inconsistent leveling performance. Because the water surface naturally reflects terrain undulations in paddy fields, this study proposes a semantic segmentation–based approach to detect exposed soil regions for guiding a floating-type puddling and leveling implement. To this end, a lightweight semantic segmentation model, PL_DeepLabV3+_0.8, was developed specifically for integrated puddling–leveling operation. The model combines a MobileNetV2_S backbone, a Low-Level Feature Fusion Module (LFM), and structured pruning. These components collectively enable the rapid and accurate detection of exposed soil in paddy fields under computationally constrained conditions. The PL_DeepLabV3+_0.8 model was successfully deployed in the control system of a floating-type implement, and its effectiveness was validated through field tests conducted at different operating speeds and modes. On a paddy field image dataset, PL_DeepLabV3+_0.8 achieved a mean Pixel Accuracy (mPA) of 92.23 ± 0.22%, a mean Intersection over Union (mIoU) of 84.18 ± 0.31%, and an inference speed of 7.73 frames per second (FPS), outperforming the original DeepLabV3+ model, which achieved 91.90%, 83.81%, and 0.88 FPS, respectively. In field tests at operating speeds of 1.1 m/s and 1.5 m/s, the surface flatness (standard deviation of elevation) in two paddy fields was improved from 3.61 cm and 4.07 cm to 2.11 cm and 2.42 cm, respectively. These results indicate that the deployed model not only satisfies the flatness requirement for rice transplanting (< 3 cm) but also delivers a productivity increase of 0.28 ha/h compared with conventional manual operation.
Overall, this study provides a useful reference for the development of intelligent puddling and leveling technologies in paddy field preparation.
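The mPA and mIoU figures quoted above are standard segmentation scores that can be reproduced from a per-class confusion matrix. A minimal sketch (the two-class layout and pixel counts are illustrative, not the paper's data):

```python
import numpy as np

def segmentation_metrics(conf):
    """Mean pixel accuracy and mean IoU from a KxK confusion matrix.

    conf[i, j] = number of pixels with ground-truth class i
    predicted as class j.
    """
    conf = np.asarray(conf, dtype=float)
    tp = np.diag(conf)
    per_class_pa = tp / conf.sum(axis=1)                 # per-class recall
    union = conf.sum(axis=1) + conf.sum(axis=0) - tp     # |pred ∪ gt| per class
    per_class_iou = tp / union
    return per_class_pa.mean(), per_class_iou.mean()

# toy 2-class example: class 0 = water surface, class 1 = exposed soil
conf = np.array([[90, 10],
                 [5, 95]])
mpa, miou = segmentation_metrics(conf)
```

The same computation extends to any number of classes; only the confusion-matrix size changes.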
Citations: 0
Design and experiment of a film-breaking robot for sweet potato horizontal transplantation with plastic mulch
IF 8.9 CAS Tier 1 (Agricultural Sciences) Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date: 2026-01-30 DOI: 10.1016/j.compag.2026.111483
Wanzhi Zhang, Yangqian Zhang, Xuyang Wang, Yulu Sun, Hongjuan Liu, Zhigang Li
Plastic film mulch cultivation technology is a crucial agronomic measure for enhancing early-spring sweet potato yields. However, prolonged film coverage can scorch seedlings beneath the mulch, adversely affecting their normal growth and subsequent yield. Therefore, timely film-breaking to guide seedling emergence is essential. Currently, manual film-breaking is the primary method. To address the high labor intensity associated with manual operations, this paper designs a sweet potato seedling film-breaking robot based on deep learning and a Delta parallel robot. First, a calibration method was proposed for scenarios where the camera field of view is separated from the Delta parallel robot’s workspace, which avoids missed detection issues caused by manipulator occlusion. Subsequently, images of sweet potato seedlings captured under various environmental conditions were selected as the data basis for the deep learning model, and the BW-YOLO sweet potato seedling detection model was constructed. This model replaces the CIoU loss function with the Wise-IoU v3 loss function and incorporates a BiFPN module into the neck network. Testing results show that the model achieved a mean Average Precision (mAP) of 96.8% and a detection speed of 76.34 FPS, demonstrating significant improvements in both detection accuracy and speed. Finally, the detection model was deployed on the sweet potato seedling film-breaking robot, and field trials were conducted. The model achieved an average recognition success rate of 90.56%, the film-breaking robot attained a film-breaking qualification rate of 84.56%, and the seedling emergence rate reached 83.74%. The proposed sweet potato film-breaking robot for flat cultivation enables unmanned operation during seedling emergence, providing a valuable reference for the design of intelligent agricultural equipment.
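The abstract does not detail the calibration method for mapping detections from a camera view into the Delta robot's separated workspace; one common approach is to fit an affine pixel-to-robot transform over matched reference points by least squares. A hedged sketch (all coordinates and the 0.5 mm/px scale are hypothetical, not the paper's values):

```python
import numpy as np

def fit_affine(pixels, robot_xy):
    """Fit [x_r, y_r] = A @ [u, v] + b from matched reference points.

    pixels:   (N, 2) pixel coordinates of calibration markers
    robot_xy: (N, 2) corresponding robot-frame coordinates (mm)
    """
    pixels = np.asarray(pixels, float)
    robot_xy = np.asarray(robot_xy, float)
    # homogeneous design matrix rows [u, v, 1]
    X = np.hstack([pixels, np.ones((len(pixels), 1))])
    # solve X @ P = robot_xy in the least-squares sense; P is (3, 2)
    P, *_ = np.linalg.lstsq(X, robot_xy, rcond=None)
    return P

def pixel_to_robot(P, uv):
    """Map one pixel coordinate into the robot frame."""
    uv = np.asarray(uv, float)
    return np.hstack([uv, [1.0]]) @ P

# synthetic ground truth: 0.5 mm/px scale plus a (100, 200) mm offset
px = np.array([[0, 0], [100, 0], [0, 100], [100, 100]])
xy = px * 0.5 + np.array([100.0, 200.0])
P = fit_affine(px, xy)
target = pixel_to_robot(P, [50, 50])   # seedling detected at pixel (50, 50)
```

With four or more non-collinear markers, the least-squares fit also absorbs small scale and rotation errors between the camera and the robot base.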
Citations: 0
Development of a new Single-Tree-Row-Tracking robot navigation for intra-row weeding operations in orchards using a Machine stereo vision system and LiDAR
IF 8.9 CAS Tier 1 (Agricultural Sciences) Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date: 2026-01-30 DOI: 10.1016/j.compag.2026.111491
Rizky Mulya Sampurno, Zifu Liu, Victor Massaki Nakaguchi, Ailian Jiang, Tofael Ahamed
Driven by the need for efficient intra-row weed management in orchards, a new robotic system is designed and proposed to operate in narrow spaces and under low-hanging branches with minimal soil compaction and no reliance on the Global Navigation Satellite System (GNSS). To enable the tree-row following required for intra-row weeding, we introduced a vision-based framework that combined a 3D camera and a lightweight YOLOv8 instance segmentation model to detect tree trunks and extracted the navigation path from a single tree row through the frontal view of the robot. The trajectory of the robot was offset by 0.8 m from the tree row, enabling a new Light Detection and Ranging (LiDAR)-triggered side-shift mechanism to target uncut weeds between trees within rows. An experimental evaluation in simulated environments demonstrated stable navigation performance, with a 0.329 m RMSE. Furthermore, the side-shift actuation mechanism for weeding achieved 84.04% accuracy at a lower speed (0.5 m/s) and 76.85% accuracy at a faster speed (0.8 m/s); these results were due in part to the processing latency in real-time LiDAR point cloud analysis. These findings highlight the importance of optimizing computational efficiency and actuation timing for better field performance. Finally, the developed robotic system effectively integrated 3D vision, deep learning, and LiDAR-triggered actuation to perform autonomous intra-row weeding, demonstrating strong potential to address operational efficiency for intra-row weed management in orchards.
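The path-extraction step can be illustrated with a simple geometric stand-in: fit a line through detected trunk positions projected onto the ground plane, then shift it laterally by the 0.8 m offset used in the paper. A sketch (trunk coordinates are synthetic; the actual pipeline works from YOLOv8 masks and stereo depth):

```python
import numpy as np

def offset_path(trunks, offset=0.8):
    """Fit a straight line through trunk ground-plane positions (x, y)
    and return two points on a parallel path shifted laterally by
    `offset` metres toward the line's left-hand side."""
    trunks = np.asarray(trunks, float)
    # fit y = m*x + c through the trunk centroids
    m, c = np.polyfit(trunks[:, 0], trunks[:, 1], 1)
    # unit normal to the fitted line
    n = np.array([-m, 1.0]) / np.hypot(m, 1.0)
    x0, x1 = trunks[:, 0].min(), trunks[:, 0].max()
    p0 = np.array([x0, m * x0 + c]) + offset * n
    p1 = np.array([x1, m * x1 + c]) + offset * n
    return p0, p1

# trunks detected along a straight row at y = 0 (metres)
trunks = [[0.0, 0.0], [2.0, 0.0], [4.0, 0.0], [6.0, 0.0]]
p0, p1 = offset_path(trunks)
```

A least-squares line fit also damps per-trunk detection noise, which matters when individual trunk centroids jitter between frames.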
Citations: 0
Deep learning driven edge inference for pest detection in potato crops using the AgriScout robot
IF 8.9 CAS Tier 1 (Agricultural Sciences) Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date: 2026-01-29 DOI: 10.1016/j.compag.2026.111492
Yuvraj Singh Gill, Hassan Afzaal, Charanpreet Singh, Gurjit S. Randhawa, Kritikiran Angrish, Navpreet Jaura, Zarnab Qamar, Aitazaz A. Farooque
Early field-scale surveillance of Colorado potato beetle remains a persistent bottleneck for sustainable potato production because conventional scouting is labor-intensive and provides limited spatial resolution for timely intervention. Here, we present AgriScout, a battery-powered autonomous scouting robot equipped with RGB imaging, controlled lighting, and RTK-GPS geotagging for continuous row-to-row data collection. Using AgriScout, we curated a field dataset of 832 georeferenced images and manually annotated adult beetles with tight bounding boxes to support tiny-object detection under real canopy conditions. We benchmarked six YOLO object detectors (YOLOv5s, YOLOv8s, YOLOv9s, YOLOv10s, YOLOv11s, and YOLOv12s) using transfer learning, high-resolution inputs (1280 × 1280), and an augmentation strategy tailored to small targets (including mosaic, scaling, and translation). To address training variability on the modest dataset, models were evaluated across multiple random seeds (7, 42, 123, 999, and 2024) and compared using precision, recall, mAP, F1, confidence behavior, and statistical tests of between-model differences. Across runs, YOLOv11s provided the most reliable overall balance for deployment, exhibiting strong precision and robust localization performance. For edge deployment, inference throughput was measured on an NVIDIA Jetson Orin Nano across multiple export formats; TensorRT consistently delivered the highest FPS, reaching 46.5 FPS (YOLOv5s) and exceeding 40 FPS for several variants, confirming real-time feasibility under FP32 inference. Finally, YOLOv11s detections were fused with RTK-GPS coordinates to generate centimeter-level infestation maps that visualize spatial clustering of beetle activity and support hotspot-driven, targeted management. 
Collectively, this work demonstrates an end-to-end, robot-to-map pipeline for beetle monitoring and provides a reproducible benchmark of accuracy, stability, and edge deployability for YOLO-based pest detection in commercial potato systems.
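Reporting detector performance across the five training seeds reduces to simple aggregate statistics over the per-seed scores. A toy illustration (the mAP values below are invented, not the paper's results; only the seed list comes from the abstract):

```python
import statistics

# hypothetical mAP@0.5 scores from five training runs with the
# seeds listed in the abstract (7, 42, 123, 999, 2024)
map50_by_seed = {7: 0.912, 42: 0.905, 123: 0.918, 999: 0.901, 2024: 0.909}

scores = list(map50_by_seed.values())
mean_map = statistics.mean(scores)
std_map = statistics.stdev(scores)   # sample standard deviation
print(f"mAP50 = {mean_map:.3f} ± {std_map:.3f} over {len(scores)} seeds")
```

Reporting mean ± standard deviation over seeds, as done here, separates genuine between-model differences from run-to-run training noise.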
Citations: 0
Multi-dimensional behavioral signature analysis of laying hens under heat stress: development of a behavior-based level assessment model
IF 8.9 CAS Tier 1 (Agricultural Sciences) Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date: 2026-01-29 DOI: 10.1016/j.compag.2026.111433
Zixuan Zhou, Lihua Li, Hao Xue, Yuchen Jia, Yao Yu, Zongkui Xie, Yuhan Gu
Accurate heat stress assessment is pivotal for early prevention and safeguarding poultry welfare. However, current protocols relying on the Temperature-Humidity Index (THI) often fail to capture the true physiological thermal load of laying hens. Conversely, animal behavior serves as a direct phenotypic response to environmental stressors, offering unique insights into adaptive mechanisms. Consequently, this study proposes a Behavior-based Heat Stress Assessment (BHSA) method driven by behavioral feedback. To achieve precise, non-invasive detection and automated feature extraction of individual heat stress behaviors, we developed YOLO-SPS, an enhanced architecture based on YOLOv12. By integrating SPD-Conv modules, A2C2F-PPA structures, and a Slide Loss function, the model effectively mitigates missed and false detections associated with fine-grained features and significant postural variations. We established a behavior-environment association model under controlled conditions (20-38 °C at 60% and 80% RH), identifying six heat stress-associated behaviors quantified by skewness and occurrence intensity. K-means clustering categorized these data into five distinct patterns, which were biologically validated by significant differences in corticosterone (CORT) and Heat Shock Protein 70 (HSP70) levels across clusters (P < 0.05). Accordingly, a five-level BHSA model was established, stratifying stress into Normal, Alert, Impact, Harm, and Disaster levels. Results demonstrated that YOLO-SPS improved detection accuracy by 3.8% and inference speed by 22.1% compared to the baseline. In comparison to the traditional THI methods, the BHSA triggered Alert and Harm warnings at temperatures 2 ± 1 °C lower, enabling earlier detection. Furthermore, under extreme heat, the BHSA successfully differentiated between Harm and Disaster states. 
This study realizes a paradigm shift in heat stress assessment from “environment-driven” to “animal behavior-driven,” providing robust technical support for precision livestock management and early intervention strategies.
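The clustering step pairs each observation window's behavioral features (e.g., skewness and occurrence intensity) with Lloyd's k-means. A self-contained sketch with a deterministic farthest-point initialisation and a toy two-cluster dataset (the paper clusters real data into five patterns):

```python
import numpy as np

def kmeans(X, k, n_iter=100):
    """Plain Lloyd's k-means; returns (centroids, labels)."""
    X = np.asarray(X, float)
    # deterministic farthest-point initialisation
    centroids = [X[0]]
    for _ in range(k - 1):
        d = np.min([np.linalg.norm(X - c, axis=1) for c in centroids], axis=0)
        centroids.append(X[d.argmax()])
    centroids = np.array(centroids)
    for _ in range(n_iter):
        # assign each sample to its nearest centroid
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        new = np.array([X[labels == j].mean(axis=0) for j in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    return centroids, labels

# toy feature vectors: (skewness, occurrence intensity) per window
X = np.array([[0.10, 0.20], [0.20, 0.10], [0.15, 0.15],   # low-stress pattern
              [2.00, 3.00], [2.10, 2.90], [1.90, 3.10]])  # high-stress pattern
centroids, labels = kmeans(X, k=2)
```

In the paper's setting each resulting cluster is then validated biologically (CORT and HSP70 differences) before being mapped to a stress level.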
Citations: 0
LMTRNet: Lightweight Multi-scale Temperature-Regulated Network For real-time detection of multiple species pests
IF 8.9 CAS Tier 1 (Agricultural Sciences) Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date: 2026-01-29 DOI: 10.1016/j.compag.2026.111487
Taiyu Xu
Pests cause significant losses to global agricultural production. However, their concealment and mobility pose considerable challenges for real-time pest detection. In this paper, we propose a Lightweight Multi-scale Temperature-Regulated Network (LMTRNet) for real-time multi-pest detection. LMTRNet consists of three key components: a lightweight feature extraction network, a multi-scale fusion network (DMFN), and an adaptive temperature-modulated head (AITMH). To improve feature learning efficiency, we introduce the Adaptive Feature Sparsity Block (AFSBlock) and the Spatial-Channel Decoupled Downsampling (SCDown) module in the lightweight feature extraction network, reducing computational cost while preserving accuracy. The DMFN employs skip connections for enhanced multi-scale feature integration, while AITMH leverages a temperature-aware fusion strategy to refine feature representation. Additionally, LMTRNet utilizes an anchor-free detection head with a dynamic inner loss (DILoss) function to improve localization accuracy, particularly for small pests in cluttered environments. To address data scarcity, we propose a Synthetic Object Projection Augmentation method, enriching training diversity by projecting multiple species of pest onto complex backgrounds. Experiments are conducted on a proprietary dataset and the Pest24 dataset to evaluate LMTRNet’s performance. On the proprietary dataset, LMTRNet-l, with only 23.03M parameters, achieved a precision of 96.02%, mAP50 of 95.7%, and mAP50-95 of 63.33%. On the Pest24 dataset, it attained a precision of 77.49%, mAP50 of 70.1%, and mAP50-95 of 45.71%. These results demonstrate that LMTRNet achieves state-of-the-art accuracy and real-time performance, making it a robust solution for practical pest monitoring.
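The exact formulation of the adaptive temperature-modulated head is not given in the abstract, but temperature-regulated fusion typically builds on temperature-scaled softmax weighting, where the temperature controls how sharply one branch dominates. A generic sketch of that ingredient (not the paper's AITMH):

```python
import numpy as np

def temperature_softmax(logits, tau=1.0):
    """Softmax with temperature tau: tau > 1 flattens the weight
    distribution, tau < 1 sharpens it toward the largest logit."""
    z = np.asarray(logits, float) / tau
    z -= z.max()                  # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

# fusion weights over three feature branches
logits = [2.0, 1.0, 0.5]
sharp = temperature_softmax(logits, tau=0.5)   # near-winner-take-all
flat = temperature_softmax(logits, tau=5.0)    # closer to uniform averaging
```

An adaptive scheme would predict tau (or the logits) per input, letting the network interpolate between averaging and selecting branches.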
引用次数: 0
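The Synthetic Object Projection Augmentation mentioned above enriches training data by projecting pest crops onto complex backgrounds. As a rough illustration of the general idea only (not the paper's implementation), a minimal copy-paste augmentation with NumPy could look like the sketch below; the `paste_pest` helper and its mask-based compositing are assumptions made for this example:

```python
import numpy as np

def paste_pest(background: np.ndarray, pest: np.ndarray, mask: np.ndarray,
               rng: np.random.Generator):
    """Paste a pest crop onto a background at a random position.

    background: HxWx3 uint8 image; pest: hxwx3 uint8 crop; mask: hxw boolean
    foreground mask for the crop. Returns the augmented image and the pasted
    bounding box (x, y, w, h), which can be used to generate a detection label.
    """
    H, W = background.shape[:2]
    h, w = pest.shape[:2]
    x = int(rng.integers(0, W - w))   # random top-left corner, fully inside
    y = int(rng.integers(0, H - h))
    out = background.copy()
    region = out[y:y + h, x:x + w]
    region[mask] = pest[mask]          # copy only foreground (pest) pixels
    return out, (x, y, w, h)

# Toy usage: paste a white 16x16 "pest" onto a black background.
rng = np.random.default_rng(0)
bg = np.zeros((240, 320, 3), dtype=np.uint8)
pest = np.full((16, 16, 3), 255, dtype=np.uint8)
mask = np.ones((16, 16), dtype=bool)
img, box = paste_pest(bg, pest, mask, rng)
```

In practice such pipelines also randomize scale, rotation, and blending at the paste boundary; the sketch keeps only the core projection step.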
Optimization of spraying quality and drift risk in unmanned aerial spraying systems (UASS) based on Multi-Gradient droplet size control
IF 8.9 · CAS Tier 1 (Agricultural and Forestry Sciences) · Q1 AGRICULTURE, MULTIDISCIPLINARY · Pub Date: 2026-01-29 · DOI: 10.1016/j.compag.2026.111481
Pengchao Chen , Jiapei Wu , Zhihao Bian , Jean Paul Douzals , Yingdong Qin , Hanbing Liu , Juan Wang , Yubin Lan
Unmanned aerial spraying systems (UASS) are widely used in agriculture; however, spray drift remains a significant barrier to their broader adoption. Conventional drift-control measures—such as nozzle optimization and adjuvants—primarily act by altering droplet size. This study introduces a dynamic droplet-size control approach for UASS equipped with centrifugal nozzles to balance spray quality and drift risk. We established the relationship between nozzle rotational speed and droplet size and developed an embedded, variable droplet-size UASS. The system utilizes differential RTK to acquire real-time UAV position data, enabling dynamic adjustment of droplet size during operation. Field trials demonstrated the system’s stability and reliability: the UASS responded promptly to ground commands for droplet size changes and accurately logged the corresponding adjustment locations. Data analysis indicated that increasing droplet size markedly reduces drift volume and droplet density in the downwind drift zone. Relative to a baseline without droplet-size control, a three-stage adjustment strategy reduced the drift ratio by 89.18%, shortened the 90% drift distance to 3.36 m, and delivered the highest drift-mitigation rate. To minimize drift while maintaining effective penetration, we propose using the very fine (VF) droplet size (82.4 μm) for the first two flight paths and increasing to 300–350 μm for the third. These findings demonstrate that dynamic droplet-size adjustment via pulse-width modulation can effectively reduce drift. External factors, such as wind and terrain, continue to be influential, underscoring the need for further research to refine and optimize drift-control strategies under diverse operating conditions.
Computers and Electronics in Agriculture, Volume 244, Article 111481.
Citations: 0
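The staged strategy above (very fine droplets for the first two flight paths, 300–350 μm for the third) amounts to a simple per-path lookup. The sketch below illustrates that lookup; the 325 μm value is an assumed midpoint of the reported band, and `nozzle_speed_rpm` is a purely hypothetical inverse calibration, since the study fits its own empirical speed-to-size relationship for the centrifugal nozzles:

```python
def target_droplet_size_um(path_index: int) -> float:
    """Target droplet diameter (μm) for a given flight path, following the
    staged strategy described in the abstract: very fine (82.4 μm) for the
    first two paths, then coarser. 325 μm is an assumed midpoint of the
    reported 300-350 μm band."""
    if path_index < 0:
        raise ValueError("path_index must be non-negative")
    return 82.4 if path_index < 2 else 325.0

def nozzle_speed_rpm(droplet_um: float, k: float = 1.2e6) -> float:
    """Hypothetical inverse calibration for illustration only: centrifugal
    nozzles produce smaller droplets at higher rotational speeds, modeled
    here as rpm = k / d. The paper derives its own empirical curve."""
    return k / droplet_um
```

Under this toy model, requesting the very fine size for path 0 demands a much higher nozzle speed than the coarse size for path 2, which matches the qualitative behavior of centrifugal atomizers.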
Intelligent management of crop diseases and pests in multiscale and multimodal complex scenarios: Technologies, applications, and prospects
IF 8.9 · CAS Tier 1 (Agricultural and Forestry Sciences) · Q1 AGRICULTURE, MULTIDISCIPLINARY · Pub Date: 2026-01-28 · DOI: 10.1016/j.compag.2026.111443
Chang Xu , Lei Zhao , Haojie Wen , Yiding Zhang , Lipo Wang , Lingxian Zhang
Efficient management and precise monitoring are essential for the sustainable control of crop diseases and pests. Traditional unimodal methods exhibit reduced reliability due to data gaps and environmental fluctuations. Multimodal artificial intelligence (AI) offers a promising alternative by integrating complementary data sources and enhancing robustness and adaptability. However, a comprehensive synthesis connecting multimodal AI with multi-scale disease and pest management is still lacking. Based on 950 publications from the past decade reflecting a 31.7% annual growth rate over the past five years, this review examines the evolution of AI-driven research and compares unimodal and multimodal approaches by summarizing major data modalities, fusion strategies, and modeling techniques. Deep learning emerges as the most widely used class of AI methods, and quantitative evidence indicates that multimodal systems achieve approximately 3–48.9% higher diagnostic accuracy than unimodal models. Evidence from 27 studies demonstrates the effectiveness of multimodal fusion across imaging, spectral, environmental, and sensor-based datasets. Building upon these findings, we propose a novel three-level management framework comprising point-level diagnosis, area-scale monitoring, and spatiotemporal forecasting, clarifying how multimodal AI strengthens each task. We further highlight the role of Plant Electronic Medical Records (PEMRs) and outline a conceptual virtual plant clinic to support continuous, data-driven crop health services. Finally, this review identifies key directions including advanced fusion strategies, lightweight and interpretable models, digital twin integration, and scalable decision-support systems, which are essential for intelligent and sustainable crop disease and pest management.
Computers and Electronics in Agriculture, Volume 244, Article 111443.
Citations: 0
Monitoring of Anthracnose in litchi orchards in complex environments
IF 8.9 · CAS Tier 1 (Agricultural and Forestry Sciences) · Q1 AGRICULTURE, MULTIDISCIPLINARY · Pub Date: 2026-01-28 · DOI: 10.1016/j.compag.2026.111478
Jianfeng Zhang, Mei Liu, Song Yang, Yukang Chen, Jiasheng Chen
Litchi anthracnose severely impacts fruit yield and quality. Unmanned Aerial Vehicle (UAV)-based detection of this disease in complex orchard environments faces significant challenges, including occlusion, weak features of small lesions, high false-positive rates, and difficulties in achieving real-time monitoring. To address these issues, we propose ACF-DETR, a lightweight end-to-end detection model designed for UAV deployment. Based on a Transformer framework, it integrates a lightweight backbone network (AgileNet) utilizing differential convolution, an efficient hybrid encoder with a content-guided cross-layer fusion mechanism (CCRNet), and an optimized Focaler-MPDIoU loss function. This design enhances feature representation for small targets and mitigates occlusion and class imbalance. Specifically, evaluations on our self-constructed Litchi Field Anthracnose Expert (LFAE) dataset show that ACF-DETR outperforms mainstream YOLO models by 0.4%–5.1% and surpasses DETR-based models by 1.5%–13.7% in mAP50, while reducing computational complexity (GFLOPs) by 26.21%–56.32%. Furthermore, it achieves real-time inference on an NVIDIA Jetson Orin Nano edge device. This study provides an efficient and practical solution for real-time anthracnose monitoring in large-scale litchi orchards.
Computers and Electronics in Agriculture, Volume 244, Article 111478.
Citations: 0
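mAP50, the headline metric in the detection papers above, counts a prediction as correct when its Intersection over Union (IoU) with a ground-truth box is at least 0.5. A minimal IoU helper for axis-aligned boxes, assuming the common (x1, y1, x2, y2) corner format:

```python
def iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2) corners."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])   # intersection corners
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    iw, ih = max(0.0, ix2 - ix1), max(0.0, iy2 - iy1)
    inter = iw * ih
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# Two boxes sharing half their width overlap with IoU = 50 / 150 = 1/3,
# so they would NOT count as a match at the 0.5 threshold used by mAP50.
score = iou((0, 0, 10, 10), (5, 0, 15, 10))
```

Full mAP evaluation additionally sorts predictions by confidence, greedily matches them to ground truth, and averages precision over recall levels and classes; the IoU test above is the gating step.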
Study on diagnosis of subclinical mastitis and identification of pathogens using electronic nose detection of cow's milk
IF 8.9 · CAS Tier 1 (Agricultural and Forestry Sciences) · Q1 AGRICULTURE, MULTIDISCIPLINARY · Pub Date: 2026-01-28 · DOI: 10.1016/j.compag.2026.111488
Fang Wang , Yujun Zhu , Enqiu Zhang , Rong Zhang , Hongwei Duan , Shuai Yang , Xianghong Du , Xingxu Zhao , Xiaofei Ma , Lihong Zhang , Junjie Hu
Subclinical mastitis causes a decrease in milk production and quality, as well as changes in milk composition, thereby leading to substantial economic losses to the dairy industry. Therefore, establishing an early and rapid diagnostic method for subclinical mastitis is crucial for preventing and reducing the incidence of mastitis in dairy cows. Milk samples were randomly collected from 20 healthy cows, 20 cows with subclinical mastitis, and 20 cows with clinical mastitis, ensuring equal representation from each group. Response signals from the electronic nose (e-Nose) detection system were analyzed using multivariate classifiers. Next, milk samples from subclinical mastitis cows infected with a single bacterium (Staphylococcus aureus, Escherichia coli, or Streptococcus agalactiae) were identified through bacteriological analysis and specific Polymerase Chain Reaction (PCR). An e-Nose detection system combined with Gas Chromatography–Mass Spectrometry (GC–MS) was then used to analyze the three pathogenic bacteria in the milk samples of subclinical mastitis cows.
The results indicate that the e-Nose was effective in diagnosing subclinical mastitis from milk samples: the Random Forest (RF) algorithm achieved 91.7% accuracy, 93.3% sensitivity, and 97.0% specificity. For identification of the three pathogenic bacteria with the e-Nose, the Support Vector Machine (SVM) algorithm achieved an accuracy, specificity, and sensitivity of 83.3%, 88.9%, and 91.7%, respectively. Finally, the GC–MS results revealed that the three pathogens significantly affected the metabolite profile of subclinical mastitis milk samples. The study results indicate that the established e-Nose system can effectively capture relevant diagnostic information about subclinical mastitis and different pathogens.
Computers and Electronics in Agriculture, Volume 244, Article 111488.
Citations: 0
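The accuracy, sensitivity, and specificity figures reported above follow the standard confusion-matrix definitions. A minimal sketch of those definitions for binary labels (not tied to the paper's RF or SVM pipeline, which operates on multichannel e-Nose response signals):

```python
def binary_metrics(y_true, y_pred):
    """Accuracy, sensitivity (true-positive rate) and specificity
    (true-negative rate) for binary labels, where 1 marks the positive
    (e.g. mastitis) class and 0 the negative (healthy) class."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    sensitivity = tp / (tp + fn) if tp + fn else 0.0
    specificity = tn / (tn + fp) if tn + fp else 0.0
    return accuracy, sensitivity, specificity
```

Reporting sensitivity and specificity alongside accuracy matters here because the diagnostic classes are of unequal clinical cost: a missed subclinical case (low sensitivity) is typically more expensive than a false alarm.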