Latest articles in Smart agricultural technology
PHDT-DETR: A lightweight end-to-end detector for on-device truss tomato detection in greenhouses
IF 5.7 Q1 AGRICULTURAL ENGINEERING Pub Date: 2026-03-01 Epub Date: 2025-12-22 DOI: 10.1016/j.atech.2025.101742
Nengwei Yang , Peng Ji , Sen Lin , Ya Xiong
Visual perception systems are essential for harvesting robots in smart agriculture, but deployment is often limited by computational constraints. For real-time truss tomato detection in complex greenhouses, existing models rarely deliver high accuracy, low latency, and lightweight design on resource-constrained edge devices, especially under variable illumination. We introduce PHDT-DETR, a lightweight, end-to-end detector optimized for edge deployment. Building on the RT-DETR baseline, PHDT-DETR integrates a CSP-PMSFA backbone for efficient multi-scale feature extraction, a CA-HSFPN neck that enhances feature fusion via Coordinate Attention, a DRBC3 block that enhances multi-scale feature representation through multi-branch re-parameterized convolutions while trimming redundant computation, a TS-IFI encoder that reduces attention complexity, and a joint NWD+Shape-IoU regression loss that provides overlap-independent, aspect-ratio–aware supervision for slender, irregular tomato skewers. We further apply Layer-Adaptive Magnitude-based Pruning (LAMP) for aggressive compression. Experiments show that the pruned model achieves 90.8% mAP50 while reducing the parameter count to 6.1 M and the computational cost to 17.4 GFLOPs. Deployed on an NVIDIA Jetson Orin Nano Super and compiled with TensorRT, the model runs at 66.0 FPS with a compact 15.5 MB footprint, outperforming mainstream YOLO models. These results demonstrate the feasibility of deploying high-precision, real-time, end-to-end object detectors on resource-constrained edge devices for robotic harvesting in greenhouses.
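The LAMP compression step scores each weight by its squared magnitude normalized by the tail sum of squared magnitudes of all weights in the same layer that are at least as large, then prunes the globally lowest-scoring weights. A minimal NumPy sketch of that per-layer score (function name and shapes are illustrative, not the authors' code):

```python
import numpy as np

def lamp_scores(weights: np.ndarray) -> np.ndarray:
    """Layer-Adaptive Magnitude-based Pruning (LAMP) scores for one layer.

    Each weight's score is its squared magnitude divided by the sum of
    squared magnitudes of all weights in the layer with magnitude at
    least as large; global pruning removes the lowest-scoring weights.
    """
    flat = weights.ravel()
    order = np.argsort(flat ** 2)        # ascending by squared magnitude
    sq = flat[order] ** 2
    tail = np.cumsum(sq[::-1])[::-1]     # tail[i] = sum(sq[i:])
    scores = np.empty_like(flat)
    scores[order] = sq / tail
    return scores.reshape(weights.shape)
```

Because each layer's scores are normalized by its own tail sum, layers with very different weight scales become comparable under a single global threshold, which is what makes the aggressive cross-layer compression possible.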
Citations: 0
EdgeSoybeanNet: A framework for real-time, high-accuracy field soybean pod counting
IF 5.7 Q1 AGRICULTURAL ENGINEERING Pub Date: 2026-03-01 Epub Date: 2025-12-26 DOI: 10.1016/j.atech.2025.101750
Johnbosco Nnamso , Francia Ravelombola , Feng Lin , Chao Lu
Accurate estimation of field soybean pods plays a critical role in precision agriculture. However, conventional methods face significant limitations, including high field variability, visually complex backgrounds, and the computational constraints of deploying deep learning models in rural edge environments. To address these challenges, we present EdgeSoybeanNet, a high-accuracy, edge-deployable AI framework for near real-time soybean pod counting. The proposed framework integrates a customized UNet-Lite segmentation network with an adaptive thresholding strategy. The computation process begins with region-of-interest extraction from UAV imagery, followed by segmentation and pod detection using adaptive thresholding. The trained AI models are then quantized and exported to ONNX and deployed with ONNX Runtime, TensorFlow Lite (TFLite), or TensorRT on edge devices, eliminating the need for cloud connectivity and enabling near real-time inference in the soybean field. To the best of our knowledge, this is the first study to incorporate adaptive threshold learning into a UNet-Lite segmentation for agricultural applications. The experimental results show a counting accuracy of 89.57% with an inference time of 0.66 s on a Raspberry Pi 5 at 300 × 300 input UAV images, and up to 90.43% counting accuracy at 560 × 560 input. These results demonstrate the feasibility and effectiveness of this approach for resource-constrained precision farming. Compared with the state-of-the-art SoybeanNet-S model, our approach improves counting accuracy by 5.07% and reduces the parameter count approximately 14-fold, from 49.6 million to 3.57 million.
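The counting stage of such a pipeline amounts to thresholding the segmentation output and counting connected regions. The sketch below is a hypothetical illustration of that idea in pure NumPy/Python; the adaptive rule shown (shifting the threshold with the mean of the probability map) stands in for the paper's learned threshold, which is not reproduced here:

```python
import numpy as np

def count_pods(prob_map: np.ndarray, base_thresh: float = 0.5, k: float = 0.5) -> int:
    """Count connected foreground regions in a segmentation probability map.

    The threshold adapts to the map's mean activation (illustrative rule,
    not the paper's learned threshold), then 4-connected components are
    counted via iterative flood fill.
    """
    t = float(np.clip(base_thresh + k * (prob_map.mean() - 0.5), 0.05, 0.95))
    mask = prob_map > t
    seen = np.zeros_like(mask, dtype=bool)
    h, w = mask.shape
    count = 0
    for i in range(h):
        for j in range(w):
            if mask[i, j] and not seen[i, j]:
                count += 1                    # new component found
                stack = [(i, j)]
                while stack:                  # flood-fill this component
                    y, x = stack.pop()
                    if 0 <= y < h and 0 <= x < w and mask[y, x] and not seen[y, x]:
                        seen[y, x] = True
                        stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
    return count
```

In practice a library routine such as `scipy.ndimage.label` would replace the hand-rolled flood fill; it is written out here only to keep the sketch self-contained.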
Citations: 0
Double deep Q-network for intelligent control and energy efficiency optimization of zonal ventilation in laying-hen houses
IF 5.7 Q1 AGRICULTURAL ENGINEERING Pub Date: 2026-03-01 Epub Date: 2025-12-24 DOI: 10.1016/j.atech.2025.101753
Changzeng Hu , Lihua Li , Limin Huo , Yuchen Jia , Zongkui Xie , Yao Yu
Precise environmental control in laying hen houses is essential for animal welfare and production efficiency. Traditional ventilation strategies based on fixed temperature thresholds cause significant environmental fluctuations and high energy consumption due to frequent fan cycling. To address this, we propose a ventilation control strategy utilizing a Double Deep Q-Network (Double DQN) reinforcement learning algorithm. The system partitions the hen house into four equal-volume zones, each equipped with a positive-pressure fan unit. These units cooperate with a central negative-pressure fan set for precise temperature and humidity regulation. The strategy employs a composite state space integrating real-time environmental parameters (temperature, humidity) and fan operation status. A multi-dimensional action space defines the on/off combinations for the 16 commands governing the four positive-pressure fan units. A dual-objective reward function incorporates both environmental parameter deviation from setpoints and penalties for fan switching. Experimental results demonstrate that the Double DQN strategy significantly reduces the standard deviation of temperature and humidity across all zones compared to traditional threshold control, achieving closer proximity to the target setpoint (26 °C, 70 %). Furthermore, it reduces the daily energy consumption of the positive-pressure fan units by 10.35 % (103.63 kWh total). This strategy markedly enhances environmental control precision and stability while conserving energy, offering a novel intelligent solution for sustainable facility livestock environmental management.
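Double DQN's defining trick is decoupling action selection from action evaluation: the online network picks the greedy next action and the separate target network scores it, which reduces the Q-value overestimation of vanilla DQN. A minimal sketch of that target computation (array names are illustrative):

```python
import numpy as np

def double_dqn_target(q_online_next: np.ndarray,
                      q_target_next: np.ndarray,
                      reward: np.ndarray,
                      gamma: float,
                      done: np.ndarray) -> np.ndarray:
    """Double DQN bootstrap target for a batch of transitions.

    q_online_next / q_target_next: (batch, n_actions) Q-values at s'.
    The online net selects argmax_a Q(s', a); the target net evaluates it.
    """
    a_star = np.argmax(q_online_next, axis=1)
    q_eval = q_target_next[np.arange(len(a_star)), a_star]
    return reward + gamma * (1.0 - done) * q_eval
```

For the ventilation controller described above, each row of the Q arrays would correspond to one of the 16 on/off fan-command combinations, with the reward combining setpoint deviation and switching penalties.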
Citations: 0
Comparison of spray drift between spraying drone and conventional airblast sprayer in vineyards
IF 5.7 Q1 AGRICULTURAL ENGINEERING Pub Date: 2026-03-01 Epub Date: 2025-12-19 DOI: 10.1016/j.atech.2025.101741
Vasilis Psiroukis , Aikaterini Kasimati , Konstantinos Nychas , Evangelos Anastasiou , Athanasios Balafoutis , Spyros Fountas
Spraying Unmanned Aerial Vehicles (UAVs) are autonomous airborne platforms that primarily operate on predetermined flight plans and spraying missions. Although spraying UAVs are increasingly used for plant protection in vineyards, limited experimental evidence exists on how operational parameters influence spray drift under real field conditions, especially in European vineyards. This study quantified ground-level drift from a UAV sprayer in a commercial vineyard, evaluating two flight altitudes (2.0 m and 2.5 m AGL), two flight speeds (1.0 and 1.5 m/s), and three application strategies (inter-row with and without a buffer line, and over-row with a buffer line). An additional set of replicates using a conventional air-assisted sprayer was included as a reference for current vineyard practice. Spray drift was measured at multiple downwind distances using filter paper collectors and analysed with laboratory spectrophotometric methods following ISO 22866. Drift from UAV applications was highly concentrated near the field boundary and declined sharply within the first 5 m for all configurations. Flight altitude was the dominant driver: increasing AGL from 2.0 m to 2.5 m raised drift at the closest sampling point by 30–70 %. Higher flight speed (1.5 m/s) increased drift by 10–20 % compared with 1.0 m/s. Applying a buffer reduced drift by up to 60 %, particularly in inter-row spraying. Under optimal UAV settings (2.0 m AGL, 1.0 m/s, buffer applied), drift became negligible beyond 10 m downwind. Compared with the conventional air-assisted sprayer, UAV applications under optimised conditions reduced drift at the closest sampling distance by approximately 65–70 % and showed substantially lower drift beyond 10 m.
These findings demonstrate that appropriate UAV operational settings can significantly reduce off-target movement and offer a lower-drift alternative to conventional terrestrial sprayers in vineyard applications; such mitigation strategies should always be considered prior to designing a flight plan or spray mission.
Citations: 0
Image-based fine-scale analysis of insect movement patterns and environmental triggers using pitfall traps
IF 5.7 Q1 AGRICULTURAL ENGINEERING Pub Date: 2026-03-01 Epub Date: 2025-12-12 DOI: 10.1016/j.atech.2025.101712
Don Chathurika Amarathunga , Zorica Duric , Andrew Hulthen , Andy Wang , Mukti Chalise , Mubin Ul Haque , Hazel Parry
Effective pest management within Integrated Pest Management (IPM) frameworks requires detailed insights into insect population dynamics and environmental triggers of movement. Traditional pitfall trapping methods often lack the temporal resolution needed to capture fine-scale activity patterns of ground-dwelling insects. This study presents an image-based monitoring system that integrates pitfall traps with in-field time-lapse cameras and a deep learning pipeline to automate insect detection and counting. We focus on the Rutherglen bug (Nysius vinitor), a sporadic pest in Australian cropping systems, particularly in canola, that migrates to summer crops. A YOLOv8 object detection model was fine-tuned using a custom-labeled subset of over 150,000 time-series images captured at 5-minute intervals from ten camera-equipped pitfall traps deployed across a mixed cropping landscape. The full dataset, collected over an eight-week period, was used for downstream insect activity analysis. The model achieved a mean average precision (mAP) of 0.84 for detecting both adult and nymph stages. A post-processing pipeline, including image segmentation and temporal filtering, was developed to reduce false positives caused by non-target insects and debris, minimize duplicate detections of the same insect—a common limitation in pitfall trapping—and enable accurate insect counts over defined time intervals. The system revealed fine-scale movement patterns and environmental responses, including increased nymph activity during hot, dry conditions and synchronized migration at crop-pasture interfaces. Insect counts estimated from the system showed moderate to high correlation with manual weekly trap counts. 
This study demonstrates both the potential and the practical challenges of applying an image-based pitfall-trap monitoring framework for fine-scale insect activity analysis, providing a biologically meaningful case study that can guide the future development and adaptation of similar systems.
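The duplicate-suppression step in time-lapse trap imagery can be thought of as gating each detection against the previous frame: a detection close to one seen a frame earlier is assumed to be the same insect and is not re-counted. A simplified sketch of that idea (centroid-distance gating only; the paper's actual post-processing also includes image segmentation and debris filtering):

```python
import math

def count_new_insects(frames, max_dist: float = 20.0) -> int:
    """Count insects across time-lapse frames with simple temporal filtering.

    frames: list of frames, each a list of (x, y) detection centroids.
    A detection within max_dist pixels of any detection in the previous
    frame is treated as the same insect and skipped (hypothetical rule).
    """
    total, prev = 0, []
    for dets in frames:
        for (x, y) in dets:
            if not any(math.hypot(x - px, y - py) <= max_dist for px, py in prev):
                total += 1          # no nearby match last frame: new insect
        prev = dets
    return total
```

With 5-minute capture intervals, the distance gate trades off missed fast movers against double-counting loiterers, so `max_dist` would need tuning against the manual weekly counts used for validation.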
Citations: 0
Real-time sunflower detection using semi-supervised and self-supervised deep learning for precision agriculture
IF 5.7 Q1 AGRICULTURAL ENGINEERING Pub Date: 2026-03-01 Epub Date: 2025-12-07 DOI: 10.1016/j.atech.2025.101684
Fathhur Rahaman Sams, Sanjana Kazi Supti, Shayma Binte Hamid, Radin Junayed, K.M. Fahim A Bari, Md Junaeid Ali, Raiyan Gani, Karib Shams, Mohammad Rifat Ahmmad Rashid, Raihan Ul Islam
Accurate sunflower head detection is essential for precision agriculture, supporting timely monitoring and yield estimation. However, reliable detection under UAV settings remains challenging due to annotation scarcity, variable field conditions, and inconsistent localization across flowering stages. This study presents a unified framework that evaluates supervised, semi-supervised, and self-supervised learning strategies on UAV imagery collected under real field conditions. In the supervised setting, YOLOv12s achieved the strongest performance (mAP@50 ≈ 93 %), with stable convergence and focused visual attention, while RF-DETR showed lower recall and weaker localization. To reduce annotation requirements, a Pseudo-STAC teacher–student approach was evaluated across varying labeled-to-unlabeled ratios. Teacher models maintained high accuracy even with limited supervision (mAP@50 = 88.5–91.6 %), while student models approached teacher-level performance when 20–30 % of images were labeled. At extremely low label ratios (10 %), instability from pseudo-label noise was observed, though confidence-adaptive filtering alleviated some of these effects. Self-supervised learning (SSL) using DINOv2-style and BYOL pretraining further strengthened representation quality, consistently producing mAP@50 scores above 91 %. SSL-enhanced YOLOv12s generated compact and discriminative embeddings and exhibited smoother optimization, confirmed through loss curves, clustering analyses, and XAI visualizations. Finally, a real-time Streamlit application was developed, enabling image, video, and live-camera detection at up to 22 FPS, demonstrating the practical deployment potential of the proposed framework. This work demonstrates the potential of semi- and self-supervised learning to reduce annotation costs, enhance generalization, and deliver interpretable real-time solutions for precision agriculture.
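The confidence-adaptive filtering used to tame pseudo-label noise at low label ratios can be sketched as a threshold that tightens when a batch's mean pseudo-label confidence drops. The exact rule and parameter values below are hypothetical, not taken from the paper:

```python
def filter_pseudo_labels(dets, base_tau: float = 0.5, adapt: float = 0.1):
    """Confidence-adaptive pseudo-label filter (hypothetical rule).

    dets: list of (box, confidence) pairs from the teacher model.
    The threshold rises above base_tau when the batch's mean confidence
    is low, so noisier batches are filtered more aggressively.
    """
    if not dets:
        return []
    mean_conf = sum(c for _, c in dets) / len(dets)
    tau = base_tau + adapt * (0.5 - mean_conf)
    return [box for box, c in dets if c >= tau]
```

In a Pseudo-STAC-style loop, the surviving boxes would become the student's training targets for the unlabeled images, with strong augmentation applied on the student side.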
Citations: 0
Use of an automated walk-over-weighing system to monitor and forecast liveweight in grazing lambs
IF 5.7 Q1 AGRICULTURAL ENGINEERING Pub Date : 2026-03-01 Epub Date: 2025-12-15 DOI: 10.1016/j.atech.2025.101727
Alessio Cotticelli , Konstantinos Zaralis , Matteo Santinello , Roberta Matera , Luciano A. González
The aim of the present study was to evaluate a walk-over-weighing (WoW) technology for remotely weighing growing lambs in a pastoral sheep production system, and then to use these data to predict future liveweight (LW) at different lead times. An experiment was carried out on a flock of 144 lambs that grazed freely for a total of 94 days, during which an automatic WoW system remotely estimated the LW and growth rate of individual lambs daily under these grazing conditions. Data were recorded as each animal voluntarily entered the WoW platform and walked through it to access water. The daily LW of each animal was used to forecast LW (FW) 20, 30, 40, 50, and 60 days ahead of any actual day. The accuracy of FW was assessed with a linear mixed-effects model and Lin’s concordance correlation coefficient (LCCC), with FW as the dependent variable and actual observed LW (OW) as the independent variable for each target day; both animal and date were treated as random effects. In total, data from 132 lambs were included in the final dataset, with an average growth rate of 0.25 ± 0.11 kg/d throughout the 93 days of the trial. FW for the next 20 and 30 days showed substantial agreement with observed weight (LCCC > 0.90). However, FW beyond 40 days was less precise and accurate (LCCC < 0.75). In addition, the LCCC of FW was higher when estimated from the growth rate over the last 14 days rather than the last 7, and late rather than early in the trial. The WoW technology is suitable both for monitoring the LW and growth rate of lambs in real time and for predicting future LW on commercial farms. Hence, the WoW system can be recommended to help with on-farm decision making for individual sheep.
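Lin's concordance correlation coefficient (LCCC) used above to grade forecast agreement has a standard closed form; the short Python sketch below implements it (the function name and toy data are illustrative, not from the paper):

```python
import numpy as np

def lins_ccc(x, y):
    """Lin's concordance correlation coefficient between two series:

        rho_c = 2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))**2)

    It equals 1 only for perfect agreement on the identity line, so it
    penalises both random scatter and systematic bias, unlike plain
    Pearson correlation (which ignores a constant offset entirely).
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    cov = np.mean((x - x.mean()) * (y - y.mean()))  # population covariance
    return 2 * cov / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

# A constant 1-unit bias drops the CCC below 1 even though Pearson r = 1.
biased_ccc = lins_ccc([1, 2, 3], [2, 3, 4])
```

In the study's setting, `x` would be the forecast weights (FW) and `y` the observed weights (OW) for a given lead time.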
{"title":"Use of an automated walk-over-weighing system to monitor and forecast liveweight in grazing lambs","authors":"Alessio Cotticelli ,&nbsp;Konstantinos Zaralis ,&nbsp;Matteo Santinello ,&nbsp;Roberta Matera ,&nbsp;Luciano A. González","doi":"10.1016/j.atech.2025.101727","DOIUrl":"10.1016/j.atech.2025.101727","url":null,"abstract":"<div><div>Aim of the present study was to evaluate the use of a walk-over-weighing (WoW) technology to remotely weigh growing lambs in a pastoral sheep production system and then use these data to predict future liveweight (LW) at different lead times. Thus, an experiment was carried out in a flock of 144 lambs that were grazing freely for a total of 94 days while an automatic WoW system allowed to remotely estimate LW and growth rate of individual lambs daily under these grazing conditions. Data were recorded as each animal entered voluntarily into the WoW platform and walked through it to access water. Daily LW of each animal was used to forecast LW (FW) at 20, 30, 40, 50, and 60 days ahead of any actual day. The accuracy of the FW was assessed using a linear mixed-effects model and Lin’s concordance correlation coefficient (LCCC) with FW as dependent variable and actual observed LW (OW) as independent for each target days, both animal and date were random effects. In total, data from 132 lambs were included in the final dataset which had an average growth rate of 0.25 ± 0.11 kg/d throughout the 93 days of the trial. The FW for the next 20 and 30 days showed substantial agreement with observed weight (LCCC &gt; 0.90). However, FW beyond 40 days was less precise and accurate (LCCC &lt; 0.75). In addition, the LCCC of FW was higher when estimated from the growth rate in the last 14 compared to the last 7 days and late compared to early in the trial. The WoW technology is suitable to monitor LW and growth rate of lambs both in real-time and to predict future LW in commercial farms. 
Hence, the WoW system can be recommended to help with on-farm decision making of individual sheep.</div></div>","PeriodicalId":74813,"journal":{"name":"Smart agricultural technology","volume":"13 ","pages":"Article 101727"},"PeriodicalIF":5.7,"publicationDate":"2026-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145790772","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A bionic furrow opener-based real-time monitoring method for maize sowing depth
IF 5.7 Q1 AGRICULTURAL ENGINEERING Pub Date : 2026-03-01 Epub Date: 2025-12-13 DOI: 10.1016/j.atech.2025.101716
Chunling Zhang , Si Chen , Hao Shu , Liqing Chen , Xiaodong Xie , Weiwei Wang
The issue of insufficient accuracy in monitoring sowing depth during maize planting negatively impacts sowing quality. In response to this problem and in accordance with the agronomic requirements for maize cultivation, this study proposes a real-time monitoring method for maize sowing depth based on a bionic furrow opener. Furthermore, a sowing depth monitoring system has been developed based on this method, enabling real-time and precise measurement of maize sowing depth. Through dynamic analysis involving the bionic mackerel and seed sowing, a Bionic Mackerel-Style Side Baffle (BMSSB) furrow opener was designed, with key structural parameter ranges established. The opening process was simulated using the Discrete Element Method (DEM), and the results demonstrated the effectiveness of the monitoring method. Furthermore, when the width of the side flaps of the BMSSB furrow opener is set to 32 mm, the consistency index (Ci) between the monitored depth and the actual sowing depth reaches its maximum value of 90%. System response speed tests were conducted, revealing that the response time did not exceed 0.1 s. In addition, field experiments were conducted to monitor the seeding depth of maize. The results indicated that when the operating speed was set at 10 km/h and the sowing depth was maintained at 60 mm, the consistency index (Ci) between the monitored and actual depths, as well as the standard deviation, were found to be 90.14 ± 0.11%.
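The abstract reports a consistency index (Ci) between monitored and actual sowing depth without defining it. As an illustrative assumption only, a common form of such an index is the percentage of monitored depths falling within a tolerance band of the actual depth, which can be sketched as:

```python
import numpy as np

def consistency_index(monitored_mm, actual_mm, tol_mm=5.0):
    """Illustrative consistency index: the percentage of monitored sowing
    depths within tol_mm of the corresponding actual depth.

    NOTE: the paper does not publish its exact Ci formula; this
    tolerance-band definition and the 5 mm default are assumptions made
    for demonstration, not the authors' method.
    """
    m = np.asarray(monitored_mm, dtype=float)
    a = np.asarray(actual_mm, dtype=float)
    return 100.0 * np.mean(np.abs(m - a) <= tol_mm)

# Target depth 60 mm: three of four readings fall inside the 5 mm band.
ci = consistency_index([58, 61, 66, 59], [60, 60, 60, 60])
```

Under this definition, a Ci of 90 % at a 60 mm target would mean nine out of ten readings landed inside the tolerance band.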
{"title":"A bionic furrow opener-based real-time monitoring method for maize sowing depth","authors":"Chunling Zhang ,&nbsp;Si Chen ,&nbsp;Hao Shu ,&nbsp;Liqing Chen ,&nbsp;Xiaodong Xie ,&nbsp;Weiwei Wang","doi":"10.1016/j.atech.2025.101716","DOIUrl":"10.1016/j.atech.2025.101716","url":null,"abstract":"<div><div>The issue of insufficient accuracy in monitoring sowing depth during maize planting negatively impacts sowing quality. In response to this problem and in accordance with the agronomic requirements for maize cultivation, this study proposes a real-time monitoring method for maize sowing depth based on a bionic furrow opener. Furthermore, a sowing depth monitoring system has been developed based on this method, enabling real-time and precise measurement of maize sowing depth. Through dynamic analysis involving the bionic mackerel and seed sowing, a Bionic Mackerel-Style Side Baffle (BMSSB) furrow opener was designed, with key structural parameter ranges established. The opening process was simulated using the Discrete Element Method (DEM), and the results demonstrated the effectiveness of the monitoring method. Furthermore, when the width of the side flaps of the BMSSB furrow opener is set to 32 mm, the consistency index (<em>C<sub>i</sub></em>) between the monitored depth and the actual sowing depth reaches its maximum value of 90%. System response speed tests were conducted, revealing that the response time did not exceed 0.1 s. In addition, field experiments were conducted to monitor the seeding depth of maize. 
The results indicated that when the operating speed was set at 10 km/h and the sowing depth was maintained at 60 mm, the consistency index (<em>C<sub>i</sub></em>) between the monitored and actual depths, as well as the standard deviation, were found to be 90.14 ± 0.11%.</div></div>","PeriodicalId":74813,"journal":{"name":"Smart agricultural technology","volume":"13 ","pages":"Article 101716"},"PeriodicalIF":5.7,"publicationDate":"2026-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145790641","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Ensemble machine learning with limited data: Feature selection and wheat yield prediction in Bangladesh
IF 5.7 Q1 AGRICULTURAL ENGINEERING Pub Date : 2026-03-01 Epub Date: 2025-12-04 DOI: 10.1016/j.atech.2025.101693
Zia U Ahmed , Timothy J. Krupnik , Mohammad S Alam , Jagadish Timsina , Md. Khaled Hossain , G.K.M. Mustafizur Rahman , Andrew J. McDonald
Predicting wheat yields accurately in smallholder farming systems in Bangladesh is challenging due to the complex interplay of various factors and the lack of high-quality field data. To tackle this problem, we created a stacked ensemble machine learning framework that includes stepwise feature selection and model-agnostic interpretation techniques, using data from 178 farmers in Bangladesh. We ensembled four base models: generalized linear model (GLM), random forest (RF), gradient boosting machine (GBM), and extreme gradient boosting (XGBoost). Our ensemble model outperformed the individual models, reducing the root mean square error (RMSE) by 6 % compared to GBM and by 12 % compared to GLM. We found that ground cover, represented by well-covered plots (>60 %) and patchy ground cover plots (<60 %), is the most important factor for predicting wheat yields, followed by the wheat maturity date and the date of the previous monsoon rice harvest. These factors are easily observable and recordable by farmers and extension agents. These findings offer practical solutions for Bangladesh: (1) more reliable yield forecasts can assist with planning inputs and harvests; (2) digital tools can focus on simple-to-measure indicators like ground cover, which can be easily monitored with satellite images; and (3) this research can support government and NGO programs aimed at making wheat farming more climate-resilient. By demonstrating how ensemble learning can extract valuable insights from limited data, this study contributes to enhancing food security and promoting sustainable farming practices in South Asia’s resource-limited agricultural regions.
{"title":"Ensemble machine learning with limited data: Feature selection and wheat yield prediction in Bangladesh","authors":"Zia U Ahmed ,&nbsp;Timothy J. Krupnik ,&nbsp;Mohammad S Alam ,&nbsp;Jagadish Timsina ,&nbsp;Md. Khaled Hossain ,&nbsp;G.K.M. Mustafizur Rahman ,&nbsp;Andrew J. McDonald","doi":"10.1016/j.atech.2025.101693","DOIUrl":"10.1016/j.atech.2025.101693","url":null,"abstract":"<div><div>Predicting wheat yields accurately in smallholder farming systems in Bangladesh is challenging due to the complex interplay of various factors and the lack of high-quality field data. To tackle this problem, we created a stacked ensemble machine learning framework. It includes stepwise feature selection and model-agnostic interpretation techniques using data from 178 farmers in Bangladesh. We ensemble four main models: generalized linear model (GLM), random forest (RF), gradient boosting machine (GBM), and extreme gradient boosting (XGBoost). Our ensemble model outperformed individual models, reducing the root mean square error (RMSE) by 6 % compared to GBM and by 12 % compared to GLM. We found that ground cover, represented by well-covered plots (&gt;60 %) and patchy ground cover plots (&lt;60 %), is the most important factor for predicting wheat yields, followed by the wheat maturity date and the date of the previous monsoon rice harvest. These factors are easily observable and recordable by farmers and extension agents. These findings offer practical solutions for Bangladesh: (1) more reliable yield forecasts can assist with planning inputs and harvests; (2) we can develop digital tools that focus on simple-to-measure indicators like ground cover, which can be easily monitored with satellite images; and (3) this research can support government and NGO programs aimed at making wheat farming more climate-resilient. 
By demonstrating how ensemble learning can extract valuable insights from limited data, this study contributes to enhancing food security and promoting sustainable farming practices in South Asia’s resource-limited agricultural regions.</div></div>","PeriodicalId":74813,"journal":{"name":"Smart agricultural technology","volume":"13 ","pages":"Article 101693"},"PeriodicalIF":5.7,"publicationDate":"2026-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145790645","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
From mechanistic-driven to data-driven: A review of the evolution of crop models
IF 5.7 Q1 AGRICULTURAL ENGINEERING Pub Date : 2026-03-01 Epub Date: 2025-12-08 DOI: 10.1016/j.atech.2025.101699
Lebing Zheng , Hong-Yu Zhang , Shanmei Liu , Zaiwen Feng , JunWen He , Hai Liang , Fang Tian , Hui Peng
The convergence of climate change, resource scarcity, and rising global food demand necessitates advanced tools for sustainable agricultural intensification. Traditional farming practices, often based on static guidelines, are increasingly inadequate to manage the nonlinear and interactive effects of multiple stressors. Crop models—originally mechanistic, process-based simulators—have evolved into hybrid, data-integrated systems that support precision and intelligent agriculture. This review traces their evolution from early physiological simulations to contemporary paradigms combining mechanistic interpretability with machine learning adaptability, and examines applications in crop growth simulation, management optimization and strategic decision-making. Persistent challenges, including parameter overfitting, computational demands and limited cross-regional transferability, highlight the need for “mechanism-guided, data-enhanced” approaches that anchor interpretability in physiological knowledge while leveraging data-driven flexibility. This synthesis provides both the conceptual and technical foundation for the development of next-generation crop models, offering theoretical support for more precise and adaptive decision-making in smart agriculture.
{"title":"From mechanistic-driven to data-driven: A review of the evolution of crop models","authors":"Lebing Zheng ,&nbsp;Hong-Yu Zhang ,&nbsp;Shanmei Liu ,&nbsp;Zaiwen Feng ,&nbsp;JunWen He ,&nbsp;Hai Liang ,&nbsp;Fang Tian ,&nbsp;Hui Peng","doi":"10.1016/j.atech.2025.101699","DOIUrl":"10.1016/j.atech.2025.101699","url":null,"abstract":"<div><div>The convergence of climate change,resource scarcity, and rising global food demand necessitates advanced tools for sustainable agricultural intensification.Traditional farming practices, often based on static guidelines,are increasingly inadequate to manage the nonlinear and interactive effects of multiple stressors. Crop models—originally mechanistic, process-based simulators—have evolved into hybrid, data-integrated systems that support precision and intelligent agriculture. This review traces their evolution from early physiological simulations to contemporary paradigms combining mechanistic interpretability with machine learning adaptability,and examines applications in crop growth simulation, management optimization and strategic decision-making. Persistent challenges, including parameter overfitting,computational demands and limited cross-regional transferability, highlight the need for “mechanism-guided, data-enhanced” approaches that anchor interpretability in physiological knowledge while leveraging data-driven flexibility. 
This synthesis provides both the conceptual and technical foundation for the development of next-generation crop models, offering theoretical support for more precise and adaptive decision-making in smart agriculture.</div></div>","PeriodicalId":74813,"journal":{"name":"Smart agricultural technology","volume":"13 ","pages":"Article 101699"},"PeriodicalIF":5.7,"publicationDate":"2026-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145791350","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0