Pub Date: 2026-03-01 | Epub Date: 2025-12-22 | DOI: 10.1016/j.atech.2025.101742
Nengwei Yang, Peng Ji, Sen Lin, Ya Xiong
Visual perception systems are essential for harvesting robots in smart agriculture, but deployment is often limited by computational constraints. For real-time truss tomato detection in complex greenhouses, existing models rarely deliver high accuracy, low latency, and lightweight design on resource-constrained edge devices, especially under variable illumination. We introduce PHDT-DETR, a lightweight, end-to-end detector optimized for edge deployment. Building on the RT-DETR baseline, PHDT-DETR integrates a CSP-PMSFA backbone for efficient multi-scale feature extraction, a CA-HSFPN neck that enhances feature fusion via Coordinate Attention, a DRBC3 block that enhances multi-scale feature representation through multi-branch re-parameterized convolutions while trimming redundant computation, a TS-IFI encoder that reduces attention complexity, and a joint NWD+Shape-IoU regression loss that provides overlap-independent, aspect-ratio–aware supervision for slender, irregular tomato skewers. We further apply Layer-Adaptive Magnitude-based Pruning (LAMP) for aggressive compression. Experiments show that the pruned model achieves 90.8% mAP50 while reducing the parameter count to 6.1 M and the computational cost to 17.4 GFLOPs. Deployed on an NVIDIA Jetson Orin Nano Super and compiled with TensorRT, the model runs at 66.0 FPS with a compact 15.5 MB footprint, outperforming mainstream YOLO models. These results demonstrate the feasibility of deploying high-precision, real-time, end-to-end object detectors on resource-constrained edge devices for robotic harvesting in greenhouses.
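The LAMP compression step mentioned above has a simple published form: each weight is scored by its squared magnitude divided by the total squared magnitude of the weights in the same layer that are at least as large, which makes scores comparable across layers; the lowest-scoring weights are then pruned globally. A minimal NumPy sketch of that scoring rule (illustrative function names, not the authors' implementation):

```python
import numpy as np

def lamp_scores(weights):
    """LAMP score per weight: w^2 divided by the sum of w^2 over all
    weights in the layer whose magnitude is >= this weight's magnitude."""
    flat = weights.flatten()
    w2 = np.sort(flat ** 2)                    # ascending squared magnitudes
    tail = np.cumsum(w2[::-1])[::-1]           # tail[i] = sum of w2[i:]
    scores_sorted = w2 / tail                  # largest weight scores 1.0
    order = np.argsort(flat ** 2)
    scores = np.empty_like(scores_sorted)
    scores[order] = scores_sorted              # back to original order
    return scores.reshape(weights.shape)

def global_prune_mask(layers, sparsity):
    """Keep the (1 - sparsity) fraction of weights with the highest LAMP
    scores, pooled across all layers (global magnitude-style pruning)."""
    all_scores = np.concatenate([lamp_scores(w).flatten() for w in layers])
    k = int(len(all_scores) * sparsity)
    threshold = np.partition(all_scores, k)[k] if k > 0 else -np.inf
    return [lamp_scores(w) >= threshold for w in layers]
```

Because each layer's largest weight always scores 1.0, no layer is ever pruned away entirely, which is what lets LAMP prune aggressively without a per-layer sparsity schedule.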
Title: PHDT-DETR: A lightweight end-to-end detector for on-device truss tomato detection in greenhouses. Smart agricultural technology, vol. 13, Article 101742.
Pub Date: 2026-03-01 | Epub Date: 2025-12-26 | DOI: 10.1016/j.atech.2025.101750
Johnbosco Nnamso , Francia Ravelombola , Feng Lin , Chao Lu
Accurate estimation of field soybean pods plays a critical role in precision agriculture. However, conventional methods face significant limitations, including high field variability, visually complex backgrounds, and the computational constraints of deploying deep learning models in rural edge environments. To address these challenges, we present EdgeSoybeanNet, a high-accuracy, edge-deployable AI framework for near real-time soybean pod counting. The proposed framework integrates a customized UNet-Lite segmentation network with an adaptive thresholding strategy. The computation process begins with region-of-interest extraction from UAV imagery, followed by segmentation and pod detection using adaptive thresholding. The trained AI models are then quantized, exported to ONNX, and deployed with ONNX Runtime, TensorFlow Lite (TFLite), or TensorRT on edge devices, eliminating the need for cloud connectivity and enabling near real-time inference in the soybean field. To the best of our knowledge, this is the first study to incorporate adaptive threshold learning into a UNet-Lite segmentation network for agricultural applications. The experimental results show a counting accuracy of 89.57% with an inference time of 0.66 s on a Raspberry Pi 5 with 300 × 300 UAV input images, and up to 90.43% counting accuracy with 560 × 560 input. These results demonstrate the feasibility and effectiveness of this approach for resource-constrained precision farming. Compared with the state-of-the-art SoybeanNet-S model, our approach improves counting accuracy by 5.07% and reduces the parameter count approximately 14-fold, from 49.6 million down to 3.57 million.
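The paper's thresholds are learned jointly with UNet-Lite, but the role adaptive thresholding plays in pod counting can be illustrated with the classical local-mean rule followed by connected-component counting. A hedged pure-NumPy sketch under those assumptions (all names illustrative, not the authors' code):

```python
import numpy as np

def adaptive_threshold(img, block=3, c=0.0):
    """Binarize by comparing each pixel to the mean of its block x block
    neighbourhood minus a constant c (classical adaptive-mean rule)."""
    h, w = img.shape
    pad = block // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    out = np.zeros((h, w), dtype=bool)
    for i in range(h):
        for j in range(w):
            local_mean = padded[i:i + block, j:j + block].mean()
            out[i, j] = img[i, j] > local_mean - c
    return out

def count_components(mask):
    """Count 4-connected foreground blobs (a crude proxy for pod instances)."""
    seen = np.zeros_like(mask, dtype=bool)
    h, w = mask.shape
    count = 0
    for i in range(h):
        for j in range(w):
            if mask[i, j] and not seen[i, j]:
                count += 1
                stack = [(i, j)]
                seen[i, j] = True
                while stack:              # iterative flood fill
                    y, x = stack.pop()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            stack.append((ny, nx))
    return count
```

A local threshold adapts to uneven field illumination where a single global cutoff would merge or miss pods; the learned version in the paper replaces the fixed `c` with a predicted offset.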
Title: EdgeSoybeanNet: A framework for real-time, high-accuracy field soybean pod counting. Smart agricultural technology, vol. 13, Article 101750.
Pub Date: 2026-03-01 | Epub Date: 2025-12-24 | DOI: 10.1016/j.atech.2025.101753
Changzeng Hu, Lihua Li, Limin Huo, Yuchen Jia, Zongkui Xie, Yao Yu
Precise environmental control in laying hen houses is essential for animal welfare and production efficiency. Traditional ventilation strategies based on fixed temperature thresholds cause significant environmental fluctuations and high energy consumption due to frequent fan cycling. To address this, we propose a ventilation control strategy utilizing a Double Deep Q-Network (Double DQN) reinforcement learning algorithm. The system partitions the hen house into four equal-volume zones, each equipped with a positive-pressure fan unit. These units cooperate with a central negative-pressure fan set for precise temperature and humidity regulation. The strategy employs a composite state space integrating real-time environmental parameters (temperature, humidity) and fan operation status. A multi-dimensional action space defines the 16 on/off command combinations governing the four positive-pressure fan units. A dual-objective reward function incorporates both environmental parameter deviation from setpoints and penalties for fan switching. Experimental results demonstrate that the Double DQN strategy significantly reduces the standard deviation of temperature and humidity across all zones compared to traditional threshold control, achieving closer proximity to the target setpoint (26 °C, 70 %). Furthermore, it reduces the daily energy consumption of the positive-pressure fan units by 10.35 % (103.63 kWh total). This strategy markedly enhances environmental control precision and stability while conserving energy, offering a novel intelligent solution for sustainable facility livestock environmental management.
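The Double DQN update that distinguishes this controller from vanilla DQN is standard: the online network selects the greedy next action and the target network evaluates it, which reduces the overestimation bias of the max operator. A minimal sketch, with the 16-command fan action space enumerated as on/off tuples (illustrative names, not the authors' code; binary fan states are an assumption):

```python
import itertools
import numpy as np

def double_dqn_targets(rewards, next_q_online, next_q_target, dones, gamma=0.99):
    """Double DQN TD targets for a batch:
    y = r + gamma * Q_target(s', argmax_a Q_online(s', a)) * (1 - done)."""
    best_actions = np.argmax(next_q_online, axis=1)      # online net selects
    q_eval = next_q_target[np.arange(len(rewards)), best_actions]  # target net evaluates
    return rewards + gamma * q_eval * (1.0 - dones)

# The 16 ventilation commands: every on/off combination of the
# four positive-pressure fan units.
fan_actions = list(itertools.product((0, 1), repeat=4))
```

In vanilla DQN the same (target) network both selects and evaluates the next action, so noise in its estimates is systematically amplified; decoupling the two roles is the entire Double DQN change.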
Title: Double deep Q-network for intelligent control and energy efficiency optimization of zonal ventilation in laying-hen houses. Smart agricultural technology, vol. 13, Article 101753.
Spraying Unmanned Aerial Vehicles (UAVs) are autonomous airborne platforms that primarily operate on predetermined flight plans and spraying missions. Although spraying UAVs are increasingly used for plant protection in vineyards, limited experimental evidence exists on how operational parameters influence spray drift under real field conditions, especially in European vineyards. This study quantified ground-level drift from a UAV sprayer in a commercial vineyard, evaluating two flight altitudes (2.0 m and 2.5 m AGL), two flight speeds (1.0 and 1.5 m/s), and three application strategies (inter-row with and without a buffer line, and over-row with a buffer line). An additional set of replicates using a conventional air-assisted sprayer was included as a reference for current vineyard practice. Spray drift was measured at multiple downwind distances using filter paper collectors and analysed with laboratory spectrophotometric methods following ISO 22866. Drift from UAV applications was highly concentrated near the field boundary and declined sharply within the first 5 m for all configurations. Flight altitude was the dominant driver: increasing AGL from 2.0 m to 2.5 m raised drift at the closest sampling point by 30–70 %. Higher flight speed (1.5 m/s) increased drift by 10–20 % compared with 1.0 m/s. Applying a buffer reduced drift by up to 60 %, particularly in inter-row spraying. Under optimal UAV settings (2.0 m AGL, 1.0 m/s, buffer applied), drift became negligible beyond 10 m downwind. Compared with the conventional air-assisted sprayer, UAV applications under optimised conditions reduced drift at the closest sampling distance by approximately 65–70 % and showed substantially lower drift beyond 10 m.
These findings demonstrate that appropriate UAV operational settings can significantly reduce off-target movement and offer a lower-drift alternative to conventional terrestrial sprayers in vineyard applications; such mitigation strategies should always be considered prior to designing a flight plan or spray mission.
Title: Comparison of spray drift between spraying drone and conventional airblast sprayer in vineyards. Authors: Vasilis Psiroukis, Aikaterini Kasimati, Konstantinos Nychas, Evangelos Anastasiou, Athanasios Balafoutis, Spyros Fountas. DOI: 10.1016/j.atech.2025.101741. Smart agricultural technology, vol. 13, Article 101741.
Pub Date: 2026-03-01 | Epub Date: 2025-12-12 | DOI: 10.1016/j.atech.2025.101712
Don Chathurika Amarathunga, Zorica Duric, Andrew Hulthen, Andy Wang, Mukti Chalise, Mubin Ul Haque, Hazel Parry
Effective pest management within Integrated Pest Management (IPM) frameworks requires detailed insights into insect population dynamics and environmental triggers of movement. Traditional pitfall trapping methods often lack the temporal resolution needed to capture fine-scale activity patterns of ground-dwelling insects. This study presents an image-based monitoring system that integrates pitfall traps with in-field time-lapse cameras and a deep learning pipeline to automate insect detection and counting. We focus on the Rutherglen bug (Nysius vinitor), a sporadic pest in Australian cropping systems, particularly in canola, that migrates to summer crops. A YOLOv8 object detection model was fine-tuned using a custom-labeled subset of over 150,000 time-series images captured at 5-minute intervals from ten camera-equipped pitfall traps deployed across a mixed cropping landscape. The full dataset, collected over an eight-week period, was used for downstream insect activity analysis. The model achieved a mean average precision (mAP) of 0.84 for detecting both adult and nymph stages. A post-processing pipeline, including image segmentation and temporal filtering, was developed to reduce false positives caused by non-target insects and debris, minimize duplicate detections of the same insect—a common limitation in pitfall trapping—and enable accurate insect counts over defined time intervals. The system revealed fine-scale movement patterns and environmental responses, including increased nymph activity during hot, dry conditions and synchronized migration at crop-pasture interfaces. Insect counts estimated from the system showed moderate to high correlation with manual weekly trap counts. 
This study demonstrates both the potential and the practical challenges of applying an image-based pitfall-trap monitoring framework for fine-scale insect activity analysis, providing a biologically meaningful case study that can guide the future development and adaptation of similar systems.
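One common way to implement the duplicate-suppression step described above (the same trapped insect re-detected across consecutive time-lapse frames) is to discard current-frame boxes that overlap a previous-frame box above an IoU cutoff. A hedged sketch of that idea, not the authors' pipeline:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    if inter == 0.0:
        return 0.0
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def new_detections(prev_boxes, curr_boxes, iou_thresh=0.5):
    """Keep only current-frame boxes that do not match any previous-frame
    box, so a stationary insect in the trap is counted once, not per frame."""
    return [b for b in curr_boxes
            if all(iou(b, p) < iou_thresh for p in prev_boxes)]
```

This works because pitfall-trapped insects are largely immobile between 5-minute frames; a faster-moving target would need a proper tracker rather than frame-to-frame overlap.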
Title: Image-based fine-scale analysis of insect movement patterns and environmental triggers using pitfall traps. Smart agricultural technology, vol. 13, Article 101712.
Pub Date: 2026-03-01 | Epub Date: 2025-12-07 | DOI: 10.1016/j.atech.2025.101684
Fathhur Rahaman Sams, Sanjana Kazi Supti, Shayma Binte Hamid, Radin Junayed, K.M. Fahim A Bari, Md Junaeid Ali, Raiyan Gani, Karib Shams, Mohammad Rifat Ahmmad Rashid, Raihan Ul Islam
Accurate sunflower head detection is essential for precision agriculture, supporting timely monitoring and yield estimation. However, reliable detection under UAV settings remains challenging due to annotation scarcity, variable field conditions, and inconsistent localization across flowering stages. This study presents a unified framework that evaluates supervised, semi-supervised, and self-supervised learning strategies on UAV imagery collected under real field conditions. In the supervised setting, YOLOv12s achieved the strongest performance (mAP@50 ≈ 93 %), with stable convergence and focused visual attention, while RF-DETR showed lower recall and weaker localization. To reduce annotation requirements, a Pseudo-STAC teacher–student approach was evaluated across varying labeled-to-unlabeled ratios. Teacher models maintained high accuracy even with limited supervision (mAP@50 = 88.5–91.6 %), while student models approached teacher-level performance when 20–30 % of images were labeled. At extremely low label ratios (10 %), instability from pseudo-label noise was observed, though confidence-adaptive filtering alleviated some of these effects. Self-supervised learning (SSL) using DINOv2-style and BYOL pretraining further strengthened representation quality, consistently producing mAP@50 scores above 91 %. SSL-enhanced YOLOv12s generated compact and discriminative embeddings and exhibited smoother optimization, confirmed through loss curves, clustering analyses, and XAI visualizations. Finally, a real-time Streamlit application was developed, enabling image, video, and live-camera detection at up to 22 FPS, demonstrating the practical deployment potential of the proposed framework. This work demonstrates the potential of semi- and self-supervised learning to reduce annotation costs, enhance generalization, and deliver interpretable real-time solutions for precision agriculture.
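The confidence-adaptive filtering credited above with taming pseudo-label noise can be sketched as a per-class threshold that never drops below a base value, so low-quality teacher boxes are discarded before training the student. This illustrates the general teacher–student filtering idea only; it is not the paper's exact rule:

```python
from collections import defaultdict

def filter_pseudo_labels(detections, base_thresh=0.5):
    """Keep teacher detections whose confidence clears a per-class
    threshold: max(base_thresh, mean confidence of that class).
    Each detection is a dict {"cls": int, "conf": float, ...}."""
    by_class = defaultdict(list)
    for det in detections:
        by_class[det["cls"]].append(det["conf"])
    # adaptive per-class cutoff, floored at base_thresh
    thresh = {c: max(base_thresh, sum(v) / len(v)) for c, v in by_class.items()}
    return [d for d in detections if d["conf"] >= thresh[d["cls"]]]
```

Adapting the cutoff per class keeps a rare but confidently detected class from being filtered out by a single global threshold tuned on the dominant class.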
Title: Real-time sunflower detection using semi-supervised and self-supervised deep learning for precision agriculture. Smart agricultural technology, vol. 13, Article 101684.
Pub Date: 2026-03-01 | Epub Date: 2025-12-15 | DOI: 10.1016/j.atech.2025.101727
Alessio Cotticelli, Konstantinos Zaralis, Matteo Santinello, Roberta Matera, Luciano A. González
The aim of the present study was to evaluate the use of walk-over-weighing (WoW) technology to remotely weigh growing lambs in a pastoral sheep production system and then use these data to predict future liveweight (LW) at different lead times. An experiment was carried out on a flock of 144 lambs that grazed freely for a total of 94 days while an automatic WoW system remotely estimated the LW and growth rate of individual lambs daily under these grazing conditions. Data were recorded as each animal voluntarily entered the WoW platform and walked through it to access water. The daily LW of each animal was used to forecast LW (FW) 20, 30, 40, 50, and 60 days ahead of any given day. The accuracy of FW was assessed using a linear mixed-effects model and Lin’s concordance correlation coefficient (LCCC), with FW as the dependent variable and the actual observed LW (OW) as the independent variable for each target day; both animal and date were random effects. In total, data from 132 lambs were included in the final dataset, with an average growth rate of 0.25 ± 0.11 kg/d throughout the 93 days of the trial. The FW for the next 20 and 30 days showed substantial agreement with the observed weight (LCCC > 0.90). However, FW beyond 40 days was less precise and accurate (LCCC < 0.75). In addition, the LCCC of FW was higher when estimated from the growth rate over the last 14 days rather than the last 7 days, and late rather than early in the trial. The WoW technology is suitable for monitoring the LW and growth rate of lambs in real time and for predicting future LW on commercial farms. Hence, the WoW system can be recommended to support on-farm decision making for individual sheep.
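Lin's concordance correlation coefficient used as the agreement metric here has a closed form that penalizes both low correlation and systematic offset from the 45° identity line: rho_c = 2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))^2). A minimal implementation:

```python
import numpy as np

def lins_ccc(x, y):
    """Lin's concordance correlation coefficient between forecast x and
    observed y: equals 1 only for perfect agreement with the identity
    line, and is reduced by both scatter and systematic bias."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    sxy = np.mean((x - x.mean()) * (y - y.mean()))   # population covariance
    return 2.0 * sxy / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)
```

Unlike Pearson's r, a constant bias lowers the LCCC: a forecast that is always exactly 1 kg high has r = 1 but LCCC < 1, which is why it is the right metric for judging forecast weights against scale weights.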
Cotticelli, A., Zaralis, K., Santinello, M., Matera, R., & González, L.A. Use of an automated walk-over-weighing system to monitor and forecast liveweight in grazing lambs. Smart Agricultural Technology, 13, Article 101727 (2026). DOI: 10.1016/j.atech.2025.101727
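The agreement metric used in the study above, Lin’s concordance correlation coefficient, has a standard closed-form definition that is easy to compute. The sketch below is a minimal NumPy implementation using made-up liveweight values for illustration; it is not the authors’ code or data:

```python
import numpy as np

def lin_ccc(observed, forecast):
    """Lin's concordance correlation coefficient (LCCC).

    Captures both precision (correlation) and accuracy (bias) of
    forecast vs. observed values; 1.0 indicates perfect agreement.
    """
    x = np.asarray(observed, dtype=float)
    y = np.asarray(forecast, dtype=float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()            # population variances
    cov = ((x - mx) * (y - my)).mean()   # population covariance
    return 2 * cov / (vx + vy + (mx - my) ** 2)

# Hypothetical lamb liveweights (kg): identical series agree perfectly,
# while a constant 2 kg offset lowers the LCCC through the bias term.
obs = np.array([20.0, 22.5, 25.0, 27.5, 30.0])
print(lin_ccc(obs, obs))        # 1.0
print(lin_ccc(obs, obs + 2.0))  # < 1.0 due to the location shift
```

Unlike Pearson’s correlation, the `(mx - my) ** 2` term in the denominator penalizes systematic over- or under-forecasting, which is why LCCC is preferred for method-agreement studies like this one.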
Pub Date: 2026-03-01 | Epub Date: 2025-12-13 | DOI: 10.1016/j.atech.2025.101716
Chunling Zhang , Si Chen , Hao Shu , Liqing Chen , Xiaodong Xie , Weiwei Wang
Insufficient accuracy in monitoring sowing depth during maize planting negatively impacts sowing quality. In response to this problem, and in accordance with the agronomic requirements for maize cultivation, this study proposes a real-time monitoring method for maize sowing depth based on a bionic furrow opener. A sowing depth monitoring system was then developed from this method, enabling real-time, precise measurement of maize sowing depth. Through dynamic analysis of the bionic mackerel form and the seed-sowing process, a Bionic Mackerel-Style Side Baffle (BMSSB) furrow opener was designed, with ranges for its key structural parameters established. The furrow-opening process was simulated using the Discrete Element Method (DEM), and the results demonstrated the effectiveness of the monitoring method. When the width of the side flaps of the BMSSB furrow opener was set to 32 mm, the consistency index (Ci) between the monitored depth and the actual sowing depth reached its maximum value of 90%. System response speed tests revealed that the response time did not exceed 0.1 s. In addition, field experiments were conducted to monitor maize sowing depth. The results indicated that when the operating speed was 10 km/h and the sowing depth was maintained at 60 mm, the consistency index (Ci) between the monitored and actual depths was 90.14 ± 0.11% (mean ± standard deviation).
Zhang, C., Chen, S., Shu, H., Chen, L., Xie, X., & Wang, W. A bionic furrow opener-based real-time monitoring method for maize sowing depth. Smart Agricultural Technology, 13, Article 101716 (2026). DOI: 10.1016/j.atech.2025.101716
Pub Date: 2026-03-01 | Epub Date: 2025-12-04 | DOI: 10.1016/j.atech.2025.101693
Zia U Ahmed , Timothy J. Krupnik , Mohammad S Alam , Jagadish Timsina , Md. Khaled Hossain , G.K.M. Mustafizur Rahman , Andrew J. McDonald
Predicting wheat yields accurately in smallholder farming systems in Bangladesh is challenging due to the complex interplay of contributing factors and the lack of high-quality field data. To tackle this problem, we developed a stacked ensemble machine learning framework that incorporates stepwise feature selection and model-agnostic interpretation techniques, using data from 178 farmers in Bangladesh. We ensembled four base models: a generalized linear model (GLM), random forest (RF), gradient boosting machine (GBM), and extreme gradient boosting (XGBoost). Our ensemble model outperformed the individual models, reducing the root mean square error (RMSE) by 6 % compared to GBM and by 12 % compared to GLM. We found that ground cover, represented by well-covered plots (>60 %) and patchy ground cover plots (<60 %), is the most important factor for predicting wheat yields, followed by the wheat maturity date and the date of the previous monsoon rice harvest. These factors are easily observable and recordable by farmers and extension agents. These findings offer practical benefits for Bangladesh: (1) more reliable yield forecasts can assist with planning inputs and harvests; (2) digital tools can be built around simple-to-measure indicators like ground cover, which can be easily monitored with satellite images; and (3) this research can support government and NGO programs aimed at making wheat farming more climate-resilient. By demonstrating how ensemble learning can extract valuable insights from limited data, this study contributes to enhancing food security and promoting sustainable farming practices in South Asia’s resource-limited agricultural regions.
Ahmed, Z.U., Krupnik, T.J., Alam, M.S., Timsina, J., Hossain, M.K., Rahman, G.K.M.M., & McDonald, A.J. Ensemble machine learning with limited data: Feature selection and wheat yield prediction in Bangladesh. Smart Agricultural Technology, 13, Article 101693 (2026). DOI: 10.1016/j.atech.2025.101693
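A stacked ensemble of the kind described can be sketched with scikit-learn’s `StackingRegressor`, which fits base learners and then trains a meta-learner on their out-of-fold predictions. This is a minimal illustration on synthetic data, not the authors’ pipeline: the base learners here are a GLM (linear regression), random forest, and gradient boosting, with XGBoost omitted so the sketch stays self-contained:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import (GradientBoostingRegressor,
                              RandomForestRegressor, StackingRegressor)
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the 178-farm survey data (not the real dataset)
X, y = make_regression(n_samples=178, n_features=10, noise=10.0,
                       random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                          random_state=0)

base_learners = [
    ("glm", LinearRegression()),
    ("rf", RandomForestRegressor(n_estimators=200, random_state=0)),
    ("gbm", GradientBoostingRegressor(random_state=0)),
]

# cv=5: each base learner's out-of-fold predictions become the
# features on which the linear meta-learner is trained
stack = StackingRegressor(estimators=base_learners,
                          final_estimator=LinearRegression(), cv=5)
stack.fit(X_tr, y_tr)

rmse = mean_squared_error(y_te, stack.predict(X_te)) ** 0.5
print(f"stacked RMSE: {rmse:.2f}")
```

The cross-validated stacking step is what lets the ensemble outperform its strongest member on small datasets: the meta-learner sees only held-out base predictions, which limits the leakage that would otherwise cause overfitting with so few samples.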
Pub Date: 2026-03-01 | Epub Date: 2025-12-08 | DOI: 10.1016/j.atech.2025.101699
Lebing Zheng , Hong-Yu Zhang , Shanmei Liu , Zaiwen Feng , JunWen He , Hai Liang , Fang Tian , Hui Peng
The convergence of climate change, resource scarcity, and rising global food demand necessitates advanced tools for sustainable agricultural intensification. Traditional farming practices, often based on static guidelines, are increasingly inadequate to manage the nonlinear and interactive effects of multiple stressors. Crop models—originally mechanistic, process-based simulators—have evolved into hybrid, data-integrated systems that support precision and intelligent agriculture. This review traces their evolution from early physiological simulations to contemporary paradigms combining mechanistic interpretability with machine learning adaptability, and examines applications in crop growth simulation, management optimization, and strategic decision-making. Persistent challenges, including parameter overfitting, computational demands, and limited cross-regional transferability, highlight the need for “mechanism-guided, data-enhanced” approaches that anchor interpretability in physiological knowledge while leveraging data-driven flexibility. This synthesis provides both the conceptual and technical foundation for the development of next-generation crop models, offering theoretical support for more precise and adaptive decision-making in smart agriculture.
Zheng, L., Zhang, H.-Y., Liu, S., Feng, Z., He, J., Liang, H., Tian, F., & Peng, H. From mechanistic-driven to data-driven: A review of the evolution of crop models. Smart Agricultural Technology, 13, Article 101699 (2026). DOI: 10.1016/j.atech.2025.101699