For effective weed control in vegetable farms, enhancing precision spraying through improved real-time detection is crucial. Over the years, weed detection studies have evolved from traditional feature-based methods to deep learning approaches, particularly convolutional neural networks (CNNs). While numerous studies have focused on improving detection accuracy by experimenting with different backbones, architectures, and hyperparameter tuning, fewer have addressed the real-time implementation of these models under field conditions. Existing research primarily benchmarks model inference speed but often neglects broader algorithmic efficiency, which includes sensor data integration, processing pipelines, and microcontroller output handling. Furthermore, real-world deployment challenges, such as camera performance at different robot speeds, the optimal operational range for high detection accuracy, and the end-to-end latency of the machine vision system, remain underexplored. This study addresses these gaps by training a custom YOLOv8 nano model to detect three weed types (broadleaf, nutsedge, and grass) and two crop types (pepper and tomato) in plasticulture beds. The system runs on a robotic smart sprayer in real time, integrating GPS and camera data while transmitting control signals to the microcontroller. Beyond detection performance, we evaluate the entire processing pipeline by measuring the total loop time and how it varies with the number of detections per frame. Additionally, we determined the optimal robot operational speed, finding that 0.45–0.89 m s⁻¹ provides the best balance between detection accuracy and system responsiveness. By focusing on end-to-end real-time performance on vegetable beds, this study provides insights into the practical deployment of smart spraying, an aspect often overlooked in prior research.
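The end-to-end loop-time measurement described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the detector call and actuation step are stubbed placeholders (the abstract does not specify the camera, inference, or microcontroller interfaces), and only the timing structure is the point.

```python
import time
import random
import statistics

def detect_weeds(frame):
    """Stub standing in for YOLOv8 nano inference (assumed interface)."""
    time.sleep(0.002)                  # simulated inference latency
    n = random.randint(0, 5)           # simulated variable detection count
    return [("broadleaf", 0.9)] * n    # dummy (class, confidence) detections

def measure_loop_times(n_frames=50):
    """Time one full iteration: frame capture -> detection -> actuation."""
    loop_times = []
    for _ in range(n_frames):
        t0 = time.perf_counter()
        frame = object()               # placeholder for a captured camera frame
        detections = detect_weeds(frame)
        _ = len(detections)            # placeholder for sending spray commands
        loop_times.append(time.perf_counter() - t0)
    return loop_times

times = measure_loop_times()
print(f"mean loop time: {statistics.mean(times) * 1000:.1f} ms")
```

Recording one timestamp pair per iteration, rather than per component, captures the total loop time the study reports, and grouping the samples by detection count would reproduce the latency-versus-detections analysis.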