Pub Date: 2025-04-11 | DOI: 10.1016/j.aiia.2025.04.002
Hao Fu , Xueguan Zhao , Haoran Tan , Shengyu Zheng , Changyuan Zhai , Liping Chen
To address the low recognition accuracy of open-field vegetables under light occlusion, this study focused on cabbage and developed an online target recognition model based on deep learning. Using Yolov8n as the base network, a method was proposed to mitigate the impact of light occlusion on the accuracy of online cabbage recognition. A combination of cabbage image filters was designed to eliminate the effects of light occlusion, and an adaptive learning module was constructed for the filter parameters. The filter combination and adaptive learning module were embedded into the Yolov8n object detection network, enabling precise real-time recognition of cabbage under light occlusion. Experimental results showed recognition accuracies of 97.5 % on the normal-lighting dataset, 93.1 % on the light-occlusion dataset, and 95.0 % on the mixed dataset. For images with a light occlusion degree greater than 0.4, recognition accuracy improved by 9.9 % and 13.7 % over the Yolov5n and Yolov8n models, respectively. The model achieved recognition accuracies of 99.3 % on the Chinese cabbage dataset and 98.3 % on the broccoli dataset. Deployed on an Nvidia Jetson Orin NX edge computing device, the model achieved an image processing speed of 26.32 frames per second. Field trials showed recognition accuracies of 96.0 % under normal lighting and 91.2 % under light occlusion. The proposed online cabbage recognition model enables real-time recognition and localization of cabbage in complex open-field environments, offering technical support for target-oriented spraying.
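The abstract describes a combination of image filters whose parameters are predicted by an adaptive learning module before detection. As a minimal sketch of that idea (the specific filters here, gamma, contrast, and brightness, and their parameter values are illustrative assumptions, not the paper's actual filter set):

```python
import numpy as np

def apply_filter_combination(image, gamma, contrast, brightness):
    """Apply a simple gamma / contrast / brightness filter chain.

    `image` is a float array in [0, 1]; the three scalar parameters
    stand in for the outputs of an adaptive parameter-learning module.
    """
    out = np.clip(image, 0.0, 1.0) ** gamma      # gamma correction
    out = (out - 0.5) * contrast + 0.5           # contrast stretch about mid-grey
    out = np.clip(out + brightness, 0.0, 1.0)    # brightness shift, re-clipped
    return out

# A dark, partially occluded patch is brightened before detection.
patch = np.full((4, 4, 3), 0.2)
enhanced = apply_filter_combination(patch, gamma=0.5, contrast=1.2, brightness=0.05)
```

In the paper's pipeline the parameters would be produced per image by the learned module rather than fixed by hand, and the enhanced image would be fed to the Yolov8n detector.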
"Effective methods for mitigate the impact of light occlusion on the accuracy of online cabbage recognition in open fields", Artificial Intelligence in Agriculture 15(3), pp. 449–458.
Pub Date: 2025-04-10 | DOI: 10.1016/j.aiia.2025.04.003
Shi Yinyan, Zhu Yangxu, Wang Xiaochan, Zhang Xiaolei, Zheng Enlai, Zhang Yongnian
Environmental impacts and economic demands are driving the development of variable rate fertilization (VRF) technology for precision agriculture. Despite the advantages of a simple structure, low cost and high efficiency, uneven spreading uniformity is becoming a key factor restricting the application of centrifugal fertilizer spreaders. Accordingly, the particle application characteristics and variation laws of centrifugal VRF spreaders under multi-pass overlapped spreading urgently need to be explored in order to improve their distribution uniformity and working accuracy. In this study, the working performance of a self-developed centrifugal VRF spreader, based on real-time growth information of rice and wheat, was investigated using the collection-tray test methods prescribed in ISO 5690 and ASAE S341.2. The coefficient of variation (CV) was calculated by weighing the fertilizer mass in standard pans to evaluate the distribution uniformity of spreading patterns. The results showed that the effective application widths were 21.05, 22.58 and 23.67 m for application rates of 225, 300 and 375 kg/ha, respectively. The actual application rates under multi-pass overlapped spreading were generally higher than the target rates, and the particle distribution CVs within the effective spreading widths were 11.51, 9.25 and 11.28 % for the respective target rates. Field tests of multi-pass overlapped spreading showed an average difference between actual and target application of 4.54 %, and an average particle distribution CV within the operating width of 11.94 %, meeting the operation requirements for particle transverse distribution of centrifugal fertilizer spreaders. The results of this study provide a theoretical reference for the technical innovation and development of centrifugal VRF spreaders and are of great practical and social significance for accelerating their application in precision agriculture.
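The distribution-uniformity metric above, the coefficient of variation of the fertilizer mass collected in standard pans, can be computed directly (the tray masses below are hypothetical, not data from the study):

```python
import numpy as np

def distribution_cv(tray_masses):
    """Coefficient of variation (%) of fertilizer mass collected in trays.

    Sample standard deviation (ddof=1) divided by the mean, as a percentage;
    lower CV means a more uniform transverse spreading pattern.
    """
    masses = np.asarray(tray_masses, dtype=float)
    return 100.0 * masses.std(ddof=1) / masses.mean()

# Hypothetical masses (g) collected across the operating width.
cv = distribution_cv([10.2, 9.8, 11.0, 10.5, 9.5])
```

A CV around 11–12 %, as reported in the field tests, indicates the spread pattern met the transverse-distribution requirements for centrifugal spreaders.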
"Assessing particle application in multi-pass overlapping scenarios with variable rate centrifugal fertilizer spreaders for precision agriculture", Artificial Intelligence in Agriculture 15(3), pp. 395–406.
Pub Date: 2025-04-08 | DOI: 10.1016/j.aiia.2025.03.006
Yuqing Yang , Chengguo Xu , Wenhao Hou , Alan G. McElligott , Kai Liu , Yueju Xue
Nursing behaviour and the calling-to-nurse sound are crucial indicators for assessing sow maternal behaviour and nursing status. However, accurately identifying these behaviours for individual sows in complex indoor pig housing is challenging due to factors such as variable lighting, rail obstructions, and interference from other sows' calls. Multimodal fusion, which integrates audio and visual data, has proven to be an effective approach for improving accuracy and robustness in complex scenarios. In this study, we designed an audio-visual data acquisition system that includes a camera for synchronised audio and video capture, along with a custom-developed sound source localisation system that leverages a sound sensor to track sound direction. Specifically, we proposed a novel transformer-based audio-visual multimodal fusion (TMF) framework for recognising fine-grained sow nursing behaviour with or without the calling-to-nurse sound. Initially, a unimodal self-attention enhancement (USE) module was employed to augment video and audio features with global contextual information. Subsequently, we developed an audio-visual interaction enhancement (AVIE) module to compress relevant information and reduce noise using the information bottleneck principle. Moreover, we presented an adaptive dynamic decision fusion strategy to optimise the model's performance by focusing on the most relevant features in each modality. Finally, we comprehensively identified fine-grained nursing behaviours by integrating audio and fused information, while incorporating angle information from the real-time sound source localisation system to accurately determine whether the sound cues originate from the target sow. Our results demonstrate that the proposed method achieves an accuracy of 98.42 % for general sow nursing behaviour and 94.37 % for fine-grained nursing behaviour, including nursing with and without the calling-to-nurse sound, and non-nursing behaviours. 
This fine-grained nursing information can provide a more nuanced understanding of the sow's health and lactation willingness, thereby enhancing management practices in pig farming.
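The adaptive decision-fusion step described above weights each modality's prediction before combining them. A minimal sketch, assuming gated softmax weighting over per-modality class logits (the gate scores and logits are illustrative placeholders for the learned TMF outputs):

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D array."""
    e = np.exp(x - np.max(x))
    return e / e.sum()

def adaptive_decision_fusion(audio_logits, visual_logits, gate_scores):
    """Fuse per-modality class logits with learned gate scores.

    `gate_scores` (one scalar per modality) stand in for the output of the
    adaptive fusion gate; softmax turns them into convex fusion weights.
    """
    w_audio, w_visual = softmax(np.asarray(gate_scores, dtype=float))
    fused = w_audio * np.asarray(audio_logits) + w_visual * np.asarray(visual_logits)
    return softmax(fused)  # class probabilities after fusion

# Three classes, e.g. nursing with call, nursing without call, non-nursing.
probs = adaptive_decision_fusion([2.0, 0.1, -1.0], [1.5, 0.3, -0.5],
                                 gate_scores=[0.2, 0.8])
```

In the paper the gate would emphasise whichever modality is more reliable for the current clip, e.g. down-weighting audio when the sound source localisation angle indicates the call came from a neighbouring sow.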
"Transformer-based audio-visual multimodal fusion for fine-grained recognition of individual sow nursing behaviour", Artificial Intelligence in Agriculture 15(3), pp. 363–376.
Pub Date: 2025-04-05 | DOI: 10.1016/j.aiia.2025.04.001
Josué Kpodo , A. Pouyan Nejadhashemi
Agricultural Extension (AE) research faces significant challenges in producing relevant and practical knowledge due to rapid advancements in artificial intelligence (AI). AE struggles to keep pace with these advancements, complicating the development of actionable information. One major challenge is the absence of intelligent platforms that enable efficient information retrieval and quick decision-making. Investigations have shown a shortage of AI-assisted solutions that effectively use AE materials across various media formats while preserving scientific accuracy and contextual relevance. Although mainstream AI systems can potentially reduce decision-making risks, their usage remains limited. This limitation arises primarily from the lack of standardized datasets and concerns regarding user data privacy. For AE datasets to be standardized, they must satisfy four key criteria: inclusion of critical domain-specific knowledge, expert curation, consistent structure, and acceptance by peers. Addressing data privacy issues involves adhering to open-access principles and enforcing strict data encryption and anonymization standards. To address these gaps, a conceptual framework is introduced. This framework extends beyond typical user-oriented platforms and comprises five core modules. It features a neurosymbolic pipeline integrating large language models with physically based agricultural modeling software, further enhanced by Reinforcement Learning from Human Feedback. Notable aspects of the framework include a dedicated human-in-the-loop process and a governance structure consisting of three primary bodies focused on data standardization, ethics and security, and accountability and transparency. Overall, this work represents a significant advancement in agricultural knowledge systems, potentially transforming how AE services deliver critical information to farmers and other stakeholders.
"Navigating challenges/opportunities in developing smart agricultural extension platforms: Multi-media data mining techniques", Artificial Intelligence in Agriculture 15(3), pp. 426–448.
Pub Date: 2025-04-04 | DOI: 10.1016/j.aiia.2025.03.008
Rabiu Aminu , Samantha M. Cook , David Ljungberg , Oliver Hensel , Abozar Nasirahmadi
To reduce damage caused by insect pests, farmers use insecticides to protect produce from crop pests. This practice leads to high synthetic chemical usage because a large portion of the applied insecticide does not reach its intended target; instead, it may affect non-target organisms and pollute the environment. One approach to mitigating this is the selective application of insecticides to only those crop plants (or patches of plants) where the insect pests are located, avoiding non-targets and beneficials. The first step is the identification of insects on plants and discrimination between pests and beneficial non-targets. However, detecting small individual insects is challenging with image-based machine learning techniques, especially in natural field settings. This paper proposes a method based on explainable artificial intelligence feature selection and machine learning to detect pests and beneficial insects in field crops. An insect-plant dataset reflecting real field conditions was created. It comprises two pest insects, the Colorado potato beetle (CPB, Leptinotarsa decemlineata) and the green peach aphid (Myzus persicae), and the beneficial seven-spot ladybird (Coccinella septempunctata). The specialist herbivore CPB was imaged only on potato plants (Solanum tuberosum), while green peach aphids and seven-spot ladybirds were imaged on three crops: potato, faba bean (Vicia faba), and sugar beet (Beta vulgaris subsp. vulgaris). This increased dataset diversity, broadening the potential application of the developed method for discriminating between pests and beneficial insects in several crops. The insects were imaged in both laboratory and outdoor settings. Using the GrabCut algorithm, regions of interest in the image were identified before shape, texture, and colour features were extracted from the segmented regions. The concept of explainable artificial intelligence was adopted by incorporating permutation feature importance ranking and Shapley Additive exPlanations (SHAP) values to identify the feature set that optimized a model's performance while reducing computational complexity. The proposed explainable feature selection method was compared to conventional techniques, including mutual information, chi-square coefficient, maximal information coefficient, Fisher separation criterion and variance thresholding. Results showed improved accuracy (92.62 % Random forest, 90.16 % Support vector machine, 83.61 % K-nearest neighbours, and 81.97 % Naïve Bayes) and a reduction in the number of model parameters and memory usage (7.22 × 10⁷ Random forest, 6.23 × 10³ Support vector machine, 3.64 × 10⁴ K-nearest neighbours and 1.88 × 10² Naïve Bayes) compared to using all features. Prediction and training times were also reduced by approxima
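The permutation-importance ranking mentioned above measures how much a model's score drops when one feature column is shuffled. A minimal numpy sketch (the toy model and data are illustrative, not from the study):

```python
import numpy as np

def permutation_importance(model_score, X, y, n_repeats=10, rng=None):
    """Mean drop in score when each feature column is shuffled.

    `model_score(X, y)` returns an accuracy-like score for an already
    fitted model; higher is better.  A large drop for feature j means
    the model relies on feature j.
    """
    rng = np.random.default_rng(rng)
    baseline = model_score(X, y)
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])  # break the feature/target association
            drops.append(baseline - model_score(Xp, y))
        importances[j] = np.mean(drops)
    return importances

# Toy "model": predicts the class from feature 0 only, so only feature 0 matters.
X = np.array([[0.0, 5.0], [1.0, 3.0], [0.0, 8.0], [1.0, 1.0]] * 10)
y = X[:, 0].copy()
score = lambda X_, y_: np.mean((X_[:, 0] > 0.5) == (y_ > 0.5))
imp = permutation_importance(score, X, y, rng=0)
```

Features whose shuffling barely changes the score (here, feature 1) can be dropped, which is how the paper shrinks model size and memory usage without losing accuracy.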
"Improving the performance of machine learning algorithms for detection of individual pests and beneficial insects using feature selection techniques", Artificial Intelligence in Agriculture 15(3), pp. 377–394.
Pub Date: 2025-04-04 | DOI: 10.1016/j.aiia.2025.03.007
Jiang Pin , Tingfeng Guo , Minzi Xv , Xiangjun Zou , Wenwu Hu
This paper presents the design, algorithm development, and experimental verification of a LiDAR-based precision spray perception system, addressing the low navigation-line extraction accuracy of self-propelled sprayers during field operations, which causes wheels to roll over ridges and wastes pesticide. A data processing framework was established for the perception system, performing data preprocessing, adaptive segmentation of crops and ditches, navigation-line extraction, and crop positioning, all derived from the original LiDAR point clouds. Data collection and analysis in cabbage fields across different growth cycles were conducted to verify the stability of the precision spraying system. A controllable constant-speed experimental setup was established to compare the performance of LiDAR and a depth camera in the same field environment. The experimental results show that at sprayer speeds of 0.5 and 1 m s⁻¹, the maximum lateral error is 0.112 m in a cabbage ridge environment with inter-row weeds, with a mean absolute lateral error of 0.059 m. The processing time per frame does not exceed 43 ms. Compared to the machine vision algorithm, this method reduces the average processing time by 122 ms. The proposed system demonstrates superior accuracy, processing time, and robustness in crop identification and navigation-line extraction compared to the machine vision system.
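Once crop positions are segmented from the point cloud, a navigation line can be fitted through the row and the sprayer's lateral error measured against it. A minimal sketch, assuming a straight-row least-squares fit (the function names, centroid values, and vehicle pose are illustrative, not the paper's algorithm):

```python
import numpy as np

def fit_navigation_line(centroids):
    """Least-squares line x = a*y + b through crop-row centroids.

    x is the lateral coordinate and y the forward (along-track) coordinate;
    fitting x as a function of y keeps the fit stable for near-vertical rows.
    """
    pts = np.asarray(centroids, dtype=float)
    a, b = np.polyfit(pts[:, 1], pts[:, 0], 1)
    return a, b

def lateral_error(a, b, vehicle_x=0.0, vehicle_y=0.0):
    """Signed lateral offset (m) of the vehicle from the navigation line."""
    return (a * vehicle_y + b - vehicle_x) / np.hypot(a, 1.0)

# Hypothetical cabbage-row centroids (x lateral m, y forward m).
a, b = fit_navigation_line([(0.10, 0.0), (0.12, 1.0), (0.11, 2.0), (0.13, 3.0)])
err = lateral_error(a, b)
```

The reported lateral errors (0.059 m mean absolute, 0.112 m maximum) would correspond to `err` evaluated frame by frame as the sprayer advances.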
"Fast extraction of navigation line and crop position based on LiDAR for cabbage crops", Artificial Intelligence in Agriculture 15(4), pp. 686–695.
Pub Date: 2025-03-22 | DOI: 10.1016/j.aiia.2025.03.004
Srishti Vishwakarma , Xin Zhang , Vyacheslav Lyubchich
Sudden reductions in crop yield (i.e., yield shocks) severely disrupt the food supply, intensify food insecurity, depress farmers' welfare, and worsen a country's economic conditions. Here, we study the spatiotemporal patterns of wheat yield shocks, quantified by the lower quantiles of yield fluctuations, in 86 countries over 30 years. Furthermore, we assess the relationships between shocks and their key ecological and socioeconomic drivers using quantile regression based on statistical (linear quantile mixed model) and machine learning (quantile random forest) models. Using a panel dataset that captures spatiotemporal patterns of yield shocks and possible drivers in 86 countries, we find that the severity of yield shocks has been increasing globally since 1997. Moreover, our cross-validation exercise shows that quantile random forest outperforms the linear quantile regression model. Despite this performance difference, both models consistently reveal that the severity of shocks is associated with higher weather stress, nitrogen fertilizer application rate, and gross domestic product (GDP) per capita (a typical indicator of economic and technological advancement in a country). While the unexpected negative association between more severe wheat yield shocks and higher fertilizer application rate and GDP per capita does not imply a direct causal effect, it indicates that advancement in wheat production has focused primarily on achieving higher yields and less on lowering the possibility and magnitude of sharp yield reductions. Hence, in the context of growing extreme weather stress, there is a critical need to enhance the technology and management practices that mitigate yield shocks to improve the resilience of the world's food systems.
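Quantile regression fits the lower tail of yield fluctuations by minimising the pinball (quantile) loss, and a shock can be flagged when a fluctuation falls below the fitted lower quantile. A minimal sketch of both ingredients (the fluctuation values and the 0.25 quantile level are illustrative, not the study's data or threshold):

```python
import numpy as np

def pinball_loss(y_true, y_pred, tau):
    """Pinball loss minimised by quantile-regression models.

    Under-predictions are weighted by tau and over-predictions by (1 - tau),
    so minimising it targets the tau-th conditional quantile.
    """
    u = np.asarray(y_true, dtype=float) - np.asarray(y_pred, dtype=float)
    return np.mean(np.maximum(tau * u, (tau - 1.0) * u))

def yield_shock_threshold(fluctuations, tau=0.05):
    """Empirical lower quantile of yield fluctuations used to flag shocks."""
    return np.quantile(fluctuations, tau)

# Hypothetical detrended yield fluctuations (t/ha) for one country.
fluct = np.array([0.1, -0.05, 0.2, -0.6, 0.05, 0.0, -0.4, 0.15])
threshold = yield_shock_threshold(fluct, tau=0.25)
shock_years = fluct < threshold
```

Both the linear quantile mixed model and the quantile random forest in the study estimate such conditional lower quantiles as functions of the ecological and socioeconomic drivers, rather than the unconditional quantile shown here.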
"Unveiling the drivers contributing to global wheat yield shocks through quantile regression". Srishti Vishwakarma, Xin Zhang, Vyacheslav Lyubchich. DOI: 10.1016/j.aiia.2025.03.004. Artificial Intelligence in Agriculture, vol. 15, no. 3, pp. 564-572.
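Yield shocks above are quantified by lower quantiles of yield fluctuations. A minimal, self-contained sketch (simulated data, not the paper's panel dataset) of how a lower quantile arises as the minimizer of the pinball loss that underlies quantile regression:

```python
import numpy as np

def pinball_loss(y, q_hat, tau):
    """Average pinball (quantile) loss of a scalar estimate q_hat at level tau."""
    diff = y - q_hat
    return np.mean(np.where(diff >= 0, tau * diff, (tau - 1) * diff))

rng = np.random.default_rng(0)
yields = rng.normal(0.0, 1.0, 10_000)  # simulated yield fluctuations

tau = 0.1  # a lower quantile, as used to quantify yield shocks
grid = np.linspace(yields.min(), yields.max(), 2001)
losses = [pinball_loss(yields, g, tau) for g in grid]
q_shock = grid[int(np.argmin(losses))]

# The pinball-loss minimizer approximates the empirical tau-quantile.
print(q_shock, np.quantile(yields, tau))
```

Quantile regression models (linear quantile mixed models, quantile random forests) generalize this: they minimize the same loss while letting the quantile depend on covariates such as weather stress or fertilizer rate.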
Pub Date: 2025-03-22. DOI: 10.1016/j.aiia.2025.03.005
Alberto Carraro , Mattia Pravato , Francesco Marinello , Francesco Bordignon , Angela Trocino , Gerolamo Xiccato , Andrea Pezzuolo
Precision Livestock Farming (PLF) has emerged as a promising approach to revolutionising farming by enabling real-time automated monitoring of animals through smart technologies. PLF provides farmers with precise data to enhance farm management, increasing productivity and profitability. For instance, it allows for non-intrusive health assessments, helping maintain a healthy flock while reducing the stress associated with handling. In the poultry sector, image analysis can be used to monitor and analyse the behaviour of each hen in real time. Researchers have recently used machine learning algorithms to monitor the behaviour, health, and positioning of hens through computer vision techniques. Convolutional neural networks, a type of deep learning algorithm, have been used for image analysis to identify and categorise hen behaviours and to track specific activities such as feeding and drinking. This research presents an automated system for analysing laying hen movement using video footage from surveillance cameras. With a customised implementation of object tracking, the system can efficiently process hundreds of hours of video while maintaining high measurement precision. Its modular implementation adapts to the hardware platform it runs on, exploiting available GPU computing capabilities. The system benefits both real-time monitoring and post-processing, contributing to improved monitoring capabilities in precision livestock farming.
"A new tool to improve the computation of animal kinetic activity indices in precision poultry farming". Alberto Carraro, Mattia Pravato, Francesco Marinello, Francesco Bordignon, Angela Trocino, Gerolamo Xiccato, Andrea Pezzuolo. DOI: 10.1016/j.aiia.2025.03.005. Artificial Intelligence in Agriculture, vol. 15, no. 4, pp. 659-670.
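The paper's kinetic activity index is not specified in the abstract; a hypothetical, minimal version computed from per-frame centroid tracks (the kind of output an object tracker emits) could look like the following. The function name, frame rate, and toy tracks are all assumptions for illustration:

```python
import numpy as np

def activity_index(track, fps=25.0):
    """Mean speed (pixels/s) from an (N, 2) array of per-frame centroids."""
    # per-frame displacement between consecutive centroids
    steps = np.linalg.norm(np.diff(track, axis=0), axis=1)
    return steps.sum() / (len(track) - 1) * fps

# two toy tracks: a hen walking a circle and a hen at rest
t = np.linspace(0, 2 * np.pi, 251)
moving = np.stack([50 * np.cos(t), 50 * np.sin(t)], axis=1)
resting = np.tile([100.0, 100.0], (251, 1))

print(activity_index(moving), activity_index(resting))
```

Summing displacements per tracked identity is what makes batch post-processing of hundreds of hours of footage cheap once the tracker has run.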
Pub Date: 2025-03-17. DOI: 10.1016/j.aiia.2025.03.001
Laura Gómez-Zamanillo , Arantza Bereciartúa-Pérez , Artzai Picón , Liliana Parra , Marian Oldenbuerger , Ramón Navarra-Mestre , Christian Klukas , Till Eggers , Jone Echazarra
Image-based and, more recently, deep learning-based systems have delivered good results in several applications. Greenhouse trials are a key part of developing and testing new herbicides and of analysing species' responses to different products and doses in a controlled way. In all trials, plant damage is assessed daily by visual expert evaluation, a time-consuming process that lacks repeatability. Greenhouse trials therefore require new digital tools that reduce this manual effort and give experts more objective and repeatable methods for quantifying plant damage.
To this end, a novel method is proposed, composed of an initial segmentation of the plant species followed by a multibranch convolutional neural network that estimates the damage level. In this way, we avoid the need for costly, impractical pixelwise manual segmentation of damage symptoms and instead make use of the global damage estimates provided by the experts.
The algorithm has been deployed under real greenhouse trial conditions in a pilot study at BASF in Germany and tested on four species (GLXMA, TRZAW, ECHCG, AMARE). The results show mean absolute error (MAE) values ranging from 5.20 for AMARE to 8.07 for ECHCG in the estimation of the PDCU value, with correlation values (R²) higher than 0.85 in all cases and up to 0.92 for AMARE. These results surpass the inter-rater variability of human experts, demonstrating that the proposed automated method is appropriate for automatically assessing greenhouse damage trials.
"Digitalizing greenhouse trials: An automated approach for efficient and objective assessment of plant damage using deep learning". Laura Gómez-Zamanillo, Arantza Bereciartúa-Pérez, Artzai Picón, Liliana Parra, Marian Oldenbuerger, Ramón Navarra-Mestre, Christian Klukas, Till Eggers, Jone Echazarra. DOI: 10.1016/j.aiia.2025.03.001. Artificial Intelligence in Agriculture, vol. 15, no. 2, pp. 280-295.
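The MAE and R² figures quoted above can be reproduced for any predicted-versus-expert score pair with a few lines of NumPy; the toy PDCU-like scores below are invented for illustration, not taken from the trial:

```python
import numpy as np

def mae(y_true, y_pred):
    """Mean absolute error."""
    return np.mean(np.abs(y_true - y_pred))

def r2(y_true, y_pred):
    """Coefficient of determination (1 - SS_res / SS_tot)."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return 1.0 - ss_res / ss_tot

# invented expert scores vs. model estimates on a 0-100 damage scale
y_true = np.array([10.0, 40.0, 70.0, 95.0])
y_pred = np.array([12.0, 38.0, 72.0, 90.0])

print(round(mae(y_true, y_pred), 2), round(r2(y_true, y_pred), 3))  # → 2.75 0.991
```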
Pub Date: 2025-03-17. DOI: 10.1016/j.aiia.2025.03.003
Mehdi Fasihi , Mirko Sodini , Alex Falcon , Francesco Degano , Paolo Sivilotti , Giuseppe Serra
Predicting grapevine phenological stages (GPHS) is critical for precisely managing vineyard operations, including plant disease treatments, pruning, and harvest. Solutions commonly used to address viticulture challenges rely on image processing techniques, which have achieved significant results. However, they require the installation of dedicated hardware in the vineyard, making them invasive and difficult to maintain. Moreover, accurate prediction is influenced by the interplay of climatic factors, especially temperature, and by the impact of global warming, which are difficult to model from images. Another problem frequently found in GPHS prediction is the persistent issue of missing values in viticultural datasets, particularly in the phenological stages. This paper proposes a semi-supervised approach that begins with a small set of labeled phenological stage examples and automatically generates new annotations for large volumes of unlabeled climatic data. This approach aims to address key challenges in phenological analysis. This novel climatic data-based approach offers advantages over common image processing methods, as it is non-intrusive, cost-effective, and adaptable to vineyards of various sizes and technological levels. To ensure the robustness of the proposed pseudo-labeling strategy, we integrated it into eight machine-learning algorithms. We evaluated its performance across seven diverse datasets, each with a different percentage of missing values. Performance metrics, including the coefficient of determination (R²) and root-mean-square error (RMSE), were employed to assess the effectiveness of the models. The study demonstrates that integrating the proposed pseudo-labeling strategy with supervised learning approaches significantly improves predictive accuracy. Moreover, the study shows that the proposed methodology can also be integrated with explainable artificial intelligence techniques to determine the importance of the input features.
In particular, the investigation highlights that growing degree days are crucial for improved GPHS prediction.
"Boosting grapevine phenological stages prediction based on climatic data by pseudo-labeling approach". Mehdi Fasihi, Mirko Sodini, Alex Falcon, Francesco Degano, Paolo Sivilotti, Giuseppe Serra. DOI: 10.1016/j.aiia.2025.03.003. Artificial Intelligence in Agriculture, vol. 15, no. 3, pp. 550-563.
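A minimal self-training sketch of the pseudo-labeling idea, using a toy linear model on a single simulated climatic feature (standing in for predictors such as growing degree days). This illustrates the general strategy only, not the paper's implementation, which applies it to eight learners with confidence handling:

```python
import numpy as np

rng = np.random.default_rng(1)

# toy climatic feature (e.g. growing degree days) -> phenological stage score
def make_data(n):
    x = rng.uniform(0, 1, n)
    y = 2.0 * x + 0.5 + rng.normal(0, 0.05, n)
    return x, y

x_lab, y_lab = make_data(10)    # few labeled phenological examples
x_unlab, _ = make_data(500)     # many unlabeled climatic records

def fit(x, y):
    """Least-squares fit of y = a*x + b; returns (a, b)."""
    A = np.stack([x, np.ones_like(x)], axis=1)
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

# step 1: train a base model on the small labeled set
coef = fit(x_lab, y_lab)
# step 2: pseudo-label the unlabeled records with the base model
pseudo = coef[0] * x_unlab + coef[1]
# step 3: retrain on labeled + pseudo-labeled data
coef2 = fit(np.concatenate([x_lab, x_unlab]),
            np.concatenate([y_lab, pseudo]))
print(coef2)
```

The retrained model recovers the underlying relationship despite the tiny labeled set; in practice a confidence filter decides which pseudo-labels are kept between steps 2 and 3.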