
Artificial Intelligence in Agriculture: Latest Publications

Effective methods for mitigate the impact of light occlusion on the accuracy of online cabbage recognition in open fields
IF 8.2 Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date: 2025-04-11 DOI: 10.1016/j.aiia.2025.04.002
Hao Fu , Xueguan Zhao , Haoran Tan , Shengyu Zheng , Changyuan Zhai , Liping Chen
To address the low recognition accuracy of open-field vegetables under light occlusion, this study focused on cabbage and developed an online target recognition model based on deep learning. Using Yolov8n as the base network, a method was proposed to mitigate the impact of light occlusion on the accuracy of online cabbage recognition. A combination of cabbage image filters was designed to eliminate the effects of light occlusion, and an adaptive learning module was constructed for the filter parameters. The image filter combination and adaptive learning module were embedded into the Yolov8n object detection network, enabling precise real-time recognition of cabbage under light occlusion. Experimental results showed recognition accuracies of 97.5 % on the normal lighting dataset, 93.1 % on the light occlusion dataset, and 95.0 % on the mixed dataset. For images with a light occlusion degree greater than 0.4, recognition accuracy improved by 9.9 % and 13.7 % over the Yolov5n and Yolov8n models, respectively. The model achieved recognition accuracies of 99.3 % on the Chinese cabbage dataset and 98.3 % on the broccoli dataset. Deployed on an Nvidia Jetson Orin NX edge computing device, the model achieved an image processing speed of 26.32 frames per second. Field trials showed recognition accuracies of 96.0 % under normal lighting and 91.2 % under light occlusion. The proposed online cabbage recognition model enables real-time recognition and localization of cabbage in complex open-field environments, offering technical support for target-oriented spraying.
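The filter-combination idea can be illustrated with a minimal sketch. In the paper the filter parameters are predicted per image by a learned module inside the Yolov8n network; here the function name, parameter values, and filter choice (gamma plus contrast) are hypothetical, showing only how such a chain might normalize an over-exposed, light-occluded patch before detection.

```python
import numpy as np

def apply_filters(img, gamma, contrast):
    """Apply a simple image-filter chain (hypothetical parameters; in the
    paper's pipeline these would be predicted by an adaptive module)."""
    out = np.clip(img, 0.0, 1.0) ** gamma                    # gamma correction
    out = np.clip((out - 0.5) * contrast + 0.5, 0.0, 1.0)    # contrast stretch
    return out

# Simulated over-exposed patch under strong light (intensities near 1.0).
img = np.full((4, 4), 0.9)
restored = apply_filters(img, gamma=1.8, contrast=1.2)
print(restored[0, 0])
```

A detector would then run on `restored` instead of the raw frame; making the filters differentiable is what lets the parameter-prediction module be trained end to end with the detection loss.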
{"title":"Effective methods for mitigate the impact of light occlusion on the accuracy of online cabbage recognition in open fields","authors":"Hao Fu ,&nbsp;Xueguan Zhao ,&nbsp;Haoran Tan ,&nbsp;Shengyu Zheng ,&nbsp;Changyuan Zhai ,&nbsp;Liping Chen","doi":"10.1016/j.aiia.2025.04.002","DOIUrl":"10.1016/j.aiia.2025.04.002","url":null,"abstract":"<div><div>To address the low recognition accuracy of open-field vegetables under light occlusion, this study focused on cabbage and developed an online target recognition model based on deep learning. Using Yolov8n as the base network, a method was proposed to mitigate the impact of light occlusion on the accuracy of online cabbage recognition. A combination of cabbage image filters was designed to eliminate the effects of light occlusion. A filter parameter adaptive learning module for cabbage image filter parameters was constructed. The image filter combination and adaptive learning module were embedded into the Yolov8n object detection network. This integration enabled precise real-time recognition of cabbage under light occlusion conditions. Experimental results showed recognition accuracies of 97.5 % on the normal lighting dataset, 93.1 % on the light occlusion dataset, and 95.0 % on the mixed dataset. For images with a light occlusion degree greater than 0.4, the recognition accuracy improved by 9.9 % and 13.7 % compared to Yolov5n and Yolov8n models. The model achieved recognition accuracies of 99.3 % on the Chinese cabbage dataset and 98.3 % on the broccoli dataset. The model was deployed on an Nvidia Jetson Orin NX edge computing device, achieving an image processing speed of 26.32 frames per second. Field trials showed recognition accuracies of 96.0 % under normal lighting conditions and 91.2 % under light occlusion. 
The proposed online cabbage recognition model enables real-time recognition and localization of cabbage in complex open-field environments, offering technical support for target-oriented spraying.</div></div>","PeriodicalId":52814,"journal":{"name":"Artificial Intelligence in Agriculture","volume":"15 3","pages":"Pages 449-458"},"PeriodicalIF":8.2,"publicationDate":"2025-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143855790","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Assessing particle application in multi-pass overlapping scenarios with variable rate centrifugal fertilizer spreaders for precision agriculture
IF 8.2 Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date: 2025-04-10 DOI: 10.1016/j.aiia.2025.04.003
Shi Yinyan, Zhu Yangxu, Wang Xiaochan, Zhang Xiaolei, Zheng Enlai, Zhang Yongnian
Environmental impacts and economic demands are driving the development of variable rate fertilization (VRF) technology for precision agriculture. Despite the advantages of a simple structure, low cost and high efficiency, uneven fertilizer-spreading uniformity is becoming a key factor restricting the application of centrifugal fertilizer spreaders. Accordingly, the particle application characteristics and variation laws of centrifugal VRF spreaders with multi-pass overlapped spreading need to be urgently explored in order to improve their distribution uniformity and working accuracy. In this study, the working performance of a self-developed centrifugal VRF spreader, based on real-time growth information of rice and wheat, was investigated and tested using the collection trays prescribed in ISO 5690 and ASAE S341.2. The coefficient of variation (CV) was calculated by weighing the fertilizer mass in standard pans to evaluate the distribution uniformity of spreading patterns. The results showed that the effective application widths were 21.05, 22.58 and 23.67 m for application rates of 225, 300 and 375 kg/ha, respectively. The actual application rates of multi-pass overlapped spreading were generally higher than the target rates, and the particle distribution CVs within the effective spreading widths were 11.51, 9.25 and 11.28 % for the respective target rates. Field tests of multi-pass overlapped spreading showed that the average difference between the actual and target application was 4.54 %, and the average particle distribution CV within the operating width was 11.94 %, which met the operation requirements for particle transverse distribution of centrifugal fertilizer spreaders. The results of this study provide a theoretical reference for the technical innovation and development of centrifugal VRF spreaders and are of great practical and social significance for accelerating their application in precision agriculture.
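The uniformity metric used above, the coefficient of variation (CV), is computed directly from the collected tray masses. A minimal sketch with made-up masses (the values are illustrative, not from the study):

```python
import numpy as np

# Hypothetical fertilizer masses (g) weighed from collection trays
# placed across the operating width, per the ISO 5690 tray protocol.
tray_masses = np.array([12.1, 11.8, 13.0, 12.5, 10.9, 12.2, 11.5, 12.8])

# CV (%) = sample standard deviation / mean * 100.
# A lower CV indicates a more uniform transverse spread pattern.
cv = tray_masses.std(ddof=1) / tray_masses.mean() * 100.0
print(f"CV = {cv:.2f} %")
```

Against the paper's numbers, a CV around 10-12 % within the operating width is what the authors report as meeting transverse-distribution requirements for centrifugal spreaders.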
{"title":"Assessing particle application in multi-pass overlapping scenarios with variable rate centrifugal fertilizer spreaders for precision agriculture","authors":"Shi Yinyan,&nbsp;Zhu Yangxu,&nbsp;Wang Xiaochan,&nbsp;Zhang Xiaolei,&nbsp;Zheng Enlai,&nbsp;Zhang Yongnian","doi":"10.1016/j.aiia.2025.04.003","DOIUrl":"10.1016/j.aiia.2025.04.003","url":null,"abstract":"<div><div>Environmental impacts and economic demands are driving the development of variable rate fertilization (VRF) technology for precision agriculture. Despite the advantages of a simple structure, low cost and high efficiency, uneven fertilizer-spreading uniformity is becoming a key factor restricting the application of centrifugal fertilizer spreaders. Accordingly, the particle application characteristics and variation laws for centrifugal VRF spreaders with multi-pass overlapped spreading needs to be urgently explored, in order to improve their distribution uniformity and working accuracy. In this study, the working performance of a self-developed centrifugal VRF spreader, based on real-time growth information of rice and wheat, was investigated and tested through the test methods of using the collection trays prescribed in ISO 5690 and ASAE S341.2. The coefficient of variation (CV) was calculated by weighing the fertilizer mass in standard pans, in order to evaluate the distribution uniformity of spreading patterns. The results showed that the effective application widths were 21.05, 22.58 and 23.67 m for application rates of 225, 300 and 375 kg/ha, respectively. The actual fertilizer application rates of multi-pass overlapped spreading were generally higher than the target rates, as well as the particle distribution CVs within the effective spreading widths were 11.51, 9.25 and 11.28 % for the respective target rates. 
Field test results for multi-pass overlapped spreading showed that the average difference between the actual and target application was 4.54 %, as well as the average particle distribution CV within the operating width was 11.94 %, which met the operation requirements of particle transverse distribution for centrifugal fertilizer spreaders. The results and findings of this study provide a theoretical reference for technical innovation and development of centrifugal VRF spreaders and are of great practical and social significance for accelerating their application in implementing precision agriculture.</div></div>","PeriodicalId":52814,"journal":{"name":"Artificial Intelligence in Agriculture","volume":"15 3","pages":"Pages 395-406"},"PeriodicalIF":8.2,"publicationDate":"2025-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143835179","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Transformer-based audio-visual multimodal fusion for fine-grained recognition of individual sow nursing behaviour
IF 8.2 Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date: 2025-04-08 DOI: 10.1016/j.aiia.2025.03.006
Yuqing Yang , Chengguo Xu , Wenhao Hou , Alan G. McElligott , Kai Liu , Yueju Xue
Nursing behaviour and the calling-to-nurse sound are crucial indicators for assessing sow maternal behaviour and nursing status. However, accurately identifying these behaviours for individual sows in complex indoor pig housing is challenging due to factors such as variable lighting, rail obstructions, and interference from other sows' calls. Multimodal fusion, which integrates audio and visual data, has proven to be an effective approach for improving accuracy and robustness in complex scenarios. In this study, we designed an audio-visual data acquisition system that includes a camera for synchronised audio and video capture, along with a custom-developed sound source localisation system that leverages a sound sensor to track sound direction. Specifically, we proposed a novel transformer-based audio-visual multimodal fusion (TMF) framework for recognising fine-grained sow nursing behaviour with or without the calling-to-nurse sound. Initially, a unimodal self-attention enhancement (USE) module was employed to augment video and audio features with global contextual information. Subsequently, we developed an audio-visual interaction enhancement (AVIE) module to compress relevant information and reduce noise using the information bottleneck principle. Moreover, we presented an adaptive dynamic decision fusion strategy to optimise the model's performance by focusing on the most relevant features in each modality. Finally, we comprehensively identified fine-grained nursing behaviours by integrating audio and fused information, while incorporating angle information from the real-time sound source localisation system to accurately determine whether the sound cues originate from the target sow. Our results demonstrate that the proposed method achieves an accuracy of 98.42 % for general sow nursing behaviour and 94.37 % for fine-grained nursing behaviour, including nursing with and without the calling-to-nurse sound, and non-nursing behaviours. This fine-grained nursing information can provide a more nuanced understanding of the sow's health and lactation willingness, thereby enhancing management practices in pig farming.
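The adaptive decision-fusion idea can be sketched in miniature. The paper's strategy is learned; the version below is a hand-rolled, hypothetical stand-in that weights each modality's class probabilities by that modality's own prediction confidence, just to show the shape of a late decision fusion.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Hypothetical per-modality class logits for three classes:
# (nursing with call, nursing without call, non-nursing).
audio_logits = np.array([2.0, 0.5, -1.0])
video_logits = np.array([1.2, 1.5, -0.5])

# Confidence-weighted decision fusion: each modality's vote is scaled
# by its maximum softmax probability, then the votes are averaged.
w_a = softmax(audio_logits).max()
w_v = softmax(video_logits).max()
fused = (w_a * softmax(audio_logits) + w_v * softmax(video_logits)) / (w_a + w_v)
pred = int(np.argmax(fused))
print("fused class:", pred)
```

Here the confident audio branch outvotes the ambiguous video branch; a learned fusion module generalises this by letting the network predict the weights from the features themselves.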
{"title":"Transformer-based audio-visual multimodal fusion for fine-grained recognition of individual sow nursing behaviour","authors":"Yuqing Yang ,&nbsp;Chengguo Xu ,&nbsp;Wenhao Hou ,&nbsp;Alan G. McElligott ,&nbsp;Kai Liu ,&nbsp;Yueju Xue","doi":"10.1016/j.aiia.2025.03.006","DOIUrl":"10.1016/j.aiia.2025.03.006","url":null,"abstract":"<div><div>Nursing behaviour and the calling-to-nurse sound are crucial indicators for assessing sow maternal behaviour and nursing status. However, accurately identifying these behaviours for individual sows in complex indoor pig housing is challenging due to factors such as variable lighting, rail obstructions, and interference from other sows' calls. Multimodal fusion, which integrates audio and visual data, has proven to be an effective approach for improving accuracy and robustness in complex scenarios. In this study, we designed an audio-visual data acquisition system that includes a camera for synchronised audio and video capture, along with a custom-developed sound source localisation system that leverages a sound sensor to track sound direction. Specifically, we proposed a novel transformer-based audio-visual multimodal fusion (TMF) framework for recognising fine-grained sow nursing behaviour with or without the calling-to-nurse sound. Initially, a unimodal self-attention enhancement (USE) module was employed to augment video and audio features with global contextual information. Subsequently, we developed an audio-visual interaction enhancement (AVIE) module to compress relevant information and reduce noise using the information bottleneck principle. Moreover, we presented an adaptive dynamic decision fusion strategy to optimise the model's performance by focusing on the most relevant features in each modality. 
Finally, we comprehensively identified fine-grained nursing behaviours by integrating audio and fused information, while incorporating angle information from the real-time sound source localisation system to accurately determine whether the sound cues originate from the target sow. Our results demonstrate that the proposed method achieves an accuracy of 98.42 % for general sow nursing behaviour and 94.37 % for fine-grained nursing behaviour, including nursing with and without the calling-to-nurse sound, and non-nursing behaviours. This fine-grained nursing information can provide a more nuanced understanding of the sow's health and lactation willingness, thereby enhancing management practices in pig farming.</div></div>","PeriodicalId":52814,"journal":{"name":"Artificial Intelligence in Agriculture","volume":"15 3","pages":"Pages 363-376"},"PeriodicalIF":8.2,"publicationDate":"2025-04-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143835177","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Navigating challenges/opportunities in developing smart agricultural extension platforms: Multi-media data mining techniques
IF 8.2 Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date: 2025-04-05 DOI: 10.1016/j.aiia.2025.04.001
Josué Kpodo , A. Pouyan Nejadhashemi
Agricultural Extension (AE) research faces significant challenges in producing relevant and practical knowledge due to rapid advancements in artificial intelligence (AI). AE struggles to keep pace with these advancements, complicating the development of actionable information. One major challenge is the absence of intelligent platforms that enable efficient information retrieval and quick decision-making. Investigations have shown a shortage of AI-assisted solutions that effectively use AE materials across various media formats while preserving scientific accuracy and contextual relevance. Although mainstream AI systems can potentially reduce decision-making risks, their usage remains limited. This limitation arises primarily from the lack of standardized datasets and concerns regarding user data privacy. For AE datasets to be standardized, they must satisfy four key criteria: inclusion of critical domain-specific knowledge, expert curation, consistent structure, and acceptance by peers. Addressing data privacy issues involves adhering to open-access principles and enforcing strict data encryption and anonymization standards. To address these gaps, a conceptual framework is introduced. This framework extends beyond typical user-oriented platforms and comprises five core modules. It features a neurosymbolic pipeline integrating large language models with physically based agricultural modeling software, further enhanced by Reinforcement Learning from Human Feedback. Notable aspects of the framework include a dedicated human-in-the-loop process and a governance structure consisting of three primary bodies focused on data standardization, ethics and security, and accountability and transparency. Overall, this work represents a significant advancement in agricultural knowledge systems, potentially transforming how AE services deliver critical information to farmers and other stakeholders.
{"title":"Navigating challenges/opportunities in developing smart agricultural extension platforms: Multi-media data mining techniques","authors":"Josué Kpodo ,&nbsp;A. Pouyan Nejadhashemi","doi":"10.1016/j.aiia.2025.04.001","DOIUrl":"10.1016/j.aiia.2025.04.001","url":null,"abstract":"<div><div>Agricultural Extension (AE) research faces significant challenges in producing relevant and practical knowledge due to rapid advancements in artificial intelligence (AI). AE struggles to keep pace with these advancements, complicating the development of actionable information. One major challenge is the absence of intelligent platforms that enable efficient information retrieval and quick decision-making. Investigations have shown a shortage of AI-assisted solutions that effectively use AE materials across various media formats while preserving scientific accuracy and contextual relevance. Although mainstream AI systems can potentially reduce decision-making risks, their usage remains limited. This limitation arises primarily from the lack of standardized datasets and concerns regarding user data privacy. For AE datasets to be standardized, they must satisfy four key criteria: inclusion of critical domain-specific knowledge, expert curation, consistent structure, and acceptance by peers. Addressing data privacy issues involves adhering to open-access principles and enforcing strict data encryption and anonymization standards. To address these gaps, a conceptual framework is introduced. This framework extends beyond typical user-oriented platforms and comprises five core modules. It features a neurosymbolic pipeline integrating large language models with physically based agricultural modeling software, further enhanced by Reinforcement Learning from Human Feedback. 
Notable aspects of the framework include a dedicated human-in-the-loop process and a governance structure consisting of three primary bodies focused on data standardization, ethics and security, and accountability and transparency. Overall, this work represents a significant advancement in agricultural knowledge systems, potentially transforming how AE services deliver critical information to farmers and other stakeholders.</div></div>","PeriodicalId":52814,"journal":{"name":"Artificial Intelligence in Agriculture","volume":"15 3","pages":"Pages 426-448"},"PeriodicalIF":8.2,"publicationDate":"2025-04-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143842756","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Improving the performance of machine learning algorithms for detection of individual pests and beneficial insects using feature selection techniques
IF 8.2 Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date: 2025-04-04 DOI: 10.1016/j.aiia.2025.03.008
Rabiu Aminu , Samantha M. Cook , David Ljungberg , Oliver Hensel , Abozar Nasirahmadi
To reduce damage caused by insect pests, farmers use insecticides to protect produce from crop pests. This practice leads to high synthetic chemical usage because a large portion of the applied insecticide does not reach its intended target; instead, it may affect non-target organisms and pollute the environment. One approach to mitigating this is the selective application of insecticides to only those crop plants (or patches of plants) where the insect pests are located, avoiding non-targets and beneficials. The first step to achieve this is the identification of insects on plants and discrimination between pests and beneficial non-targets. However, detecting small individual insects is challenging using image-based machine learning techniques, especially in natural field settings. This paper proposes a method based on explainable artificial intelligence feature selection and machine learning to detect pests and beneficial insects in field crops. An insect-plant dataset reflecting real field conditions was created. It comprises two pest insects, the Colorado potato beetle (CPB, Leptinotarsa decemlineata) and green peach aphid (Myzus persicae), and the beneficial seven-spot ladybird (Coccinella septempunctata). The specialist herbivore CPB was imaged only on potato plants (Solanum tuberosum), while green peach aphids and seven-spot ladybirds were imaged on three crops: potato, faba bean (Vicia faba), and sugar beet (Beta vulgaris subsp. vulgaris). This increased dataset diversity, broadening the potential application of the developed method for discriminating between pests and beneficial insects in several crops. The insects were imaged in both laboratory and outdoor settings. Using the GrabCut algorithm, regions of interest in the image were identified before shape, texture, and colour features were extracted from the segmented regions. The concept of explainable artificial intelligence was adopted by incorporating permutation feature importance ranking and Shapley Additive explanations values to identify the feature set that optimized a model's performance while reducing computational complexity. The proposed explainable artificial intelligence feature selection method was compared to conventional feature selection techniques, including mutual information, chi-square coefficient, maximal information coefficient, Fisher separation criterion and variance thresholding. Results showed improved accuracy (92.62 % Random forest, 90.16 % Support vector machine, 83.61 % K-nearest neighbours, and 81.97 % Naïve Bayes) and a reduction in the number of model parameters and memory usage (7.22 × 10^7 Random forest, 6.23 × 10^3 Support vector machine, 3.64 × 10^4 K-nearest neighbours and 1.88 × 10^2 Naïve Bayes) compared to using all features. Prediction and training times were also reduced by approximately half compared to conventional feature selection techniques. This demonstrates that a simple machine learning algorithm, combined with a suitable feature selection method, can achieve robust performance comparable to other approaches. Through feature selection, model performance can be maximized while hardware requirements are reduced, which is essential for practical applications with resource constraints. The study provides a reliable approach for the automatic detection and identification of pests and beneficial insects, and will contribute to the development of alternative pest control methods and other targeted approaches that are less environmentally damaging than large-scale application of synthetic insecticides.
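Permutation-importance ranking, one of the two feature-selection signals the abstract names, is available off the shelf in scikit-learn. A minimal sketch on synthetic data (the feature matrix is a made-up stand-in for the shape/texture/colour features; the top-k cutoff is an arbitrary choice, not the paper's):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for extracted insect-image features.
X, y = make_classification(n_samples=300, n_features=20, n_informative=5,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

# Rank features by the accuracy drop when each is permuted on held-out
# data, then keep the top k for a smaller, faster model.
result = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
top_k = np.argsort(result.importances_mean)[::-1][:5]
print("selected feature indices:", top_k)
```

Retraining on `X_tr[:, top_k]` is what yields the parameter, memory, and runtime reductions the abstract reports, at little or no cost in accuracy.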
{"title":"Improving the performance of machine learning algorithms for detection of individual pests and beneficial insects using feature selection techniques","authors":"Rabiu Aminu ,&nbsp;Samantha M. Cook ,&nbsp;David Ljungberg ,&nbsp;Oliver Hensel ,&nbsp;Abozar Nasirahmadi","doi":"10.1016/j.aiia.2025.03.008","DOIUrl":"10.1016/j.aiia.2025.03.008","url":null,"abstract":"&lt;div&gt;&lt;div&gt;To reduce damage caused by insect pests, farmers use insecticides to protect produce from crop pests. This practice leads to high synthetic chemical usage because a large portion of the applied insecticide does not reach its intended target; instead, it may affect non-target organisms and pollute the environment. One approach to mitigating this is through the selective application of insecticides to only those crop plants (or patches of plants) where the insect pests are located, avoiding non-targets and beneficials. The first step to achieve this is the identification of insects on plants and discrimination between pests and beneficial non-targets. However, detecting small-sized individual insects is challenging using image-based machine learning techniques, especially in natural field settings. This paper proposes a method based on explainable artificial intelligence feature selection and machine learning to detect pests and beneficial insects in field crops. An insect-plant dataset reflecting real field conditions was created. It comprises two pest insects—the Colorado potato beetle (CPB, &lt;em&gt;Leptinotarsa decemlineata&lt;/em&gt;) and green peach aphid (&lt;em&gt;Myzus persicae&lt;/em&gt;)—and the beneficial seven-spot ladybird (&lt;em&gt;Coccinella septempunctata&lt;/em&gt;). The specialist herbivore CPB was imaged only on potato plants (&lt;em&gt;Solanum tuberosum&lt;/em&gt;) while green peach aphids and seven-spot ladybirds were imaged on three crops: potato, faba bean (&lt;em&gt;Vicia faba)&lt;/em&gt;, and sugar beet (&lt;em&gt;Beta vulgaris&lt;/em&gt; subsp. 
&lt;em&gt;vulgaris&lt;/em&gt;). This increased dataset diversity, broadening the potential application of the developed method for discriminating between pests and beneficial insects in several crops. The insects were imaged in both laboratory and outdoor settings. Using the GrabCut algorithm, regions of interest in the image were identified before shape, texture, and colour features were extracted from the segmented regions. The concept of explainable artificial intelligence was adopted by incorporating permutation feature importance ranking and Shapley Additive explanations values to identify the feature set that optimized a model's performance while reducing computational complexity. The proposed explainable artificial intelligence feature selection method was compared to conventional feature selection techniques, including mutual information, chi-square coefficient, maximal information coefficient, Fisher separation criterion and variance thresholding. Results showed improved accuracy (92.62 % Random forest, 90.16 % Support vector machine, 83.61 % K-nearest neighbours, and 81.97 % Naïve Bayes) and a reduction in the number of model parameters and memory usage (7.22 &lt;em&gt;×&lt;/em&gt; 10&lt;sup&gt;7&lt;/sup&gt; Random forest, 6.23 &lt;em&gt;×&lt;/em&gt; 10&lt;sup&gt;3&lt;/sup&gt; Support vector machine, 3.64 &lt;em&gt;×&lt;/em&gt; 10&lt;sup&gt;4&lt;/sup&gt; K-nearest neighbours and 1.88 &lt;em&gt;×&lt;/em&gt; 10&lt;sup&gt;2&lt;/sup&gt; Naïve Bayes) compared to using all features. 
Prediction and training times were also reduced by approxima","PeriodicalId":52814,"journal":{"name":"Artificial Intelligence in Agriculture","volume":"15 3","pages":"Pages 377-394"},"PeriodicalIF":8.2,"publicationDate":"2025-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143835178","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Fast extraction of navigation line and crop position based on LiDAR for cabbage crops
IF 8.2 Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date: 2025-04-04 DOI: 10.1016/j.aiia.2025.03.007
Jiang Pin , Tingfeng Guo , Minzi Xv , Xiangjun Zou , Wenwu Hu
This paper describes the design, algorithm development, and experimental verification of a precise spray perception system based on LiDAR, developed to address the low navigation line extraction accuracy of self-propelled sprayers during field operations, which causes wheels to roll over the ridges and wastes pesticide. A data processing framework was established for the precision spray perception system, covering data preprocessing, adaptive segmentation of crops and ditches, and extraction of navigation lines and crop positions, all derived from the original LiDAR point cloud. Data collection and analysis of the field environment of cabbages in different growth cycles were conducted to verify the stability of the precision spraying system. A controllable constant-speed experimental setup was established to compare the performance of LiDAR and a depth camera in the same field environment. The experimental results show that at self-propelled sprayer speeds of 0.5 and 1 m s−1, the maximum lateral error is 0.112 m in a cabbage ridge environment with inter-row weeds, with a mean absolute lateral error of 0.059 m. The processing speed per frame does not exceed 43 ms. Compared to the machine vision algorithm, this method reduces the average processing time by 122 ms. The proposed system demonstrates superior accuracy, processing time, and robustness in crop identification and navigation line extraction compared to the machine vision system.
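The final navigation-line step can be illustrated simply: once crop points are segmented from the point cloud, a line fitted through the row centroids gives the heading reference, and the lateral error is the sprayer's offset from that line. A minimal sketch with hypothetical centroids (the coordinates and the least-squares fit are illustrative, not the paper's exact algorithm):

```python
import numpy as np

# Hypothetical cabbage-row centroids (x: lateral offset, y: forward
# distance, metres) extracted from a segmented LiDAR point cloud.
centroids = np.array([[0.02, 0.5], [0.05, 1.0], [0.03, 1.5],
                      [0.06, 2.0], [0.04, 2.5]])

# Least-squares fit x = a*y + b gives the navigation line; evaluating
# it at a forward distance yields the lateral offset to steer out.
a, b = np.polyfit(centroids[:, 1], centroids[:, 0], 1)
lateral_offset_at_2m = a * 2.0 + b
print(f"line: x = {a:.4f} y + {b:.4f}, offset at 2 m: {lateral_offset_at_2m:.3f} m")
```

Keeping this fit lightweight (a handful of centroids per frame rather than the raw cloud) is what makes sub-43 ms per-frame processing plausible on embedded hardware.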
Unveiling the drivers contributing to global wheat yield shocks through quantile regression
IF 8.2 Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date : 2025-03-22 DOI: 10.1016/j.aiia.2025.03.004
Srishti Vishwakarma , Xin Zhang , Vyacheslav Lyubchich
Sudden reductions in crop yield (i.e., yield shocks) severely disrupt the food supply, intensify food insecurity, depress farmers' welfare, and worsen a country's economic conditions. Here, we study the spatiotemporal patterns of wheat yield shocks, quantified by the lower quantiles of yield fluctuations, in 86 countries over 30 years. Furthermore, we assess the relationships between shocks and their key ecological and socioeconomic drivers using quantile regression based on statistical (linear quantile mixed model) and machine learning (quantile random forest) models. Using a panel dataset that captures spatiotemporal patterns of yield shocks and possible drivers in 86 countries, we find that the severity of yield shocks has been increasing globally since 1997. Moreover, our cross-validation exercise shows that quantile random forest outperforms the linear quantile regression model. Despite this performance difference, both models consistently reveal that the severity of shocks is associated with higher weather stress, nitrogen fertilizer application rate, and gross domestic product (GDP) per capita (a typical indicator for economic and technological advancement in a country). While the unexpected negative association between more severe wheat yield shocks and higher fertilizer application rate and GDP per capita does not imply a direct causal effect, they indicate that the advancement in wheat production has been primarily on achieving higher yields and less on lowering the possibility and magnitude of sharp yield reductions. Hence, in the context of growing extreme weather stress, there is a critical need to enhance the technology and management practices that mitigate yield shocks to improve the resilience of the world food systems.
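The study's core tool, quantile regression at a lower quantile of yield fluctuations, can be illustrated with a small numpy-only fit of the pinball (check) loss. The synthetic drivers below (weather stress, fertilizer rate, GDP per capita) are illustrative stand-ins, not the paper's covariates, and subgradient descent stands in for the paper's linear quantile mixed model and quantile random forest.

```python
# Sketch: linear quantile regression by subgradient descent on the
# pinball loss. tau = 0.1 targets the lower tail of yield fluctuations,
# i.e. the yield shocks studied in the paper.
import numpy as np

def fit_quantile(X, y, tau=0.1, lr=0.05, steps=2000):
    Xb = np.hstack([X, np.ones((len(X), 1))])   # add intercept column
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        r = y - Xb @ w
        # subgradient of pinball loss: tau where r > 0, tau - 1 where r < 0
        g = -(Xb.T @ np.where(r > 0, tau, tau - 1.0)) / len(y)
        w -= lr * g
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))   # e.g. weather stress, N rate, GDP per capita
# Heteroscedastic fluctuations: shocks deepen as "weather stress" rises
y = 0.5 * X[:, 1] - (1.0 + 0.8 * X[:, 0]) * np.abs(rng.normal(size=500))

w = fit_quantile(X, y, tau=0.1)
coverage = np.mean(y < np.hstack([X, np.ones((500, 1))]) @ w)
print(f"share of observations below the 0.1-quantile fit: {coverage:.2f}")
```

A correctly fitted tau-quantile leaves roughly a tau share of observations below it, which is a quick sanity check for any quantile model, linear or tree-based.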
A new tool to improve the computation of animal kinetic activity indices in precision poultry farming
IF 8.2 Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date : 2025-03-22 DOI: 10.1016/j.aiia.2025.03.005
Alberto Carraro , Mattia Pravato , Francesco Marinello , Francesco Bordignon , Angela Trocino , Gerolamo Xiccato , Andrea Pezzuolo
Precision Livestock Farming (PLF) emerges as a promising solution for revolutionising farming by enabling real-time automated monitoring of animals through smart technologies. PLF provides farmers with precise data to enhance farm management, increasing productivity and profitability. For instance, it allows for non-intrusive health assessments, contributing to maintaining a healthy herd while reducing stress associated with handling. In the poultry sector, image analysis can be utilised to monitor and analyse the behaviour of each hen in real time. Researchers have recently used machine learning algorithms to monitor the behaviour, health, and positioning of hens through computer vision techniques. Convolutional neural networks, a type of deep learning algorithm, have been utilised for image analysis to identify and categorise various hen behaviours and track specific activities like feeding and drinking. This research presents an automated system for analysing laying hen movement using video footage from surveillance cameras. With a customised implementation of object tracking, the system can efficiently process hundreds of hours of videos while maintaining high measurement precision. Its modular implementation adapts well to optimally exploit the GPU computing capabilities of the hardware platform it is running on. The use of this system is beneficial for both real-time monitoring and post-processing, contributing to improved monitoring capabilities in precision livestock farming.
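The idea of turning per-frame detections into a kinetic activity index can be sketched with a toy greedy tracker: match each frame's centroids to the previous frame's, accumulate displacements, and report the mean movement per match. This is a simplified stand-in for the paper's customised object-tracking implementation; the `max_jump` gate and greedy matching are illustrative choices.

```python
# Sketch: a kinetic activity index from per-frame detection centroids.
# Centroids are matched greedily to the nearest centroid of the previous
# frame; the index is the mean displacement over all matches.
import numpy as np

def kinetic_activity(frames, max_jump=50.0):
    """frames: list of (N_i, 2) arrays of detection centroids (pixels)."""
    total_disp, n_matches = 0.0, 0
    prev = frames[0]
    for cur in frames[1:]:
        if len(prev) and len(cur):
            # pairwise distances between previous and current centroids
            d = np.linalg.norm(prev[:, None, :] - cur[None, :, :], axis=2)
            for _ in range(min(len(prev), len(cur))):
                i, j = np.unravel_index(np.argmin(d), d.shape)
                if d[i, j] > max_jump:      # ignore implausible jumps
                    break
                total_disp += d[i, j]
                n_matches += 1
                d[i, :] = np.inf            # consume the matched pair
                d[:, j] = np.inf
        prev = cur
    return total_disp / n_matches if n_matches else 0.0

# Two hens over five frames: one moves 3 px per frame, one stays still
frames = [np.array([[10.0 + 3 * t, 20.0], [100.0, 100.0]]) for t in range(5)]
print(kinetic_activity(frames))  # → 1.5 (average of 3 px and 0 px movers)
```

Batching this per-frame matching is also what makes GPU-side processing of hundreds of hours of footage tractable.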
Digitalizing greenhouse trials: An automated approach for efficient and objective assessment of plant damage using deep learning
IF 8.2 Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date : 2025-03-17 DOI: 10.1016/j.aiia.2025.03.001
Laura Gómez-Zamanillo , Arantza Bereciartúa-Pérez , Artzai Picón , Liliana Parra , Marian Oldenbuerger , Ramón Navarra-Mestre , Christian Klukas , Till Eggers , Jone Echazarra
The use of image-based and, more recently, deep learning-based systems has provided good results in several applications. Greenhouse trials are a key part of developing and testing new herbicides and of analyzing the response of species to different products and doses in a controlled way. In all trials, plant damage is assessed daily by expert visual evaluation, a process that is time-consuming and lacks repeatability. Greenhouse trials therefore require new digital tools to reduce this time-consuming process and to give experts more objective and repeatable methods for assessing plant damage.
To this end, a novel method is proposed, composed of an initial segmentation of the plant species followed by a multibranch convolutional neural network that estimates the damage level. In this way, we overcome the need for costly pixelwise manual segmentation of damage symptoms and instead make use of global damage estimation values provided by the experts.
The algorithm has been deployed under real greenhouse trial conditions in a pilot study located at BASF in Germany and tested over four species (GLXMA, TRZAW, ECHCG, AMARE). The results show mean absolute error (MAE) values ranging from 5.20 (AMARE) to 8.07 (ECHCG) for the estimation of the PDCU value, with correlation values (R2) higher than 0.85 in all situations, and up to 0.92 for AMARE. These results surpass the inter-rater variability of human experts, demonstrating that the proposed automated method is appropriate for automatically assessing greenhouse damage trials.
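The multibranch idea, pooling features from an image branch and a species-mask branch and fusing them before a single damage regressor, can be shown with a tiny numpy forward pass. The weights here are random and the two-branch layout is only an illustration of the data flow, not the paper's trained architecture.

```python
# Sketch: a "multibranch" forward pass that fuses globally pooled
# features from an image branch and a segmentation-mask branch, then
# regresses one damage level in [0, 100] %.
import numpy as np

rng = np.random.default_rng(1)

def branch_features(x, kernels):
    """Tiny conv branch: valid 3x3 convolutions + ReLU + global mean pool."""
    feats = []
    for k in kernels:
        h, w = x.shape[0] - 2, x.shape[1] - 2
        out = np.zeros((h, w))
        for i in range(h):
            for j in range(w):
                out[i, j] = np.sum(x[i:i + 3, j:j + 3] * k)
        feats.append(np.maximum(out, 0).mean())   # ReLU + global pooling
    return np.array(feats)

image = rng.random((32, 32))                       # toy grayscale plant image
mask = (rng.random((32, 32)) > 0.5).astype(float)  # toy species mask

k_img = rng.normal(size=(4, 3, 3))    # 4 random filters per branch
k_msk = rng.normal(size=(4, 3, 3))
fused = np.concatenate([branch_features(image, k_img),
                        branch_features(mask, k_msk)])

w_head, b_head = rng.normal(size=8), 0.0
damage = 100 / (1 + np.exp(-(fused @ w_head + b_head)))  # % damage
print(f"predicted damage level: {damage:.1f} %")
```

Because the head consumes only globally pooled branch features, it can be trained against the experts' single global damage score, which is exactly what removes the need for pixelwise damage annotation.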
Boosting grapevine phenological stages prediction based on climatic data by pseudo-labeling approach
IF 8.2 Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date : 2025-03-17 DOI: 10.1016/j.aiia.2025.03.003
Mehdi Fasihi , Mirko Sodini , Alex Falcon , Francesco Degano , Paolo Sivilotti , Giuseppe Serra
Predicting grapevine phenological stages (GPHS) is critical for precisely managing vineyard operations, including plant disease treatments, pruning, and harvest. Solutions commonly used to address viticulture challenges rely on image processing techniques, which have achieved significant results. However, they require the installation of dedicated hardware in the vineyard, making them invasive and difficult to maintain. Moreover, accurate prediction is influenced by the interplay of climatic factors, especially temperature, and the impact of global warming, which are difficult to model using images. Another problem frequently found in GPHS prediction is the persistent issue of missing values in viticultural datasets, particularly in phenological stages. This paper proposes a semi-supervised approach that begins with a small set of labeled phenological stage examples and automatically generates new annotations for large volumes of unlabeled climatic data. This approach aims to address key challenges in phenological analysis. This novel climatic data-based approach offers advantages over common image processing methods, as it is non-intrusive, cost-effective, and adaptable for vineyards of various sizes and technological levels. To ensure the robustness of the proposed pseudo-labeling strategy, we integrated it into eight machine-learning algorithms. We evaluated its performance across seven diverse datasets, each exhibiting varying percentages of missing values. Performance metrics, including the coefficient of determination (R2) and root-mean-square error (RMSE), are employed to assess the effectiveness of the models. The study demonstrates that integrating the proposed pseudo-labeling strategy with supervised learning approaches significantly improves predictive accuracy. Moreover, the study shows that the proposed methodology can also be integrated with explainable artificial intelligence techniques to determine the importance of the input features. 
In particular, the investigation highlights that growing degree days are crucial for improved GPHS prediction.
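The pseudo-labeling loop (fit on the few labeled rows, predict stages for unlabeled climatic rows, keep only confident predictions, refit on the enlarged set) can be sketched with numpy. The ridge regressor, the bootstrap-agreement confidence proxy, and the synthetic "growing degree days"-style feature are all illustrative assumptions, not the paper's eight algorithms.

```python
# Sketch of pseudo-labeling for a regression target (phenological stage):
# 1) fit on the labeled subset, 2) score unlabeled rows, 3) keep rows
# where bootstrap models agree (a confidence stand-in), 4) refit.
import numpy as np

rng = np.random.default_rng(0)

def fit_ridge(X, y, lam=1e-2):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return np.linalg.solve(Xb.T @ Xb + lam * np.eye(Xb.shape[1]), Xb.T @ y)

def predict(w, X):
    return np.hstack([X, np.ones((len(X), 1))]) @ w

# Synthetic climatic features driving a stage-like target
X_all = rng.uniform(0, 1, size=(300, 2))
y_all = 3 * X_all[:, 0] + X_all[:, 1] + 0.1 * rng.normal(size=300)

lab = np.arange(30)              # small labeled subset
unl = np.arange(30, 300)         # large unlabeled pool

w0 = fit_ridge(X_all[lab], y_all[lab])

# Confidence proxy: agreement of bootstrap models on unlabeled rows
preds = [predict(fit_ridge(X_all[idx], y_all[idx]), X_all[unl])
         for idx in (rng.choice(lab, size=len(lab), replace=True)
                     for _ in range(10))]
spread = np.std(preds, axis=0)
confident = unl[spread < np.quantile(spread, 0.5)]   # keep the stable half

X_aug = np.vstack([X_all[lab], X_all[confident]])
y_aug = np.concatenate([y_all[lab], predict(w0, X_all[confident])])
w1 = fit_ridge(X_aug, y_aug)

rmse = lambda w: np.sqrt(np.mean((predict(w, X_all[unl]) - y_all[unl]) ** 2))
print(f"RMSE before/after pseudo-labeling: {rmse(w0):.3f} / {rmse(w1):.3f}")
```

The same loop wraps around any base learner, which is how the paper is able to integrate the strategy into eight different machine-learning algorithms.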