
Artificial Intelligence in Agriculture: Latest Publications

Prediction of spatial heterogeneity in nutrient-limited sub-tropical maize yield: Implications for precision management in the eastern Indo-Gangetic Plains
IF 8.2 Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date: 2024-09-01 DOI: 10.1016/j.aiia.2024.08.001

Knowledge of the factors influencing nutrient-limited subtropical maize yield, and subsequent prediction, is crucial for effective nutrient management, maximizing profitability, ensuring food security, and promoting environmental sustainability. We analyzed data from nutrient omission plot trials (NOPTs) conducted in 324 farmers' fields across ten agroecological zones (AEZs) in the Eastern Indo-Gangetic Plains (EIGP) of Bangladesh to explain maize yield variability and identify variables controlling nutrient-limited yields. An additive main effect and multiplicative interaction (AMMI) model was used to explain maize yield variability with nutrient addition. Interpretable machine learning (ML) algorithms in automatic machine learning (AutoML) frameworks were subsequently used to predict nutrient-limited yield relative to attainable yield (relative yield, RY) and to rank the variables that control RY. The stack-ensemble model performed best for predicting the RYs of N, P, and Zn, whereas deep learning outperformed all base learners for predicting RYK. The best models' root mean square errors (RMSEs) were 0.122, 0.105, 0.123, and 0.104 for RYN, RYP, RYK, and RYZn, respectively. The permutation-based feature importance technique identified soil pH as the most critical variable controlling RYN and RYP. RYK was lower toward the east, and soil N and Zn were associated with RYZn. The predicted median RYs of N, P, K, and Zn, representing average soil fertility, were 0.51, 0.84, 0.87, and 0.97, accounting for 44, 54, 54, and 48% of the upland dry-season crop area of Bangladesh, respectively. Efforts are needed to update databases cataloging variability in land-type inundation classes, soil characteristics, and INS, and to combine them with farmers' crop management information to develop more precise nutrient guidelines for maize in the EIGP.
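The permutation-based feature-importance technique the abstract mentions can be sketched in a few lines: shuffle one feature column, re-score the model, and report the increase in RMSE. The model and data below are hypothetical stand-ins for illustration, not the study's actual models or soil variables:

```python
import random
from statistics import mean

def rmse(y_true, y_pred):
    return mean((t - p) ** 2 for t, p in zip(y_true, y_pred)) ** 0.5

def permutation_importance(model, X, y, feature_idx, n_repeats=30, seed=0):
    """Mean increase in RMSE when one feature column is shuffled."""
    rng = random.Random(seed)
    base = rmse(y, [model(row) for row in X])
    increases = []
    for _ in range(n_repeats):
        col = [row[feature_idx] for row in X]
        rng.shuffle(col)
        X_perm = [row[:feature_idx] + [v] + row[feature_idx + 1:]
                  for row, v in zip(X, col)]
        increases.append(rmse(y, [model(r) for r in X_perm]) - base)
    return mean(increases)

# Hypothetical "model": relative yield driven entirely by feature 0
# (think soil pH); feature 1 is irrelevant, so its importance is ~0.
model = lambda row: 0.8 * row[0]
rng = random.Random(1)
X = [[rng.random(), rng.random()] for _ in range(200)]
y = [model(row) for row in X]

imp_ph = permutation_importance(model, X, y, 0)
imp_other = permutation_importance(model, X, y, 1)
```

A ranking of `imp_*` values over all inputs gives the importance ordering reported in the abstract.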

{"title":"Prediction of spatial heterogeneity in nutrient-limited sub-tropical maize yield: Implications for precision management in the eastern Indo-Gangetic Plains","authors":"","doi":"10.1016/j.aiia.2024.08.001","DOIUrl":"10.1016/j.aiia.2024.08.001","url":null,"abstract":"<div><p>Knowledge of the factors influencing nutrient-limited subtropical maize yield and subsequent prediction is crucial for effective nutrient management, maximizing profitability, ensuring food security, and promoting environmental sustainability. We analyzed data from nutrient omission plot trials (NOPTs) conducted in 324 farmers' fields across ten agroecological zones (AEZs) in the Eastern Indo-Gangetic Plains (EIGP) of Bangladesh to explain maize yield variability and identify variables controlling nutrient-limited yields. An additive main effect and multiplicative interaction (AMMI) model was used to explain maize yield variability with nutrient addition. Interpretable machine learning (ML) algorithms in automatic machine learning (AutoML) frameworks were subsequently used to predict attainable yield relative nutrient-limited yield (RY) and to rank variables that control RY. The stack-ensemble model was identified as the best-performing model for predicting RYs of N, P, and Zn. In contrast, deep learning outperformed all base learners for predicting RY<sub>K</sub>. The best model's square errors (RMSEs) were 0.122, 0.105, 0.123, and 0.104 for RY<sub>N</sub>, RY<sub>P</sub>, RY<sub>K</sub>, and RY<sub>Zn</sub>, respectively. The permutation-based feature importance technique identified soil pH as the most critical variable controlling RY<sub>N</sub> and RY<sub>P</sub>. The RY<sub>K</sub> showed lower in the eastern longitudinal direction. Soil N and Zn were associated with RY<sub>Zn</sub>. 
The predicted median RY of N, P, K, and Zn, representing average soil fertility, was 0.51, 0.84, 0.87, and 0.97, accounting for 44, 54, 54, and 48% upland dry season crop area of Bangladesh, respectively. Efforts are needed to update databases cataloging variability in land type inundation classes, soil characteristics, and INS and combine them with farmers' crop management information to develop more precise nutrient guidelines for maize in the EIGP.</p></div>","PeriodicalId":52814,"journal":{"name":"Artificial Intelligence in Agriculture","volume":null,"pages":null},"PeriodicalIF":8.2,"publicationDate":"2024-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2589721724000291/pdfft?md5=e609aaa51bea70dec6de90b8b5d1eec7&pid=1-s2.0-S2589721724000291-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142164900","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
UAV-based field watermelon detection and counting using YOLOv8s with image panorama stitching and overlap partitioning
IF 8.2 Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date: 2024-09-01 DOI: 10.1016/j.aiia.2024.09.001

Accurate watermelon yield estimation is crucial to the agricultural value chain, as it guides the allocation of agricultural resources and facilitates inventory and logistics planning. The conventional method of watermelon yield estimation relies heavily on manual labor, which is both time-consuming and labor-intensive. To address this, this work proposes an algorithmic pipeline that utilizes unmanned aerial vehicle (UAV) videos for the detection and counting of watermelons. The pipeline uses You Only Look Once version 8 small (YOLOv8s) with panorama stitching and overlap partitioning, which enables estimation of the total number of watermelons in a field. The watermelon detection model, based on YOLOv8s and obtained via transfer learning, achieved a detection accuracy of 99.20%, demonstrating its potential for yield estimation. The panorama-stitching and overlap-partitioning based detection and counting method takes panoramic images as input and effectively mitigates duplicate counts compared with video-tracking based detection and counting. The counting accuracy exceeded 96.61%, a promising result for yield estimation. This high accuracy demonstrates the feasibility of applying the method to overall yield estimation in large watermelon fields.
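One way overlap partitioning can avoid double counting is to merge detections from adjacent tiles whose bounding boxes overlap strongly. The abstract does not give the authors' exact merging rule, so the IoU-based deduplication below is an illustrative sketch with made-up box coordinates:

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def count_unique(detections, iou_thresh=0.5):
    """Merge detections from overlapping tiles: boxes that overlap
    strongly are assumed to be the same fruit and counted once."""
    kept = []
    for box in detections:
        if all(iou(box, k) < iou_thresh for k in kept):
            kept.append(box)
    return len(kept)

# Two tiles of a panorama with a 40-pixel overlap both detect the
# watermelon near x = 180-220; it must be counted once, not twice.
tile_a = [(50, 60, 90, 100), (180, 40, 220, 80)]
tile_b = [(182, 41, 221, 79), (300, 70, 340, 110)]
total = count_unique(tile_a + tile_b)  # 3 unique fruits, not 4
```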

{"title":"UAV-based field watermelon detection and counting using YOLOv8s with image panorama stitching and overlap partitioning","authors":"","doi":"10.1016/j.aiia.2024.09.001","DOIUrl":"10.1016/j.aiia.2024.09.001","url":null,"abstract":"<div><p>Accurate watermelon yield estimation is crucial to the agricultural value chain, as it guides the allocation of agricultural resources as well as facilitates inventory and logistics planning. The conventional method of watermelon yield estimation relies heavily on manual labor, which is both time-consuming and labor-intensive. To address this, this work proposes an algorithmic pipeline that utilizes unmanned aerial vehicle (UAV) videos for detection and counting of watermelons. This pipeline uses You Only Look Once version 8 s (YOLOv8s) with panorama stitching and overlap partitioning, which facilitates the overall number estimation of watermelons in field. The watermelon detection model, based on YOLOv8s and obtained using transfer learning, achieved a detection accuracy of 99.20 %, demonstrating its potential for application in yield estimation. The panorama stitching and overlap partitioning based detection and counting method uses panoramic images as input and effectively mitigates the duplications compared with the video tracking based detection and counting method. The counting accuracy reached over 96.61 %, proving a promising application for yield estimation. 
The high accuracy demonstrates the feasibility of applying this method for overall yield estimation in large watermelon fields.</p></div>","PeriodicalId":52814,"journal":{"name":"Artificial Intelligence in Agriculture","volume":null,"pages":null},"PeriodicalIF":8.2,"publicationDate":"2024-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2589721724000308/pdfft?md5=e51fdb350e08ba1871a8fe3fd59e2ca5&pid=1-s2.0-S2589721724000308-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142232004","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Comparing YOLOv8 and Mask R-CNN for instance segmentation in complex orchard environments
IF 8.2 Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date: 2024-07-16 DOI: 10.1016/j.aiia.2024.07.001

Instance segmentation, an important image processing operation for automation in agriculture, is used to precisely delineate individual objects of interest within images, which provides foundational information for various automated or robotic tasks such as selective harvesting and precision pruning. This study compares the one-stage YOLOv8 and the two-stage Mask R-CNN machine learning models for instance segmentation under varying orchard conditions across two datasets. Dataset 1, collected in the dormant season, includes images of dormant apple trees, which were used to train multi-object segmentation models delineating tree branches and trunks. Dataset 2, collected in the early growing season, includes images of apple tree canopies with green foliage and immature (green) apples (also called fruitlets), which were used to train single-object segmentation models delineating only immature green apples. The results showed that YOLOv8 performed better than Mask R-CNN, achieving good precision and near-perfect recall across both datasets at a confidence threshold of 0.5. Specifically, for Dataset 1, YOLOv8 achieved a precision of 0.90 and a recall of 0.95 for all classes, while Mask R-CNN demonstrated a precision of 0.81 and a recall of 0.81 on the same dataset. With Dataset 2, YOLOv8 achieved a precision of 0.93 and a recall of 0.97; Mask R-CNN, in this single-class scenario, achieved a precision of 0.85 and a recall of 0.88. Additionally, the inference times for YOLOv8 were 10.9 ms for multi-class segmentation (Dataset 1) and 7.8 ms for single-class segmentation (Dataset 2), compared with 15.6 ms and 12.8 ms for Mask R-CNN, respectively. These findings show YOLOv8's superior accuracy and efficiency compared to two-stage models, specifically Mask R-CNN, which suggests its suitability for developing smart and automated orchard operations, particularly when real-time performance is necessary, as in robotic harvesting and robotic immature green fruit thinning.
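Precision and recall at a fixed confidence threshold, as reported above, are typically computed by matching predicted boxes (or masks) to ground truth at an IoU cutoff. The greedy box-matching sketch below uses toy coordinates and is illustrative only, not the study's evaluation code:

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

def evaluate(preds, gts, conf_thresh=0.5, iou_thresh=0.5):
    """Greedily match predictions (box, confidence) to ground-truth
    boxes; return (precision, recall) at the confidence threshold."""
    kept = sorted((p for p in preds if p[1] >= conf_thresh),
                  key=lambda p: -p[1])  # highest confidence first
    unmatched, tp = list(gts), 0
    for box, _ in kept:
        hit = next((g for g in unmatched if iou(box, g) >= iou_thresh), None)
        if hit is not None:
            unmatched.remove(hit)
            tp += 1
    fp, fn = len(kept) - tp, len(unmatched)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Two ground-truth apples; three detections, one of them spurious.
gts = [(0, 0, 10, 10), (20, 20, 30, 30)]
preds = [((0, 0, 10, 10), 0.9), ((21, 20, 30, 30), 0.8),
         ((50, 50, 60, 60), 0.7)]
precision, recall = evaluate(preds, gts)
```

Raising `conf_thresh` trades recall for precision, which is why both papers report metrics at a stated threshold.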

{"title":"Comparing YOLOv8 and Mask R-CNN for instance segmentation in complex orchard environments","authors":"","doi":"10.1016/j.aiia.2024.07.001","DOIUrl":"10.1016/j.aiia.2024.07.001","url":null,"abstract":"<div><p>Instance segmentation, an important image processing operation for automation in agriculture, is used to precisely delineate individual objects of interest within images, which provides foundational information for various automated or robotic tasks such as selective harvesting and precision pruning. This study compares the one-stage YOLOv8 and the two-stage Mask R-CNN machine learning models for instance segmentation under varying orchard conditions across two datasets. Dataset 1, collected in dormant season, includes images of dormant apple trees, which were used to train multi-object segmentation models delineating tree branches and trunks. Dataset 2, collected in the early growing season, includes images of apple tree canopies with green foliage and immature (green) apples (also called fruitlet), which were used to train single-object segmentation models delineating only immature green apples. The results showed that YOLOv8 performed better than Mask R-CNN, achieving good precision and near-perfect recall across both datasets at a confidence threshold of 0.5. Specifically, for Dataset 1, YOLOv8 achieved a precision of 0.90 and a recall of 0.95 for all classes. In comparison, Mask R-CNN demonstrated a precision of 0.81 and a recall of 0.81 for the same dataset. With Dataset 2, YOLOv8 achieved a precision of 0.93 and a recall of 0.97. Mask R-CNN, in this single-class scenario, achieved a precision of 0.85 and a recall of 0.88. Additionally, the inference times for YOLOv8 were 10.9 ms for multi-class segmentation (Dataset 1) and 7.8 ms for single-class segmentation (Dataset 2), compared to 15.6 ms and 12.8 ms achieved by Mask R-CNN's, respectively. 
These findings show YOLOv8's superior accuracy and efficiency in machine learning applications compared to two-stage models, specifically Mask-R-CNN, which suggests its suitability in developing smart and automated orchard operations, particularly when real-time applications are necessary in such cases as robotic harvesting and robotic immature green fruit thinning.</p></div>","PeriodicalId":52814,"journal":{"name":"Artificial Intelligence in Agriculture","volume":null,"pages":null},"PeriodicalIF":8.2,"publicationDate":"2024-07-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S258972172400028X/pdfft?md5=d0b3ae6930c8dca43a65b49ca13f6d47&pid=1-s2.0-S258972172400028X-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141729373","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A comprehensive survey on weed and crop classification using machine learning and deep learning
IF 8.2 Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date: 2024-06-26 DOI: 10.1016/j.aiia.2024.06.005
Faisal Dharma Adhinata, Wahyono, Raden Sumiharto

Machine learning and deep learning are subsets of artificial intelligence that have revolutionized object detection and classification in images and videos. This technology plays a crucial role in facilitating the transition from conventional to precision agriculture, particularly in the context of weed control. Precision agriculture, which previously relied on manual effort, has now embraced the use of smart devices for more efficient weed detection. However, several challenges are associated with weed detection, including the visual similarity between weeds and crops, occlusion and lighting effects, and the need for early-stage weed control. Therefore, this study aimed to provide a comprehensive review of the application of traditional machine learning, deep learning, and combinations of the two for weed detection across different crop fields. The results of this review show the advantages and disadvantages of each approach. Generally, deep learning produced superior accuracy to machine learning under various conditions. Machine learning required selecting the right combination of features to achieve high accuracy in classifying weeds and crops, particularly under conditions involving lighting variation and early growth stages; moreover, a precise segmentation stage is required in cases of occlusion. Machine learning had the advantage of real-time processing, as it produces smaller models than deep learning, eliminating the need for additional GPUs. However, GPU technology is developing rapidly, so researchers increasingly use deep learning for more accurate weed identification.
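As an example of the hand-crafted features such machine learning pipelines select from, the Excess Green index (ExG = 2g - r - b on chromaticity-normalised RGB) is a widely used vegetation feature for separating plants from soil before weed/crop classification. The pixel values below are made up for illustration:

```python
def excess_green(r, g, b):
    """Excess Green index on chromaticity-normalised RGB (ExG = 2g - r - b),
    a classic hand-crafted vegetation feature used to separate plants
    from soil before classifying weed vs. crop."""
    total = r + g + b
    if total == 0:
        return 0.0
    rn, gn, bn = r / total, g / total, b / total
    return 2 * gn - rn - bn

# Made-up pixels: a green canopy pixel scores high, a brown soil pixel near 0.
canopy = excess_green(60, 180, 50)
soil = excess_green(120, 90, 60)
```

Thresholding ExG gives a cheap vegetation mask; distinguishing weed from crop within that mask is where the classifiers surveyed above come in.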

{"title":"A comprehensive survey on weed and crop classification using machine learning and deep learning","authors":"Faisal Dharma Adhinata ,&nbsp;Wahyono ,&nbsp;Raden Sumiharto","doi":"10.1016/j.aiia.2024.06.005","DOIUrl":"https://doi.org/10.1016/j.aiia.2024.06.005","url":null,"abstract":"<div><p>Machine learning and deep learning are subsets of Artificial Intelligence that have revolutionized object detection and classification in images or videos. This technology plays a crucial role in facilitating the transition from conventional to precision agriculture, particularly in the context of weed control. Precision agriculture, which previously relied on manual efforts, has now embraced the use of smart devices for more efficient weed detection. However, several challenges are associated with weed detection, including the visual similarity between weed and crop, occlusion and lighting effects, as well as the need for early-stage weed control. Therefore, this study aimed to provide a comprehensive review of the application of both traditional machine learning and deep learning, as well as the combination of the two methods, for weed detection across different crop fields. The results of this review show the advantages and disadvantages of using machine learning and deep learning. Generally, deep learning produced superior accuracy compared to machine learning under various conditions. Machine learning required the selection of the right combination of features to achieve high accuracy in classifying weed and crop, particularly under conditions consisting of lighting and early growth effects. Moreover, a precise segmentation stage would be required in cases of occlusion. Machine learning had the advantage of achieving real-time processing by producing smaller models than deep learning, thereby eliminating the need for additional GPUs. 
However, the development of GPU technology is currently rapid, so researchers are more often using deep learning for more accurate weed identification.</p></div>","PeriodicalId":52814,"journal":{"name":"Artificial Intelligence in Agriculture","volume":null,"pages":null},"PeriodicalIF":8.2,"publicationDate":"2024-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2589721724000278/pdfft?md5=13d026a04a00bc2bca21fc068166d32c&pid=1-s2.0-S2589721724000278-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141481877","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Computer vision in smart agriculture and precision farming: Techniques and applications
IF 8.2 Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date: 2024-06-25 DOI: 10.1016/j.aiia.2024.06.004
Sumaira Ghazal, Arslan Munir, Waqar S. Qureshi

The transformation of age-old farming practices through the integration of digitization and automation has sparked a revolution in agriculture driven by cutting-edge computer vision and artificial intelligence (AI) technologies. This transformation not only promises increased productivity and economic growth but also has the potential to address important global issues such as food security and sustainability. This survey paper aims to provide a holistic understanding of the integration of vision-based intelligent systems in various aspects of precision agriculture. By providing a detailed discussion of key areas of the digital life cycle of crops, the survey contributes to a deeper understanding of the complexities of implementing vision-guided intelligent systems in challenging agricultural environments. Its focus is on the imaging and image analysis techniques widely used for precision farming tasks. The paper first discusses salient crop metrics used in digital agriculture, then illustrates the use of imaging and computer vision techniques across the phases of the digital crop life cycle, such as image acquisition, image stitching and photogrammetry, image analysis, decision making, treatment, and planning. After establishing a thorough understanding of the terms and techniques involved in implementing vision-based intelligent systems for precision agriculture, the survey concludes by outlining the challenges of deploying generalized computer vision models in real time on fully autonomous farms.

{"title":"Computer vision in smart agriculture and precision farming: Techniques and applications","authors":"Sumaira Ghazal ,&nbsp;Arslan Munir ,&nbsp;Waqar S. Qureshi","doi":"10.1016/j.aiia.2024.06.004","DOIUrl":"https://doi.org/10.1016/j.aiia.2024.06.004","url":null,"abstract":"<div><p>The transformation of age-old farming practices through the integration of digitization and automation has sparked a revolution in agriculture that is driven by cutting-edge computer vision and artificial intelligence (AI) technologies. This transformation not only promises increased productivity and economic growth, but also has the potential to address important global issues such as food security and sustainability. This survey paper aims to provide a holistic understanding of the integration of vision-based intelligent systems in various aspects of precision agriculture. By providing a detailed discussion on key areas of digital life cycle of crops, this survey contributes to a deeper understanding of the complexities associated with the implementation of vision-guided intelligent systems in challenging agricultural environments. The focus of this survey is to explore widely used imaging and image analysis techniques being utilized for precision farming tasks. This paper first discusses various salient crop metrics used in digital agriculture. Then this paper illustrates the usage of imaging and computer vision techniques in various phases of digital life cycle of crops in precision agriculture, such as image acquisition, image stitching and photogrammetry, image analysis, decision making, treatment, and planning. 
After establishing a thorough understanding of related terms and techniques involved in the implementation of vision-based intelligent systems for precision agriculture, the survey concludes by outlining the challenges associated with implementing generalized computer vision models for real-time deployment of fully autonomous farms.</p></div>","PeriodicalId":52814,"journal":{"name":"Artificial Intelligence in Agriculture","volume":null,"pages":null},"PeriodicalIF":8.2,"publicationDate":"2024-06-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2589721724000266/pdfft?md5=85ca785f72940b6f0eede997e4743f8c&pid=1-s2.0-S2589721724000266-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141539935","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
An artificial neuronal network coupled with a genetic algorithm to optimise the production of unsaturated fatty acids in Parachlorella kessleri
IF 8.2 Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date: 2024-06-21 DOI: 10.1016/j.aiia.2024.06.003
Pablo Fernández Izquierdo, Leslie Cerón Delagado, Fedra Ortiz Benavides

In this study, an Artificial Neural Network-Genetic Algorithm (ANN-GA) approach was successfully applied to optimise the physicochemical factors influencing the synthesis of unsaturated fatty acids (UFAs) in the microalga P. kessleri UCM 001. The optimised model recommended specific cultivation conditions: glucose at 29 g/L, NaNO3 at 2.4 g/L, K2HPO4 at 0.4 g/L, red LED light at an intensity of 1000 lx, and an 8:16-h light-dark cycle. Through ANN-GA optimisation, a remarkable 66.79% increase in UFA production in P. kessleri UCM 001 was achieved compared with previous studies, underscoring the potential of this technology for enhancing valuable lipid production. Sequential variations in the physicochemical factors applied during microalgal culture under mixotrophic conditions, as optimised by ANN-GA, altered UFA production and composition in P. kessleri UCM 001, suggesting that the lipid profile of microalgae can be tailored to obtain specific lipids for diverse industrial applications. The microalga was isolated from a high-mountain lake in Colombia, highlighting its adaptation to extreme conditions and its potential for sustainable lipid and biomaterial production. This study demonstrates the effectiveness of ANN-GA technology for optimising UFA production in microalgae, offering a promising avenue for obtaining valuable lipids, and the alga's unique high-mountain origin emphasises the importance of exploring and harnessing microbial resources in distinctive geographical regions for biotechnological applications.
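In an ANN-GA scheme of this kind, the trained network acts as a surrogate fitness function and a genetic algorithm searches the culture-condition space. The sketch below substitutes a made-up quadratic surrogate (with its peak placed at the reported glucose and NaNO3 optima) for the trained ANN; the selection and mutation details are illustrative, not the authors':

```python
import random

def genetic_search(fitness, bounds, pop_size=40, generations=60,
                   mut_rate=0.2, seed=0):
    """Minimal real-valued GA: keep the fitter half as elites, breed the
    rest by uniform crossover of two elites, then Gaussian mutation."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        elite = sorted(pop, key=fitness, reverse=True)[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(elite):
            a, b = rng.sample(elite, 2)
            child = [x if rng.random() < 0.5 else y for x, y in zip(a, b)]
            for i, (lo, hi) in enumerate(bounds):
                if rng.random() < mut_rate:
                    child[i] = min(hi, max(lo, child[i]
                                           + rng.gauss(0, 0.1 * (hi - lo))))
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)

# Stand-in surrogate whose optimum sits at the reported conditions
# (glucose 29 g/L, NaNO3 2.4 g/L); a trained ANN would take its place.
surrogate = lambda x: -((x[0] - 29) ** 2) - 10 * (x[1] - 2.4) ** 2
best = genetic_search(surrogate, bounds=[(0, 50), (0, 5)])
```

With the real ANN as `fitness`, `best` would be the recommended cultivation recipe.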

{"title":"An artificial neuronal network coupled with a genetic algorithm to optimise the production of unsaturated fatty acids in Parachlorella kessleri","authors":"Pablo Fernández Izquierdo ,&nbsp;Leslie Cerón Delagado ,&nbsp;Fedra Ortiz Benavides","doi":"10.1016/j.aiia.2024.06.003","DOIUrl":"https://doi.org/10.1016/j.aiia.2024.06.003","url":null,"abstract":"<div><p>In this study, an Artificial Neural Network-Genetic Algorithm (ANN-GA) approach was successfully applied to optimise the physicochemical factors influencing the synthesis of unsaturated fatty acids (UFAs) in the microalgae <em>P. kessleri</em> UCM 001. The optimized model recommended specific cultivation conditions, including glucose at 29 g/L, NaNO<sub>3</sub> at 2.4 g/L, K<sub>2</sub>HPO<sub>4</sub> at 0.4 g/L, red LED light, an intensity of 1000 lx, and an 8:16-h light-dark cycle. Through ANN-GA optimisation, a remarkable 66.79% increase in UFAs production in <em>P. kessleri</em> UCM 001 was achieved, compared to previous studies. This underscores the potential of this technology for enhancing valuable lipid production. Sequential variations in the application of physicochemical factors during microalgae culture under mixotrophic conditions, as optimized by ANN-GA, induced alterations in UFAs production and composition in <em>P. kessleri</em> UCM 001. This suggests the feasibility of tailoring the lipid profile of microalgae to obtain specific lipids for diverse industrial applications. The microalgae were isolated from a high-mountain lake in Colombia, highlighting their adaptation to extreme conditions. This underscores their potential for sustainable lipid and biomaterial production. This study demonstrates the effectiveness of using ANN-GA technology to optimise UFAs production in microalgae, offering a promising avenue for obtaining valuable lipids. 
The microalgae's unique origin in a high-mountain environment in Colombia emphasises the importance of exploring and harnessing microbial resources in distinctive geographical regions for biotechnological applications.</p></div>","PeriodicalId":52814,"journal":{"name":"Artificial Intelligence in Agriculture","volume":null,"pages":null},"PeriodicalIF":8.2,"publicationDate":"2024-06-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2589721724000254/pdfft?md5=5e368428bd6813d6d581e52a6bbbc317&pid=1-s2.0-S2589721724000254-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141481876","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Image classification on smart agriculture platforms: Systematic literature review
Q1 Computer Science Pub Date: 2024-06-08 DOI: 10.1016/j.aiia.2024.06.002
Juan Felipe Restrepo-Arias , John W. Branch-Bedoya , Gabriel Awad

In recent years, smart agriculture has gained strength due to the application of Industry 4.0 technologies in agriculture. As a result, growing effort is going into artificial vision applications that solve many of its problems. However, many of these applications are developed separately, even though many academic works have proposed solutions that integrate image classification techniques through IoT platforms. For this reason, this paper aims to answer the following research questions: (1) What are the main problems to be solved with smart farming IoT platforms that incorporate images? (2) What are the main strategies for incorporating image classification methods in smart agriculture IoT platforms? and (3) What are the main image acquisition, preprocessing, transmission, and classification technologies used in smart agriculture IoT platforms? This study adopts a Systematic Literature Review (SLR) approach. We searched the Scopus, Web of Science, IEEE Xplore, and Springer Link databases from January 2018 to July 2022, and identified five domains: (1) disease and pest detection, (2) crop growth and health monitoring, (3) irrigation and crop protection management, (4) intrusion detection, and (5) fruit and plant counting. There are three strategies for integrating image data into smart agriculture IoT platforms: (1) classification at the edge, (2) classification in the cloud, and (3) a combined edge and cloud process. The main advantage of the first is obtaining data in real time, and its main disadvantage is the cost of implementation. The main advantage of the second is the ability to process high-resolution images, and its main disadvantage is the need for high-bandwidth connectivity. Finally, the mixed strategy can bring significant infrastructure-investment benefits, but most works are experimental.
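The three integration strategies the review identifies amount to a routing decision between on-device and remote inference. A minimal sketch of that decision, where the thresholds are assumptions for illustration rather than values from the reviewed studies:

```python
def choose_classification_strategy(bandwidth_mbps: float,
                                   needs_realtime: bool,
                                   image_megapixels: float) -> str:
    """Toy decision rule over the review's three strategies:
    edge inference, cloud inference, or a combined pipeline."""
    if needs_realtime and image_megapixels <= 2:
        return "edge"      # low latency, limited to models the device can run
    if bandwidth_mbps >= 10 and image_megapixels > 2:
        return "cloud"     # high-resolution images, enough bandwidth to upload
    return "combined"      # coarse filter at the edge, full model in the cloud
```

A real platform would of course weigh cost and connectivity per deployment; the sketch only makes the trade-off concrete.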

{"title":"Image classification on smart agriculture platforms: Systematic literature review","authors":"Juan Felipe Restrepo-Arias ,&nbsp;John W. Branch-Bedoya ,&nbsp;Gabriel Awad","doi":"10.1016/j.aiia.2024.06.002","DOIUrl":"https://doi.org/10.1016/j.aiia.2024.06.002","url":null,"abstract":"<div><p>In recent years, smart agriculture has gained strength due to the application of industry 4.0 technologies in agriculture. As a result, efforts are increasing in proposing artificial vision applications to solve many problems. However, many of these applications are developed separately. Many academic works have proposed solutions integrating image classification techniques through IoT platforms. For this reason, this paper aims to answer the following research questions: (1) What are the main problems to be solved with smart farming IoT platforms that incorporate images? (2) What are the main strategies for incorporating image classification methods in smart agriculture IoT platforms? and (3) What are the main image acquisition, preprocessing, transmission, and classification technologies used in smart agriculture IoT platforms? This study adopts a Systematic Literature Review (SLR) approach. We searched Scopus, Web of Science, IEEE Xplore, and Springer Link databases from January 2018 to July 2022. From which we could identify five domains corresponding to (1) disease and pest detection, (2) crop growth and health monitoring, (3) irrigation and crop protection management, (4) intrusion detection, and (5) fruits and plant counting. There are three types of strategies to integrate image data into smart agriculture IoT platforms: (1) classification process in the edge, (2) classification process in the cloud, and (3) classification process combined. The main advantage of the first is obtaining data in real-time, and its main disadvantage is the cost of implementation. 
On the other hand, the main advantage of the second is the ability to process high-resolution images, and its main disadvantage is the need for high-bandwidth connectivity. Finally, the mixed strategy can significantly benefit infrastructure investment, but most works are experimental.</p></div>","PeriodicalId":52814,"journal":{"name":"Artificial Intelligence in Agriculture","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2589721724000205/pdfft?md5=adaa2b4e5272ad9c56b921776eacfaa1&pid=1-s2.0-S2589721724000205-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141325301","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Estimation of flea beetle damage in the field using a multistage deep learning-based solution
Q1 Computer Science Pub Date : 2024-06-06 DOI: 10.1016/j.aiia.2024.06.001
Arantza Bereciartua-Pérez , María Monzón , Daniel Múgica , Greta De Both , Jeroen Baert , Brittany Hedges , Nicole Fox , Jone Echazarra , Ramón Navarra-Mestre

Estimation of damage in plants is a key issue for crop protection. Currently, experts in the field manually assess the plots. This is a time-consuming task that can be automated thanks to the latest computer vision (CV) technology. Image-based systems and, more recently, deep learning-based systems have provided good results in several agricultural applications. These image-based applications outperform expert evaluation in controlled environments, and they are now progressively being included in non-controlled field applications.

A novel solution based on deep learning techniques in combination with image processing methods is proposed to tackle the estimation of plant damage in the field. The proposed solution is a two-stage algorithm. In the first stage, the single plants in the plots are detected by a YOLO-based object detection model. Then a regression model is applied to estimate the damage of each individual plant. The solution has been developed and validated on oilseed rape plants to estimate the damage caused by flea beetle.

The crop detection model achieves a mean average precision of 91%, with an mAP@0.5 of 0.99 overall and 0.91 for oilseed rape specifically. The regression model, which estimates damage degrees of up to 60% in single plants, achieves an MAE of 7.11 and an R² of 0.46 against plant-by-plant manual evaluations by experts. Models are deployed in a Docker container and, via a REST API, can run inference directly on images acquired in the field from a mobile device.
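As a rough illustration of the paper's two-stage idea, the sketch below wires a detection stage to a per-plant damage regressor and averages the result over the plot. `detect_plants` and `predict_damage` are hypothetical stand-ins for the trained YOLO detector and regression model, and the averaging is an assumption about how per-plant scores are aggregated.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class Detection:
    box: Tuple[int, int, int, int]   # x, y, w, h in image pixels
    confidence: float

def estimate_plot_damage(image,
                         detect_plants: Callable[[object], List[Detection]],
                         predict_damage: Callable[[object, Detection], float],
                         min_confidence: float = 0.5) -> float:
    """Stage 1: detect single plants; stage 2: regress a damage degree per
    detected plant; report the mean damage over the plot."""
    plants = [d for d in detect_plants(image) if d.confidence >= min_confidence]
    if not plants:
        return 0.0
    return sum(predict_damage(image, d) for d in plants) / len(plants)
```

Keeping the two stages behind plain callables is what lets such a pipeline sit behind a REST endpoint: the server only needs to pass the uploaded image through both models and return the aggregate.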

{"title":"Estimation of flea beetle damage in the field using a multistage deep learning-based solution","authors":"Arantza Bereciartua-Pérez ,&nbsp;María Monzón ,&nbsp;Daniel Múgica ,&nbsp;Greta De Both ,&nbsp;Jeroen Baert ,&nbsp;Brittany Hedges ,&nbsp;Nicole Fox ,&nbsp;Jone Echazarra ,&nbsp;Ramón Navarra-Mestre","doi":"10.1016/j.aiia.2024.06.001","DOIUrl":"10.1016/j.aiia.2024.06.001","url":null,"abstract":"<div><p>Estimation of damage in plants is a key issue for crop protection. Currently, experts in the field manually assess the plots. This is a time-consuming task that can be automated thanks to the latest technology in computer vision (CV). The use of image-based systems and recently deep learning-based systems have provided good results in several agricultural applications. These image-based applications outperform expert evaluation in controlled environments, and now they are being progressively included in non-controlled field applications.</p><p>A novel solution based on deep learning techniques in combination with image processing methods is proposed to tackle the estimate of plant damage in the field. The proposed solution is a two-stage algorithm. In a first stage, the single plants in the plots are detected by an object detection YOLO based model. Then a regression model is applied to estimate the damage of each individual plant. The solution has been developed and validated in oilseed rape plants to estimate the damage caused by flea beetle.</p><p>The crop detection model achieves a mean precision average of 91% with a [email protected] of 0.99 and a [email protected] of 0.91 for oilseed rape specifically. The regression model to estimate up to 60% of damage degree in single plants achieves a MAE of 7.11, and R2 of 0.46 in comparison with manual evaluations done plant by plant by experts. 
Models are deployed in a docker, and with a REST API communication protocol they can be inferred directly for images acquired in the field from a mobile device.</p></div>","PeriodicalId":52814,"journal":{"name":"Artificial Intelligence in Agriculture","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2589721724000199/pdfft?md5=6734d348bce39475c37cb2c23f24a354&pid=1-s2.0-S2589721724000199-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141390129","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Cross-comparative review of Machine learning for plant disease detection: apple, cassava, cotton and potato plants
Q1 Computer Science Pub Date : 2024-06-01 DOI: 10.1016/j.aiia.2024.04.002
James Daniel Omaye , Emeka Ogbuju , Grace Ataguba , Oluwayemisi Jaiyeoba , Joseph Aneke , Francisca Oladipo

Plant disease detection has played a significant role in combating plant diseases that pose a threat to global agriculture and food security. Detecting these diseases early can help mitigate their impact and ensure healthy crop yields. Machine learning algorithms have emerged as powerful tools for accurately identifying and classifying a wide range of plant diseases from trained image datasets of affected crops. These algorithms, including deep learning algorithms, have shown remarkable success in recognizing disease patterns and early signs of plant diseases. Besides early detection, there are other potential benefits of machine learning algorithms in overall plant disease management, such as soil and climatic condition predictions for plants, pest identification, proximity detection, and many more. Over the years, research has focused on using machine-learning algorithms for plant disease detection. Nevertheless, little is known about the extent to which the research community has explored machine learning algorithms to cover other significant areas of plant disease management. In view of this, we present a cross-comparative review of machine learning algorithms and applications designed for plant disease detection with a specific focus on four (4) economically important plants: apple, cassava, cotton, and potato. We conducted a systematic review of articles published between 2013 and 2023 to explore trends in the research community over the years. After filtering a number of articles based on our inclusion criteria, including articles that present individual prediction accuracy for classes of disease associated with the selected plants, 113 articles were considered relevant. From these articles, we analyzed the state-of-the-art techniques, challenges, and future prospects of using machine learning for disease identification of the selected plants. 
Results from our review show that deep learning and other algorithms performed significantly well in detecting plant diseases. In addition, we found a few references to plant disease management covering prevention, diagnosis, control, and monitoring. In view of this, little or no work has explored the prediction of the recovery of affected plants. Hence, we propose opportunities for developing machine learning-based technologies to cover prevention, diagnosis, control, monitoring, and recovery.

{"title":"Cross-comparative review of Machine learning for plant disease detection: apple, cassava, cotton and potato plants","authors":"James Daniel Omaye ,&nbsp;Emeka Ogbuju ,&nbsp;Grace Ataguba ,&nbsp;Oluwayemisi Jaiyeoba ,&nbsp;Joseph Aneke ,&nbsp;Francisca Oladipo","doi":"10.1016/j.aiia.2024.04.002","DOIUrl":"10.1016/j.aiia.2024.04.002","url":null,"abstract":"<div><p>Plant disease detection has played a significant role in combating plant diseases that pose a threat to global agriculture and food security. Detecting these diseases early can help mitigate their impact and ensure healthy crop yields. Machine learning algorithms have emerged as powerful tools for accurately identifying and classifying a wide range of plant diseases from trained image datasets of affected crops. These algorithms, including deep learning algorithms, have shown remarkable success in recognizing disease patterns and early signs of plant diseases. Besides early detection, there are other potential benefits of machine learning algorithms in overall plant disease management, such as soil and climatic condition predictions for plants, pest identification, proximity detection, and many more. Over the years, research has focused on using machine-learning algorithms for plant disease detection. Nevertheless, little is known about the extent to which the research community has explored machine learning algorithms to cover other significant areas of plant disease management. In view of this, we present a cross-comparative review of machine learning algorithms and applications designed for plant disease detection with a specific focus on four (4) economically important plants: apple, cassava, cotton, and potato. We conducted a systematic review of articles published between 2013 and 2023 to explore trends in the research community over the years. 
After filtering a number of articles based on our inclusion criteria, including articles that present individual prediction accuracy for classes of disease associated with the selected plants, 113 articles were considered relevant. From these articles, we analyzed the state-of-the-art techniques, challenges, and future prospects of using machine learning for disease identification of the selected plants. Results from our review show that deep learning and other algorithms performed significantly well in detecting plant diseases. In addition, we found a few references to plant disease management covering prevention, diagnosis, control, and monitoring. In view of this, little or no work has explored the prediction of the recovery of affected plants. Hence, we propose opportunities for developing machine learning-based technologies to cover prevention, diagnosis, control, monitoring, and recovery.</p></div>","PeriodicalId":52814,"journal":{"name":"Artificial Intelligence in Agriculture","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S258972172400014X/pdfft?md5=a2288673548d57c63626027a95ff21bf&pid=1-s2.0-S258972172400014X-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141054049","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Hyperparameter optimization of YOLOv8 for smoke and wildfire detection: Implications for agricultural and environmental safety
Q1 Computer Science Pub Date : 2024-06-01 DOI: 10.1016/j.aiia.2024.05.003
Leo Ramos , Edmundo Casas , Eduardo Bendek , Cristian Romero , Francklin Rivas-Echeverría

In this study, we extensively evaluated the viability of the state-of-the-art YOLOv8 architecture for object detection tasks, specifically tailored for smoke and wildfire identification with a focus on agricultural and environmental safety. All available versions of YOLOv8 were initially fine-tuned on a domain-specific dataset that included a variety of scenarios, crucial for comprehensive agricultural monitoring. The ‘large’ version (YOLOv8l) was selected for further hyperparameter tuning based on its performance metrics. This model underwent a detailed hyperparameter optimization using the One Factor At a Time (OFAT) methodology, concentrating on key parameters such as learning rate, batch size, weight decay, epochs, and optimizer. Insights from the OFAT study were used to define search spaces for a subsequent Random Search (RS). The final model derived from RS demonstrated significant improvements over the initial fine-tuned model, increasing overall precision by 1.39%, recall by 1.48%, F1-score by 1.44%, mAP@0.5 by 0.70%, and mAP@0.5:0.95 by 5.09%. We validated the enhanced model's efficacy on a diverse set of real-world images, reflecting various agricultural settings, to confirm its robustness in detecting smoke and fire. These results underscore the model's reliability and effectiveness in scenarios critical to agricultural safety and environmental monitoring.
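The OFAT-then-Random-Search procedure described above can be sketched generically: screen each hyperparameter in isolation to shrink its candidate set, then sample configurations from the reduced spaces. The `train` callable, candidate values, and the 5% retention threshold below are illustrative assumptions, not the paper's actual setup.

```python
import random

def ofat_search_spaces(train, base_config, candidates, keep_within=0.95):
    """One Factor At a Time: vary each hyperparameter alone around a base
    config and keep the values scoring within a fraction of the best,
    yielding a reduced search space per parameter (assumes scores > 0)."""
    spaces = {}
    for name, values in candidates.items():
        scores = {v: train(dict(base_config, **{name: v})) for v in values}
        best = max(scores.values())
        spaces[name] = [v for v, s in scores.items() if s >= keep_within * best]
    return spaces

def random_search(train, spaces, trials=20, seed=0):
    """Random Search over the OFAT-reduced spaces."""
    rng = random.Random(seed)
    best_cfg, best_score = None, float("-inf")
    for _ in range(trials):
        cfg = {name: rng.choice(vals) for name, vals in spaces.items()}
        score = train(cfg)
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score
```

In the study itself, `train` would fine-tune YOLOv8l and return a validation metric such as mAP@0.5; the sketch only shows how OFAT narrows the spaces that RS then samples.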

{"title":"Hyperparameter optimization of YOLOv8 for smoke and wildfire detection: Implications for agricultural and environmental safety","authors":"Leo Ramos ,&nbsp;Edmundo Casas ,&nbsp;Eduardo Bendek ,&nbsp;Cristian Romero ,&nbsp;Francklin Rivas-Echeverría","doi":"10.1016/j.aiia.2024.05.003","DOIUrl":"https://doi.org/10.1016/j.aiia.2024.05.003","url":null,"abstract":"<div><p>In this study, we extensively evaluated the viability of the state-of-the-art YOLOv8 architecture for object detection tasks, specifically tailored for smoke and wildfire identification with a focus on agricultural and environmental safety. All available versions of YOLOv8 were initially fine-tuned on a domain-specific dataset that included a variety of scenarios, crucial for comprehensive agricultural monitoring. The ‘large’ version (YOLOv8l) was selected for further hyperparameter tuning based on its performance metrics. This model underwent a detailed hyperparameter optimization using the One Factor At a Time (OFAT) methodology, concentrating on key parameters such as learning rate, batch size, weight decay, epochs, and optimizer. Insights from the OFAT study were used to define search spaces for a subsequent Random Search (RS). The final model derived from RS demonstrated significant improvements over the initial fine-tuned model, increasing overall precision by 1.39 %, recall by 1.48 %, F1-score by 1.44 %, [email protected] by 0.70 %, and [email protected]:0.95 by 5.09 %. We validated the enhanced model's efficacy on a diverse set of real-world images, reflecting various agricultural settings, to confirm its robustness in detecting smoke and fire. These results underscore the model's reliability and effectiveness in scenarios critical to agricultural safety and environmental monitoring. 
This work, representing a significant advancement in the field of fire and smoke detection through machine learning, lays a strong foundation for future research and solutions aimed at safeguarding agricultural areas and natural environments.</p></div>","PeriodicalId":52814,"journal":{"name":"Artificial Intelligence in Agriculture","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2589721724000187/pdfft?md5=c551b82b80431a9f2f37f79894497fcb&pid=1-s2.0-S2589721724000187-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141263998","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0