
Artificial Intelligence in Agriculture: Latest Publications

Enhanced detection algorithm for apple bruises using structured light imaging
Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date : 2023-12-13 DOI: 10.1016/j.aiia.2023.12.001
Haojie Zhu , Lingling Yang , Yu Wang , Yuwei Wang , Wenhui Hou , Yuan Rao , Lu Liu

Bruising reduces the edibility and marketability of fresh apples, inevitably causing economic losses for the apple industry. However, bruises lack obvious visual symptoms, which makes them challenging to detect using imaging techniques with uniform or diffuse illumination. This study employed the structured light imaging (SLI) technique to detect apple bruises. First, grayscale reflectance images were captured under phase-shifted sinusoidal illumination at three wavelengths (600, 650, and 700 nm) and six spatial frequencies (0.05, 0.10, 0.15, 0.20, 0.25, and 0.30 cycles mm−1). Next, the grayscale reflectance images were demodulated to produce direct component (DC) images representing uniform diffuse illumination and amplitude component (AC) images revealing bruises. Then, by quantifying the contrast between bruised and sound regions in all AC images, it was found that bruises exhibited the best contrast under sinusoidal illumination at a wavelength of 700 nm and a spatial frequency of 0.25 cycles mm−1. In the AC image with optimal contrast, the developed h-domes segmentation algorithm was applied to accurately segment the location and extent of the bruised regions. Moreover, the algorithm successfully segmented central bruised regions while addressing the challenge of segmenting edge bruised regions complicated by vignetting. The average Intersection over Union (IoU) values for the three types of bruises were 0.9422, 0.9231, and 0.9183, respectively. These results demonstrated that the combination of SLI and the h-domes segmentation algorithm is a viable approach for the effective detection of fresh apple bruises.
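
The abstract does not give the demodulation formulas, but phase-shifted sinusoidal illumination is typically demodulated with the standard three-phase equations used in spatial-frequency-domain imaging. The sketch below assumes three captures per wavelength and spatial frequency with phase offsets of 0, 2π/3, and 4π/3; the function names and the contrast metric are illustrative and not taken from the paper.

```python
import numpy as np

def demodulate_three_phase(i1, i2, i3):
    """Demodulate three grayscale images captured under sinusoidal illumination
    with phase offsets of 0, 2*pi/3, and 4*pi/3 into direct (DC) and
    amplitude (AC) component images."""
    i1, i2, i3 = (np.asarray(x, dtype=np.float64) for x in (i1, i2, i3))
    dc = (i1 + i2 + i3) / 3.0
    ac = (np.sqrt(2.0) / 3.0) * np.sqrt(
        (i1 - i2) ** 2 + (i2 - i3) ** 2 + (i3 - i1) ** 2
    )
    return dc, ac

def region_contrast(ac_image, bruised_mask, sound_mask):
    """Simple Weber-style contrast between the mean AC intensity of a bruised
    region and a sound region (the paper's exact contrast metric may differ)."""
    mu_bruised = ac_image[bruised_mask].mean()
    mu_sound = ac_image[sound_mask].mean()
    return abs(mu_sound - mu_bruised) / mu_sound
```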

Citations: 0
Image classification of lotus in Nong Han Chaloem Phrakiat Lotus Park using convolutional neural networks
Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date : 2023-12-12 DOI: 10.1016/j.aiia.2023.12.003
Thanawat Phattaraworamet , Sawinee Sangsuriyun , Phoempol Kutchomsri , Susama Chokphoemphun

The Nong Han Chaloem Phrakiat Lotus Park is a tourist attraction and a source of learning regarding lotus plants. However, as a training area, it lacks appeal and learning motivation due to its conventional presentation of information about lotus plants. The current study introduced the concept of smart learning in this setting to increase interest and motivation for learning. Convolutional neural networks (CNNs) were used for the classification of lotus plant species, for use in the development of a mobile application that displays details about each species. The scope of the study was to classify 11 species of lotus plants using the proposed CNN model based on different techniques (augmentation, dropout, and L2 regularization) and hyperparameters (dropout rate and epoch number). The expected outcome was a high-performance CNN model with fewer total parameters than three pre-trained CNN models (Inception V3, VGG16, and VGG19) used as benchmarks. Model performance was evaluated in terms of accuracy, F1-score, precision, and recall. The results showed that the CNN model with the augmentation, dropout, and L2 techniques, at a dropout value of 0.4 and an epoch number of 30, provided the highest testing accuracy of 0.9954. The best proposed model was more accurate than the pre-trained CNN models, especially Inception V3. In addition, the number of total parameters was reduced by a factor of approximately 1.80–2.19. These findings demonstrated that the proposed model, despite its small number of total parameters, achieved a satisfactory degree of classification accuracy.
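
As an illustration of where the dropout rate and L2 penalty enter such a model, here is a minimal PyTorch sketch; the layer sizes, learning rate, and weight decay are hypothetical placeholders rather than the authors' architecture, and the L2 term is expressed through the optimizer's weight_decay.

```python
import torch
import torch.nn as nn

class SmallLotusCNN(nn.Module):
    """Illustrative small CNN for 11 lotus classes with dropout before the
    final layer; not the authors' exact model."""
    def __init__(self, num_classes=11, p_drop=0.4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Dropout(p_drop), nn.Linear(64, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = SmallLotusCNN(p_drop=0.4)
# weight_decay plays the role of the L2 penalty; training would run for 30 epochs
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-4)
```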

Citations: 0
Real-time litchi detection in complex orchard environments: A portable, low-energy edge computing approach for enhanced automated harvesting
Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date : 2023-12-12 DOI: 10.1016/j.aiia.2023.12.002
Zeyu Jiao , Kai Huang , Qun Wang , Zhenyu Zhong , Yingjie Cai

Litchi, a succulent and perishable fruit, has a narrow annual harvest window of under two weeks. The advent of smart agriculture has driven the adoption of visually guided, automated litchi harvesting techniques. However, conventional approaches typically rely on laboratory-based, high-performance computing equipment, which presents challenges in terms of size, energy consumption, and practical application within litchi orchards. To address these limitations, we propose a real-time litchi detection methodology for complex environments that utilizes portable, low-energy edge computing devices. Initially, litchi orchard imagery was collected to enhance data generalization. Subsequently, a convolutional neural network (CNN)-based single-stage detector, YOLOx, was constructed to accurately pinpoint litchi fruit locations within the images. To facilitate deployment on portable, low-energy edge devices, we employed channel pruning and layer pruning algorithms to compress the trained model, reducing its size and parameter count. Additionally, knowledge distillation was used to fine-tune the network. Experimental findings demonstrated that our proposed method achieved a 97.1% compression rate, yielding a compact litchi detection model of a mere 6.9 MB, while maintaining 94.9% average precision and 97.2% average recall. Processing 99 frames per second (FPS), the method exhibited a 1.8-fold increase in speed compared to the uncompressed model. Consequently, our approach can be readily integrated into portable, low-computation automatic harvesting equipment, ensuring real-time, precise litchi detection within orchard settings.
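
The abstract mentions knowledge distillation for fine-tuning the pruned detector but gives no loss details. Below is a generic distillation loss (temperature-scaled KL divergence plus hard-label cross-entropy) as commonly applied to classification logits; the temperature and weighting are placeholder values, not the paper's settings.

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, targets, T=4.0, alpha=0.5):
    """Generic knowledge-distillation objective: soften both logit sets with
    temperature T, match them with KL divergence, and mix in the usual
    cross-entropy on ground-truth labels."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, targets)
    return alpha * soft + (1.0 - alpha) * hard
```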

Citations: 0
Vision Intelligence for Smart Sheep Farming: Applying Ensemble Learning to Detect Sheep Breeds
Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date : 2023-11-28 DOI: 10.1016/j.aiia.2023.11.002
Galib Muhammad Shahriar Himel , Md. Masudul Islam , Mijanur Rahaman

The ability to automatically recognize sheep breeds holds significant value for the sheep industry. Sheep farmers often require breed identification to assess the commercial worth of their flocks. However, many farmers, particularly novices, have difficulty accurately identifying sheep breeds without experts in the field. Therefore, there is a need for autonomous approaches that can effectively and precisely replicate the breed identification skills of a sheep breed expert while functioning within a farm environment, thus providing considerable benefits to novice farmers in the industry. To achieve this objective, we suggest utilizing a model based on convolutional neural networks (CNNs) that can rapidly and efficiently identify the type of sheep based on facial features. This approach offers a cost-effective solution. To conduct our experiment, we utilized a dataset consisting of 1680 facial images representing four distinct sheep breeds. This paper proposes an ensemble method that combines Xception, VGG16, InceptionV3, InceptionResNetV2, and DenseNet121 models. During transfer learning with these pre-trained models, we applied several optimizers and loss functions and chose the best combinations among them. This classification model has the potential to aid sheep farmers in precisely and efficiently distinguishing between various breeds, enabling more precise assessments of sector-specific classification for different businesses.
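
The abstract does not state how the five networks' outputs are combined; a common ensemble rule is soft voting over class probabilities, sketched below with NumPy. The optional per-model weights are hypothetical.

```python
import numpy as np

def soft_voting(prob_list, weights=None):
    """Combine per-model class-probability arrays of shape (n_samples, n_classes)
    by (weighted) averaging and return the predicted class indices."""
    probs = np.stack(prob_list, axis=0)   # (n_models, n_samples, n_classes)
    if weights is None:
        avg = probs.mean(axis=0)
    else:
        w = np.asarray(weights, dtype=np.float64)
        avg = (probs * w[:, None, None]).sum(axis=0) / w.sum()
    return avg.argmax(axis=1)
```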

Citations: 0
DeepRice: A deep learning and deep feature based classification of Rice leaf disease subtypes
Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date : 2023-11-23 DOI: 10.1016/j.aiia.2023.11.001
P. Isaac Ritharson , Kumudha Raimond , X. Anitha Mary , Jennifer Eunice Robert , Andrew J

Rice stands as a crucial staple food globally, with its enduring sustainability hinging on the prompt detection of rice leaf diseases. Hence, efficiently detecting diseases once they have occurred is of paramount importance for reducing the cost of manual visual identification and chemical testing. In the recent past, the identification of leaf pathologies in crops has predominantly relied on manual methods using specialized equipment, which proves to be time-consuming and inefficient. This study offers a remedy by harnessing Deep Learning (DL) and transfer learning techniques to accurately identify and classify rice leaf diseases. A comprehensive dataset comprising 5932 self-generated images of rice leaves was assembled along with the benchmark datasets, categorized into 9 classes irrespective of the extent of disease spread across the leaves. These classes encompass diverse states including healthy leaves, mild and severe blight, mild and severe tungro, mild and severe blast, as well as mild and severe brown spot. Following meticulous manual labelling and dataset segmentation, which were validated by horticulture experts, data augmentation strategies were implemented to amplify the number of images. The datasets were evaluated using the proposed tailored Convolutional Neural Network models, and their performance was scrutinized alongside alternative transfer learning approaches such as VGG16, Xception, ResNet50, DenseNet121, Inception ResNetV2, and Inception V3. The effectiveness of the proposed custom VGG16 model was gauged by its capacity to generalize to unseen images, yielding an exceptional accuracy of 99.94% and surpassing the benchmarks set by existing state-of-the-art models. Further, the layer-wise feature extraction is visualized to support interpretable AI.
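
As a hedged illustration of the transfer-learning setup, the sketch below builds an ImageNet-pretrained VGG16 with a replaced 9-class head using torchvision; the freezing strategy and head size are assumptions, since the abstract does not describe the authors' exact customisation.

```python
import torch.nn as nn
from torchvision import models

def build_custom_vgg16(num_classes=9):
    """Transfer-learning sketch: frozen VGG16 convolutional base with a new
    final layer for the nine rice-leaf classes (illustrative only)."""
    backbone = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
    for p in backbone.features.parameters():
        p.requires_grad = False            # freeze the convolutional base
    backbone.classifier[6] = nn.Linear(4096, num_classes)
    return backbone
```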

Citations: 0
Cumulative unsupervised multi-domain adaptation for Holstein cattle re-identification
Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date : 2023-10-17 DOI: 10.1016/j.aiia.2023.10.002
Fabian Dubourvieux , Guillaume Lapouge , Angélique Loesch , Bertrand Luvison , Romaric Audigier

In dairy farming, ensuring the health of each cow and minimizing economic losses requires individual monitoring, achieved through cow Re-Identification (Re-ID). Computer vision-based Re-ID relies on visually distinguishing features, such as the distinctive coat patterns of breeds like Holstein.

However, annotating every cow in each farm is cost-prohibitive. Our objective is to develop Re-ID methods applicable to both labeled and unlabeled farms, accommodating new individuals and diverse environments. Unsupervised Domain Adaptation (UDA) techniques bridge this gap by transferring knowledge from labeled source domains to unlabeled target domains, but they have mainly been designed for pedestrian and vehicle Re-ID applications.

Our work introduces Cumulative Unsupervised Multi-Domain Adaptation (CUMDA) to address challenges of limited identity diversity and diverse farm appearances. CUMDA accumulates knowledge from all domains, enhancing specialization in known domains and improving generalization to unseen domains. Our contributions include a CUMDA method adapting to multiple unlabeled target domains while preserving source domain performance, along with extensive cross-dataset experiments on three cattle Re-ID datasets. These experiments demonstrate significant enhancements in source preservation, target domain specialization, and generalization to unseen domains.
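
The abstract does not describe CUMDA's internals. Many unsupervised Re-ID adaptation pipelines generate pseudo-identities on the unlabeled target farm by clustering embeddings, and the snippet below sketches only that generic step; the DBSCAN parameters and cosine normalization are assumptions, not the method of this paper.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def pseudo_label(embeddings, eps=0.5, min_samples=4):
    """Cluster L2-normalized cow embeddings from an unlabeled farm into
    pseudo-identities; samples flagged as noise (-1) are discarded."""
    emb = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    labels = DBSCAN(eps=eps, min_samples=min_samples, metric="cosine").fit_predict(emb)
    keep = labels != -1
    return emb[keep], labels[keep]
```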

Citations: 0
Harvest optimization for sustainable agriculture: The case of tea harvest scheduling
Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date : 2023-10-12 DOI: 10.1016/j.aiia.2023.10.001
Bedirhan Sarımehmet, Mehmet Pınarbaşı, Hacı Mehmet Alakaş, Tamer Eren

To ensure sustainability in agriculture, many optimization problems need to be solved. An important one is the harvest scheduling problem. In this study, the harvest scheduling problem for tea is discussed. The tea harvest problem involves creating a harvest schedule that considers the farmers' quotas under purchase location and factory capacity constraints. Tea harvesting is carried out in cooperation between farmers and the factory. Factory management is interested in using its resources efficiently, so factory capacity, purchase location capacities, and the number of expeditions should be considered during the harvesting process. On the farmers' side, many farmers have professions other than tea farming, and on harvest days they often cannot attend to these primary professions. Considering the harvest day preferences of farmers when creating the harvest schedule is therefore of great importance for sustainability in agriculture. Two different mathematical models are proposed to solve this problem. The first model minimizes the number of weekly expeditions of factory vehicles within the factory and purchase location capacity restrictions. The second model minimizes the number of expeditions while complying with the preferences of the farmers as much as possible. A sample application was performed in a region with 12 purchase locations, 988 farmers, and 3392 decares of tea fields. The results show that the compliance rate with farmers' harvesting preferences could be increased from 52% to 97% without affecting the number of expeditions of the factory. This result shows that considering the farmers' preferences on harvest days has no negative impact on the factory. On the contrary, it increases sustainability and encouragement in agriculture. Furthermore, the results show that the models are effective for solving the problem.
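
To make the scheduling idea concrete, here is a toy mixed-integer sketch in PuLP that maximizes compliance with farmers' preferred harvest days under a single purchase-location daily capacity. The data, the single-location simplification, and the objective are hypothetical and far simpler than the paper's two models.

```python
import pulp

farmers = {"f1": 120, "f2": 90, "f3": 150}            # weekly quota (kg), toy data
days = ["mon", "tue", "wed"]
prefers = {("f1", "mon"), ("f2", "wed"), ("f3", "mon")}
capacity = 260                                         # kg per day at the location

m = pulp.LpProblem("tea_harvest", pulp.LpMaximize)
x = pulp.LpVariable.dicts("x", (farmers, days), cat="Binary")

m += pulp.lpSum(x[f][d] for (f, d) in prefers)         # objective: preference compliance
for f in farmers:                                      # each farmer's quota harvested once
    m += pulp.lpSum(x[f][d] for d in days) == 1
for d in days:                                         # daily purchase-location capacity
    m += pulp.lpSum(farmers[f] * x[f][d] for f in farmers) <= capacity

m.solve(pulp.PULP_CBC_CMD(msg=False))
plan = {f: next(d for d in days if x[f][d].value() > 0.5) for f in farmers}
print(plan)
```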

Citations: 0
Machine learning-based spectral and spatial analysis of hyper- and multi-spectral leaf images for Dutch elm disease detection and resistance screening
Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date : 2023-09-26 DOI: 10.1016/j.aiia.2023.09.003
Xing Wei , Jinnuo Zhang , Anna O. Conrad , Charles E. Flower , Cornelia C. Pinchot , Nancy Hayes-Plazolles , Ziling Chen , Zhihang Song , Songlin Fei , Jian Jin

Diseases caused by invasive pathogens are an increasing threat to forest health, and early and accurate disease detection is essential for timely and precision forest management. Recent technological advancements in spectral imaging and artificial intelligence have opened up new possibilities for plant disease detection in both crops and trees. In this study, Dutch elm disease (DED, caused by Ophiostoma novo-ulmi) and American elm (Ulmus americana) were used as an example pathosystem to evaluate the accuracy of two in-house developed high-precision portable hyper- and multi-spectral leaf imagers, combined with machine learning, as new tools for forest disease detection. Hyper- and multi-spectral images were collected from leaves of American elm genotypes with varied disease susceptibilities after mock-inoculation and inoculation with O. novo-ulmi under greenhouse conditions. Both traditional machine learning and state-of-the-art deep learning models were built upon derived spectra and directly upon spectral image cubes. Deep learning models that incorporate both spectral and spatial features of high-resolution spectral leaf images performed better in detecting DED than traditional machine learning models built upon spectral features alone. Edges and symptomatic spots on the leaves were highlighted by the deep learning model as important spatial features for distinguishing leaves from inoculated and mock-inoculated trees. In addition, spectral and spatial feature patterns identified in the machine learning-based models were found to relate to the DED susceptibility of elm genotypes. Though further studies are needed to assess applications in other pathosystems, hyper- and multi-spectral leaf imagers combined with machine learning show potential as new tools for disease phenotyping in trees.
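
As a minimal example of the "traditional machine learning on derived spectra" branch, the snippet below trains a random-forest classifier on per-leaf mean reflectance spectra; the file names, array shapes, and the choice of random forest are illustrative assumptions, not details from the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical inputs: one mean reflectance spectrum per leaf and a binary label
X = np.load("leaf_mean_spectra.npy")   # shape (n_leaves, n_bands), assumed file
y = np.load("leaf_labels.npy")         # 0 = mock-inoculated, 1 = inoculated
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```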

Citations: 0
Machine learning for weed–plant discrimination in agriculture 5.0: An in-depth review
Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date : 2023-09-19 DOI: 10.1016/j.aiia.2023.09.002
Filbert H. Juwono , W.K. Wong , Seema Verma , Neha Shekhawat , Basil Andy Lease , Catur Apriono

Agriculture 5.0 is an emerging concept in which sensors, big data, the Internet of Things (IoT), robots, and Artificial Intelligence (AI) are used for agricultural purposes. In contrast to Agriculture 4.0, robots and AI become the focus of implementation in Agriculture 5.0. One application of Agriculture 5.0 is weed management, where robots are used to discriminate weeds from crops or plants so that proper action can be taken to remove the weeds. This paper presents an in-depth review of Machine Learning (ML) techniques used for discriminating weeds from crops or plants. We specifically provide a detailed explanation of the five steps required when using ML algorithms to distinguish between weeds and plants.
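
The five steps are detailed in the review itself rather than the abstract, so the sketch below only illustrates a generic weed-versus-crop classification pipeline built from a hand-crafted colour feature and an SVM; the feature choice and parameters are hypothetical.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def excess_green_features(rgb_patches):
    """Per-patch mean and standard deviation of the Excess Green index,
    a simple vegetation cue often used in weed/crop imaging studies."""
    rgb = np.asarray(rgb_patches, dtype=np.float64) / 255.0   # (n, h, w, 3)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    exg = 2.0 * g - r - b
    return np.stack([exg.mean(axis=(1, 2)), exg.std(axis=(1, 2))], axis=1)

# Assumed data: X_patches of shape (n, h, w, 3) and labels y (0 = crop, 1 = weed)
# clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
# clf.fit(excess_green_features(X_patches), y)
```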

Citations: 1
Crop diagnostic system: A robust disease detection and management system for leafy green crops grown in an aquaponics facility
Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date : 2023-09-09 DOI: 10.1016/j.aiia.2023.09.001
R. Abbasi , P. Martinez , R. Ahmad

Crops grown on aquaponics farms are susceptible to various diseases or biotic stresses during their growth cycle, just like traditional agriculture. Early detection of diseases is crucial for monitoring the efficiency and progress of the aquaponics system. Aquaponics combines recirculating aquaculture and soilless hydroponics methods and promises to ensure food security, reduce water scarcity, and eliminate the carbon footprint. For the large-scale implementation of this farming technique, a unified system is needed that can detect crop diseases and support researchers and farmers in identifying potential causes and treatments at early stages. This study proposes an automatic crop diagnostic system for detecting biotic stresses and managing diseases in four leafy green crops, lettuce, basil, spinach, and parsley, grown in an aquaponics facility. First, a dataset comprising 2640 images is constructed. Then, a disease detection system is developed that works in three phases. The first phase is a crop classification system that identifies the type of crop. The second phase is a disease identification system that determines the crop's health status. The final phase is a disease detection system that localizes the diseased and healthy spots in leaves and categorizes the disease. The proposed approach has shown promising results, with the accuracy of the three phases reaching 95.83%, 94.13%, and 82.13%, respectively. The final disease detection system is then integrated with an ontology model through a cloud-based application. This ontology model contains domain knowledge related to crop pathology, particularly the causes and treatments of different diseases of the studied leafy green crops, which can be automatically extracted upon disease detection, allowing agricultural practitioners to take precautionary measures. The proposed application is significant as a decision support system that can automate aquaponics facility health monitoring and assist agricultural practitioners in decision-making processes regarding crop and disease management.
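
The three-phase cascade and ontology lookup described above can be summarized as a short orchestration routine; the function names, return values, and ontology query in this sketch are placeholders for the described flow, not the authors' implementation.

```python
def diagnose(image, crop_clf, health_clf, spot_detector, ontology_lookup):
    """Illustrative three-phase pipeline: identify the crop, check its health
    status, then localize spots, categorize the disease, and query the
    ontology for causes and treatments."""
    crop = crop_clf(image)                        # phase 1: which crop is it?
    status = health_clf(image, crop)              # phase 2: healthy or diseased?
    if status == "healthy":
        return {"crop": crop, "status": status}
    spots, disease = spot_detector(image, crop)   # phase 3: localize and categorize
    advice = ontology_lookup(crop, disease)       # domain knowledge: causes, treatments
    return {"crop": crop, "status": status, "disease": disease,
            "spots": spots, "advice": advice}
```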

Citations: 1