
Artificial Intelligence in Agriculture: Latest Publications

Deep learning methods for biotic and abiotic stresses detection and classification in fruits and vegetables: State of the art and perspectives
Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date: 2023-09-01 DOI: 10.1016/j.aiia.2023.08.001
Sèton Calmette Ariane Houetohossou, Vinasetan Ratheil Houndji, Castro Gbêmêmali Hounmenou, Rachidatou Sikirou, Romain Lucas Glele Kakaï

Deep Learning (DL), a type of Machine Learning, has gained significant interest in many fields, including agriculture. This paper aims to shed light on deep learning techniques used in agriculture for abiotic and biotic stress detection in fruits and vegetables, their benefits, and the challenges faced by users. Scientific papers were collected from Web of Science, Scopus, Google Scholar, Springer, and the Directory of Open Access Journals (DOAJ) using combinations of specific keywords such as 'Deep Learning' OR 'Artificial Intelligence' in combination with 'fruit disease', 'vegetable disease', 'fruit stress', OR 'vegetable stress', following PRISMA guidelines. From the initial 818 papers identified using the keywords, 132 were reviewed after excluding books, reviews, and irrelevant papers. The recovered scientific papers were from 2003 to 2022; 93% addressed biotic stress on fruits and vegetables. The most common biotic stresses on species are fungal diseases (grey spots, brown spots, black spots, downy mildew, powdery mildew, and anthracnose). Few studies addressed abiotic stresses (nutrient deficiency, water stress, light intensity, and heavy metal contamination). Deep Learning and Convolutional Neural Networks were the most used keywords, with GoogleNet (18.28%), ResNet50 (16.67%), and VGG16 (16.67%) as the most used architectures. Fifty-two percent of the data used to build these models came from the field, followed by data obtained online. Precision problems due to unbalanced classes and the small size of some databases were also analyzed. We identified research gaps and offered some perspectives drawn from the reviewed papers. Further research is required for a deep understanding of the use of machine learning techniques in fruit and vegetable studies: collection of large datasets covering different fruit and vegetable disease scenarios, evaluation of the effect of climatic variability on fruit and vegetable yield using AI methods, and more abiotic stress studies.
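To make the transfer-learning setup that dominates the reviewed papers concrete, the sketch below fine-tunes ResNet50, one of the most used architectures above, for fruit and vegetable stress classification. It is a minimal sketch assuming PyTorch with torchvision >= 0.13; the class count and hyperparameters are illustrative, not taken from any reviewed study.

```python
# Hedged sketch: ResNet50 transfer learning for stress classification.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 10  # hypothetical number of biotic/abiotic stress categories

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
for p in model.parameters():
    p.requires_grad = False                      # freeze pretrained backbone
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)  # new classifier head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()                # standard multi-class loss
```

GoogleNet (`models.googlenet`) follows the same recipe; VGG16 exposes its head as `model.classifier[6]` instead of `model.fc`.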

Citations: 0
Low-cost livestock sorting information management system based on deep learning
Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date: 2023-09-01 DOI: 10.1016/j.aiia.2023.08.007
Yuanzhi Pan, Yuzhen Zhang, Xiaoping Wang, Xiang Xiang Gao, Zhongyu Hou

Modern pig farming leaves much to be desired in terms of efficiency, as these systems rely mainly on electromechanical controls and can only categorize pigs according to their weight. This method is not only inefficient but also escalates labor expenses and heightens the threat of zoonotic diseases. Furthermore, confining pigs in large groups can exacerbate the spread of infections and complicate the monitoring and care of ill pigs. This research executed an experiment to construct a deep-learning sorting mechanism, leveraging a dataset infused with pivotal metrics and breeding imagery gathered over 24 months. This research integrated a Kalman filter-based algorithm to augment the precision of the dynamic sorting operation. The experiment unveiled a pioneering machine vision sorting system powered by deep learning, adept at handling live imagery for multifaceted recognition objectives. The individual recognition model based on a Residual Neural Network (ResNet) monitors livestock weight for sustained data forecasting, whereas the Wasserstein Generative Adversarial Nets (WGAN) image enhancement algorithm bolsters recognition in distinct settings, fortifying the model's resilience. Notably, the system can pinpoint livestock exhibiting signs of potential illness via irregular body appearances and isolate them for safety. Experimental outcomes validate the superiority of the proposed system over traditional counterparts. It not only minimizes manual interventions and data upkeep expenses but also heightens the accuracy of livestock identification and optimizes data usage. These findings reflect an 89% success rate in livestock ID recognition, a 32% surge in obscured image recognition, a 95% leap in livestock categorization accuracy, and a remarkable 98% success rate in discerning images of unwell pigs. In essence, this research augments identification efficiency, curtails operational expenses, and provides enhanced tools for disease monitoring.
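The Kalman-filter smoothing step can be pictured with the minimal scalar sketch below, which filters a stream of noisy per-pig weight readings before a sorting decision. It assumes a near-constant weight over the sorting window; the noise variances and readings are illustrative assumptions, not the paper's values.

```python
# Hedged sketch: scalar Kalman filter smoothing noisy weight readings.
import numpy as np

def kalman_smooth(measurements, q=0.01, r=4.0):
    """q: process noise variance, r: measurement noise variance (kg^2)."""
    x, p = measurements[0], 1.0      # initial state estimate and its variance
    estimates = []
    for z in measurements:
        p = p + q                    # predict: uncertainty grows by q
        k = p / (p + r)              # Kalman gain
        x = x + k * (z - x)          # correct the estimate with measurement z
        p = (1.0 - k) * p            # shrink uncertainty after the update
        estimates.append(x)
    return np.array(estimates)

print(kalman_smooth(np.array([101.2, 99.8, 103.5, 100.9])))  # smoothed kg
```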

Citations: 2
Corn kernel classification from few training samples
Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date: 2023-09-01 DOI: 10.1016/j.aiia.2023.08.006
Patricia L. Suárez, Henry O. Velesaca, Dario Carpio, Angel D. Sappa

This article presents an efficient approach to classify a set of corn kernels in contact, which may contain good or defective kernels along with impurities. The proposed approach consists of two stages: the first is a next-generation segmentation network, trained on a set of synthesized images, that divides the given image into a set of individual instances. An ad-hoc lightweight CNN architecture is then proposed to classify each instance into one of three categories (i.e., good, defective, and impurities). The segmentation network is trained using a strategy that avoids the time-consuming and human-error-prone task of manual data annotation. Regarding the classification stage, the proposed ad-hoc network is designed with only a few sets of layers, resulting in a lightweight architecture capable of being used in integrated solutions. Experimental results and comparisons with previous approaches are provided, showing both the improvement in accuracy and the reduction in time. Finally, the segmentation and classification approach proposed can be easily adapted for use with other cereal types.
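A minimal sketch of what such an ad-hoc lightweight classifier could look like is given below; the layer widths and input size are illustrative assumptions, not the paper's exact architecture.

```python
# Hedged sketch: lightweight CNN for the three kernel classes
# (good / defective / impurity).
import torch
import torch.nn as nn

class KernelNet(nn.Module):
    def __init__(self, num_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x):               # x: (N, 3, H, W) kernel instance crops
        f = self.features(x)
        f = f.mean(dim=(2, 3))          # global average pooling -> (N, 32)
        return self.classifier(f)

logits = KernelNet()(torch.randn(4, 3, 64, 64))  # -> shape (4, 3)
```

The small parameter count is what makes such a head cheap enough for integrated solutions.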

Citations: 0
Estimation of morphological traits of foliage and effective plant spacing in NFT-based aquaponics system
Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date: 2023-09-01 DOI: 10.1016/j.aiia.2023.08.004
R. Abbasi, P. Martinez, R. Ahmad

Deep learning and computer vision techniques have gained significant attention in the agriculture sector due to their non-destructive and contactless features. These techniques are also being integrated into modern farming systems, such as aquaponics, to address the challenges hindering its commercialization and large-scale implementation. Aquaponics is a farming technology that combines a recirculating aquaculture system and soilless hydroponic agriculture, and it promises to address food security issues. To complement current research efforts, a methodology is proposed to automatically measure the morphological traits of crops, such as width, length, and area, and estimate the effective plant spacing between grow channels. Plant spacing is one of the key design parameters that depend on crop type and its morphological traits; it therefore needs to be monitored to ensure high crop yield and quality, which can be impacted by foliage occlusion or overlap as the crop grows. The proposed approach uses Mask-RCNN to estimate the size of the crops and a mathematical model to determine plant spacing for a self-adaptive aquaponics farm. For common little gem romaine lettuce, the growth is estimated within 2 cm of error for both length and width. The final model is deployed on a cloud-based application and integrated with an ontology model containing domain knowledge of the aquaponics system. The relevant knowledge about crop characteristics and optimal plant spacing is extracted from the ontology and compared with results obtained from the final model to suggest further actions. The proposed application finds its significance as a decision support system that can pave the way for intelligent system monitoring and control.
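The measurement step can be sketched as follows: given a binary foliage mask from Mask-RCNN and a calibrated pixel-to-centimetre scale, width, length, and area follow from the mask, and a simple rule derives a non-overlapping spacing. The scale constant, margin, and spacing rule are illustrative assumptions, not the paper's exact mathematical model.

```python
# Hedged sketch: foliage traits and plant spacing from a segmentation mask.
import numpy as np

PX_PER_CM = 12.0  # hypothetical camera-calibration constant

def foliage_traits(mask):
    """Return (width_cm, length_cm, area_cm2) from a binary foliage mask."""
    ys, xs = np.nonzero(mask)
    width = (xs.max() - xs.min() + 1) / PX_PER_CM
    length = (ys.max() - ys.min() + 1) / PX_PER_CM
    area = mask.sum() / PX_PER_CM ** 2
    return width, length, area

def effective_spacing(max_width_cm, margin_cm=2.0):
    # Center-to-center distance so neighbouring foliage does not overlap.
    return max_width_cm + margin_cm

w, l, a = foliage_traits(np.ones((120, 96), dtype=np.uint8))
print(w, l, a, effective_spacing(w))  # 8.0 10.0 80.0 10.0
```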

Citations: 0
Detecting broiler chickens on litter floor with the YOLOv5-CBAM deep learning model
Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date: 2023-09-01 DOI: 10.1016/j.aiia.2023.08.002
Yangyang Guo, Samuel E. Aggrey, Xiao Yang, Adelumola Oladeinde, Yongliang Qiao, Lilong Chai

For commercial broiler production, about 20,000–30,000 birds are raised in each confined house, which has caused growing public concern about animal welfare. Currently, daily evaluation of broiler wellbeing and growth is conducted manually, which is labor-intensive and subject to human error. Therefore, there is a need for an automatic tool to detect and analyze the behaviors of chickens and predict their welfare status. In this study, we developed a YOLOv5-CBAM-broiler model and tested its performance for detecting broilers on litter floor. The proposed model consisted of two parts: (1) a basic YOLOv5 model for bird or broiler feature extraction and object detection; and (2) the convolutional block attention module (CBAM) to improve the feature extraction capability of the network and address the problem of missed detection of occluded and small targets. A complex dataset of broiler chicken images at different ages, in multiple pens and scenes (fresh litter versus reused litter), was constructed to evaluate the effectiveness of the new model. In addition, the model was compared to the Faster R-CNN, SSD, YOLOv3, EfficientDet and YOLOv5 models. The results demonstrate that the precision, recall, F1 score, and mAP@0.5 of the proposed method were 97.3%, 92.3%, 94.7%, and 96.5%, superior to the comparison models. In addition, comparing detection across different scenes, the YOLOv5-CBAM model remained better than the comparison methods. Overall, the proposed YOLOv5-CBAM-broiler model can achieve accurate and fast real-time target detection and provide technical support for the management and monitoring of birds in commercial broiler houses.
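For readers unfamiliar with CBAM, the sketch below shows the standard module (channel attention followed by spatial attention) of the kind inserted into YOLOv5 here; the reduction ratio and kernel size follow the common CBAM defaults and are not necessarily the paper's settings.

```python
# Hedged sketch: a standard CBAM block (channel then spatial attention).
import torch
import torch.nn as nn

class CBAM(nn.Module):
    def __init__(self, channels, reduction=16, kernel_size=7):
        super().__init__()
        self.mlp = nn.Sequential(           # shared MLP for channel attention
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels),
        )
        self.spatial = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        # Channel attention from average- and max-pooled channel statistics.
        avg = self.mlp(x.mean(dim=(2, 3)))
        mx = self.mlp(x.amax(dim=(2, 3)))
        x = x * torch.sigmoid(avg + mx)[:, :, None, None]
        # Spatial attention from channel-wise mean and max maps.
        s = torch.cat([x.mean(1, keepdim=True), x.amax(1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))

y = CBAM(64)(torch.randn(2, 64, 32, 32))  # shape preserved: (2, 64, 32, 32)
```

Because the block preserves tensor shape, it can be dropped between existing YOLOv5 backbone stages without changing the rest of the network.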

Citations: 1
Machine learning in nutrient management: A review
Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date: 2023-09-01 DOI: 10.1016/j.aiia.2023.06.001
Oumnia Ennaji, Leonardus Vergütz, Achraf El Allali

In agriculture, precise fertilization and effective nutrient management are critical. Machine learning (ML) has recently been increasingly used to develop decision support tools for modern agricultural systems, including nutrient management, to improve yields while reducing expenses and environmental impact. ML-based systems require huge amounts of data from different platforms to handle non-linear tasks and build predictive models that can improve agricultural productivity. This study reviews ML-based techniques for estimating fertilizer and nutrient status that have been developed in the last decade. A thorough investigation of detection and classification approaches was conducted, which served as the basis for a detailed assessment of the key challenges that remain to be addressed. The research findings suggest that rapid improvements in machine learning and sensor technology can provide cost-effective and thorough nutrient assessment and decision-making solutions. Future research directions are also recommended to improve the practical application of this technology.
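As one small illustration of the pipelines this review surveys, the sketch below trains a random-forest classifier to flag nutrient deficiency from sensor-derived features; the features, labels, and data are entirely synthetic and illustrative, not drawn from any reviewed study.

```python
# Hedged sketch: random forest for a binary nutrient-status decision.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))     # e.g. NDVI, chlorophyll index, red, NIR
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # 0 = deficient, 1 = sufficient

clf = RandomForestClassifier(n_estimators=100, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())  # cross-validated accuracy
```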

Citations: 0
CactiViT: Image-based smartphone application and transformer network for diagnosis of cactus cochineal
Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date: 2023-09-01 DOI: 10.1016/j.aiia.2023.07.002
Anas Berka, Adel Hafiane, Youssef Es-Saady, Mohamed El Hajji, Raphaël Canals, Rachid Bouharroud

The cactus is a plant that grows in many rural areas, widely used as a hedge, and has multiple benefits through the manufacture of various cosmetics and other products. However, this crop has been suffering for some time from attack by the carmine scale Dactylopius opuntiae (Hemiptera: Dactylopiidae). The infestation can spread rapidly if not treated at an early stage. Current solutions consist of regular naked-eye field checks carried out by experts. The major difficulty is the lack of experts to check all fields, especially in remote areas. In addition, this requires time and resources. Hence the need for a system that can categorize the health level of cacti remotely. To date, deep learning models used to categorize plant diseases from images have not addressed the mealybug infestation of cacti because computer vision has not sufficiently addressed this disease. Since there is no public dataset and smartphones are commonly used as tools to take pictures, it is conceivable for farmers to use them to categorize the infection level of their crops. In this work, we developed a system called CactiViT that instantly determines the health status of cacti using the Vision Transformer (ViT) model. We also provide a new image dataset of cochineal-infested cacti. Finally, we developed a mobile application that delivers the classification results directly to farmers, showing the probabilities related to each infestation class in their fields. This study compares existing models on the new dataset and presents the results obtained. The ViT-B-16 model shows established performance both in the literature and in our experiments, achieving 88.73% overall accuracy, on average +2.61% over the other convolutional neural network (CNN) models we evaluated under similar conditions.
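A minimal fine-tuning sketch for the ViT-B-16 backbone is shown below, assuming torchvision >= 0.13; the three infestation-level labels are an illustrative assumption, not necessarily the paper's exact class set.

```python
# Hedged sketch: adapting ViT-B-16 to cochineal-infestation classification.
import torch.nn as nn
from torchvision.models import vit_b_16, ViT_B_16_Weights

NUM_LEVELS = 3  # hypothetical: healthy / lightly / heavily infested

weights = ViT_B_16_Weights.IMAGENET1K_V1
model = vit_b_16(weights=weights)
model.heads.head = nn.Linear(model.heads.head.in_features, NUM_LEVELS)
preprocess = weights.transforms()  # resizing/normalization matching the weights
```

For smartphone deployment, such a model would typically be exported to a mobile runtime after training; that step is outside this sketch.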

Citations: 0
Rice disease identification method based on improved CNN-BiGRU
Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date: 2023-09-01 DOI: 10.1016/j.aiia.2023.08.005
Yang Lu, Xiaoxiao Wu, Pengfei Liu, Hang Li, Wanting Liu

In the field of precision agriculture, diagnosing rice diseases from images remains challenging due to high error rates, multiple influencing factors, and unstable conditions. While machine learning and convolutional neural networks have shown promising results in identifying rice diseases, they have been limited in their ability to explain the relationships among disease features. In this study, we proposed an improved rice disease classification method that combines a convolutional neural network (CNN) with a bidirectional gated recurrent unit (BiGRU). Specifically, we introduced a residual mechanism into the Inception module, expanded the module's depth, and integrated an improved Convolutional Block Attention Module (CBAM). We trained and tested the improved CNN and BiGRU, concatenated the outputs of the CNN and BiGRU modules, and passed them to the classification layer for recognition. Our experiments demonstrate that this approach achieves an accuracy of 98.21% in identifying four types of rice diseases, providing a reliable method for rice disease recognition research.
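The fusion step, concatenating CNN features with BiGRU features before the classification layer, can be sketched as below; the backbone, the row-sequence construction, and all dimensions are illustrative assumptions rather than the paper's exact configuration.

```python
# Hedged sketch: CNN + BiGRU feature fusion for 4-class rice disease ID.
import torch
import torch.nn as nn

class CNNBiGRU(nn.Module):
    def __init__(self, num_classes=4):
        super().__init__()
        self.cnn = nn.Sequential(     # stand-in for the improved Inception CNN
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1)
        )
        self.bigru = nn.GRU(input_size=3 * 224, hidden_size=64,
                            batch_first=True, bidirectional=True)
        self.fc = nn.Linear(32 + 2 * 64, num_classes)

    def forward(self, x):                        # x: (N, 3, 224, 224)
        c = self.cnn(x).flatten(1)               # CNN features: (N, 32)
        rows = x.permute(0, 2, 1, 3).flatten(2)  # image rows as a sequence
        _, h = self.bigru(rows)                  # h: (2, N, 64), both directions
        g = torch.cat([h[0], h[1]], dim=1)       # BiGRU features: (N, 128)
        return self.fc(torch.cat([c, g], dim=1)) # fused -> classification layer

print(CNNBiGRU()(torch.randn(2, 3, 224, 224)).shape)  # torch.Size([2, 4])
```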

Citations: 0
Lightweight convolutional neural network models for semantic segmentation of in-field cotton bolls
Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date: 2023-06-01 DOI: 10.1016/j.aiia.2023.03.001
Naseeb Singh, V.K. Tewari, P.K. Biswas, L.K. Dhruw

Robotic harvesting of cotton bolls will incorporate the benefits of manual picking as well as mechanical harvesting. For robotic harvesting, in-field cotton segmentation with minimal errors is desirable, which is a challenging task. In the present study, three lightweight fully convolutional neural network models were developed for the semantic segmentation of in-field cotton bolls. Model 1 does not include any residual or skip connections, while model 2 consists of residual connections to tackle the vanishing gradient problem and skip connections for feature concatenation. Model 3, along with residual and skip connections, consists of filters of multiple sizes. The effects of filter size and dropout rate were studied. All proposed models segment the cotton bolls successfully, with cotton-IoU (intersection-over-union) values above 88.0%. The highest cotton-IoU of 91.03% was achieved by model 2. The proposed models achieved F1-score and pixel accuracy values greater than 95.0% and 98.0%, respectively. The developed models were compared with existing state-of-the-art networks, namely VGG19, ResNet18, EfficientNet-B1, and InceptionV3. Despite having a limited number of trainable parameters, the proposed models achieved mean-IoU (mean intersection-over-union) values of 93.84%, 94.15%, and 94.65%, against mean-IoU values of 95.39%, 96.54%, 96.40%, and 96.37% obtained using the state-of-the-art networks. The segmentation time for the developed models was reduced by up to 52.0% compared to the state-of-the-art networks. The developed lightweight models segmented the in-field cotton bolls comparatively faster and with greater accuracy. Hence, the developed models can be deployed on cotton harvesting robots for real-time recognition of in-field cotton bolls for harvesting.
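The cotton-IoU metric reported above can be computed with the minimal sketch below, assuming the predicted and ground-truth cotton masks are boolean arrays of equal shape.

```python
# Hedged sketch: intersection-over-union for binary cotton masks.
import numpy as np

def iou(pred, gt):
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union else 1.0  # two empty masks count as a match

pred = np.zeros((4, 4), dtype=bool); pred[:2] = True   # toy prediction
gt = np.zeros((4, 4), dtype=bool);   gt[:3] = True     # toy ground truth
print(iou(pred, gt))  # 8 / 12 = 0.666...
```

Mean-IoU then averages this quantity over the classes (here cotton and background).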

Citations: 2
Leguminous seeds detection based on convolutional neural networks: Comparison of Faster R-CNN and YOLOv4 on a small custom dataset
Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date: 2023-06-01 DOI: 10.1016/j.aiia.2023.03.002
Noran S. Ouf

This paper helps with leguminous seed detection and smart farming. There are hundreds of kinds of seeds, and it can be very difficult to distinguish between them. Botanists and those who study plants, however, can identify the type of seed at a glance. As far as we know, this is the first work to consider leguminous seed images with different backgrounds, sizes, and crowding. Machine learning is used to automatically classify and locate 11 different seed types. We chose leguminous seeds of 11 types as the objects of this study. These types are of different colors, sizes, and shapes, adding variety and complexity to our research. The image dataset of the leguminous seeds was manually collected, annotated, and then split randomly into three sub-datasets, train, validation, and test (predictions), with a ratio of 80%, 10%, and 10%, respectively. The images considered the variability between different leguminous seed types. The images were captured on five different backgrounds: white A4 paper, black pad, dark blue pad, dark green pad, and green pad. Different heights and shooting angles were considered. The crowdedness of the seeds also varied randomly between 1 and 50 seeds per image. Different combinations and arrangements of the 11 types were considered. Two different image-capturing devices were used: a SAMSUNG smartphone camera and a Canon digital camera. A total of 828 images were obtained, including 9801 seed objects (labels). The dataset contained images of different backgrounds, heights, angles, crowdedness, arrangements, and combinations. The TensorFlow framework was used to construct the Faster Region-based Convolutional Neural Network (R-CNN) model, and CSPDarknet53, a DenseNet-based design that connects layers within the convolutional network, was used as the backbone for YOLOv4. Using the transfer learning method, we optimized the seed detection models. The performances of the currently dominant object detection methods, Faster R-CNN and YOLOv4, were compared experimentally. The mAP (mean average precision) of the Faster R-CNN and YOLOv4 models was 84.56% and 98.52%, respectively. YOLOv4 had a significant advantage in detection speed over Faster R-CNN, which makes it suitable for real-time identification where high accuracy and low false-positive rates are needed. The results showed that YOLOv4 had better accuracy and detection ability, as well as faster detection speed, beating Faster R-CNN by a large margin. The model can be effectively applied under a variety of backgrounds, image sizes, seed sizes, shooting angles, and shooting heights, as well as different levels of seed crowding. It constitutes an effective and efficient method for detecting different leguminous seeds in complex scenarios. This study provides a reference for further seed testing and enumeration applications.
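For comparison with the paper's TensorFlow setup, the sketch below shows the equivalent Faster R-CNN transfer-learning recipe in torchvision (a framework substitution on our part, assuming torchvision >= 0.13), replacing the box predictor for the 11 seed classes plus background.

```python
# Hedged sketch: fine-tuning a COCO-pretrained Faster R-CNN for 11 seed types.
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

NUM_CLASSES = 11 + 1  # 11 leguminous seed types + background

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(
    weights=torchvision.models.detection.FasterRCNN_ResNet50_FPN_Weights.COCO_V1
)
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, NUM_CLASSES)
```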

Citations: 2