
Latest publications in Artificial Intelligence in Agriculture

Towards sustainable agriculture: Harnessing AI for global food security
Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date: 2024-04-30 DOI: 10.1016/j.aiia.2024.04.003
Dhananjay K. Pandey, Richa Mishra

The issue of food security continues to be a prominent global concern, affecting a significant number of individuals who experience the adverse effects of hunger and malnutrition. Solving this intricate issue requires novel, paradigm-shifting methodologies in the agriculture and food sectors. In recent times, artificial intelligence (AI) has emerged as a potent tool capable of exerting a profound influence on these sectors. AI technologies provide significant advantages by optimizing crop cultivation practices, enabling predictive modelling and precision agriculture techniques, and aiding efficient crop monitoring and disease identification. Additionally, AI has the potential to optimize supply chain operations, storage management, transportation systems, and quality assurance processes. It also tackles the problem of food loss and waste through post-harvest loss reduction, predictive analytics, and smart inventory management. This study highlights how, by harnessing the power of AI, we could transform the way we produce, distribute, and manage food, ultimately creating a more secure and sustainable future for all.

Citations: 0
Deep learning-based intelligent precise aeration strategy for factory recirculating aquaculture systems
Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date: 2024-04-15 DOI: 10.1016/j.aiia.2024.04.001
Junchao Yang, Yuting Zhou, Zhiwei Guo, Yueming Zhou, Yu Shen

The factory recirculating aquaculture system (RAS) is in a stage of continuous research and technological innovation, and intelligent aquaculture is an important direction for the future of the industry. However, current RAS installations still have poor self-learning and decision-optimization capabilities, which leads to high aquaculture costs and low running efficiency. In this paper, a precise aeration strategy based on deep learning is designed to improve the healthy growth of the cultured stock. First, computer-vision-driven situation perception is used to detect hypoxia behavior. This is then combined with a bioenergetic model that calculates the oxygen consumption of the cultured stock. Finally, an optimal adaptive aeration strategy is generated from the hypoxia-behavior judgement and the bioenergetic model. Experimental results show that the energy consumption of the proposed precise aeration strategy decreased by 26.3% compared with manual control and by 12.8% compared with threshold control. Meanwhile, stable water quality accelerated growth, shortening the breeding cycle to an average weight of 400 g from 5–6 months to 3–4 months.
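The control loop described above (vision-based hypoxia detection plus a bioenergetic oxygen-demand model driving the aerator) can be sketched as a simple rule-based controller. The coefficients, function names, and Q10 temperature scaling below are illustrative assumptions, not the paper's actual model:

```python
# Illustrative sketch of a precise-aeration decision loop.
# All coefficients are placeholders, not the paper's fitted values.

def oxygen_demand_g_per_h(biomass_kg, water_temp_c, k=0.3, q10=2.0, ref_temp_c=20.0):
    """Hypothetical bioenergetic model: oxygen demand scales with biomass
    and rises with temperature following a Q10 rule."""
    return k * biomass_kg * q10 ** ((water_temp_c - ref_temp_c) / 10.0)

def aeration_setpoint(hypoxia_detected, biomass_kg, water_temp_c,
                      transfer_g_per_h_at_full=500.0):
    """Map the detected behaviour plus modelled demand to an aerator
    duty cycle in [0, 1]."""
    demand = oxygen_demand_g_per_h(biomass_kg, water_temp_c)
    duty = min(1.0, demand / transfer_g_per_h_at_full)
    if hypoxia_detected:              # vision system saw hypoxia behaviour
        duty = min(1.0, duty + 0.25)  # add a safety margin until behaviour normalises
    return duty
```

By contrast, a threshold controller runs the aerator whenever dissolved oxygen drops below a fixed value; matching aeration to modelled demand avoids over-aeration, which is where the reported 12.8% saving over threshold control would come from.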

Citations: 0
Grow-light smart monitoring system leveraging lightweight deep learning for plant disease classification
Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date: 2024-04-04 DOI: 10.1016/j.aiia.2024.03.003
William Macdonald, Yuksel Asli Sari, Majid Pahlevani

This work focuses on a novel lightweight machine learning approach to plant disease classification, serving as a core component of a larger grow-light smart monitoring system. To the best of our knowledge, this work is the first to apply lightweight convolutional neural network architectures that leverage down-scaled versions of inception blocks, residual connections, and dense residual connections, without pre-training, to the PlantVillage dataset. The novel contributions of this work include the proposal of a smart monitoring framework outline, responsible for detecting and classifying ailments via the devised lightweight networks and for interfacing with LED grow-light fixtures to optimize environmental parameters and lighting control for plant growth in a greenhouse system. The lightweight adaptation of dense residual connections achieved the best balance between minimizing model parameters and maximizing performance, with accuracy, precision, recall, and F1-scores of 96.75%, 97.62%, 97.59%, and 97.58%, respectively, while comprising only 228,479 parameters. These results are further compared against various full-scale state-of-the-art architectures trained on the PlantVillage dataset; the proposed down-scaled lightweight models performed as well as, if not better than, many large-scale counterparts with drastically lower computational requirements.
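The accuracy, precision, recall, and F1 figures quoted above follow the standard per-class definitions. A minimal, library-free sketch of how such figures are derived from a confusion matrix (generic code, not the authors' evaluation pipeline; macro averaging is assumed):

```python
def classification_metrics(cm):
    """Compute accuracy and macro-averaged precision/recall/F1 from a
    square confusion matrix cm[true_class][predicted_class]."""
    n = len(cm)
    total = sum(sum(row) for row in cm)
    correct = sum(cm[i][i] for i in range(n))
    precisions, recalls, f1s = [], [], []
    for c in range(n):
        tp = cm[c][c]
        pred_c = sum(cm[r][c] for r in range(n))  # column sum: predicted as c
        true_c = sum(cm[c])                       # row sum: actually c
        p = tp / pred_c if pred_c else 0.0
        r = tp / true_c if true_c else 0.0
        precisions.append(p)
        recalls.append(r)
        f1s.append(2 * p * r / (p + r) if p + r else 0.0)
    macro = lambda xs: sum(xs) / n
    return {"accuracy": correct / total,
            "precision": macro(precisions),
            "recall": macro(recalls),
            "f1": macro(f1s)}
```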

Citations: 0
Deep learning for broadleaf weed seedlings classification incorporating data variability and model flexibility across two contrasting environments
Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date: 2024-03-13 DOI: 10.1016/j.aiia.2024.03.002
Lorenzo León, Cristóbal Campos, Juan Hirzel

The increasing deployment of deep learning models for distinguishing weeds from crops has seen notable strides in agricultural scenarios. However, a conspicuous gap endures in the literature concerning the training and testing of models across disparate environmental conditions. Predominant methodologies either split a single dataset distribution into training, validation, and testing subsets, or merge datasets from diverse conditions or distributions before dividing them into subsets. Our study aims to bridge this gap by extending to several broadleaf weed categories across varied distributions, evaluating the impact of training convolutional neural networks on datasets specific to particular conditions or distributions, and assessing their performance in entirely distinct settings through three experiments. By evaluating diverse network architectures and training approaches (finetuning versus feature extraction), employing different training strategies, and amalgamating data, we devised straightforward guidelines to ensure a model's deployability in contrasting environments with sustained precision and accuracy.

In Experiment 1, conducted in a uniform environment, accuracy ranged from 80% to 100% across all models and training strategies, with finetuning achieving a superior 94% to 99.9% compared to 80% to 92.96% for feature extraction. Experiment 2 showed a significant performance decline, with accuracy between 25% and 60% (clustering around 40%), when the origin of the test data deviated from the training and validation sets. Experiment 3, spotlighting dataset and distribution amalgamation, yielded promising accuracy, ranging from a peak of 99.6% for ResNet in finetuning mode to a low of 69.9% for InceptionV3 in feature-extraction mode. These findings emphasize that merging data from diverse distributions, coupled with finetuned training on advanced architectures like ResNet and MobileNet, markedly enhances performance, in contrast with the relatively lower performance of simpler networks like AlexNet. Our results suggest that embracing data diversity and flexible training methodologies is crucial for optimizing weed classification models when disparate data distributions are available. This study offers a practical alternative for handling diverse datasets with real-world agricultural variance.
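The protocol contrast behind these experiments amounts to leave-one-distribution-out evaluation versus pooled splitting. A schematic sketch of the two split strategies (the `env` field name and record shape are assumptions for illustration):

```python
def leave_one_distribution_out(samples, held_out_env):
    """Experiment-2 style: one environment's records are used only for
    testing, mimicking deployment under a distribution shift."""
    train = [s for s in samples if s["env"] != held_out_env]
    test = [s for s in samples if s["env"] == held_out_env]
    return train, test

def merged_split(samples, test_fraction=0.2):
    """Experiment-3 style: pool all environments, then split, so the
    training and test sets share every distribution."""
    cut = int(len(samples) * (1 - test_fraction))
    return samples[:cut], samples[cut:]
```

The accuracy drop from 80–100% in Experiment 1 to 25–60% in Experiment 2 is precisely what the first split exposes and the second hides.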

Citations: 0
LeafSpotNet: A deep learning framework for detecting leaf spot disease in jasmine plants
Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date: 2024-03-11 DOI: 10.1016/j.aiia.2024.02.002
Shwetha V, Arnav Bhagwat, Vijaya Laxmi

Leaf blight spot disease, caused by bacteria and fungi, poses a threat to plant health, leading to leaf discoloration and diminished agricultural yield. In response, we present a MobileNetV3-based classifier designed for the jasmine plant, leveraging lightweight convolutional neural networks (CNNs) to accurately identify disease stages. The model integrates depthwise convolution layers and max-pooling layers for enhanced feature extraction, focusing on the crucial low-level features indicative of the disease. Through preprocessing techniques, including data augmentation with a conditional GAN and Particle Swarm Optimization for feature selection, the classifier achieves robust performance. Evaluation on curated datasets demonstrates an outstanding 97% training accuracy, highlighting its efficacy. Real-world testing under diverse conditions, such as extreme camera angles and varied lighting, attests to the model's resilience, yielding test accuracies between 94% and 96%. The dataset's tailored design for CNN-based classification ensures result reliability. Importantly, the model's lightweight design, marked by fast computation and reduced size, positions it as an efficient solution for real-time applications. This comprehensive approach underscores the proposed classifier's significance in addressing leaf blight spot disease in commercial crops.
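Particle Swarm Optimization for feature selection, as used above, typically searches over binary feature masks scored by a fitness function (e.g. validation error of a model trained on the selected features). A toy, self-contained sketch of that idea; the probability-vector update rule and the synthetic fitness are illustrative assumptions, not the authors' implementation:

```python
import random

def pso_feature_select(n_features, fitness, n_particles=8, iters=30, seed=0):
    """Minimal binary PSO: each particle is a probability vector over
    features; masks are sampled from it and the vector drifts toward the
    personal and global best masks found so far (lower fitness is better)."""
    rng = random.Random(seed)
    def sample(probs):
        return [1 if rng.random() < p else 0 for p in probs]
    particles = [[rng.random() for _ in range(n_features)] for _ in range(n_particles)]
    best_mask, best_fit = None, float("inf")
    personal = [(None, float("inf")) for _ in range(n_particles)]
    for _ in range(iters):
        for i, probs in enumerate(particles):
            mask = sample(probs)
            f = fitness(mask)
            if f < personal[i][1]:
                personal[i] = (mask, f)
            if f < best_fit:
                best_mask, best_fit = mask, f
        for i, probs in enumerate(particles):
            pbest_mask = personal[i][0]
            for j in range(n_features):
                # drift the sampling probability toward both best masks
                target_bit = 0.5 * pbest_mask[j] + 0.5 * best_mask[j]
                probs[j] += 0.4 * (target_bit - probs[j])
    return best_mask, best_fit

# Toy fitness standing in for model error: prefer selecting exactly
# features 0 and 2 out of 5 (purely synthetic).
target = [1, 0, 1, 0, 0]
def fit(mask):
    return sum(a != b for a, b in zip(mask, target))
```

In the real pipeline the fitness would wrap classifier training and validation, which is far more expensive; the search logic stays the same.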

Citations: 0
A novel approach based on a modified mask R-CNN for the weight prediction of live pigs
Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date: 2024-03-04 DOI: 10.1016/j.aiia.2024.03.001
Chuanqi Xie, Yuji Cang, Xizhong Lou, Hua Xiao, Xing Xu, Xiangjun Li, Weidong Zhou

Since determining the weight of pigs during large-scale breeding and production is challenging, non-contact estimation methods are vital. This study proposed a novel pig weight prediction method based on a modified mask region-convolutional neural network (mask R-CNN). The modified approach used ResNeSt as the backbone feature extraction network to enhance the image feature extraction ability. A feature pyramid network (FPN) was added to the backbone for multi-scale feature fusion. A channel attention mechanism (CAM) and spatial attention mechanism (SAM) were introduced in the region proposal network (RPN) for the adaptive integration of local features and their global dependencies, ultimately improving image segmentation accuracy. The modified network obtained a precision rate (P), recall rate (R), and mean average precision (MAP) of 90.33%, 89.85%, and 95.21%, respectively, effectively segmenting the pig regions in the images. Five image features, namely the back area (A), body length (L), body width (W), average depth (AD), and eccentricity (E), were extracted from the pig depth images and used to train five regression algorithms (ordinary least squares (OLS), AdaBoost, CatBoost, XGBoost, and random forest (RF)) for weight prediction. AdaBoost achieved the best prediction result with a coefficient of determination (R<sup>2</sup>) of 0.987, a mean absolute error (MAE) of 2.96 kg, a mean square error (MSE) of 12.87 kg<sup>2</sup>, and a mean absolute percentage error (MAPE) of 8.45%. The results demonstrated that the machine learning models effectively predicted the weight values of the pigs, providing technical support for intelligent pig farm management.
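The four error metrics reported for the regression stage follow their standard definitions; a generic sketch of how they are computed from predicted and true weights (not the study's code):

```python
def regression_metrics(y_true, y_pred):
    """Standard regression error metrics: R^2, MAE, MSE, and MAPE (%)."""
    n = len(y_true)
    errors = [p - t for t, p in zip(y_true, y_pred)]
    mae = sum(abs(e) for e in errors) / n
    mse = sum(e * e for e in errors) / n
    mean_t = sum(y_true) / n
    ss_res = sum(e * e for e in errors)
    ss_tot = sum((t - mean_t) ** 2 for t in y_true)
    r2 = 1.0 - ss_res / ss_tot
    mape = 100.0 * sum(abs(e) / abs(t) for e, t in zip(errors, y_true)) / n
    return {"r2": r2, "mae": mae, "mse": mse, "mape": mape}
```

Note that MAE carries the unit of the target (kg here) while MSE carries its square (kg²), matching the units quoted in the abstract.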

Citations: 0
Using an improved lightweight YOLOv8 model for real-time detection of multi-stage apple fruit in complex orchard environments
Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date: 2024-03-01 DOI: 10.1016/j.aiia.2024.02.001
Baoling Ma, Zhixin Hua, Yuchen Wen, Hongxing Deng, Yongjie Zhao, Liuru Pu, Huaibo Song

To monitor apple fruits effectively throughout the entire growth period in smart orchards, a lightweight model named YOLOv8n-ShuffleNetv2-Ghost-SE was proposed. ShuffleNetv2 basic modules and down-sampling modules were alternately connected, replacing the Backbone of the YOLOv8n model. Ghost modules replaced the Conv modules, and C2fGhost modules replaced the C2f modules, in the Neck of YOLOv8n. ShuffleNetv2 reduces memory access cost through channel-splitting operations, and the Ghost module combines linear and non-linear convolutions to reduce network computation cost. Wise-IoU (WIoU) replaced CIoU for calculating the bounding-box regression loss, dynamically adjusting the anchor-box quality threshold and gradient-gain allocation strategy to optimize the size and position of predicted bounding boxes. Squeeze-and-Excitation (SE) blocks were embedded in the Backbone and Neck of YOLOv8n to enhance the representation ability of feature maps. The algorithm ensured high precision while keeping the model small and the detection speed fast, facilitating model migration and deployment. The model's effectiveness was validated on 9652 images. The YOLOv8n-ShuffleNetv2-Ghost-SE model achieved a Precision of 94.1%, Recall of 82.6%, mean Average Precision of 91.4%, a model size of 2.6 MB, 1.18 M parameters, 3.9 G FLOPs, and a detection speed of 39.37 fps. The detection speed on the Jetson Xavier NX development board was 3.17 fps. Comparisons with advanced models including Faster R-CNN, SSD, YOLOv5s, YOLOv7-tiny, YOLOv8s, YOLOv8n, MobileNetv3_small-Faster, MobileNetv3_small-Ghost, ShuffleNetv2-Faster, ShuffleNetv2-Ghost, ShuffleNetv2-Ghost-CBAM, ShuffleNetv2-Ghost-ECA, and ShuffleNetv2-Ghost-CA demonstrated that the method achieved a smaller model and faster detection speed. The research can provide a reference for developing smart devices in apple orchards.
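Both the CIoU and WIoU losses discussed above build on the plain intersection-over-union of two axis-aligned boxes. A minimal sketch of that base quantity (illustrative only; WIoU additionally weights this term with a dynamically allocated gradient gain based on anchor-box quality, as the abstract describes):

```python
def box_iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)  # overlap area, 0 if disjoint
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0
```

The basic IoU loss is then 1 − IoU; CIoU adds centre-distance and aspect-ratio penalty terms, while WIoU replaces the fixed penalties with its quality-aware focusing mechanism.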

Automated quality inspection of baby corn using image processing and deep learning
Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date : 2024-01-23 DOI: 10.1016/j.aiia.2024.01.001
Kris Wonggasem, Pongsan Chakranon, Papis Wongchaisuwat

The food industry typically relies heavily on manual operations requiring high proficiency and skill. In the quality inspection process, a baby corn with black marks or blemishes is considered a defect, an unqualified class that should be discarded. Quality inspection and sorting of agricultural products like baby corn are labor-intensive and time-consuming. The main goal of this work is to develop an automated quality inspection framework that differentiates between 'pass' and 'fail' categories based on baby corn images. A traditional image processing method using a threshold principle is compared with more advanced deep learning models; in particular, convolutional neural networks (CNNs), a specific sub-type of deep learning model, were implemented. Thorough experiments on choices of network architectures and their hyperparameters were conducted and compared. A Shapley additive explanations (SHAP) framework was further utilized for network interpretation. The EfficientNetB5 network with a relatively larger input size yielded the best performance, up to 99.06% accuracy, against 95.28% obtained from traditional image processing. Incorporating region-of-interest identification, several model experiments, data application on baby corn images, and the SHAP framework are our main contributions. The proposed quality inspection system for automatically differentiating baby corn images provides a potential pipeline to further support the agricultural production process.
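A minimal sketch of the threshold-principle baseline described above: count pixels darker than a cutoff and fail the sample when their share of the image is too large. Both threshold values here are illustrative assumptions, not the ones used in the study:

```python
import numpy as np

DARK_LEVEL = 60      # assumed gray cutoff below which a pixel counts as a black mark
DEFECT_RATIO = 0.01  # assumed fraction of dark pixels at which a sample fails

def inspect(gray):
    """Return 'pass' or 'fail' for one grayscale baby-corn image."""
    dark_share = float(np.mean(gray < DARK_LEVEL))
    return "fail" if dark_share > DEFECT_RATIO else "pass"
```

In the paper's pipeline such a rule would be applied inside the identified region of interest rather than to the whole frame, so that background pixels do not dilute the dark-pixel ratio.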

Enhanced detection algorithm for apple bruises using structured light imaging
Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date : 2023-12-13 DOI: 10.1016/j.aiia.2023.12.001
Haojie Zhu , Lingling Yang , Yu Wang , Yuwei Wang , Wenhui Hou , Yuan Rao , Lu Liu

Bruising reduces the edibility and marketability of fresh apples, inevitably causing economic losses for the apple industry. However, bruises lack obvious visual symptoms, which makes them challenging to detect with imaging techniques that use uniform or diffuse illumination. This study employed the structured light imaging (SLI) technique to detect apple bruises. First, grayscale reflectance images were captured under phase-shifted sinusoidal illumination at three wavelengths (600, 650, and 700 nm) and six spatial frequencies (0.05, 0.10, 0.15, 0.20, 0.25, and 0.30 cycles mm−1). Next, the grayscale reflectance images were demodulated to produce direct component (DC) images, representing uniform diffuse illumination, and amplitude component (AC) images, revealing bruises. Then, by quantifying the contrast between bruised and sound regions in all AC images, it was found that bruises exhibited optimal contrast under sinusoidal illumination at a wavelength of 700 nm and a spatial frequency of 0.25 cycles mm−1. In the AC image with optimal contrast, the developed h-domes segmentation algorithm accurately segmented the location and extent of the bruised regions. Moreover, the algorithm successfully segmented central bruised regions while addressing the challenge of segmenting edge bruised regions complicated by vignetting. The average Intersection over Union (IoU) values for the three types of bruises were 0.9422, 0.9231, and 0.9183, respectively. This result demonstrated that the combination of SLI and the h-domes segmentation algorithm is a viable approach for effective detection of fresh apple bruises.
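The DC/AC decomposition described above is usually computed by phase-shift demodulation. A sketch assuming the common three-step scheme with patterns offset by 2π/3 (the abstract does not state the number of phase shifts used):

```python
import numpy as np

def demodulate(i1, i2, i3):
    """Recover DC and AC images from three phase-shifted captures.

    For captures I_k = DC + AC*cos(phi + 2*pi*k/3), the three-step
    formulas below return the planar (DC) and modulated (AC)
    amplitudes pixel by pixel; bruises show up as contrast in AC.
    """
    dc = (i1 + i2 + i3) / 3.0
    ac = (np.sqrt(2.0) / 3.0) * np.sqrt(
        (i1 - i2) ** 2 + (i2 - i3) ** 2 + (i3 - i1) ** 2
    )
    return dc, ac
```

The AC formula is exact for an ideal sinusoidal pattern: for any phase phi, three captures with amplitudes DC = 100 and AC = 20 demodulate back to exactly those two values.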

Image classification of lotus in Nong Han Chaloem Phrakiat Lotus Park using convolutional neural networks
Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date : 2023-12-12 DOI: 10.1016/j.aiia.2023.12.003
Thanawat Phattaraworamet , Sawinee Sangsuriyun , Phoempol Kutchomsri , Susama Chokphoemphun

The Nong Han Chaloem Phrakiat Lotus Park is a tourist attraction and a source of learning about lotus plants. As a training area, however, it lacks appeal and learning motivation because of its conventional presentation of information about lotus plants. The current study introduced the concept of smart learning in this setting to increase interest and motivation for learning. Convolutional neural networks (CNNs) were used to classify lotus plant species, for use in the development of a mobile application that displays details about each species. The scope of the study was to classify 11 species of lotus plants using the proposed CNN model based on different techniques (augmentation, dropout, and L2 regularization) and hyperparameters (dropout rate and epoch number). The expected outcome was a high-performance CNN model with fewer total parameters than three pre-trained CNN models (Inception V3, VGG16, and VGG19) used as benchmarks. Model performance was reported in terms of accuracy, F1-score, precision, and recall. The results showed that the CNN model with the augmentation, dropout, and L2 techniques, at a dropout rate of 0.4 and an epoch number of 30, provided the highest testing accuracy of 0.9954. The best proposed model was more accurate than the pre-trained CNN models, especially Inception V3. In addition, the number of total parameters was reduced by approximately 1.80–2.19 times. These findings demonstrated that the proposed model, with a small number of total parameters, achieved a satisfactory degree of classification accuracy.
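Of the three regularization techniques listed, L2 is the simplest to write down: a sum-of-squares weight penalty added to the training loss. A sketch with an illustrative coefficient (the value used in the study is not given in the abstract):

```python
import numpy as np

def l2_penalty(weights, lam=1e-4):
    """L2 regularization term: lam times the sum of squared weights.

    Added to the classification loss during training, it pushes
    weights toward zero and discourages overfitting; lam here is
    an assumed illustrative value, not the study's setting.
    """
    return lam * sum(float(np.sum(w ** 2)) for w in weights)
```

Deep learning frameworks apply the same penalty per layer (e.g. as a kernel regularizer), but summing over all weight tensors as above is equivalent for the total loss.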
