
Latest articles in Artificial Intelligence in Agriculture

Hazelnut mapping detection system using optical and radar remote sensing: Benchmarking machine learning algorithms
Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date : 2024-06-01 DOI: 10.1016/j.aiia.2024.05.001
Daniele Sasso , Francesco Lodato , Anna Sabatini , Giorgio Pennazza , Luca Vollero , Marco Santonico , Mario Merone

Mapping hazelnut orchards can facilitate land planning and utilization policies, supporting the development of cooperative precision farming systems. The present work addresses the detection of hazelnut crops using optical and radar remote sensing data. A comparative study of Machine Learning techniques is presented. The proposed system utilizes multi-temporal data from the Sentinel-1 and Sentinel-2 datasets extracted over several years and processed with cloud tools. We provide a dataset of 62,982 labeled samples, with 16,561 samples belonging to the ‘hazelnut’ class and 46,421 samples belonging to the ‘other’ class, collected in 8 heterogeneous geographical areas of the Viterbo province. Two different comparative tests are conducted: first, we use a Nested 5-Fold Cross-Validation methodology to train, optimize, and compare different Machine Learning algorithms on a single area. In a second experiment, the algorithms were trained on one area and tested on the remaining seven geographical areas. The study demonstrates that AI analysis applied to Sentinel-1 and Sentinel-2 data is a valid technology for hazelnut mapping. From the results, it emerges that Random Forest is the classifier with the highest generalizability, achieving the best performance in the second test with an accuracy of 96% and an F1 score of 91% for the ‘hazelnut’ class.
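The nested 5-fold cross-validation protocol described above (an inner loop for hyperparameter selection, an outer loop for unbiased evaluation) can be sketched in a minimal form. This is an illustration of the principle only, using synthetic two-class data and a simple k-NN stand-in classifier; the features, hyperparameter grid, and models here are assumptions, not the paper's actual setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for per-sample Sentinel-1/2 time-series features:
# two classes ('hazelnut' vs 'other') in a 6-dimensional feature space.
X = np.vstack([rng.normal(0.0, 1.0, (100, 6)), rng.normal(1.5, 1.0, (100, 6))])
y = np.array([1] * 100 + [0] * 100)

def knn_predict(X_train, y_train, X_test, k):
    """Plain k-nearest-neighbours majority vote (Euclidean distance)."""
    d = np.linalg.norm(X_test[:, None, :] - X_train[None, :, :], axis=2)
    idx = np.argsort(d, axis=1)[:, :k]
    return (y_train[idx].mean(axis=1) >= 0.5).astype(int)

def kfold_indices(n, n_splits, rng):
    """Shuffle sample indices and split them into n_splits folds."""
    return np.array_split(rng.permutation(n), n_splits)

def nested_cv(X, y, ks=(1, 3, 5, 7), outer=5, inner=5):
    rng = np.random.default_rng(42)
    outer_folds = kfold_indices(len(y), outer, rng)
    scores = []
    for i, test_idx in enumerate(outer_folds):
        train_idx = np.concatenate([f for j, f in enumerate(outer_folds) if j != i])
        # Inner loop: select the hyperparameter k on the outer-training portion only.
        inner_folds = kfold_indices(len(train_idx), inner, rng)
        best_k, best_acc = ks[0], -1.0
        for k in ks:
            accs = []
            for m, val in enumerate(inner_folds):
                tr = np.concatenate([f for j, f in enumerate(inner_folds) if j != m])
                pred = knn_predict(X[train_idx[tr]], y[train_idx[tr]], X[train_idx[val]], k)
                accs.append((pred == y[train_idx[val]]).mean())
            if np.mean(accs) > best_acc:
                best_k, best_acc = k, float(np.mean(accs))
        # Outer evaluation: the test fold never influenced the choice of k.
        pred = knn_predict(X[train_idx], y[train_idx], X[test_idx], best_k)
        scores.append((pred == y[test_idx]).mean())
    return float(np.mean(scores))

acc = nested_cv(X, y)
```

The key property is that each outer test fold is held out from both model fitting and hyperparameter tuning, which is what makes the outer score an honest estimate of generalization.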

{"title":"Hazelnut mapping detection system using optical and radar remote sensing: Benchmarking machine learning algorithms","authors":"Daniele Sasso ,&nbsp;Francesco Lodato ,&nbsp;Anna Sabatini ,&nbsp;Giorgio Pennazza ,&nbsp;Luca Vollero ,&nbsp;Marco Santonico ,&nbsp;Mario Merone","doi":"10.1016/j.aiia.2024.05.001","DOIUrl":"https://doi.org/10.1016/j.aiia.2024.05.001","url":null,"abstract":"<div><p>Mapping hazelnut orchards can facilitate land planning and utilization policies, supporting the development of cooperative precision farming systems. The present work faces the detection of hazelnut crops using optical and radar remote sensing data. A comparative study of Machine Learning techniques is presented. The system proposed utilizes multi-temporal data from the Sentinel-1 and Sentinel-2 datasets extracted over several years and processed with cloud tools. We provide a dataset of 62,982 labeled samples, with 16,561 samples belonging to the ‘hazelnut’ class and 46,421 samples belonging to the ‘other’ class, collected in 8 heterogeneous geographical areas of the Viterbo province. Two different comparative tests are conducted: firstly, we use a Nested 5-Fold Cross-Validation methodology to train, optimize, and compare different Machine Learning algorithms on a single area. In a second experiment, the algorithms were trained on one area and tested on the remaining seven geographical areas. The developed study demonstrates how AI analysis applied to Sentinel-1 and Sentinel-2 data is a valid technology for hazelnut mapping. 
From the results, it emerges that Random Forest is the classifier with the highest generalizability, achieving the best performance in the second test with an accuracy of 96% and an F1 score of 91% for the ‘hazelnut’ class.</p></div>","PeriodicalId":52814,"journal":{"name":"Artificial Intelligence in Agriculture","volume":"12 ","pages":"Pages 97-108"},"PeriodicalIF":0.0,"publicationDate":"2024-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2589721724000163/pdfft?md5=3c0871cbfa7a056adc6aefce898ac420&pid=1-s2.0-S2589721724000163-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141244415","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
InstaCropNet: An efficient Unet-Based architecture for precise crop row detection in agricultural applications
Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date : 2024-06-01 DOI: 10.1016/j.aiia.2024.05.002
Zhiming Guo , Yuhang Geng , Chuan Wang , Yi Xue , Deng Sun , Zhaoxia Lou , Tianbao Chen , Tianyu Geng , Longzhe Quan

Autonomous navigation in farmlands is one of the key technologies for achieving autonomous management in maize fields. Among various navigation techniques, visual navigation using widely available RGB images is a cost-effective solution. However, current mainstream methods for maize crop row detection often rely on highly specialized, manually devised heuristic rules, limiting the scalability of these methods. To simplify the solution and enhance its universality, we propose an innovative crop row annotation strategy. This strategy, by simulating the strip-like structure of the crop row's central area, effectively avoids interference from lateral growth of crop leaves. Based on this, we developed a deep learning network with a dual-branch architecture, InstaCropNet, which achieves end-to-end segmentation of crop row instances. Subsequently, through the row anchor segmentation technique, we accurately locate the positions of different crop row instances and perform line fitting. Experimental results demonstrate that our method has an average angular deviation of no more than 2°, and the accuracy of crop row detection reaches 96.5%.
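The row-anchor localization and line-fitting steps described above can be illustrated with a minimal sketch: given a binary mask for one crop-row instance, sample a fixed set of anchor rows, locate the mask's horizontal center on each, and fit a line. The synthetic mask and anchor spacing below are assumptions for illustration, not the paper's pipeline:

```python
import numpy as np

# Synthetic binary mask (H x W) containing one near-vertical "crop row":
# a hypothetical stand-in for a single instance mask from the segmentation branch.
H, W = 240, 320
mask = np.zeros((H, W), dtype=bool)
for r in range(H):
    c = int(100 + 0.2 * r)              # ground-truth line: col = 100 + 0.2 * row
    mask[r, max(c - 3, 0):c + 4] = True  # 7-pixel-wide strip around the line

# Row-anchor step: sample image rows at a fixed stride and take the
# horizontal centroid of the mask on each sampled row.
anchors = np.arange(10, H, 20)
cols = np.array([np.flatnonzero(mask[r]).mean() for r in anchors])

# Line fitting: model column as a linear function of row.
slope, intercept = np.polyfit(anchors, cols, deg=1)
angle_deg = np.degrees(np.arctan(slope))  # angular deviation from the row axis
```

Restricting the fit to a sparse set of anchor rows is what keeps this step cheap, and the fitted angle is directly comparable to the sub-2° average angular deviation reported above.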

{"title":"InstaCropNet: An efficient Unet-Based architecture for precise crop row detection in agricultural applications","authors":"Zhiming Guo ,&nbsp;Yuhang Geng ,&nbsp;Chuan Wang ,&nbsp;Yi Xue ,&nbsp;Deng Sun ,&nbsp;Zhaoxia Lou ,&nbsp;Tianbao Chen ,&nbsp;Tianyu Geng ,&nbsp;Longzhe Quan","doi":"10.1016/j.aiia.2024.05.002","DOIUrl":"10.1016/j.aiia.2024.05.002","url":null,"abstract":"<div><p>Autonomous navigation in farmlands is one of the key technologies for achieving autonomous management in maize fields. Among various navigation techniques, visual navigation using widely available RGB images is a cost-effective solution. However, current mainstream methods for maize crop row detection often rely on highly specialized, manually devised heuristic rules, limiting the scalability of these methods. To simplify the solution and enhance its universality, we propose an innovative crop row annotation strategy. This strategy, by simulating the strip-like structure of the crop row's central area, effectively avoids interference from lateral growth of crop leaves. Based on this, we developed a deep learning network with a dual-branch architecture, InstaCropNet, which achieves end-to-end segmentation of crop row instances. Subsequently, through the row anchor segmentation technique, we accurately locate the positions of different crop row instances and perform line fitting. 
Experimental results demonstrate that our method has an average angular deviation of no more than 2°, and the accuracy of crop row detection reaches 96.5%.</p></div>","PeriodicalId":52814,"journal":{"name":"Artificial Intelligence in Agriculture","volume":"12 ","pages":"Pages 85-96"},"PeriodicalIF":0.0,"publicationDate":"2024-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2589721724000175/pdfft?md5=4c6e92e045769fe5ef6e32adc1438b8b&pid=1-s2.0-S2589721724000175-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141143901","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Towards sustainable agriculture: Harnessing AI for global food security
Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date : 2024-04-30 DOI: 10.1016/j.aiia.2024.04.003
Dhananjay K. Pandey , Richa Mishra

The issue of food security continues to be a prominent global concern, affecting a significant number of individuals who experience the adverse effects of hunger and malnutrition. Finding a solution to this intricate issue necessitates the implementation of novel and paradigm-shifting methodologies in the agriculture and food sectors. In recent times, the domain of artificial intelligence (AI) has emerged as a potent tool capable of exerting a profound influence on the agriculture and food sectors. AI technologies provide significant advantages by optimizing crop cultivation practices, enabling the use of predictive modelling and precision agriculture techniques, and aiding efficient crop monitoring and disease identification. Additionally, AI has the potential to optimize supply chain operations, storage management, transportation systems, and quality assurance processes. It also tackles the problem of food loss and waste through post-harvest loss reduction, predictive analytics, and smart inventory management. This study highlights how, by utilizing the power of AI, we could transform the way we produce, distribute, and manage food, ultimately creating a more secure and sustainable future for all.

{"title":"Towards sustainable agriculture: Harnessing AI for global food security","authors":"Dhananjay K. Pandey ,&nbsp;Richa Mishra","doi":"10.1016/j.aiia.2024.04.003","DOIUrl":"https://doi.org/10.1016/j.aiia.2024.04.003","url":null,"abstract":"<div><p>The issue of food security continues to be a prominent global concern, affecting a significant number of individuals who experience the adverse effects of hunger and malnutrition. The finding of a solution of this intricate issue necessitates the implementation of novel and paradigm-shifting methodologies in agriculture and food sector. In recent times, the domain of artificial intelligence (AI) has emerged as a potent tool capable of instigating a profound influence on the agriculture and food sectors. AI technologies provide significant advantages by optimizing crop cultivation practices, enabling the use of predictive modelling and precision agriculture techniques, and aiding efficient crop monitoring and disease identification. Additionally, AI has the potential to optimize supply chain operations, storage management, transportation systems, and quality assurance processes. It also tackles the problem of food loss and waste through post-harvest loss reduction, predictive analytics, and smart inventory management. 
This study highlights that how by utilizing the power of AI, we could transform the way we produce, distribute, and manage food, ultimately creating a more secure and sustainable future for all.</p></div>","PeriodicalId":52814,"journal":{"name":"Artificial Intelligence in Agriculture","volume":"12 ","pages":"Pages 72-84"},"PeriodicalIF":0.0,"publicationDate":"2024-04-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2589721724000151/pdfft?md5=a9d0ed80991556893a392b3b0a4013c0&pid=1-s2.0-S2589721724000151-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140880413","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Deep learning-based intelligent precise aeration strategy for factory recirculating aquaculture systems
Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date : 2024-04-15 DOI: 10.1016/j.aiia.2024.04.001
Junchao Yang , Yuting Zhou , Zhiwei Guo , Yueming Zhou , Yu Shen

The factory recirculating aquaculture system (RAS) is in a stage of continuous research and technological innovation. Intelligent aquaculture is an important direction for the future development of aquaculture. However, today's RAS still has poor self-learning and optimal decision-making capabilities, which leads to high aquaculture costs and low running efficiency. In this paper, a precise aeration strategy based on deep learning is designed to improve the healthy growth of breeding objects. Firstly, situation perception driven by computer vision is used to detect hypoxia behavior. Then, a biological energy model is constructed to calculate the oxygen consumption of the breeding objects. Finally, the optimal adaptive aeration strategy is generated according to the hypoxia behavior judgement and the biological energy model. Experimental results show that the energy consumption of the proposed precise aeration strategy decreased by 26.3% compared with manual control and 12.8% compared with threshold control. Meanwhile, stable water quality conditions accelerated the growth of the breeding objects, and the breeding cycle to an average weight of 400 g was shortened from 5–6 months to 3–4 months.
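The three-stage logic above (vision-based hypoxia detection, a bioenergetic oxygen-consumption model, an adaptive aeration setpoint) can be sketched roughly as below. The model form, coefficient values, and function names are hypothetical placeholders, not the paper's actual biological energy model:

```python
# Illustrative sketch only: base_rate, q10, do_target and transfer_g_per_unit
# are invented values for demonstration, not parameters from the study.

def oxygen_demand(biomass_kg, temp_c, base_rate=0.3, q10=2.0):
    """Hypothetical oxygen demand in g O2 per hour: a base mass-specific
    rate scaled by a Q10-style temperature factor."""
    return biomass_kg * base_rate * q10 ** ((temp_c - 20.0) / 10.0)

def aeration_setpoint(biomass_kg, temp_c, do_mg_l, hypoxia_detected,
                      do_target=6.0, transfer_g_per_unit=50.0):
    """Combine the modelled demand with the vision-based hypoxia flag to
    choose an aerator output level in [0.0, 1.0]."""
    demand = oxygen_demand(biomass_kg, temp_c)
    level = min(demand / transfer_g_per_unit, 1.0)
    # The behavioral cue overrides the model: visible hypoxia or very low
    # dissolved oxygen forces full aeration.
    if hypoxia_detected or do_mg_l < do_target * 0.5:
        level = 1.0
    return level

normal = aeration_setpoint(100, 22, do_mg_l=6.5, hypoxia_detected=False)
emergency = aeration_setpoint(100, 22, do_mg_l=6.5, hypoxia_detected=True)
```

The energy saving reported above comes from the first branch: most of the time the aerator runs at the modelled demand rather than at a fixed manual or threshold-triggered level.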

{"title":"Deep learning-based intelligent precise aeration strategy for factory recirculating aquaculture systems","authors":"Junchao Yang ,&nbsp;Yuting Zhou ,&nbsp;Zhiwei Guo ,&nbsp;Yueming Zhou ,&nbsp;Yu Shen","doi":"10.1016/j.aiia.2024.04.001","DOIUrl":"10.1016/j.aiia.2024.04.001","url":null,"abstract":"<div><p>Factory recirculating aquaculture system (RAS) is facing in a stage of continuous research and technological innovation. Intelligent aquaculture is an important direction for the future development of aquaculture. However, the RAS nowdays still has poor self-learning and optimal decision-making capabilities, which leads to high aquaculture cost and low running efficiency. In this paper, a precise aeration strategy based on deep learning is designed for improving the healthy growth of breeding objects. Firstly, the situation perception driven by computer vision is used to detect the hypoxia behavior. Then combined with the biological energy model, it is constructed to calculate the breeding objects oxygen consumption. Finally, the optimal adaptive aeration strategy is generated according to hypoxia behavior judgement and biological energy model. Experimental results show that the energy consumption of proposed precise aeration strategy decreased by 26.3% compared with the manual control and 12.8% compared with the threshold control. 
Meanwhile, stable water quality conditions accelerated breeding objects growth, and the breeding cycle with the average weight of 400 g was shortened from 5 to 6 months to 3–4 months.</p></div>","PeriodicalId":52814,"journal":{"name":"Artificial Intelligence in Agriculture","volume":"12 ","pages":"Pages 57-71"},"PeriodicalIF":0.0,"publicationDate":"2024-04-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2589721724000138/pdfft?md5=35867104fdfd8d303cccc4a2f32568ae&pid=1-s2.0-S2589721724000138-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140768894","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Grow-light smart monitoring system leveraging lightweight deep learning for plant disease classification
Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date : 2024-04-04 DOI: 10.1016/j.aiia.2024.03.003
William Macdonald , Yuksel Asli Sari , Majid Pahlevani

This work focuses on a novel lightweight machine learning approach to the task of plant disease classification, serving as a core component of a larger grow-light smart monitoring system. To the best of our knowledge, this work is the first to implement lightweight convolutional neural network architectures leveraging down-scaled versions of inception blocks, residual connections, and dense residual connections applied without pre-training to the PlantVillage dataset. The novel contributions of this work include the proposal of a smart monitoring framework outline, responsible for detection and classification of ailments via the devised lightweight networks as well as interfacing with LED grow-light fixtures to optimize environmental parameters and lighting control for the growth of plants in a greenhouse system. The lightweight adaptation of dense residual connections achieved the best balance of minimizing model parameters and maximizing performance metrics, with accuracy, precision, recall, and F1-scores of 96.75%, 97.62%, 97.59%, and 97.58% respectively, while consisting of only 228,479 model parameters. These results are further compared against various full-scale state-of-the-art model architectures trained on the PlantVillage dataset, of which the proposed down-scaled lightweight models were capable of performing equally to, if not better than, many large-scale counterparts with drastically lower computational requirements.
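The reported precision, recall, and F1-scores follow directly from the confusion-matrix definitions. A minimal sketch with hypothetical labels (not the paper's data):

```python
import numpy as np

def prf1(y_true, y_pred, positive=1):
    """Precision, recall and F1 for one class, straight from the definitions."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    tp = np.sum((y_pred == positive) & (y_true == positive))  # true positives
    fp = np.sum((y_pred == positive) & (y_true != positive))  # false positives
    fn = np.sum((y_pred != positive) & (y_true == positive))  # false negatives
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Tiny worked example: 4 positives, 4 negatives, two mistakes.
y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_pred = [1, 1, 0, 0, 0, 1, 1, 0]
p, r, f = prf1(y_true, y_pred)
```

In multi-class disease classification these per-class scores are typically averaged (macro or weighted) to obtain single summary figures like those quoted above.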

{"title":"Grow-light smart monitoring system leveraging lightweight deep learning for plant disease classification","authors":"William Macdonald ,&nbsp;Yuksel Asli Sari ,&nbsp;Majid Pahlevani","doi":"10.1016/j.aiia.2024.03.003","DOIUrl":"https://doi.org/10.1016/j.aiia.2024.03.003","url":null,"abstract":"<div><p>This work focuses on a novel lightweight machine learning approach to the task of plant disease classification, posing as a core component of a larger grow-light smart monitoring system. To the extent of our knowledge, this work is the first to implement lightweight convolutional neural network architectures leveraging down-scaled versions of inception blocks, residual connections, and dense residual connections applied without pre-training to the PlantVillage dataset. The novel contributions of this work include the proposal of a smart monitoring framework outline; responsible for detection and classification of ailments via the devised lightweight networks as well as interfacing with LED grow-light fixtures to optimize environmental parameters and lighting control for the growth of plants in a greenhouse system. Lightweight adaptation of dense residual connections achieved the best balance of minimizing model parameters and maximizing performance metrics with accuracy, precision, recall, and F1-scores of 96.75%, 97.62%, 97.59%, and 97.58% respectively, while consisting of only 228,479 model parameters. 
These results are further compared against various full-scale state-of-the-art model architectures trained on the PlantVillage dataset, of which the proposed down-scaled lightweight models were capable of performing equally to, if not better than many large-scale counterparts with drastically less computational requirements.</p></div>","PeriodicalId":52814,"journal":{"name":"Artificial Intelligence in Agriculture","volume":"12 ","pages":"Pages 44-56"},"PeriodicalIF":0.0,"publicationDate":"2024-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2589721724000126/pdfft?md5=92380011c829045a5c9cecbd59eb4f0b&pid=1-s2.0-S2589721724000126-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140547142","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Deep learning for broadleaf weed seedlings classification incorporating data variability and model flexibility across two contrasting environments
Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date : 2024-03-13 DOI: 10.1016/j.aiia.2024.03.002
Lorenzo León , Cristóbal Campos , Juan Hirzel

The increasing deployment of deep learning models for distinguishing weeds and crops has witnessed notable strides in agricultural scenarios. However, a conspicuous gap endures in the literature concerning the training and testing of models across disparate environmental conditions. Predominant methodologies either delineate a single dataset distribution into training, validation, and testing subsets or merge datasets from diverse conditions or distributions before their division into the subsets. Our study aims to ameliorate this gap by extending to several broadleaf weed categories across varied distributions, evaluating the impact of training convolutional neural networks on datasets specific to particular conditions or distributions, and assessing their performance in entirely distinct settings through three experiments. By evaluating diverse network architectures and training approaches (finetuning versus feature extraction), testing various architectures, employing different training strategies, and amalgamating data, we devised straightforward guidelines to ensure the model's deployability in contrasting environments with sustained precision and accuracy.

In Experiment 1, conducted in a uniform environment, accuracy ranged from 80% to 100% across all models and training strategies, with finetune mode achieving a superior performance of 94% to 99.9% compared to the feature extraction mode at 80% to 92.96%. Experiment 2 underscored a significant performance decline, with accuracy figures between 25% and 60%, primarily at 40%, when the origin of the test data deviated from the train and validation sets. Experiment 3, spotlighting dataset and distribution amalgamation, yielded promising accuracy metrics, notably a peak of 99.6% for ResNet in finetuning mode to a low of 69.9% for InceptionV3 in feature extraction mode. These pivotal findings emphasize that merging data from diverse distributions, coupled with finetuned training on advanced architectures like ResNet and MobileNet, markedly enhances performance, contrasting with the relatively lower performance exhibited by simpler networks like AlexNet. Our results suggest that embracing data diversity and flexible training methodologies are crucial for optimizing weed classification models when disparate data distributions are available. This study gives a practical alternative for treating diverse datasets with real-world agricultural variances.

{"title":"Deep learning for broadleaf weed seedlings classification incorporating data variability and model flexibility across two contrasting environments","authors":"Lorenzo León ,&nbsp;Cristóbal Campos ,&nbsp;Juan Hirzel","doi":"10.1016/j.aiia.2024.03.002","DOIUrl":"10.1016/j.aiia.2024.03.002","url":null,"abstract":"<div><p>The increasing deployment of deep learning models for distinguishing weeds and crops has witnessed notable strides in agricultural scenarios. However, a conspicuous gap endures in the literature concerning the training and testing of models across disparate environmental conditions. Predominant methodologies either delineate a single dataset distribution into training, validation, and testing subsets or merge datasets from diverse conditions or distributions before their division into the subsets. Our study aims to ameliorate this gap by extending to several broadleaf weed categories across varied distributions, evaluating the impact of training convolutional neural networks on datasets specific to particular conditions or distributions, and assessing their performance in entirely distinct settings through three experiments. By evaluating diverse network architectures and training approaches (<em>finetuning</em> versus <em>feature extraction</em>), testing various architectures, employing different training strategies, and amalgamating data, we devised straightforward guidelines to ensure the model's deployability in contrasting environments with sustained precision and accuracy.</p><p>In Experiment 1, conducted in a uniform environment, accuracy ranged from 80% to 100% across all models and training strategies, with <em>finetune</em> mode achieving a superior performance of 94% to 99.9% compared to the <em>feature extraction</em> mode at 80% to 92.96%. 
Experiment 2 underscored a significant performance decline, with accuracy figures between 25% and 60%, primarily at 40%, when the origin of the test data deviated from the train and validation sets. Experiment 3, spotlighting dataset and distribution amalgamation, yielded promising accuracy metrics, notably a peak of 99.6% for ResNet in <em>finetuning</em> mode to a low of 69.9% for InceptionV3 in <em>feature extraction</em> mode. These pivotal findings emphasize that merging data from diverse distributions, coupled with <em>finetuned</em> training on advanced architectures like ResNet and MobileNet, markedly enhances performance, contrasting with the relatively lower performance exhibited by simpler networks like AlexNet. Our results suggest that embracing data diversity and flexible training methodologies are crucial for optimizing weed classification models when disparate data distributions are available. This study gives a practical alternative for treating diverse datasets with real-world agricultural variances.</p></div>","PeriodicalId":52814,"journal":{"name":"Artificial Intelligence in Agriculture","volume":"12 ","pages":"Pages 29-43"},"PeriodicalIF":0.0,"publicationDate":"2024-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2589721724000059/pdfft?md5=d8051b8dea55cec53a6ba7889cbc0c03&pid=1-s2.0-S2589721724000059-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140283105","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
LeafSpotNet: A deep learning framework for detecting leaf spot disease in jasmine plants
Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date : 2024-03-11 DOI: 10.1016/j.aiia.2024.02.002
Shwetha V, Arnav Bhagwat, Vijaya Laxmi

Leaf blight spot disease, caused by bacteria and fungi, poses a threat to plant health, leading to leaf discoloration and diminished agricultural yield. In response, we present a MobileNetV3-based classifier designed for the jasmine plant, leveraging lightweight Convolutional Neural Networks (CNNs) to accurately identify disease stages. The model integrates depthwise convolution layers and max pool layers for enhanced feature extraction, focusing on crucial low-level features indicative of the disease. Through preprocessing techniques, including data augmentation with a Conditional GAN and Particle Swarm Optimization for feature selection, the classifier achieves robust performance. Evaluation on curated datasets demonstrates an outstanding 97% training accuracy, highlighting its efficacy. Real-world testing under diverse conditions, such as extreme camera angles and varied lighting, attests to the model's resilience, yielding test accuracies between 94% and 96%. The dataset's tailored design for CNN-based classification ensures result reliability. Importantly, the model's lightweight design, marked by fast computation time and reduced size, positions it as an efficient solution for real-time applications. This comprehensive approach underscores the proposed classifier's significance in addressing leaf blight spot disease challenges in commercial crops.
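One reason depthwise convolution layers keep classifiers like this lightweight is the parameter factorization they enable. A back-of-the-envelope comparison with generic layer sizes (illustrative only, not the paper's exact architecture):

```python
def conv_params(c_in, c_out, k):
    """Parameter count of a standard k x k convolution (bias omitted)."""
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k):
    """Depthwise k x k conv (one filter per input channel) followed by a
    1 x 1 pointwise conv - the factorisation used in MobileNet-style blocks."""
    return c_in * k * k + c_in * c_out

standard = conv_params(64, 128, 3)                   # 64 * 128 * 9
separable = depthwise_separable_params(64, 128, 3)   # 64 * 9 + 64 * 128
reduction = standard / separable                     # roughly 8x fewer parameters
```

The saving grows with the channel count, which is why such factorizations dominate mobile-oriented architectures that must run with fast computation time and reduced size.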

Shwetha V, Arnav Bhagwat, Vijaya Laxmi, "LeafSpotNet: A deep learning framework for detecting leaf spot disease in jasmine plants," Artificial Intelligence in Agriculture, vol. 12, pp. 1–18, published 2024-03-11, DOI: 10.1016/j.aiia.2024.02.002
Citations: 0
A novel approach based on a modified mask R-CNN for the weight prediction of live pigs
Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date : 2024-03-04 DOI: 10.1016/j.aiia.2024.03.001
Chuanqi Xie, Yuji Cang, Xizhong Lou, Hua Xiao, Xing Xu, Xiangjun Li, Weidong Zhou

Since determining the weight of pigs during large-scale breeding and production is challenging, using non-contact estimation methods is vital. This study proposed a novel pig weight prediction method based on a modified mask region-convolutional neural network (mask R-CNN). The modified approach used ResNeSt as the backbone feature extraction network to enhance the image feature extraction ability. The feature pyramid network (FPN) was added to the backbone feature extraction network for multi-scale feature fusion. The channel attention mechanism (CAM) and spatial attention mechanism (SAM) were introduced in the region proposal network (RPN) for the adaptive integration of local features and their global dependencies to capture global information, ultimately improving image segmentation accuracy. The modified network obtained a precision rate (P), recall rate (R), and mean average precision (MAP) of 90.33%, 89.85%, and 95.21%, respectively, effectively segmenting the pig regions in the images. Five image features, namely the back area (A), body length (L), body width (W), average depth (AD), and eccentricity (E), were investigated. The pig depth images were used to build five regression algorithms (ordinary least squares (OLS), AdaBoost, CatBoost, XGBoost, and random forest (RF)) for weight value prediction. AdaBoost achieved the best prediction result with a coefficient of determination (R²) of 0.987, a mean absolute error (MAE) of 2.96 kg, a mean square error (MSE) of 12.87 kg², and a mean absolute percentage error (MAPE) of 8.45%. The results demonstrated that the machine learning models effectively predicted the weight values of the pigs, providing technical support for intelligent pig farm management.
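The five features listed above (A, L, W, AD, E) can be derived from a segmented binary mask plus an aligned depth image. The sketch below is a simplified stand-in, not the paper's measurement protocol: length and width are taken as bounding-box extents, and eccentricity comes from second-order central moments of the mask:

```python
import numpy as np

def mask_features(mask, depth):
    """Return (area, length, width, average depth, eccentricity) for a
    binary mask and an aligned depth image. Length/width are bounding-box
    extents and eccentricity uses central moments — simplified stand-ins."""
    ys, xs = np.nonzero(mask)
    area = len(ys)                         # A: back area in pixels
    length = ys.max() - ys.min() + 1       # L: bounding-box extent (rows)
    width = xs.max() - xs.min() + 1        # W: bounding-box extent (cols)
    avg_depth = depth[mask].mean()         # AD: mean depth inside the mask
    # Eccentricity from second-order central moments of the pixel cloud.
    yc, xc = ys.mean(), xs.mean()
    mu20 = ((xs - xc) ** 2).mean()
    mu02 = ((ys - yc) ** 2).mean()
    mu11 = ((xs - xc) * (ys - yc)).mean()
    common = np.sqrt(4 * mu11 ** 2 + (mu20 - mu02) ** 2)
    lam1 = (mu20 + mu02 + common) / 2      # major-axis variance
    lam2 = (mu20 + mu02 - common) / 2      # minor-axis variance
    ecc = np.sqrt(1 - lam2 / lam1)         # E: 0 = circle, toward 1 = line
    return area, length, width, avg_depth, ecc

# Synthetic example: an axis-aligned elliptical "pig" in a 200x200 frame.
yy, xx = np.mgrid[0:200, 0:200]
mask = ((xx - 100) / 80) ** 2 + ((yy - 100) / 40) ** 2 <= 1
depth = np.full(mask.shape, 1.5)           # constant 1.5 m depth map
A, L, W, AD, E = mask_features(mask, depth)
print(A, L, W, AD, round(E, 3))
```

For this 80×40 half-axis ellipse the moment-based eccentricity lands near the analytic value √(1 − (40/80)²) ≈ 0.866; feature vectors like these would then feed the regression stage (OLS, AdaBoost, etc.).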

Citations: 0
Using an improved lightweight YOLOv8 model for real-time detection of multi-stage apple fruit in complex orchard environments
Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date : 2024-03-01 DOI: 10.1016/j.aiia.2024.02.001
Baoling Ma, Zhixin Hua, Yuchen Wen, Hongxing Deng, Yongjie Zhao, Liuru Pu, Huaibo Song

To monitor apple fruits effectively throughout the entire growth period in smart orchards, a lightweight model named YOLOv8n-ShuffleNetv2-Ghost-SE was proposed. The ShuffleNetv2 basic modules and down-sampling modules were alternately connected, replacing the Backbone of the YOLOv8n model. The Ghost modules replaced the Conv modules and the C2fGhost modules replaced the C2f modules in the Neck part of YOLOv8n. ShuffleNetv2 reduced the memory access cost through channel splitting operations. The Ghost module combined linear and non-linear convolutions to reduce the network computation cost. The Wise-IoU (WIoU) replaced the CIoU for calculating the bounding box regression loss, dynamically adjusting the anchor box quality threshold and gradient gain allocation strategy and optimizing the size and position of predicted bounding boxes. Squeeze-and-Excitation (SE) modules were embedded in the Backbone and Neck parts of YOLOv8n to enhance the representation ability of feature maps. The algorithm ensured high precision while maintaining a small model size and fast detection speed, which facilitated model migration and deployment. A set of 9652 images validated the effectiveness of the model. The YOLOv8n-ShuffleNetv2-Ghost-SE model achieved a Precision of 94.1%, Recall of 82.6%, mean Average Precision of 91.4%, a model size of 2.6 MB, 1.18 M parameters, 3.9 G FLOPs, and a detection speed of 39.37 fps. The detection speed on the Jetson Xavier NX development board was 3.17 fps. Comparisons with advanced models including Faster R-CNN, SSD, YOLOv5s, YOLOv7-tiny, YOLOv8s, YOLOv8n, MobileNetv3_small-Faster, MobileNetv3_small-Ghost, ShuffleNetv2-Faster, ShuffleNetv2-Ghost, ShuffleNetv2-Ghost-CBAM, ShuffleNetv2-Ghost-ECA, and ShuffleNetv2-Ghost-CA demonstrated that the method achieved a smaller model and faster detection speed. The research can provide a reference for the development of smart devices in apple orchards.
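Two of the building blocks named above are easy to illustrate in isolation: ShuffleNetv2's channel shuffle (which follows its channel split) and the Squeeze-and-Excitation rescaling. The numpy sketch below shows only the tensor mechanics — the layer widths, reduction ratio, and random weights are illustrative, not the trained model's values:

```python
import numpy as np

def channel_shuffle(x, groups):
    """ShuffleNetv2-style channel shuffle on an NCHW tensor: reshape the
    channel axis into (groups, C//groups), transpose, and flatten back so
    information mixes across groups."""
    n, c, h, w = x.shape
    assert c % groups == 0
    return (x.reshape(n, groups, c // groups, h, w)
             .transpose(0, 2, 1, 3, 4)
             .reshape(n, c, h, w))

def squeeze_excite(x, w1, w2):
    """Squeeze-and-Excitation: global-average-pool each channel, pass the
    pooled vector through two FC layers (ReLU then sigmoid), and rescale
    the channels by the resulting gates."""
    s = x.mean(axis=(2, 3))                  # squeeze: (N, C)
    z = np.maximum(s @ w1, 0.0)              # FC + ReLU: (N, C//r)
    g = 1.0 / (1.0 + np.exp(-(z @ w2)))      # FC + sigmoid gates: (N, C)
    return x * g[:, :, None, None]           # excite: per-channel rescale

rng = np.random.default_rng(0)
x = rng.normal(size=(1, 8, 4, 4))            # tiny NCHW feature map
shuffled = channel_shuffle(x, groups=2)      # channels become [0,4,1,5,2,6,3,7]
w1 = rng.normal(size=(8, 2)) * 0.1           # reduction ratio r = 4 (illustrative)
w2 = rng.normal(size=(2, 8)) * 0.1
y = squeeze_excite(x, w1, w2)
print(shuffled.shape, y.shape)
```

Because the SE gates are sigmoid outputs in (0, 1), the block can only attenuate channels, which is what lets the network emphasize informative feature maps cheaply.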

Citations: 0
Automated quality inspection of baby corn using image processing and deep learning
Q1 AGRICULTURE, MULTIDISCIPLINARY Pub Date : 2024-01-23 DOI: 10.1016/j.aiia.2024.01.001
Kris Wonggasem, Pongsan Chakranon, Papis Wongchaisuwat

The food industry typically relies heavily on manual operations requiring high proficiency and skill. In the quality inspection process, a baby corn with black marks or blemishes is considered defective or unqualified and should be discarded. Quality inspection and sorting of agricultural products like baby corn are labor-intensive and time-consuming. The main goal of this work is to develop an automated quality inspection framework to differentiate between ‘pass’ and ‘fail’ categories based on baby corn images. A traditional image processing method using a threshold principle is compared with relatively more advanced deep learning models. In particular, convolutional neural networks (CNNs), a specific sub-type of deep learning model, were implemented. Thorough experiments on choices of network architectures and their hyperparameters were conducted and compared. A Shapley additive explanations (SHAP) framework was further utilized for network interpretation purposes. The EfficientNetB5 networks with relatively larger input sizes yielded up to 99.06% accuracy, the best performance against the 95.28% obtained from traditional image processing. Incorporating region-of-interest identification, several model experiments, data application on baby corn images, and the SHAP framework are our main contributions. Our proposed quality inspection system to automatically differentiate baby corn images provides a potential pipeline to further support the agricultural production process.
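The "traditional image processing method using a threshold principle" mentioned above can be sketched in a few lines: flag an image as ‘fail’ when too large a fraction of pixels inside the corn region is darker than a threshold. The threshold value and area ratio below are illustrative assumptions, not the paper's calibrated settings:

```python
import numpy as np

def inspect_baby_corn(gray, corn_mask, dark_thresh=60, max_dark_ratio=0.01):
    """Classify a grayscale baby-corn image as 'pass' or 'fail' by the
    fraction of dark pixels (black marks) inside the corn region.
    dark_thresh and max_dark_ratio are illustrative, not calibrated."""
    dark = (gray < dark_thresh) & corn_mask
    ratio = dark.sum() / max(corn_mask.sum(), 1)
    return "fail" if ratio > max_dark_ratio else "pass"

# Synthetic example: a bright corn region, and a copy with a dark blemish.
clean = np.full((100, 100), 200, dtype=np.uint8)
mask = np.zeros((100, 100), dtype=bool)
mask[20:80, 30:70] = True                  # the corn occupies this region

blemished = clean.copy()
blemished[40:50, 40:50] = 20               # a 10x10 black mark inside the corn

print(inspect_baby_corn(clean, mask), inspect_baby_corn(blemished, mask))
```

The deep-learning path in the paper replaces this hand-set rule with a learned classifier, which is where the reported accuracy gap (95.28% vs. up to 99.06%) comes from.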

Citations: 0