
Artificial Intelligence in Agriculture: Latest Publications

Predicting the true density of commercial biomass pellets using near-infrared hyperspectral imaging
Q1 Computer Science Pub Date: 2022-01-01 DOI: 10.1016/j.aiia.2022.11.004
Lakkana Pitak, Khwantri Saengprachatanarug, Kittipong Laloon, Jetsada Posom

The use of biomass is increasing because it is a form of renewable energy that provides a high heating value. Rapid measurements could be used to check the quality of biomass pellets during production. This research aims to apply a near-infrared (NIR) hyperspectral imaging system to evaluate the true density of individual biomass pellets during the production process. Real-time measurement of the true density could be beneficial for adjusting operation settings such as the ratio of binding agent to raw material, the operating temperature, the production rate, and the mixing ratio. The true density could also be used for rough estimation of the bulk density, which is a necessary parameter in commercial production. Therefore, knowledge of the true density is required during production in order to maintain pellet quality as well as operating conditions. Prediction models were developed using partial least squares (PLS) regression on wavelengths selected with different spectral pre-treatment methods and variable selection methods, and their performance was compared. The best model for predicting the true density of individual pellets was developed with first-derivative spectra (D1) and variables selected by the genetic algorithm (GA) method, which reduced the number of variables from 256 to 53 wavelengths. The model gave R²cal, R²val, SEC, SEP, and RPD values of 0.88, 0.89, 0.08 g/cm³, 0.07 g/cm³, and 3.04, respectively. The optimal prediction model was applied to construct distribution maps of the true density of individual biomass pellets, with the level of the predicted values displayed in colour bars. This imaging technique could be used to visually check the true density of biomass pellets during production and to issue warnings to quality control equipment.
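
Below is a minimal sketch, not the authors' code, of the modelling pipeline described in this abstract: first-derivative (D1) pre-treatment of the NIR spectra followed by PLS regression and computation of R²val, SEP, and RPD. The file names, array shapes, number of latent variables, and the placeholder for the GA-selected wavelengths are illustrative assumptions.

```python
import numpy as np
from scipy.signal import savgol_filter
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error

# X: mean NIR spectrum per pellet (n_samples x 256 wavelengths), y: true density in g/cm3
X = np.load("pellet_spectra.npy")        # hypothetical file
y = np.load("pellet_true_density.npy")   # hypothetical file

# First-derivative (D1) pre-treatment via Savitzky-Golay filtering
X_d1 = savgol_filter(X, window_length=7, polyorder=2, deriv=1, axis=1)

# Stand-in for the GA wavelength selection: replace with the 53 selected indices
selected = np.arange(X_d1.shape[1])

X_cal, X_val, y_cal, y_val = train_test_split(
    X_d1[:, selected], y, test_size=0.3, random_state=0)

pls = PLSRegression(n_components=10)     # number of latent variables is an assumption
pls.fit(X_cal, y_cal)

y_pred = pls.predict(X_val).ravel()
sep = np.sqrt(mean_squared_error(y_val, y_pred))
print(f"R2val = {r2_score(y_val, y_pred):.2f}, "
      f"SEP = {sep:.3f} g/cm3, RPD = {np.std(y_val) / sep:.2f}")
```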

Citations: 1
Prediction of exchangeable potassium in soil through mid-infrared spectroscopy and deep learning: From prediction to explainability
Q1 Computer Science Pub Date: 2022-01-01 DOI: 10.1016/j.aiia.2022.10.001
Franck Albinet, Yi Peng, Tetsuya Eguchi, Erik Smolders, Gerd Dercon

The ability to rapidly and repeatedly characterize exchangeable potassium (Kex) content in soil is essential for optimizing remediation of radiocaesium contamination in agriculture. In this paper, we show how this can now be achieved using a Convolutional Neural Network (CNN) model trained on a large Mid-Infrared (MIR) soil spectral library (40,000 samples with Kex determined with 1 M NH4OAc, pH 7), compiled by the National Soil Survey Center of the United States Department of Agriculture. Using Partial Least Squares Regression as a baseline, we found that our implemented CNN leads to significantly higher prediction performance for Kex when a large amount of data is available (10,000 samples), increasing the coefficient of determination from 0.64 to 0.79 and reducing the Mean Absolute Percentage Error from 135% to 31%. Furthermore, in order to provide end-users with the required interpretive keys, we implemented the GradientShap algorithm to identify the spectral regions considered important by the model for predicting Kex. Applied to the implemented CNN across various Soil Taxonomy Orders, it allowed us (i) to relate the important spectral features to domain knowledge and (ii) to demonstrate that including all Soil Taxonomy Orders in CNN-based modeling is beneficial, as spectral features learned can be reused across different, sometimes underrepresented, orders.
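
A minimal sketch of the kind of pipeline this abstract describes: a small 1D CNN regressing Kex from MIR spectra, with GradientShap attributions computed via the captum library. The architecture, spectrum length, tensor names, and the choice of captum as the GradientShap implementation are assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn
from captum.attr import GradientShap

N_BANDS = 1700  # assumed number of MIR wavenumbers per spectrum

class SpectraCNN(nn.Module):
    """Small 1D CNN mapping a MIR spectrum to a single Kex value."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(2),
        )
        self.head = nn.Sequential(
            nn.Flatten(), nn.Linear(32 * (N_BANDS // 4), 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, x):                  # x: (batch, 1, N_BANDS)
        return self.head(self.features(x)).squeeze(-1)

model = SpectraCNN().eval()                # assume weights were trained beforehand
spectra = torch.randn(8, 1, N_BANDS)       # placeholder batch of MIR spectra
baselines = torch.zeros(8, 1, N_BANDS)     # reference spectra for GradientShap

# Wavenumber regions with large |attribution| are the ones the model relies on for Kex
explainer = GradientShap(model)
attributions = explainer.attribute(spectra, baselines=baselines)
print(attributions.shape)                  # -> torch.Size([8, 1, 1700])
```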

Citations: 0
Developing a multi-label tinyML machine learning model for an active and optimized greenhouse microclimate control from multivariate sensed data
Q1 Computer Science Pub Date: 2022-01-01 DOI: 10.1016/j.aiia.2022.08.003
Ilham Ihoume, Rachid Tadili, Nora Arbaoui, Mohamed Benchrifa, Ahmed Idrissi, Mohamed Daoudi

Given the uncertainties within which worldwide food security lies nowadays, the agricultural industry has a crucial need to be equipped with state-of-the-art technologies for more efficient, climate-resilient and sustainable production. Traditional production methods have to be revisited, and opportunities should be given to the innovative solutions brought by big data analytics, cloud computing and the Internet of Things (IoT). In this context, we develop an optimized tinyML-oriented model for active machine learning-based greenhouse microclimate management, to be integrated in an on-field microcontroller. We design an experimental strawberry greenhouse from which we collect multivariate climate data through installed sensors. The obtained combinations of values are labeled according to a five-action multi-label control strategy, then used to prepare a machine learning-ready dataset. The dataset is used to train and five-fold cross-validate 90 Multi-Layer Perceptrons (MLPs) with varied hyperparameters to select the most performant, yet optimized, model instance for the addressed task. Our multi-label control approach enables designing highly scalable models with reduced computational complexity, comprising only n control neurons instead of the (1 + ∑_{k=1}^{n} C_n^k) neurons usually generated by a classic single-label approach from n input variables. Our final selected model incorporates 2 hidden layers with 7 and 8 neurons respectively and 151 parameters; it scored a mean accuracy of 97% during the cross-validation phase, then 96% on our supplementary test set. The model enables intelligent and autonomous greenhouse management with fewer required computations, and it can be efficiently deployed in microcontrollers under real-world operating conditions.
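
A minimal sketch of the kind of multi-label MLP this abstract describes, written in Keras: two hidden layers of 7 and 8 neurons, one sigmoid output neuron per control action, binary cross-entropy training, and conversion to TensorFlow Lite for microcontroller deployment. The data shapes are placeholders; assuming five sensed inputs and five control actions happens to reproduce the reported 151 trainable parameters, but the actual input variables are not listed in the abstract.

```python
import numpy as np
import tensorflow as tf

n_features, n_actions = 5, 5   # assumed; 5 inputs and 5 actions give exactly 151 parameters
X = np.random.rand(1000, n_features).astype("float32")              # placeholder sensed data
Y = np.random.randint(0, 2, (1000, n_actions)).astype("float32")    # placeholder action labels

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(n_features,)),
    tf.keras.layers.Dense(7, activation="relu"),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(n_actions, activation="sigmoid"),  # one neuron per control action
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["binary_accuracy"])
print(model.count_params())                                  # 151 with these assumed sizes
model.fit(X, Y, epochs=20, batch_size=32, verbose=0)

# tinyML path: convert to a TensorFlow Lite flatbuffer for the on-field microcontroller
tflite_model = tf.lite.TFLiteConverter.from_keras_model(model).convert()
with open("greenhouse_controller.tflite", "wb") as f:
    f.write(tflite_model)
```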

Citations: 3
Analysis of land surface temperature using Geospatial technologies in Gida Kiremu, Limu, and Amuru District, Western Ethiopia
Q1 Computer Science Pub Date: 2022-01-01 DOI: 10.1016/j.aiia.2022.06.002
Mitiku Badasa Moisa, Bacha Temesgen Gabissa, Lachisa Busha Hinkosa, Indale Niguse Dejene, Dessalegn Obsi Gemeda

Degradation of vegetation cover and expansion of barren land remain leading environmental problems at the global level. Land surface temperature (LST), Normalized Difference Vegetation Index (NDVI), Normalized Difference Barren Index (NDBaI), and Modified Normalized Difference Water Index (MNDWI) were used to quantify the changing relationships using correlation analysis. This study analyzes the relationship between LST and NDVI, NDBaI, and MNDWI using geospatial technologies in the Gida Kiremu, Limu, and Amuru districts in Western Ethiopia. All indices were estimated using thermal and multispectral bands from Landsat TM 1990, Landsat ETM+ 2003, and Landsat OLI/TIRS 2020. The correlation of LST with NDVI, NDBaI and MNDWI was analyzed using scatter plots. NDBaI was positively correlated with LST (R² = 0.96), whereas NDVI and MNDWI showed strong negative relationships with LST (R² = 0.99 and 0.95, respectively). The results show that LST increased by 5 °C over the study period due to the decline in vegetation cover and the expansion of bare land. Finally, we recommend that decision-makers and environmental analysts pay attention to the importance of vegetation cover, water bodies and wetlands in climate change mitigation, particularly LST, in the study area.
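
A minimal sketch of how the spectral indices discussed above can be computed from Landsat bands and correlated with LST. The band arrays are random placeholders, LST is assumed to have been retrieved already, and the Landsat 8 band mapping in the comments is only indicative; NDBaI would be derived analogously from the SWIR and thermal bands.

```python
import numpy as np
from scipy.stats import linregress

# Placeholder reflectance arrays; for Landsat 8 OLI these would be green = B3, red = B4,
# nir = B5, swir1 = B6, resampled to a common grid.
green, red, nir, swir1 = (np.random.rand(100, 100) for _ in range(4))
lst = 20 + 15 * np.random.rand(100, 100)   # land surface temperature in degrees C (assumed retrieved)

ndvi = (nir - red) / (nir + red)            # Normalized Difference Vegetation Index
mndwi = (green - swir1) / (green + swir1)   # Modified Normalized Difference Water Index
# NDBaI is computed analogously from the SWIR and thermal bands (definition omitted here).

for name, index in (("NDVI", ndvi), ("MNDWI", mndwi)):
    fit = linregress(index.ravel(), lst.ravel())
    print(f"{name} vs LST: slope = {fit.slope:.2f}, R2 = {fit.rvalue ** 2:.2f}")
```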

Citations: 4
Non-destructive silkworm pupa gender classification with X-ray images using ensemble learning
Q1 Computer Science Pub Date: 2022-01-01 DOI: 10.1016/j.aiia.2022.08.001
Sania Thomas, Jyothi Thomas

Sericulture is the process of cultivating silkworms for the production of silk. Producing high-quality silk without mixing it with low-quality silk is a great challenge faced by silk production centers. One way to overcome this issue is to separate male and female cocoons before extracting silk fibers from them, as male cocoon silk fibers are finer than female ones. This study proposes a method for classifying male and female cocoons from X-ray images without destroying the cocoon. The study used cocoons of the popular single-hybrid mulberry silkworm varieties FC1 and FC2. The shape features of the pupa, obtained without cutting the cocoon, are used for the classification process. A novel point interpolation method is used to compute the width and height of the cocoon. Different dimensionality reduction methods are employed to enhance the performance of the model. The preprocessed features are fed to the AdaBoost ensemble learning method with logistic regression as the base learner. This model attained a mean accuracy of 96.3% for FC1 and FC2 in cross-validation, and 95.3% for FC1 and 95.1% for FC2 in external validation.
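
A minimal sketch of the classifier set-up named in this abstract: AdaBoost with logistic regression as the base learner, evaluated by cross-validation on pupa shape features. The feature matrix, class encoding, and hyperparameters are placeholders, not the authors' configuration.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Placeholder shape features extracted from the X-ray images (e.g. pupa width, height, area)
X = np.random.rand(500, 6)
y = np.random.randint(0, 2, 500)   # 0 = female, 1 = male (assumed encoding)

clf = make_pipeline(
    StandardScaler(),
    # 'estimator' is called 'base_estimator' in scikit-learn versions before 1.2
    AdaBoostClassifier(estimator=LogisticRegression(max_iter=1000),
                       n_estimators=100, random_state=0),
)
scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
print(f"mean cross-validated accuracy: {scores.mean():.3f}")
```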

Citations: 1
Deep learning based computer vision approaches for smart agricultural applications
Q1 Computer Science Pub Date: 2022-01-01 DOI: 10.1016/j.aiia.2022.09.007
V.G. Dhanya, A. Subeesh, N.L. Kushwaha, Dinesh Kumar Vishwakarma, T. Nagesh Kumar, G. Ritika, A.N. Singh

The agriculture industry is undergoing a rapid digital transformation and is growing powerful on the pillars of cutting-edge approaches such as artificial intelligence and allied technologies. At the core of artificial intelligence, deep learning-based computer vision enables various agricultural activities to be performed automatically with utmost precision, turning smart agriculture into reality. Computer vision techniques, in conjunction with high-quality image acquisition using remote cameras, enable non-contact and efficient technology-driven solutions in agriculture. This review surveys state-of-the-art deep learning-based computer vision technologies that can assist farmers in operations from land preparation to harvesting. Recent works in the area of computer vision were analyzed and categorized into (a) seed quality analysis, (b) soil analysis, (c) irrigation water management, (d) plant health analysis, (e) weed management, (f) livestock management and (g) yield estimation. The paper also discusses recent trends in computer vision such as generative adversarial networks (GAN), vision transformers (ViT) and other popular deep learning architectures. Additionally, this study pinpoints the challenges in implementing the solutions in the farmer's field in real time. The overall finding indicates that convolutional neural networks are the cornerstone of modern computer vision approaches, and their various architectures provide high-quality solutions across agricultural activities in terms of precision and accuracy. However, the success of the computer vision approach lies in building the model on a quality dataset and providing real-time solutions.

Citations: 30
Automatic marker-free registration of single tree point-cloud data based on rotating projection
Q1 Computer Science Pub Date: 2022-01-01 DOI: 10.1016/j.aiia.2022.09.005
Xiuxian Xu, Pei Wang, Xiaozheng Gan, Jingqian Sun, Yaxin Li, Li Zhang, Qing Zhang, Mei Zhou, Yinghui Zhao, Xinwei Li

Point-cloud data acquired using a terrestrial laser scanner play an important role in digital forestry research. Multiple scans are generally used to overcome occlusion effects and obtain complete tree structural information. However, placing artificial reflectors in a forest with complex terrain for marker-based registration is time-consuming and difficult. In this study, an automatic coarse-to-fine method for the registration of point-cloud data from multiple scans of a single tree was proposed. In coarse registration, the point cloud produced by each scan is projected onto a spherical surface to generate a series of two-dimensional (2D) images, which are used to estimate the initial positions of the scans; corresponding feature-point pairs are then extracted from these 2D images. In fine registration, point-cloud data slicing and fitting methods are used to extract corresponding central stem and branch centers for use as tie points to calculate fine transformation parameters. To evaluate the accuracy of the registration results, we propose an error-evaluation model that calculates the distances between the center points of corresponding branches in adjacent scans. For accurate evaluation, we conducted experiments on two simulated trees and six real-world trees. Average registration errors of the proposed method were around 0.026 m on simulated tree point clouds and around 0.049 m on real-world tree point clouds.
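
A minimal sketch of the coarse-registration idea described above: projecting a scanner-centred point cloud onto a sphere to build a 2D range image from which feature points can be matched between scans. The image resolution, the scanner-centred origin, and the nearest-return rule are assumptions, not the authors' exact implementation.

```python
import numpy as np

def spherical_projection(points, height=256, width=512):
    """Project an N x 3 scanner-centred point cloud into a range image (elevation x azimuth)."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.sqrt(x ** 2 + y ** 2 + z ** 2)
    azimuth = np.arctan2(y, x)                                           # [-pi, pi]
    elevation = np.arcsin(np.clip(z / np.maximum(r, 1e-9), -1.0, 1.0))   # [-pi/2, pi/2]

    col = ((azimuth + np.pi) / (2 * np.pi) * (width - 1)).astype(int)
    row = ((elevation + np.pi / 2) / np.pi * (height - 1)).astype(int)

    image = np.zeros((height, width), dtype=np.float32)
    order = np.argsort(-r)      # write far points first so the nearest return wins per pixel
    image[row[order], col[order]] = r[order]
    return image

scan = np.random.randn(10000, 3) * 5    # placeholder single-scan point cloud (metres)
range_image = spherical_projection(scan)
print(range_image.shape)                # 2D image in which feature points can be matched across scans
```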

Citations: 1
Assessing the performance of YOLOv5 algorithm for detecting volunteer cotton plants in corn fields at three different growth stages
Q1 Computer Science Pub Date: 2022-01-01 DOI: 10.1016/j.aiia.2022.11.005
Pappu Kumar Yadav, J. Alex Thomasson, Stephen W. Searcy, Robert G. Hardin, Ulisses Braga-Neto, Sorin C. Popescu, Daniel E. Martin, Roberto Rodriguez, Karem Meza, Juan Enciso, Jorge Solórzano Diaz, Tianyi Wang

Feral or volunteer cotton (VC) plants can act as hosts for boll weevil (Anthonomus grandis L.) pests once they reach the pinhead squaring phase (5–6 leaf stage). The Texas Boll Weevil Eradication Program (TBWEP) employs people to locate and eliminate VC plants growing by the side of roads or fields with rotation crops, but those growing in the middle of fields remain undetected. In this paper, we demonstrate the application of a computer vision (CV) algorithm based on You Only Look Once version 5 (YOLOv5) for detecting VC plants growing in the middle of corn fields at three different growth stages (V3, V6 and VT) using unmanned aircraft systems (UAS) remote sensing imagery. All four variants of YOLOv5 (s, m, l, and x) were used, and their performance was compared based on classification accuracy, mean average precision (mAP) and F1-score. On images of 416 × 416 pixels, YOLOv5s detected VC plants with a maximum classification accuracy of 98% and mAP of 96.3% at the V6 stage of corn, while YOLOv5s and YOLOv5m gave the lowest classification accuracy of 85% and YOLOv5m and YOLOv5l the lowest mAP of 86.5% at the VT stage. The developed CV algorithm has the potential to effectively detect and locate VC plants growing in the middle of corn fields as well as expedite the management aspects of TBWEP.
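
A minimal sketch of running a YOLOv5 variant like the ones evaluated above on a UAS image tile, via the public ultralytics/yolov5 torch.hub interface. The weight file, image path, and confidence threshold are hypothetical; the study's custom-trained volunteer cotton weights are not assumed to be available.

```python
import torch

# Load a custom-trained YOLOv5s checkpoint (the path is hypothetical)
model = torch.hub.load("ultralytics/yolov5", "custom", path="vc_yolov5s.pt")
model.conf = 0.25                                  # confidence threshold (assumed)

results = model("corn_field_tile.jpg", size=416)   # 416 x 416 tiles, as in the study
results.print()                                    # summary of detections per image
detections = results.pandas().xyxy[0]              # boxes with confidence and class labels
print(detections.head())
```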

Citations: 0
Deep convolutional neural network models for weed detection in polyhouse grown bell peppers
Q1 Computer Science Pub Date: 2022-01-01 DOI: 10.1016/j.aiia.2022.01.002
A. Subeesh, S. Bhole, K. Singh, N.S. Chandel, Y.A. Rajwade, K.V.R. Rao, S.P. Kumar, D. Jat

Conventional weed management approaches are inefficient and unsuitable for integration with smart agricultural machinery. Automatic identification and classification of weeds can play a vital role in weed management, contributing to better crop yields. The efficiency of intelligent spot-spraying systems relies on the accuracy of computer vision based detectors for autonomous weed control. In the present study, the feasibility of deep learning based techniques (AlexNet, GoogLeNet, InceptionV3, Xception) was evaluated for weed identification from RGB images of a bell pepper field. The models were trained with different numbers of epochs (10, 20, 30) and batch sizes (16, 32), and hyperparameters were tuned to get optimal performance. The overall accuracy of the selected models varied from 94.5 to 97.7%. Among the models, InceptionV3 exhibited superior performance at 30 epochs and a batch size of 16, with 97.7% accuracy, 98.5% precision, and 97.8% recall. For this InceptionV3 model, the type I error was 1.4% and the type II error was 0.9%. The effectiveness of the deep learning model presents a clear path towards integrating such models with image-based herbicide applicators for precise weed management.
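
A minimal sketch of the transfer-learning set-up that such a study typically uses: an ImageNet-pretrained InceptionV3 backbone with a small classification head for the crop/weed classes. The directory layout, image size, class count, and hyperparameters are assumptions, not the authors' exact configuration.

```python
import tensorflow as tf

base = tf.keras.applications.InceptionV3(
    include_top=False, weights="imagenet", input_shape=(299, 299, 3), pooling="avg")
base.trainable = False                                 # train only the classification head first

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(2, activation="softmax"),    # e.g. bell pepper vs. weed (assumed)
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])

# Hypothetical folder of labelled RGB images, one sub-folder per class
train_ds = tf.keras.utils.image_dataset_from_directory(
    "pepper_weed_dataset/train", image_size=(299, 299), batch_size=16)
train_ds = train_ds.map(
    lambda x, y: (tf.keras.applications.inception_v3.preprocess_input(x), y))
model.fit(train_ds, epochs=30)
```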

Citations: 48
Evaluation of model generalization for growing plants using conditional learning
Q1 Computer Science Pub Date: 2022-01-01 DOI: 10.1016/j.aiia.2022.09.006
Hafiz Sami Ullah, Abdul Bais

This paper aims to address the lack of generalization of existing semantic segmentation models in the crop and weed segmentation domain. We compare two training mechanisms, classical and adversarial, to understand which scheme works best for a particular encoder-decoder model. We use simple U-Net, SegNet, and DeepLabv3+ with a ResNet-50 backbone as segmentation networks. The models are trained with cross-entropy loss for classical training and PatchGAN loss for adversarial training. By adopting the Conditional Generative Adversarial Network (CGAN) hierarchical settings, we penalize different Generators (G) using a PatchGAN Discriminator (D) and an L1 loss to generate the segmentation output. Generalization here means exhibiting fewer failures and performing comparably on growing plants with different data distributions. We utilize images from four different growth stages of sugar beet and divide the data so that the full-grown stage is used for training, whereas earlier stages are entirely dedicated to testing the model. We conclude that U-Net trained in adversarial settings is more robust to changes in the dataset. The adversarially trained U-Net yields a 10% overall improvement in the results, with mIoU scores of 0.34, 0.55, 0.75, and 0.85 for the four growth stages.
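
A minimal sketch, not the authors' implementation, of the adversarial training scheme this abstract describes: a segmentation generator G penalized by a PatchGAN discriminator D plus an L1 term in a conditional-GAN setting. The tiny generator stands in for the actual U-Net/SegNet/DeepLabv3+ backbones; the class count, image size, and loss weight are assumptions.

```python
import torch
import torch.nn as nn

class TinyG(nn.Module):
    """Stand-in for the U-Net generator: image -> per-pixel class probabilities."""
    def __init__(self, n_classes=3):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(16, n_classes, 3, padding=1))
    def forward(self, x):
        return torch.softmax(self.net(x), dim=1)

class PatchD(nn.Module):
    """PatchGAN discriminator: (image, mask) -> grid of real/fake logits."""
    def __init__(self, n_classes=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3 + n_classes, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 4, padding=1))
    def forward(self, img, mask):
        return self.net(torch.cat([img, mask], dim=1))

G, D = TinyG(), PatchD()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()

img = torch.rand(2, 3, 128, 128)        # placeholder sugar-beet image batch
true_mask = torch.rand(2, 3, 128, 128)  # placeholder one-hot crop/weed/soil mask

# Discriminator step: real (image, true mask) patches vs. fake (image, predicted mask) patches
fake_mask = G(img).detach()
d_real, d_fake = D(img, true_mask), D(img, fake_mask)
loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
opt_d.zero_grad()
loss_d.backward()
opt_d.step()

# Generator step: fool the discriminator while staying close to the true mask (L1 term)
fake_mask = G(img)
d_fake = D(img, fake_mask)
loss_g = bce(d_fake, torch.ones_like(d_fake)) + 100.0 * l1(fake_mask, true_mask)
opt_g.zero_grad()
loss_g.backward()
opt_g.step()
print(f"D loss: {loss_d.item():.3f}, G loss: {loss_g.item():.3f}")
```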

Citations: 3