
Latest publications in Information Processing in Agriculture

Automated garden-insect recognition using improved lightweight convolution network
Pub Date : 2023-06-01 DOI: 10.1016/j.inpa.2021.12.006
Zhankui Yang, Xinting Yang, Ming Li, Wenyong Li

Automated recognition of insect category, which is currently performed mainly by agricultural experts, is a challenging problem that has received increasing attention in recent years. The goal of the present research is to develop an intelligent recognition system based on deep neural networks to recognize garden insects, packaged so that it can be conveniently deployed on mobile terminals. State-of-the-art lightweight convolutional neural networks (such as SqueezeNet and ShuffleNet) match the accuracy of classical convolutional neural networks such as AlexNet with far fewer parameters, thereby not only requiring less communication across servers during distributed training but also being more feasible to deploy on mobile terminals and other hardware with limited memory. In this research, we combine the rich detail of the low-level network features with the rich semantic information of the high-level network features to construct semantically richer feature maps, which effectively improve the SqueezeNet model at a small computational cost. In addition, we developed off-line insect recognition software that can be deployed on a mobile terminal to overcome the lack of network connectivity and the time-delay problems encountered in the field. Experiments demonstrate that the proposed method is promising for recognition while remaining within a limited computational budget, delivering a much higher recognition accuracy of 91.64% with less training time than other classical convolutional neural networks. We also verified that the improved SqueezeNet model is 2.3% more accurate than the original model on the open insect dataset IP102.
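The feature fusion described above can be pictured with a short sketch. The following PyTorch snippet is illustrative only, not the authors' exact architecture: it splits a SqueezeNet 1.1 backbone into an early (detail-rich) stage and a late (semantics-rich) stage, reduces and upsamples the high-level map, and concatenates the two before a lightweight classifier head. The layer split, channel sizes, and class count (102, as in the IP102 benchmark) are assumptions.

```python
# Illustrative sketch (not the paper's exact architecture): fusing a low-level
# SqueezeNet feature map with a high-level one to enrich semantics at low cost.
import torch
import torch.nn as nn
import torchvision.models as models


class FusedSqueezeNet(nn.Module):
    def __init__(self, num_classes: int):
        super().__init__()
        backbone = models.squeezenet1_1(weights=None).features
        self.low = backbone[:6]            # early layers: fine spatial detail
        self.high = backbone[6:]           # later layers: semantic features
        self.reduce = nn.Conv2d(512, 128, kernel_size=1)   # cheap channel match
        self.classifier = nn.Sequential(
            nn.Conv2d(128 + 128, num_classes, kernel_size=1),
            nn.AdaptiveAvgPool2d(1),
        )

    def forward(self, x):
        low = self.low(x)                                   # e.g. (N, 128, 27, 27)
        high = self.reduce(self.high(low))                  # e.g. (N, 128, 13, 13)
        high = nn.functional.interpolate(high, size=low.shape[-2:], mode="nearest")
        fused = torch.cat([low, high], dim=1)               # detail + semantics
        return self.classifier(fused).flatten(1)


model = FusedSqueezeNet(num_classes=102)   # assumed class count (IP102)
logits = model(torch.randn(1, 3, 224, 224))
print(logits.shape)                        # torch.Size([1, 102])
```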

Citations: 4
Constrained temperature and relative humidity predictive control: Agricultural greenhouse case of study
Pub Date : 2023-05-01 DOI: 10.1016/j.inpa.2023.04.003
H. Hamidane, S. El Faiz, Iliass Rkik, M. El khayat, M. Guerbaoui, A. Ed-Dahhak, A. Lachhab
{"title":"Constrained temperature and relative humidity predictive control: Agricultural greenhouse case of study","authors":"H. Hamidane, S. El Faiz, Iliass Rkik, M. El khayat, M. Guerbaoui, A. Ed-Dahhak, A. Lachhab","doi":"10.1016/j.inpa.2023.04.003","DOIUrl":"https://doi.org/10.1016/j.inpa.2023.04.003","url":null,"abstract":"","PeriodicalId":53443,"journal":{"name":"Information Processing in Agriculture","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44233530","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Automated detection of sugarcane crop lines from UAV images using deep learning
Pub Date : 2023-04-01 DOI: 10.1016/j.inpa.2023.04.001
João Batista Ribeiro, Renato Rodrigues da Silva, Jocival Dantas Dias, M. Escarpinati, A. R. Backes
{"title":"Automated detection of sugarcane crop lines from UAV images using deep learning","authors":"João Batista Ribeiro, Renato Rodrigues da Silva, Jocival Dantas Dias, M. Escarpinati, A. R. Backes","doi":"10.1016/j.inpa.2023.04.001","DOIUrl":"https://doi.org/10.1016/j.inpa.2023.04.001","url":null,"abstract":"","PeriodicalId":53443,"journal":{"name":"Information Processing in Agriculture","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41738736","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Spectroscopic measurement and dielectric relaxation study of vegetable oils
Pub Date : 2023-04-01 DOI: 10.1016/j.inpa.2023.04.002
S. M. Sabnis, D. N. Rander, K. S. Kanse, Y. S. Joshi, A. Kumbharkhane
{"title":"Spectroscopic measurement and dielectric relaxation study of vegetable oils","authors":"S. M. Sabnis, D. N. Rander, K. S. Kanse, Y. S. Joshi, A. Kumbharkhane","doi":"10.1016/j.inpa.2023.04.002","DOIUrl":"https://doi.org/10.1016/j.inpa.2023.04.002","url":null,"abstract":"","PeriodicalId":53443,"journal":{"name":"Information Processing in Agriculture","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46607529","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
The effect of drying of Piper hispidinervium by different methods and its influence on the yield of essential oil and safrole
Pub Date : 2023-03-01 DOI: 10.1016/j.inpa.2021.10.003
Helder Kiyoshi Miyagawa, Alberdan Silva Santos

Piper hispidinervium leaves and thin stems were dried under laboratory and field conditions. Laboratory drying was performed using a shade dryer operating with and without forced convection and an oven dryer operating at 30 and 40 °C. Field experiments were conducted using solar dryers with three different covers, i.e., transparent, black plastic, and palm straw covers. The essential oil was extracted by steam distillation, and the safrole content was analyzed by gas chromatography. Five mathematical models (Page, logarithmic, Henderson and Pabis, fractional, and diffusion) were fitted to the experimental data and compared on the basis of the coefficient of determination (R2), root mean square error (RMSE), and χ2. The results suggest that the logarithmic model fitted best (R2 > 0.99, RMSE < 0.000 5, and χ2 < 0.005). With sufficient drying, the safrole content rose to as much as 95% of the extracted oil; however, when the drying time was prolonged, both the oil yield and the safrole content of the extracted oil decreased.
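As a rough illustration of the model-fitting step, the sketch below fits the logarithmic thin-layer drying model MR = a·exp(−kt) + c to moisture-ratio data with SciPy and reports R2, RMSE, and χ2. The data values and the reduced-χ2 formula are assumptions for the example, not the paper's measurements.

```python
# Minimal sketch (synthetic data, assumed model form): fitting the logarithmic
# drying model MR = a*exp(-k*t) + c and scoring it with R2, RMSE, and chi2.
import numpy as np
from scipy.optimize import curve_fit


def logarithmic(t, a, k, c):
    return a * np.exp(-k * t) + c


# Hypothetical moisture-ratio observations over drying time (hours).
t = np.array([0, 1, 2, 4, 6, 8, 12, 16, 24], dtype=float)
mr = np.array([1.00, 0.82, 0.68, 0.47, 0.34, 0.25, 0.15, 0.10, 0.06])

params, _ = curve_fit(logarithmic, t, mr, p0=(1.0, 0.2, 0.0))
pred = logarithmic(t, *params)

residuals = mr - pred
ss_res = np.sum(residuals ** 2)
ss_tot = np.sum((mr - mr.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
rmse = np.sqrt(ss_res / len(mr))
chi2 = ss_res / (len(mr) - len(params))   # reduced form often used in drying studies

print(f"a={params[0]:.3f} k={params[1]:.3f} c={params[2]:.3f}")
print(f"R2={r2:.4f} RMSE={rmse:.5f} chi2={chi2:.5f}")
```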

Citations: 0
Classification of weed seeds based on visual images and deep learning
Pub Date : 2023-03-01 DOI: 10.1016/j.inpa.2021.10.002
Tongyun Luo, Jianye Zhao, Yujuan Gu, Shuo Zhang, Xi Qiao, Wen Tian, Yangchun Han

Weeds are mainly spread by weed seeds being mixed with agricultural and forestry crop seeds, grain, animal hair, and other plant products, and they disturb the growing environment of target plants such as crops and wild native plants. Accurate and efficient classification of weed seeds is important for the effective management and control of weeds. However, classification still depends mainly on destructive, sampling-based manual inspection, which has a high cost and rather low throughput. We considered that this problem could be solved using a nondestructive intelligent image-recognition method. First, on the basis of an image acquisition system built for weed seeds, images of single weed seeds were rapidly and completely segmented, yielding a total of 47 696 samples of 140 species of weed seeds and foreign materials. Then, six popular and novel deep Convolutional Neural Network (CNN) models were compared to identify the best method for intelligently identifying the 140 species of weed seeds. Of these samples, 33 600 were randomly selected as the training dataset for model training, and the remaining 14 096 were used as the testing dataset for model testing. AlexNet and GoogLeNet emerged from the quantitative evaluation as the best methods: AlexNet offers strong classification accuracy and efficiency (low time consumption), and GoogLeNet has the best classification accuracy. A suitable CNN model for weed seed classification can therefore be selected according to the specific identification accuracy requirements and time costs of an application. This research is beneficial for developing weed seed detection systems for various applications, and resolving the taxonomic and identification problems associated with these weed seeds may allow more effective management and control.
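A minimal sketch of the kind of training and evaluation loop such a model comparison implies is given below. The folder layout, hyper-parameters, and epoch count are assumptions, and GoogLeNet stands in for any of the six candidate CNNs.

```python
# Illustrative sketch (paths and hyper-parameters are assumptions): training and
# testing one candidate CNN on a seed-image dataset split into train/test folders.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

tf = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
train_set = datasets.ImageFolder("weed_seeds/train", transform=tf)   # hypothetical path
test_set = datasets.ImageFolder("weed_seeds/test", transform=tf)     # hypothetical path
train_loader = DataLoader(train_set, batch_size=64, shuffle=True)
test_loader = DataLoader(test_set, batch_size=64)

model = models.googlenet(weights=None, num_classes=len(train_set.classes), aux_logits=False)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
criterion = nn.CrossEntropyLoss()

for epoch in range(10):                       # assumed epoch budget
    model.train()
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()

model.eval()
with torch.no_grad():                          # accuracy on the held-out test split
    correct = sum((model(x).argmax(1) == y).sum().item() for x, y in test_loader)
print(f"test accuracy: {correct / len(test_set):.4f}")
```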

Citations: 15
A new approach to learning and recognizing leaf diseases from individual lesions using convolutional neural networks
Pub Date : 2023-03-01 DOI: 10.1016/j.inpa.2021.10.004
Lawrence C. Ngugi, Moataz Abdelwahab, Mohammed Abo-Zahhad

Leaf disease recognition using image processing and deep learning techniques is currently a vibrant research area. Most studies have focused on recognizing diseases from images of whole leaves. This approach limits the resulting models' ability to estimate leaf disease severity or to identify multiple anomalies occurring on the same leaf. Recent studies have demonstrated that classifying leaf diseases based on individual lesions greatly enhances disease recognition accuracy. In those studies, however, the lesions were laboriously cropped by hand. This study proposes a semi-automatic algorithm that enables fast and efficient preparation of datasets of individual lesions and of leaf-image pixel maps to overcome this problem. These datasets were then used to train and test lesion-classifier and semantic-segmentation Convolutional Neural Network (CNN) models, respectively. We report that GoogLeNet's disease recognition accuracy improved by more than 15% when diseases were recognized from lesion images rather than from images of whole leaves. A CNN model that performs semantic segmentation of both the leaf and its lesions in one pass is also proposed in this paper. The proposed KijaniNet model achieved state-of-the-art segmentation performance, with mean Intersection over Union (mIoU) scores of 0.8448 and 0.6257 for the leaf and lesion pixel classes, respectively, and mean boundary F1 scores of 0.8241 and 0.7855 for the two classes. Lastly, a fully automatic algorithm for leaf disease recognition from individual lesions is proposed. The algorithm cascades the semantic segmentation network with a GoogLeNet classifier for lesion-wise disease recognition. Despite being trained on a small dataset, the proposed fully automatic algorithm outperforms competing methods through its superior segmentation and classification performance.
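The segment-then-classify cascade can be sketched as follows. The interfaces (a binary lesion mask, a generic classifier callable, the minimum-area threshold) are assumptions intended only to convey the idea of lesion-wise recognition, not the paper's implementation.

```python
# Minimal sketch (assumed interfaces): turning a semantic-segmentation mask into
# per-lesion crops and classifying each crop with an image classifier.
import numpy as np
import torch
from scipy import ndimage


def classify_lesions(image: np.ndarray, lesion_mask: np.ndarray, classifier, min_area: int = 50):
    """Crop each connected lesion region from `image` and classify it separately.

    image: (H, W, 3) RGB array; lesion_mask: (H, W) binary mask from a segmentation net;
    classifier: any callable mapping a (1, 3, 224, 224) tensor to class logits.
    """
    labelled, n_regions = ndimage.label(lesion_mask)
    predictions = []
    for region in range(1, n_regions + 1):
        ys, xs = np.where(labelled == region)
        if ys.size < min_area:                 # skip specks that are likely noise
            continue
        crop = image[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
        tensor = torch.from_numpy(crop).permute(2, 0, 1).float().unsqueeze(0) / 255.0
        tensor = torch.nn.functional.interpolate(tensor, size=(224, 224))
        predictions.append(int(classifier(tensor).argmax(1)))
    return predictions                          # one disease label per detected lesion
```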

Citations: 12
Texture-based latent space disentanglement for enhancement of a training dataset for ANN-based classification of fruit and vegetables
Pub Date : 2023-03-01 DOI: 10.1016/j.inpa.2021.09.003
Khurram Hameed, Douglas Chai, Alexander Rassau

The capability of Convolutional Neural Networks (CNNs) for sparse representation has significant application to complex tasks like Representation Learning (RL). However, labelled datasets of sufficient size for learning such representations are not easily obtainable. The unsupervised learning capability of Variational Autoencoders (VAEs) and Generative Adversarial Networks (GANs) provides a promising solution to this issue through their capacity to learn representations for novel data samples and classification tasks. In this research, a texture-based latent space disentanglement technique is proposed to enhance the learning of representations for novel data samples. Different VAEs and GANs are compared in combination with the proposed approach for synthesising new data samples. Two VAE architectures are considered, a single-layer dense VAE and a convolution-based VAE, to compare the effectiveness of different architectures for learning the representations. The GANs are selected based on a distance metric for disjoint-distribution divergence estimation in complex representation learning tasks. The proposed texture-based disentanglement is shown to provide a significant improvement in disentangling the representation-learning process by conditioning the random noise and synthesising texture-rich images of fruit and vegetables.
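For readers unfamiliar with the generative side, a minimal convolutional VAE of the kind used to learn such latent representations is sketched below. The layer sizes, latent dimension, and loss weighting are assumptions, not the configuration used in the paper.

```python
# Minimal sketch (architecture sizes are assumptions): a small convolutional VAE
# that learns a latent representation which can later be sampled or disentangled
# to synthesise additional training images.
import torch
import torch.nn as nn


class ConvVAE(nn.Module):
    def __init__(self, latent_dim: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
            nn.Flatten(),
        )
        self.mu = nn.Linear(64 * 16 * 16, latent_dim)
        self.logvar = nn.Linear(64 * 16 * 16, latent_dim)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 64 * 16 * 16), nn.ReLU(),
            nn.Unflatten(1, (64, 16, 16)),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)      # reparameterisation
        return self.decoder(z), mu, logvar


def vae_loss(recon, x, mu, logvar):
    # Reconstruction term plus KL divergence to the unit Gaussian prior.
    bce = nn.functional.binary_cross_entropy(recon, x, reduction="sum")
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return bce + kld


x = torch.rand(4, 3, 64, 64)               # stand-in batch of fruit/vegetable crops
recon, mu, logvar = ConvVAE()(x)
print(vae_loss(recon, x, mu, logvar))
```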

Citations: 7
Artificial intelligence solutions to reduce information asymmetry for Colombian cocoa small-scale farmers
Pub Date : 2023-03-01 DOI: 10.1016/j.inpa.2023.03.001
Nicolás De la Peña, Oscar M. Granados
{"title":"Artificial intelligence solutions to reduce information asymmetry for Colombian cocoa small-scale farmers","authors":"Nicolás De la Peña, Oscar M. Granados","doi":"10.1016/j.inpa.2023.03.001","DOIUrl":"https://doi.org/10.1016/j.inpa.2023.03.001","url":null,"abstract":"","PeriodicalId":53443,"journal":{"name":"Information Processing in Agriculture","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41628896","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Detection and counting method of juvenile abalones based on improved SSD network
Pub Date : 2023-03-01 DOI: 10.1016/j.inpa.2023.03.002
Runxue Su, Jun Yue, Zhenzhong Li, Shixiang Jia, Guorui Sheng
{"title":"Detection and counting method of juvenile abalones based on improved SSD network","authors":"Runxue Su, Jun Yue, Zhenzhong Li, Shixiang Jia, Guorui Sheng","doi":"10.1016/j.inpa.2023.03.002","DOIUrl":"https://doi.org/10.1016/j.inpa.2023.03.002","url":null,"abstract":"","PeriodicalId":53443,"journal":{"name":"Information Processing in Agriculture","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43549565","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0