Pub Date: 2023-06-01 | DOI: 10.1016/j.inpa.2021.12.006
Zhankui Yang, Xinting Yang, Ming Li, Wenyong Li
Automated recognition of insect categories, currently performed mainly by agricultural experts, is a challenging problem that has received increasing attention in recent years. The goal of the present research is to develop an intelligent recognition system based on deep neural networks that recognizes garden insects and can be conveniently deployed on mobile terminals. State-of-the-art lightweight convolutional neural networks (such as SqueezeNet and ShuffleNet) achieve accuracy comparable to classical convolutional neural networks such as AlexNet with far fewer parameters, thereby not only requiring less communication across servers during distributed training but also being more feasible to deploy on mobile terminals and other hardware with limited memory. In this research, we combine the rich detail of low-level network features with the rich semantic information of high-level network features to construct feature maps with richer semantics, which effectively improves the SqueezeNet model at a small computational cost. In addition, we developed off-line insect recognition software that can be deployed on mobile terminals to address the lack of network coverage and the time-delay problems in the field. Experiments demonstrate that the proposed method remains within a limited computational budget while delivering a much higher recognition accuracy of 91.64% with less training time than other classical convolutional neural networks. We also verified that the improved SqueezeNet model is 2.3% more accurate than the original model on the open insect dataset IP102.
Title: Automated garden-insect recognition using improved lightweight convolution network. Information Processing in Agriculture, 10(2), 256–266.
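The abstract does not specify the exact fusion layer used to combine low- and high-level features; a minimal numpy sketch of one common approach (nearest-neighbour upsampling of the coarse semantic map, then channel-wise concatenation with the detailed map) might look like the following, with all names and shapes illustrative:

```python
import numpy as np

def fuse_features(low, high):
    """Fuse a detail-rich low-level feature map with a semantically rich
    high-level feature map: upsample the high-level map to the low-level
    spatial size (nearest neighbour) and concatenate along channels.
    low: (C1, H, W); high: (C2, h, w) with H % h == 0 and W % w == 0."""
    C2, h, w = high.shape
    _, H, W = low.shape
    up = high.repeat(H // h, axis=1).repeat(W // w, axis=2)  # upsample
    return np.concatenate([low, up], axis=0)                 # (C1+C2, H, W)

low = np.random.rand(16, 8, 8)   # fine spatial detail
high = np.random.rand(64, 2, 2)  # coarse semantics
fused = fuse_features(low, high)
```

In a real network the concatenated map would feed a further convolution; here only the shape bookkeeping is shown.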
Pub Date: 2023-03-01 | DOI: 10.1016/j.inpa.2021.10.003
Helder Kiyoshi Miyagawa, Alberdan Silva Santos
The Piper hispidinervium leaves and thin stems were dried under laboratory and field conditions. Laboratory drying was performed using a shade dryer operating with and without forced convection and an oven dryer operating at 30 and 40 °C. Field experiments were conducted using solar dryers with three different covers, i.e., transparent, black plastic, and palm straw covers. The essential oil extraction was performed by steam distillation, and the safrole content was analyzed by gas chromatography. Five mathematical models (Page, logarithmic, Henderson and Pabis, fractional, and diffusion) were fitted with the experimental data and compared based on the coefficient of determination (R2), root mean square error (RMSE) and χ2. Results suggest that the best model was the logarithmic model (R2 > 0.99, RMSE < 0.000 5, and χ2 < 0.005). With sufficient drying, the safrole content increased up to 95% of the extracted oil; however, when the drying time was prolonged, both the oil yield and safrole content of the extracted oil decreased.
Title: The effect of drying of Piper hispidinervium by different methods and its influence on the yield of essential oil and safrole. Information Processing in Agriculture, 10(1), 28–39.
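The model comparison criteria (R², RMSE, χ²) used to rank the five drying models can be sketched as below; normalising the chi-square by degrees of freedom (reduced chi-square) is an assumption, since the abstract does not give the exact formula:

```python
import numpy as np

def fit_metrics(observed, predicted, n_params):
    """Goodness-of-fit statistics for comparing thin-layer drying models:
    coefficient of determination R^2, root mean square error RMSE, and
    reduced chi-square (residual sum of squares / degrees of freedom)."""
    observed = np.asarray(observed, float)
    predicted = np.asarray(predicted, float)
    resid = observed - predicted
    ss_res = np.sum(resid ** 2)
    ss_tot = np.sum((observed - observed.mean()) ** 2)
    n = observed.size
    r2 = 1.0 - ss_res / ss_tot
    rmse = np.sqrt(ss_res / n)
    chi2 = ss_res / (n - n_params)
    return r2, rmse, chi2

# logarithmic model MR = a*exp(-k*t) + c, evaluated on synthetic data;
# a perfect fit must give R^2 = 1, RMSE = 0, chi^2 = 0
t = np.linspace(0.0, 10.0, 21)
mr_true = 0.9 * np.exp(-0.5 * t) + 0.1
r2, rmse, chi2 = fit_metrics(mr_true, mr_true, n_params=3)
```

The parameter values of the logarithmic model here are illustrative, not the fitted values from the study.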
Pub Date: 2023-03-01 | DOI: 10.1016/j.inpa.2021.10.004
Lawrence C. Ngugi, Moataz Abdelwahab, Mohammed Abo-Zahhad
Leaf disease recognition using image processing and deep learning techniques is currently a vibrant research area. Most studies have focused on recognizing diseases from images of whole leaves. This approach limits the resulting models’ ability to estimate leaf disease severity or identify multiple anomalies occurring on the same leaf. Recent studies have demonstrated that classifying leaf diseases based on individual lesions greatly enhances disease recognition accuracy. In those studies, however, the lesions were laboriously cropped by hand. This study proposes a semi-automatic algorithm that facilitates the fast and efficient preparation of datasets of individual lesions and leaf image pixel maps to overcome this problem. These datasets were then used to train and test lesion classifier and semantic segmentation Convolutional Neural Network (CNN) models, respectively. We report that GoogLeNet’s disease recognition accuracy improved by more than 15% when diseases were recognized from lesion images compared to when disease recognition was done using images of whole leaves. A CNN model which performs semantic segmentation of both the leaf and lesions in one pass is also proposed in this paper. The proposed KijaniNet model achieved state-of-the-art segmentation performance in terms of mean Intersection over Union (mIoU) score of 0.8448 and 0.6257 for the leaf and lesion pixel classes, respectively. In terms of mean boundary F1 score, the KijaniNet model attained 0.8241 and 0.7855 for the two pixel classes, respectively. Lastly, a fully automatic algorithm for leaf disease recognition from individual lesions is proposed. The algorithm employs the semantic segmentation network cascaded to a GoogLeNet classifier for lesion-wise disease recognition. The proposed fully automatic algorithm outperforms competing methods in terms of its superior segmentation and classification performance despite being trained on a small dataset.
Title: A new approach to learning and recognizing leaf diseases from individual lesions using convolutional neural networks. Information Processing in Agriculture, 10(1), 11–27.
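The per-class Intersection over Union behind the reported mIoU scores can be computed as below; the toy masks and class labels are illustrative:

```python
import numpy as np

def class_iou(pred, truth, cls):
    """Intersection over Union for one pixel class of a segmentation map."""
    p, t = (pred == cls), (truth == cls)
    inter = np.logical_and(p, t).sum()
    union = np.logical_or(p, t).sum()
    return inter / union if union else 1.0

# toy 4x4 maps with classes 0 = background, 1 = leaf, 2 = lesion
truth = np.array([[0, 1, 1, 0],
                  [1, 1, 1, 1],
                  [1, 2, 2, 1],
                  [0, 1, 1, 0]])
pred = truth.copy()
pred[0, 0] = 1                      # one false-positive leaf pixel
leaf_iou = class_iou(pred, truth, 1)
lesion_iou = class_iou(pred, truth, 2)
```

A mean IoU such as the paper's 0.8448/0.6257 scores is simply this quantity averaged over the images of the test set for each class.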
Pub Date: 2023-03-01 | DOI: 10.1016/j.inpa.2021.10.002
Tongyun Luo, Jianye Zhao, Yujuan Gu, Shuo Zhang, Xi Qiao, Wen Tian, Yangchun Han
Weeds are spread mainly by weed seeds mixed with agricultural and forestry crop seeds, grain, animal hair, and other plant products, and they disturb the growing environment of target plants such as crops and wild native plants. Accurate and efficient classification of weed seeds is important for effective weed management and control. However, classification remains largely dependent on destructive, sampling-based manual inspection, which is costly and has rather low throughput. We considered that this problem could be solved using a nondestructive intelligent image recognition method. First, after establishing an image acquisition system for weed seeds, images of single weed seeds were rapidly and completely segmented, yielding a total of 47 696 samples of 140 species of weed seeds and foreign materials. Then, six popular and novel deep Convolutional Neural Network (CNN) models were compared to identify the best method for intelligently identifying the 140 species. Of these samples, 33 600 were randomly selected as the training dataset, and the remaining 14 096 were used as the testing dataset. AlexNet and GoogLeNet emerged from the quantitative evaluation as the best methods: AlexNet offers strong classification accuracy and efficiency (low time consumption), while GoogLeNet has the best classification accuracy. A suitable CNN model for weed seed classification can therefore be selected according to the specific identification accuracy requirements and time costs of an application. This research is beneficial for developing weed seed detection systems in various applications. Resolving taxonomic issues and problems associated with identifying these weed seeds may allow for more effective management and control.
Title: Classification of weed seeds based on visual images and deep learning. Information Processing in Agriculture, 10(1), 40–51.
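The random selection of 33 600 training and 14 096 testing samples can be sketched as a disjoint index split; the helper name and seed are illustrative:

```python
import numpy as np

def split_dataset(n_samples, n_train, seed=0):
    """Randomly partition sample indices into disjoint train/test sets,
    as in selecting 33 600 of the 47 696 seed images for training."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    return idx[:n_train], idx[n_train:]

train_idx, test_idx = split_dataset(47696, 33600)
```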
Pub Date: 2023-03-01 | DOI: 10.1016/j.inpa.2021.09.003
Khurram Hameed, Douglas Chai, Alexander Rassau
The capability of Convolutional Neural Networks (CNNs) for sparse representation has significant application to complex tasks like Representation Learning (RL). However, labelled datasets of sufficient size for learning such representations are not easily obtainable. The unsupervised learning capability of Variational Autoencoders (VAEs) and Generative Adversarial Networks (GANs) provides a promising solution to this issue through their capacity to learn representations for novel data samples and classification tasks. In this research, a texture-based latent space disentanglement technique is proposed to enhance the learning of representations for novel data samples. A comparison is performed among different VAEs and GANs combined with the proposed approach for the synthesis of new data samples. Two VAE architectures are considered, a single-layer dense VAE and a convolution-based VAE, to compare the effectiveness of different architectures for learning the representations. The GANs are selected based on the distance metric used for disjoint distribution divergence estimation in complex representation learning tasks. The proposed texture-based disentanglement is shown to significantly improve the representation learning process by conditioning the random noise and synthesising texture-rich images of fruit and vegetables.
Title: Texture-based latent space disentanglement for enhancement of a training dataset for ANN-based classification of fruit and vegetables. Information Processing in Agriculture, 10(1), 85–105.
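The abstract does not detail the VAE internals; the standard reparameterization step that underlies any VAE latent-space sketch (and that texture conditioning would act on) is:

```python
import numpy as np

def reparameterize(mu, log_var, rng):
    """VAE reparameterization trick: sample z = mu + sigma * eps with
    eps ~ N(0, I), keeping the sampling step differentiable in training."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

rng = np.random.default_rng(42)
mu = np.zeros(8)        # encoder mean (illustrative)
log_var = np.zeros(8)   # encoder log-variance, sigma = 1
z = reparameterize(mu, log_var, rng)
```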
Pub Date: 2023-03-01 | DOI: 10.1016/j.inpa.2021.09.004
Jeferson Pereira Martins Silva, Mayra Luiza Marques da Silva, Adriano Ribeiro de Mendonça, Gilson Fernandes da Silva, Antônio Almeida de Barros Junior, Evandro Ferreira da Silva, Marcelo Otone Aguiar, Jeangelis Silva Santos, Nívea Maria Mafra Rodrigues
Forest production and growth estimates are obtained from statistical models that generate information at the tree or forest-stand level. Although regression models are common in forest measurement, there is a constant search for estimation procedures that provide greater accuracy. Recently, machine learning techniques have been used with satisfactory performance in forest measurement. However, methods such as the Adaptive Neuro-Fuzzy Inference System (ANFIS) and Random Forest (RF) are relatively poorly studied for predicting wood volume in eucalyptus plantations in Brazil. Therefore, it is essential to check whether these techniques can provide gains in accuracy. Thus, this study aimed to evaluate the use of Random Forest and ANFIS techniques in the prognosis of forest production. The data come from continuous forest inventories carried out in stands of eucalyptus clones and were divided into 70% for training and 30% for validation. The algorithms used to generate rules in ANFIS were Subtractive Clustering and Fuzzy-C-Means, and training was done with a hybrid algorithm (descending gradient and least squares) with the number of epochs ranging from 1 to 20. Several RFs were trained, varying the number of trees from 50 to 850 and the minimum number of observations per leaf from 5 to 35. Artificial neural networks and decision trees were also trained to compare the feasibility of the techniques. The estimates generated by each technique for training and validation were evaluated with the following statistics: correlation coefficient (r), relative bias (RB), and relative root mean square error (RRMSE), in percentage. In general, the techniques studied showed excellent performance on the training and validation data sets, with RRMSE < 6%, RB < 0.5%, and r > 0.98. The RF presented inferior statistics relative to ANFIS for the prognosis of forest production. The Subtractive Clustering (SC) and Fuzzy-C-Means (FCM) algorithms provided accurate baseline and volume projection estimates; both techniques are good alternatives for selecting variables used in modeling forest production.
Title: Prognosis of forest production using machine learning techniques. Information Processing in Agriculture, 10(1), 71–84.
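The evaluation statistics (r, RB, RRMSE) can be sketched as below; normalising RB and RRMSE by the observed mean is an assumption, since the abstract does not state the exact definitions:

```python
import numpy as np

def prognosis_metrics(observed, predicted):
    """Statistics used to evaluate the volume estimates: Pearson
    correlation (r), relative bias (RB, %), and relative root mean
    square error (RRMSE, %), both relative to the observed mean."""
    o = np.asarray(observed, float)
    p = np.asarray(predicted, float)
    r = np.corrcoef(o, p)[0, 1]
    rb = 100.0 * np.mean(p - o) / np.mean(o)
    rrmse = 100.0 * np.sqrt(np.mean((p - o) ** 2)) / np.mean(o)
    return r, rb, rrmse

obs = np.array([100.0, 120.0, 140.0, 160.0])   # illustrative volumes
r, rb, rrmse = prognosis_metrics(obs, obs * 1.01)  # uniform 1% overestimate
```

A uniform 1% overestimate gives r = 1, RB = 1% and a slightly larger RRMSE, matching the intuition that RRMSE penalises spread while RB only measures net bias.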
Pub Date: 2023-03-01 | DOI: 10.1016/j.inpa.2021.07.004
Bingrui Xu, Li Chai, Chunlong Zhang
Weeds that grow among crops are undesirable plants that adversely affect crop growth and yield. Therefore, this study explores corn identification and positioning methods based on machine vision. An excess-green (ExG) feature and the maximum between-class variance method (Otsu) were used to segment corn, weeds, and soil; the segmentation was effective and met the requirements of the subsequent shape-feature extraction. Finally, the identification and positioning of corn were achieved by morphological reconstruction and a pixel-projection histogram method. Experiments reveal that when the weeding robot travels at a speed of 1.6 km/h, recognition accuracy can reach 94.1%. The technique handles typical field conditions with a good recognition effect, improving the accuracy and real-time performance of robot recognition while reducing calculation time.
Title: Research and application on corn crop identification and positioning method based on Machine vision. Information Processing in Agriculture, 10(1), 106–113.
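A minimal numpy sketch of the segmentation step, combining an excess-green index with Otsu's maximum between-class variance threshold (the synthetic image and the exact index definition are illustrative):

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's maximum between-class variance threshold for a uint8 image."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    cum_p = np.cumsum(prob)
    cum_mean = np.cumsum(prob * np.arange(256))
    mean_total = cum_mean[-1]
    best_t, best_var = 0, 0.0
    for t in range(255):
        w0, w1 = cum_p[t], 1.0 - cum_p[t]
        if w0 == 0 or w1 == 0:
            continue
        mu0 = cum_mean[t] / w0
        mu1 = (mean_total - cum_mean[t]) / w1
        var = w0 * w1 * (mu0 - mu1) ** 2   # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def excess_green(rgb):
    """Excess-green index 2G - R - B, used to highlight vegetation."""
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    return np.clip(2 * g - r - b, 0, 255).astype(np.uint8)

# synthetic image: dark soil background with a bright green plant patch
img = np.zeros((16, 16, 3), dtype=np.uint8)
img[..., :] = (60, 50, 40)            # soil
img[4:12, 4:12] = (30, 200, 30)       # plant
exg = excess_green(img)
t = otsu_threshold(exg)
mask = exg > t                        # vegetation mask
```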
Pub Date: 2023-03-01 | DOI: 10.1016/j.inpa.2021.02.006
Shrikrishna Kolhar, Jayant Jagtap
Today, rapid development is taking place in plant phenotyping using non-destructive, image-based machine vision techniques. Machine-vision-based plant phenotyping ranges from single-plant trait estimation to broad assessment of the crop canopy for thousands of plants in the field. Plant phenotyping systems use either a single imaging method or an integrative approach combining imaging techniques such as visible red-green-blue (RGB) imaging, thermal imaging, chlorophyll fluorescence imaging (CFIM), hyperspectral imaging, 3-dimensional (3-D) imaging, or high-resolution volumetric imaging. This paper provides an overview of imaging techniques and their applications in the field of plant phenotyping, and presents a comprehensive survey of recent machine vision methods for plant trait estimation and classification. Information about publicly available datasets is provided to enable uniform comparison among state-of-the-art phenotyping methods.
This paper also presents future research directions related to the use of deep-learning-based machine vision algorithms for structural (2-D and 3-D), physiological, and temporal trait estimation and classification studies in plants.
Title: Plant trait estimation and classification studies in plant phenotyping using machine vision – A review. Information Processing in Agriculture, 10(1), 114–135.
Pub Date : 2023-03-01 DOI: 10.1016/j.inpa.2021.02.001
Chongyuan Zhang , Sara Serra , Juan Quirós-Vargas , Worasit Sangjan , Stefano Musacchi , Sindhuja Sankaran
Tree fruit architecture results from a combination of the training system and the pruning and thinning processes applied across multiple years of growth and development. Further, tree fruit architecture contributes to light interception and improves tree growth, fruit quality, and fruit yield, in addition to easing orchard management and harvest. Currently, tree architectural traits are measured manually by researchers or growers, which is labor-intensive and time-consuming. In this study, remote sensing techniques were evaluated for phenotyping critical architectural traits, with the final goal of assisting tree fruit breeders, physiologists, and growers in collecting architectural traits efficiently and in a standardized manner. For this, a consumer-grade red–green–blue (RGB) camera was used to collect apple tree side-images, while an unmanned aerial vehicle (UAV) with an integrated RGB camera was programmed to image the tree canopy at 15 m above ground level to evaluate multiple tree fruit architectures. The sensing data were compared to ground reference data associated with orchard blocks within three training systems (Spindle, V-trellis, Bi-axis), two rootstocks (‘WA 38’ trees grafted on G41 and M9-Nic29) and two pruning methods (referred to as bending and click pruning). The data were processed to extract architectural features from ground-based 2D images and a UAV-based 3D digital surface model. The traits extracted from the sensing data included box-counting fractal dimension (DBs), middle branch angle, number of branches, trunk basal diameter, and tree row volume (TRV). The results from ground-based sensing data indicated a significant (P < 0.0001) difference in DBs between the Spindle and V-trellis training systems, and correlations of DBs with tree height (r = 0.79) and total fruit yield per unit area (r = 0.74) were significant (P < 0.05).
Moreover, correlations between average or total TRV and ground reference data, such as tree height and total fruit yield per unit area, were significant (P < 0.05). With the reported findings, this study demonstrated the potential of sensing techniques for phenotyping tree fruit architectural traits.
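The box-counting fractal dimension (DBs) used above is typically estimated by tiling a binarized canopy image with boxes of decreasing size and regressing box counts against box size on a log–log scale. A minimal sketch of that standard procedure (not the authors' implementation; image loading and binarization are assumed to happen elsewhere):

```python
import numpy as np

def box_counting_dimension(binary_img, box_sizes=(2, 4, 8, 16, 32, 64)):
    """Estimate the box-counting fractal dimension of a binary image.

    Counts the boxes of side s containing at least one foreground
    pixel, then fits log(count) against log(1/s); the slope of the
    least-squares line is the estimated dimension.
    """
    img = np.asarray(binary_img, dtype=bool)
    counts = []
    for s in box_sizes:
        # Trim so the image tiles evenly into s x s boxes.
        h = (img.shape[0] // s) * s
        w = (img.shape[1] // s) * s
        trimmed = img[:h, :w]
        # Collapse each s x s box to a single occupancy flag.
        boxes = trimmed.reshape(h // s, s, w // s, s).any(axis=(1, 3))
        counts.append(boxes.sum())
    slope, _ = np.polyfit(np.log(1.0 / np.array(box_sizes)), np.log(counts), 1)
    return slope

# A filled square is a 2-D object, so the estimate tends toward 2
# as image resolution grows (finite resolution biases it slightly low).
square = np.zeros((128, 128), dtype=bool)
square[16:112, 16:112] = True
d = box_counting_dimension(square)
```

A canopy silhouette produces a value between 1 and 2, which is what allows DBs to separate training systems with different branching complexity.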
{"title":"Non-invasive sensing techniques to phenotype multiple apple tree architectures","authors":"Chongyuan Zhang , Sara Serra , Juan Quirós-Vargas , Worasit Sangjan , Stefano Musacchi , Sindhuja Sankaran","doi":"10.1016/j.inpa.2021.02.001","DOIUrl":"10.1016/j.inpa.2021.02.001","url":null,"abstract":"<div><p>Tree fruit architecture results from combination of the training system and pruning and thinning processes across multiple growth and development years. Further, the tree fruit architecture contributes to the light interception and improves tree growth, fruit quality, and fruit yield, in addition to easing the process of orchard management and harvest. Currently tree architectural traits are measured manually by researchers or growers, which is labor-intensive and time-consuming. In this study, the remote sensing techniques were evaluated to phenotype critical architectural traits with the final goal to assist tree fruit breeders, physiologists and growers in collecting architectural traits efficiently and in a standardized manner. For this, a consumer-grade red–green–blue (RGB) camera was used to collect apple tree side-images, while an unmanned aerial vehicle (UAV) integrated RGB camera was programmed to image tree canopy at 15 m above ground level to evaluate multiple tree fruit architectures. The sensing data were compared to ground reference data associated with tree orchard blocks within three training systems (Spindle, V-trellis, Bi-axis), two rootstocks (‘WA 38 trees grafted on G41 and M9-Nic29) and two pruning methods (referred as bending and click pruning). The data were processed to extract architectural features from ground-based 2D images and UAV-based 3D digital surface model. The traits extracted from sensing data included box-counting fractal dimension (D<sub>B</sub>s), middle branch angle, number of branches, trunk basal diameter, and tree row volume (TRV). 
The results from ground-based sensing data indicated that there was a significant (P < 0.0001) difference in D<sub>B</sub>s between Spindle and V-trellis training systems, and correlations between D<sub>B</sub>s with tree height (<em>r</em> = 0.79) and total fruit yield per unit area (<em>r</em> = 0.74) were significant (P < 0.05). Moreover, correlations between average or total TRV and ground reference data, such as tree height and total fruit yield per unit area, were significant (P < 0.05). With the reported findings, this study demonstrated the potential of sensing techniques for phenotyping tree fruit architectural traits.</p></div>","PeriodicalId":53443,"journal":{"name":"Information Processing in Agriculture","volume":"10 1","pages":"Pages 136-147"},"PeriodicalIF":0.0,"publicationDate":"2023-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/j.inpa.2021.02.001","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47351980","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This study aimed to develop and evaluate the performance of a service system platform based on the Internet of Things (IoT) for monitoring nutritional deficiencies in plants and providing fertilizer recommendations. Two distinct differences separate this work from previous ones: the service system platform was developed on an IoT basis using a system engineering approach, and its performance was evaluated in terms of dependability. We successfully developed and integrated an IoT-based service system platform and chlorophyll meter, and tested the platform's performance using the JMeter software. The dependability value measured from the five tested variables (reliability, availability, integrity, maintainability, and safety) was 0.97, which represents a very good level of confidence that the system will not fail to deliver services to users under normal operational conditions. From a future perspective, this platform can be used as an alternative service to monitor nutrient deficiencies in plants and provide fertilization recommendations to increase yields, reduce fertilizer costs, and prevent the use of excessive fertilizers, which can cause environmental pollution.
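The abstract reports a single dependability value of 0.97 aggregated from five attribute scores but does not state the aggregation formula. One plausible scheme is a plain arithmetic mean of per-attribute scores normalized to [0, 1]; the attribute values below are illustrative assumptions, chosen only so the mean lands at the reported 0.97:

```python
def dependability(scores):
    """Aggregate per-attribute scores into one dependability value.

    An assumed aggregation: the arithmetic mean of the attribute
    scores, each required to lie in [0, 1].
    """
    if not scores:
        raise ValueError("at least one attribute score is required")
    for name, value in scores.items():
        if not 0.0 <= value <= 1.0:
            raise ValueError(f"{name} must lie in [0, 1]")
    return sum(scores.values()) / len(scores)

# Hypothetical attribute scores (not from the paper).
example = {
    "reliability": 0.98,
    "availability": 0.97,
    "integrity": 0.96,
    "maintainability": 0.97,
    "safety": 0.97,
}
print(round(dependability(example), 2))  # → 0.97
```

Weighted means or minimum-over-attributes are equally defensible aggregations; which one the authors used would need to be checked against the full paper.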
{"title":"Performance evaluation of IoT-based service system for monitoring nutritional deficiencies in plants","authors":"Heri Andrianto , Suhardi , Ahmad Faizal , Novianto Budi Kurniawan , Dimas Praja Purwa Aji","doi":"10.1016/j.inpa.2021.10.001","DOIUrl":"10.1016/j.inpa.2021.10.001","url":null,"abstract":"<div><p>This study aimed to develop and evaluate the performance of a service system platform based on the Internet of Things (IoT) for monitoring nutritional deficiencies in plants and providing fertilizer recommendations. There are two distinct differences between this work and previous ones; namely, this service system platform has been developed based on IoT using a system engineering approach and its performance has been evaluated using dependability. We have successfully developed and integrated a service system platform and chlorophyll meter that is based on IoT. We have also successfully tested the performance of the service system platform using the JMeter software. The dependability value measured from the five tested variables (reliability, availability, integrity, maintainability, and safety) showed a value of 0.97 which represents a very good level of system confidence in not failing to deliver services to users under normal operational conditions. 
From a future perspective, this platform can be used as an alternative service to monitor nutrient deficiencies in plants and provide fertilization recommendations to increase yields, reduce fertilizer costs, and prevent the use of excessive fertilizers, which can cause environmental pollution.</p></div>","PeriodicalId":53443,"journal":{"name":"Information Processing in Agriculture","volume":"10 1","pages":"Pages 52-70"},"PeriodicalIF":0.0,"publicationDate":"2023-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44658339","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}