A Multispectral Pest-Detection Algorithm for Precision Agriculture

Syed Umar Rasheed, Wasif Muhammad, Irfan Qaiser, M. J. Irshad

Engineering Proceedings, published 2021-12-29. DOI: 10.3390/engproc2021012046
Abstract
Invertebrates are abundant in horticulture and farming environments and can be detrimental. Early pest detection, as part of an integrated pest-management system combining physical, biological, and prophylactic methods, has great potential to improve crop yield. Computer vision techniques with multispectral images can detect and classify pests under dynamic environmental conditions, such as varying sunlight, partial occlusion, and low contrast. Various state-of-the-art deep learning approaches have been proposed, but these methods have major limitations. First, labelled images are required for the supervised training of deep networks, which is laborious. Second, a large in-situ database covering varied environmental conditions is not available for deep learning and is difficult to build for harmful bioaggressors. In this paper, we propose a machine-vision-based multispectral pest-detection algorithm that does not require any supervised network training. Multispectral images serve as input to the proposed pest-detection algorithm; each image provides comprehensive information about textural and morphological features as well as visible attributes, i.e., the size, shape, orientation, color, and wing pattern of each insect. Feature identification is performed with the SURF (Speeded-Up Robust Features) algorithm, and robust feature extraction is accomplished by least median of squares regression (LMedS). After affine transformation, features of the RGB and near-infrared (NIR) images are fused onto the ultraviolet (UV) coordinate frame. The mean type I, type II, and total identification errors compare favorably with those of state-of-the-art methods. With a UV weight of 6.672%, the type I, type II, and total mean errors were 1.62, 40.27, and 3.26, respectively.
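The registration step described above (SURF feature matching, LMedS-based robust estimation, and affine warping onto the UV coordinate frame) can be sketched with off-the-shelf tools. The following is a minimal illustration using OpenCV, not the authors' implementation; the file names, Hessian threshold, and ratio-test constant are illustrative assumptions, and SURF itself requires the opencv-contrib build.

```python
# Sketch of SURF matching + LMedS affine estimation + warping onto the UV
# frame, assuming grayscale NIR and UV images on disk; this is an
# illustrative reconstruction, not the paper's actual code.
import cv2
import numpy as np

nir = cv2.imread("nir.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file names
uv = cv2.imread("uv.png", cv2.IMREAD_GRAYSCALE)

# SURF lives in the opencv-contrib xfeatures2d module (patented algorithm,
# disabled in some builds); the Hessian threshold here is an assumption.
surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
kp_nir, des_nir = surf.detectAndCompute(nir, None)
kp_uv, des_uv = surf.detectAndCompute(uv, None)

# Match descriptors and keep pairs passing Lowe's ratio test.
matcher = cv2.BFMatcher(cv2.NORM_L2)
matches = matcher.knnMatch(des_nir, des_uv, k=2)
good = [m for m, n in matches if m.distance < 0.7 * n.distance]

src = np.float32([kp_nir[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp_uv[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

# Robust affine estimation with least median of squares (LMedS).
M, inliers = cv2.estimateAffine2D(src, dst, method=cv2.LMEDS)

# Warp the NIR image onto the UV coordinate frame, ready for fusion.
h, w = uv.shape
nir_on_uv = cv2.warpAffine(nir, M, (w, h))
```

The same matching-and-warping step would be repeated for the RGB channel before the weighted fusion with the UV image; the 6.672% UV weight reported in the abstract would enter at that fusion stage.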