Random forests of binary hierarchical classifiers for analysis of hyperspectral data
Pub Date: 2003-10-27 | DOI: 10.1109/WARSD.2003.1295213
M. Crawford, Jisoo Ham, Yangchi Chen, Joydeep Ghosh
Statistical classification of hyperspectral data is challenging because the input space is high-dimensional and highly correlated, while the labeled data available to characterize the class distributions are typically sparse. The resulting classifiers are often unstable and generalize poorly. A new approach is proposed that is based on the concept of random forests of classifiers and implemented within a multiclassifier system arranged as a binary hierarchy. The primary goal is improved generalization in the analysis of hyperspectral data, particularly when the quantity of training data is limited. The new classifier incorporates bagging of training samples and adaptive random subspace feature selection within the binary hierarchical classifier (BHC), such that the number of features selected at each node of the tree depends on the quantity of associated training data. Classification results from experiments on data acquired by the Hyperion sensor on the NASA EO-1 satellite over the Okavango Delta of Botswana are superior to those from our original best-basis BHC algorithm, a random subspace extension of the BHC, and a random forest implementation using the CART classifier.
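The bagging-plus-adaptive-subspace idea can be illustrated outside the full BHC framework. Below is a minimal sketch, not the authors' algorithm: the size rule, the LinearDiscriminantAnalysis base learner, and all parameter values are assumptions. Each ensemble member draws a bootstrap sample and a random band subset whose size grows with the amount of training data, and predictions are combined by majority vote.

    # Minimal sketch (assumptions throughout): bagging plus adaptive random
    # subspace selection; not the paper's binary hierarchical classifier.
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    def adaptive_subspace_size(n_samples, n_bands, ratio=0.2):
        # Use fewer bands when training data are scarce (illustrative rule).
        return int(np.clip(ratio * n_samples, 2, n_bands))

    def train_forest(X, y, n_members=20, seed=0):
        rng = np.random.default_rng(seed)
        n, d = X.shape
        members = []
        for _ in range(n_members):
            boot = rng.integers(0, n, size=n)              # bagging: bootstrap sample
            bands = rng.choice(d, size=adaptive_subspace_size(n, d), replace=False)
            clf = LinearDiscriminantAnalysis().fit(X[boot][:, bands], y[boot])
            members.append((bands, clf))
        return members

    def predict_forest(members, X):
        # Majority vote over members; labels are assumed to be integer-coded.
        votes = np.stack([clf.predict(X[:, bands]) for bands, clf in members])
        return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)

Storing the band subset with each member ensures test pixels are projected onto the same features that the member was trained on.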
{"title":"Random forests of binary hierarchical classifiers for analysis of hyperspectral data","authors":"M. Crawford, Jisoo Ham, Yangchi Chen, Joydeep Ghosh","doi":"10.1109/WARSD.2003.1295213","DOIUrl":"https://doi.org/10.1109/WARSD.2003.1295213","url":null,"abstract":"Statistical classification of hyperspectral data is challenging because the input space is high in dimension and correlated, but labeled information to characterize the class distributions is typically sparse. The resulting classifiers are often unstable and have poor generalization. A new approach that is based on the concept of random forests of classifiers and implemented within a multiclassifier system arranged as a binary hierarchy is proposed. The primary goal is to achieve improved generalization of the classifier in analysis of hyperspectral data, particularly when the quantity of training data is limited. The new classifier incorporates bagging of training samples and adaptive random subspace feature selection with the binary hierarchical classifier (BHC), such that the number of features that is selected at each node of the tree is dependent on the quantity of associated training data. Classification results from experiments on data acquired by the Hyperion sensor on the NASA EO-1 satellite over the Okavango Delta of Botswana are superior to those from our original best basis BHC algorithm, a random subspace extension of the BHC, and a random forest implementation using the CART classifier.","PeriodicalId":395735,"journal":{"name":"IEEE Workshop on Advances in Techniques for Analysis of Remotely Sensed Data, 2003","volume":"184 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-10-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123722429","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Classification of polarimetric synthetic aperture radar images using fuzzy clustering
Pub Date: 2003-10-27 | DOI: 10.1109/WARSD.2003.1295186
P. Kersten, J. Lee, T. Ainsworth, M. Grunes
Clustering is a well-known technique for classification of polarimetric synthetic aperture radar (POLSAR) images. Pixels are represented as complex covariance matrices, which demand dissimilarity measures that can capture the phase relationships between the polar components of the returns. Four dissimilarity measures are compared to judge their efficacy in separating complex covariances within the fuzzy clustering process. When these four measures are used to classify a POLSAR image, the measures based on the Wishart distribution outperform the standard metrics because they better represent the total information contained in the polarimetric data. The expectation-maximization (EM) algorithm is also applied to a mixture of complex Wishart distributions to classify the image. Its performance matches the fuzzy c-means (FCM) clustering results, yielding the tentative conclusion that the Wishart distribution model is more important than the clustering mechanism itself.
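For context, dissimilarity measures built on the Wishart distribution reduce, in their simplest form, to comparing a pixel (or sample) covariance against a cluster-centre covariance. The sketch below shows one commonly used Wishart-based distance; it is an illustration only and does not reproduce the four specific measures compared in the paper.

    import numpy as np

    def wishart_distance(Z, V):
        # One common Wishart-based dissimilarity between a pixel covariance Z
        # and a cluster centre V (both Hermitian positive definite):
        #   d(Z, V) = ln|V| + tr(V^-1 Z)
        _, logdet = np.linalg.slogdet(V)
        return float(logdet + np.trace(np.linalg.solve(V, Z)).real)

    # Toy 3x3 polarimetric covariance matrices (illustrative values only).
    rng = np.random.default_rng(1)
    A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
    Z = A @ A.conj().T + 3 * np.eye(3)     # Hermitian positive definite by construction
    V = np.eye(3, dtype=complex)
    print(wishart_distance(Z, V))

Within a fuzzy clustering loop, such a distance would simply replace the Euclidean term when computing memberships.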
{"title":"Classification of polarimetric synthetic aperture radar images using fuzzy clustering","authors":"P. Kersten, J. Lee, T. Ainsworth, M. Grunes","doi":"10.1109/WARSD.2003.1295186","DOIUrl":"https://doi.org/10.1109/WARSD.2003.1295186","url":null,"abstract":"Clustering is a well known technique for classification in polarimetric synthetic aperture radar (POLSAR) images. Pixels are represented as complex covariance matrices, which demand dissimilarity measures that can capture the phase relationships between the polar components of the returns. Four dissimilarity measures are compared to judge their efficacy to separate complex covariances within the fuzzy clustering process. When these four measures are used to classify, a POLSAR image, the measures that are based upon the Wishart distribution outperform the standard metrics because they better represent the total information contained in the polarimetric data. The Expectation Maximization (EM) algorithm is applied to a mixture of complex Wishart distributions to classify the image. Its performance matches the FCM clustering results yielding a tentative conclusion that the Wishart distribution model is more important than the clustering mechanism itself.","PeriodicalId":395735,"journal":{"name":"IEEE Workshop on Advances in Techniques for Analysis of Remotely Sensed Data, 2003","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-10-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122436699","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Operational segmentation and classification of SAR sea ice imagery
Pub Date: 2003-10-27 | DOI: 10.1109/WARSD.2003.1295204
David A Clausi, H. Deng
The Canadian Ice Service (CIS) is a government agency responsible for monitoring ice-infested regions under Canada's jurisdiction. Synthetic aperture radar (SAR) is the primary tool used for monitoring such vast, inaccessible regions. Ice maps of different regions are generated each day in support of navigation operations and environmental assessments. Currently, operators manually segment the SAR data on screen, relying primarily on tone and texture visual characteristics. Regions containing multiple ice types are identified; however, it is not feasible to produce a pixel-based segmentation due to time constraints. In this research, advanced methods for performing texture feature extraction, incorporating tonal features, and performing the segmentation are presented. Examples are presented of segmenting a SAR image that is difficult to segment manually and that requires the inclusion of both tone and texture features.
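As an illustration of combining texture with tone, the sketch below computes two grey-level co-occurrence features for an image window alongside its mean backscatter. Co-occurrence statistics are one common choice for SAR sea-ice texture; the specific features, window size, and quantization here are assumptions, not the operational CIS configuration.

    import numpy as np

    def glcm_features(window, levels=16, offset=(0, 1)):
        # Quantize the window, accumulate a grey-level co-occurrence matrix for
        # the given pixel offset, and derive two classic texture features.
        lo, hi = float(window.min()), float(window.max())
        q = np.clip(((window - lo) / max(hi - lo, 1e-9) * levels).astype(int), 0, levels - 1)
        dr, dc = offset
        P = np.zeros((levels, levels))
        rows, cols = q.shape
        for r in range(max(0, -dr), rows - max(0, dr)):
            for c in range(max(0, -dc), cols - max(0, dc)):
                P[q[r, c], q[r + dr, c + dc]] += 1
        P /= P.sum()
        i, j = np.indices(P.shape)
        contrast = float(np.sum(P * (i - j) ** 2))
        homogeneity = float(np.sum(P / (1.0 + (i - j) ** 2)))
        return contrast, homogeneity

    patch = np.random.default_rng(0).gamma(4.0, 1.0, size=(32, 32))  # speckle-like toy patch
    tone = float(patch.mean())                                       # tonal feature
    print(glcm_features(patch), tone)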
{"title":"Operational segmentation and classification of SAR sea ice imagery","authors":"David A Clausi, H. Deng","doi":"10.1109/WARSD.2003.1295204","DOIUrl":"https://doi.org/10.1109/WARSD.2003.1295204","url":null,"abstract":"The Canadian Ice Service (CIS) is a government agency responsible for monitoring ice-infested regions in Canada's jurisdiction. Synthetic aperture radar (SAR) is the primary tool used for monitoring such vast, inaccessible regions. Ice maps of different regions are generated each day in support of navigation operations and environmental assessments. Currently, operators digitally segment the SAR data manually using primarily tone and texture visual characteristics. Regions containing multiple ice types are identified, however, it is not feasible to produce a pixel-based segmentation due to time constraints. In this research, advanced methods for performing texture feature extraction, incorporating tonal features, and performing the segmentation are presented. Examples of the segmentation of a SAR image that is difficult to segment manually and that requires the inclusion of both tone and texture features are presented.","PeriodicalId":395735,"journal":{"name":"IEEE Workshop on Advances in Techniques for Analysis of Remotely Sensed Data, 2003","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-10-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115508480","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Methodology for hyperspectral band and classification model selection
Pub Date: 2003-10-27 | DOI: 10.1109/WARSD.2003.1295183
P. Groves, P. Bajcsy
Feature selection is one of the fundamental problems in nearly every application of statistical modeling, and hyperspectral data analysis is no exception. We propose a new methodology for combining unsupervised and supervised methods under classification accuracy and computational requirement constraints. It is designed to perform not only hyperspectral band (wavelength range) selection but also classification method selection. The procedure involves ranking bands based on information content and redundancy, and evaluating a varying number of the top-ranked bands. We term this technique Rank Ordered With Accuracy Selection (ROWAS). It provides a good tradeoff between feature space exploration and computational efficiency. To verify our methodology, we conducted experiments with a georeferenced hyperspectral image (acquired by an AVIRIS sensor) and categorical ground measurements.
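A rough sketch of the rank-then-evaluate pattern is shown below. The greedy variance-minus-correlation ranking and the GaussianNB evaluator are stand-ins chosen for brevity; they are assumptions, not the ROWAS criteria or the classifiers evaluated in the paper.

    import numpy as np
    from sklearn.naive_bayes import GaussianNB
    from sklearn.model_selection import cross_val_score

    def rank_bands(X, redundancy_weight=0.5):
        # Greedy ranking: prefer informative (high-variance) bands that are
        # weakly correlated with bands already selected.
        n_bands = X.shape[1]
        info = X.var(axis=0)
        info = info / info.max()
        corr = np.abs(np.corrcoef(X, rowvar=False))
        order = [int(np.argmax(info))]
        while len(order) < n_bands:
            remaining = [b for b in range(n_bands) if b not in order]
            scores = [info[b] - redundancy_weight * corr[b, order].max() for b in remaining]
            order.append(remaining[int(np.argmax(scores))])
        return order

    def evaluate_top_bands(X, y, order, counts=(5, 10, 20)):
        # Train and cross-validate a classifier on a growing number of
        # top-ranked bands; keep the smallest set that meets the accuracy target.
        return {k: cross_val_score(GaussianNB(), X[:, order[:k]], y, cv=3).mean() for k in counts}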
{"title":"Methodology for hyperspectral band and classification model selection","authors":"P. Groves, P. Bajcsy","doi":"10.1109/WARSD.2003.1295183","DOIUrl":"https://doi.org/10.1109/WARSD.2003.1295183","url":null,"abstract":"Feature selection is one of the fundamental problems in nearly every application of statistical modeling, and hyperspectral data analysis is no exception. We propose a new methodology for combining unsupervised and supervised methods under classification accuracy and computational requirement constraints. It is designed to perform not only hyperspectral band (wavelength range) selection but also classification method selection. The procedure involves ranking hands based on information content and redundancy and evaluating a varying number of the top ranked bands. We term this technique Rank Ordered With Accuracy Selection (ROWAS). It provides a good tradeoff between feature space exploration and computational efficiency. To verify our methodology, we conducted experiments with a georeferenced hyperspectral image (acquired by an AVIRIS sensor) and categorical ground measurements.","PeriodicalId":395735,"journal":{"name":"IEEE Workshop on Advances in Techniques for Analysis of Remotely Sensed Data, 2003","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-10-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125390182","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Machine learning approaches to multisource geospatial data classification: application to CRP mapping in Texas County, Oklahoma
Pub Date: 2003-10-27 | DOI: 10.1109/WARSD.2003.1295194
X. Song, F. Guoliang, M. Rao
We develop an Automated Feature Information Retrieval System (AFIRS) for accurate classification of multisource geospatial data, which involves multispectral Landsat imagery, ancillary geographic information system (GIS) data, and other derived features. Two machine learning approaches, the decision tree classifier (DTC) and the support vector machine (SVM), are implemented as multisource geospatial data classifiers in the AFIRS. Specifically, we apply the AFIRS to the mapping of United States Department of Agriculture (USDA) Conservation Reserve Program (CRP) tracts in Texas County, Oklahoma. CRP is a nationwide program, and USDA recently announced payments of nearly $1.6 billion for new CRP enrollments. Accurate CRP maps are therefore essential for effective and efficient management and evaluation of the program. However, most existing CRP maps are inaccurate, and little work has been done to improve their accuracy. The proposed AFIRS is capable of handling the complex CRP mapping problem with high accuracy when limited training samples are available. Simulation results show that accuracy improvements of 5-10% can be obtained by incorporating GIS ancillary data and other derived features in addition to multispectral imagery. This work validates the applicability of machine learning approaches to complex real-world remote sensing applications.
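The core data-handling step, stacking spectral bands with ancillary GIS layers into a single per-pixel feature vector and training the two classifiers, can be sketched as below. The synthetic arrays, layer choices, and hyperparameters are assumptions for illustration, not the AFIRS configuration.

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.svm import SVC
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(0)
    spectral = rng.normal(size=(500, 6))      # toy stand-in for 6 Landsat bands
    ancillary = rng.normal(size=(500, 2))     # toy stand-in for GIS-derived layers
    X = np.hstack([spectral, ancillary])      # simple feature stacking of multisource data
    y = rng.integers(0, 2, size=500)          # synthetic CRP / non-CRP labels

    dtc = DecisionTreeClassifier(max_depth=8).fit(X, y)
    svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0)).fit(X, y)
    print(dtc.score(X, y), svm.score(X, y))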
{"title":"Machine learning approaches to multisource geospatial data classification: application to CRP mapping in Texas County, Oklahoma","authors":"X. Song, F. Guoliang, M. Rao","doi":"10.1109/WARSD.2003.1295194","DOIUrl":"https://doi.org/10.1109/WARSD.2003.1295194","url":null,"abstract":"We develop an Automated Feature Information Retrieval System (AFIRS) for accurate classification of multisource geospatial data, which involves multispectral Landsat imagery, ancillary geographic information system (GIS) data and other derived features. Two machine learning approaches, i.e., decision tree classifier (DTC) and support vector machine (SVM), are implemented as multisource geospatial data classifiers in the AFIRS. Specifically, we apply the AFIRS to the mapping of United States Department of Agriculture (USDA)'s Conservation Reserve Program (CRP) tracts in Texas County, Oklahoma. CRP is a nationwide program, and recently USDA announced payments of nearly $1.6 billion for new CRP enrollments. It is imperative to obtain accurate CRP maps for effective and efficient management and evaluation of the CRP program. However, most existing CRP maps are inaccurate and little work has been done to improve their accuracy. The proposed AFIRS is capable of handling the complex CRP mapping problem with high accuracy when limited training samples are available. Simulation results show that 5-10% improvements can be obtained by incorporating GIS ancillary data and other derived features in addition to multispectral imagery. This work validates the applicability of machine learning approaches to the complex real-world remote sensing applications.","PeriodicalId":395735,"journal":{"name":"IEEE Workshop on Advances in Techniques for Analysis of Remotely Sensed Data, 2003","volume":"259 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-10-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121408841","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Automatic registration of electro-optical and SAR images
Pub Date: 2003-10-27 | DOI: 10.1109/WARSD.2003.1295196
G. Lampropoulos, J. Chan, J. Secker, Y. Li, A. Jouan
This paper presents a new and robust method for multisensor image registration from dissimilar sources, demonstrated as a proof of concept. The method is based on multiple transformations of two quite dissimilar images into new domains, in which local or global similarities are extracted.
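As a generic illustration of the transform-then-match idea (not the transformations or matching strategy used in the paper), the sketch below maps both images into a gradient-magnitude domain and searches for the translation that maximizes a global similarity score; every choice here is an assumption.

    import numpy as np

    def edge_magnitude(img):
        # Map an image into a structural domain where EO and SAR content is
        # more comparable than in raw intensity.
        gy, gx = np.gradient(img.astype(float))
        return np.hypot(gx, gy)

    def register_translation(ref, mov, max_shift=10):
        # Brute-force search for the integer shift maximizing the correlation
        # between the transformed images (global similarity, translation only).
        a, b = edge_magnitude(ref), edge_magnitude(mov)
        best_score, best_shift = -np.inf, (0, 0)
        for dy in range(-max_shift, max_shift + 1):
            for dx in range(-max_shift, max_shift + 1):
                shifted = np.roll(np.roll(b, dy, axis=0), dx, axis=1)
                score = np.corrcoef(a.ravel(), shifted.ravel())[0, 1]
                if score > best_score:
                    best_score, best_shift = score, (dy, dx)
        return best_shift, best_score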
{"title":"Automatic registration of electro-optical and SAR images","authors":"G. Lampropoulos, J. Chan, J. Secker, Y. Li, A. Jouan","doi":"10.1109/WARSD.2003.1295196","DOIUrl":"https://doi.org/10.1109/WARSD.2003.1295196","url":null,"abstract":"Presents a new and robust method to perform multisensor image registration from dissimilar sources. It is a proof of concept demonstration. It is based on multiple transformations of two quite dissimilar images into new domains, where local or global similarities are extracted.","PeriodicalId":395735,"journal":{"name":"IEEE Workshop on Advances in Techniques for Analysis of Remotely Sensed Data, 2003","volume":"149 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-10-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121597303","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The spectral similarity scale and its application to the classification of hyperspectral remote sensing data
Pub Date: 2003-10-27 | DOI: 10.1109/WARSD.2003.1295179
James Norman Sweet
Hyperspectral images have considerable information content and are becoming common. Analysis tools must keep up with the changing demands and opportunities posed by the new datasets. Many spectral image analysis algorithms depend on a scalar measure of spectral similarity, or 'spectral distance', to estimate how closely two spectra resemble each other. Unfortunately, traditional spectral similarity measures are ambiguous in their distinction of similarity: they can declare a pair of spectra to be nearly identical mathematically, yet visual inspection shows them to be spectroscopically dissimilar, because they do not separately quantify both magnitude and direction differences. Three common algorithms used to measure the distance between remotely sensed reflectance spectra are Euclidean distance, the correlation coefficient, and spectral angle. Euclidean distance primarily measures overall brightness differences but does not respond to the correlation (or lack thereof) between two spectra. The correlation coefficient is very responsive to differences in direction (i.e., spectral shape) but does not respond to brightness differences caused by band-independent gain or offset factors. Spectral angle is closely related mathematically to the correlation coefficient and is primarily responsive to differences in spectral shape; however, it does respond to brightness differences caused by a uniform offset, which confounds the interpretation of the spectral angle value. This paper proposes the spectral similarity scale (SSS) as an algorithm that objectively quantifies differences between reflectance spectra in both the magnitude and direction dimensions (i.e., brightness and spectral shape). The SSS is therefore a fundamental improvement in the description of distance or similarity between two reflectance spectra. In addition, the paper demonstrates the use of the SSS through ClaSSS, an unsupervised classification algorithm based on the SSS.
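The three traditional measures and a combined similarity value can be compared directly in a few lines. The formula used for the combined value below, sqrt(d_e^2 + (1 - r^2)^2) with a band-averaged Euclidean term, is the form commonly quoted for the SSS and should be treated as an assumption here; consult the paper for the exact normalization.

    import numpy as np

    def spectral_measures(x, y):
        # Euclidean distance (magnitude), correlation and spectral angle
        # (shape), and a combined similarity value mixing both dimensions.
        x, y = np.asarray(x, float), np.asarray(y, float)
        d_e = np.sqrt(np.mean((x - y) ** 2))
        r = np.corrcoef(x, y)[0, 1]
        cos_theta = np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))
        angle = np.arccos(np.clip(cos_theta, -1.0, 1.0))
        sss = np.sqrt(d_e ** 2 + (1.0 - r ** 2) ** 2)   # assumed SSS form
        return d_e, r, angle, sss

    # Two toy reflectance spectra with the same shape but different brightness:
    # correlation is ~1, while d_e (and hence the combined value) flags the gap.
    s1 = np.linspace(0.1, 0.5, 50)
    s2 = 1.8 * s1
    print(spectral_measures(s1, s2))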
{"title":"The spectral similarity scale and its application to the classification of hyperspectral remote sensing data","authors":"James Norman Sweet","doi":"10.1109/WARSD.2003.1295179","DOIUrl":"https://doi.org/10.1109/WARSD.2003.1295179","url":null,"abstract":"Hyperspectral images have considerable information content and are becoming common. Analysis tools must keep up with the changing demands and opportunities posed by the new datasets. Many spectral image analysis algorithms depend on a scalar measure of spectral similarity or 'spectral distance' to provide an estimate of how closely two spectra resemble each other. Unfortunately, traditional spectral similarity measures are ambiguous in their distinction of similarity. Traditional metrics can define a pair of spectra to be nearly identical mathematically yet visual inspection shows them to be spectroscopically dissimilar. These algorithms do not separately quantify both magnitude and direction differences. Three common algorithms used to measure the distance between remotely sensed reflectance spectra are Euclidean distance, correlation coefficient, and spectral angle. Euclidean distance primarily measures overall brightness differences but does not respond to the correlation (or lack thereof) between two spectra. The correlation coefficient is very responsive to differences in direction (i.e. spectral shape) but does not respond to brightness differences due to band-independent gain or offset factors. Spectral angle is closely related mathematically to the correlation coefficient and is primarily responsive to differences in spectral shape. However, spectral angle does respond to brightness differences due to a uniform offset, which confounds the interpretation of the spectral angle value. This paper proposes the spectral similarity scale (SSS) as an algorithm that objectively quantifies differences between reflectance spectra in both magnitude and direction dimensions (i.e. brightness and spectral shape). Therefore, the SSS is a fundamental improvement in the description of distance or similarity between two reflectance spectra. In addition, it demonstrates the use of the SSS by discussing an unsupervised classification algorithm based on the SSS named ClaSSS.","PeriodicalId":395735,"journal":{"name":"IEEE Workshop on Advances in Techniques for Analysis of Remotely Sensed Data, 2003","volume":"133 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-10-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116079974","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Optimum data transmission and imaging method for high resolution imaging from Earth observation satellite
Pub Date: 2003-10-27 | DOI: 10.1109/WARSD.2003.1295188
R. Nagura
Recently, enormous progress has been made in remote sensing imaging, improving both ground pixel resolution and spectral bandwidth resolution. These imaging advances are essential to the progress of remote sensing from space, but they also drive data rates from satellites sharply upward: typical net data rates now exceed 1 Gbps, not including synchronization or error-correcting codes. High-performance data representation is therefore indispensable, especially in high-resolution observation systems. This paper reports an optimum transmission system using data compression and error-correcting codes under these circumstances. Many data compression techniques exist, but Earth observation requires compressed images with high accuracy and a high signal-to-noise ratio; this paper mainly considers the JPEG2000 compression method. In the transmission of compressed data, bit errors are critical and can fatally damage image quality, so error correction is indispensable for high-quality data transmission. The paper mainly discusses error correction using Turbo codes and shows how errors can be eliminated from the received compressed-image data. Finally, the paper proposes a time-integration method to improve the signal-to-noise ratio of the original images.
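The downlink budget the paper is concerned with comes down to simple arithmetic: raw sensor rate, divided by the compression ratio, divided by the channel code rate. The numbers below are assumptions chosen only to show the shape of the calculation, not the paper's system parameters.

    # Illustrative downlink arithmetic only; all figures are assumptions.
    pixels_per_line = 12_000        # swath width in pixels (assumed)
    line_rate_hz = 7_000            # image lines per second (assumed)
    bands = 4
    bits_per_sample = 12

    raw_rate_bps = pixels_per_line * line_rate_hz * bands * bits_per_sample
    print(f"raw sensor rate: {raw_rate_bps / 1e9:.2f} Gbps")            # about 4.03 Gbps

    compression_ratio = 4.0         # e.g. a conservative JPEG2000 setting (assumed)
    code_rate = 1 / 3               # Turbo code rate (assumed); coding multiplies the bit count by 1/code_rate
    channel_rate_bps = raw_rate_bps / compression_ratio / code_rate
    print(f"channel rate after compression and coding: {channel_rate_bps / 1e9:.2f} Gbps")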
{"title":"Optimum data transmission and imaging method for high resolution imaging from Earth observation satellite","authors":"R. Nagura","doi":"10.1109/WARSD.2003.1295188","DOIUrl":"https://doi.org/10.1109/WARSD.2003.1295188","url":null,"abstract":"Recently, an enormous amount of progress has been made in remote sensing imaging, providing images in the ground pixel size resolution and spectral band width resolution. These imaging methods are very important and indispensable for the progress of remote sensing from space. With this progress, the data rates from satellites are extensively increasing. The typical value of the net data rate exceeds 1 Gbps, not including any synchronous code nor error correcting code. Therefore, the high performance data expression should be indispensable especially in the high resolution observation system. This paper reports the optimum transmission system using the data compression and the error correction code under the above circumstances. There are many kinds of data compression techniques, however we need accurate and high signal to noise ratio of compressed images for Earth observation. This paper mainly considers the JPEG2000 compression method. In the transmission of compressed data, the effects of bit error would be very important and sometimes fatally damages the image quality. Accordingly, error correction would be indispensable for high quality data transmission. The paper mainly discusses error correction using the Turbo code, and shows the effect of error disappearance in the receiving data of the compressed image. Finally, the paper proposes the time integration method and improvement of the signal to noise ratio of original images.","PeriodicalId":395735,"journal":{"name":"IEEE Workshop on Advances in Techniques for Analysis of Remotely Sensed Data, 2003","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-10-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134292271","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Learning Bayesian classifiers for a visual grammar
Pub Date: 2003-10-27 | DOI: 10.1109/WARSD.2003.1295195
S. Aksoy, K. Koperski, C. Tusk, G. Marchisio, J. Tilton
A challenging problem in image content extraction and classification is building a system that automatically learns high-level semantic interpretations of images. We describe a Bayesian framework for a visual grammar that aims to reduce the gap between low-level features and user semantics. Our approach includes learning prototypes of regions and their spatial relationships for scene classification. First, naive Bayes classifiers perform automatic fusion of features and learn models for region segmentation and classification using positive and negative examples for user-defined semantic land cover labels. Then, the system automatically learns how to distinguish the spatial relationships of these regions from training data and builds visual grammar models. Experiments using LANDSAT scenes show that the visual grammar enables creation of higher level classes that cannot be modeled by individual pixels or regions. Furthermore, learning of the classifiers requires only a few training examples.
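To make the two learning stages concrete, the sketch below trains one naive Bayes model on region-level features and a second on coded spatial relations between regions. The synthetic features, label sets, and the CategoricalNB choice for the relation model are assumptions, not the paper's visual-grammar implementation.

    import numpy as np
    from sklearn.naive_bayes import GaussianNB, CategoricalNB

    rng = np.random.default_rng(0)

    # Stage 1: region classification from fused low-level features
    # (toy stand-ins for mean spectral values plus simple shape descriptors).
    region_features = np.hstack([rng.normal(size=(200, 4)), rng.normal(size=(200, 2))])
    region_labels = rng.integers(0, 3, size=200)         # e.g. water / field / city
    region_model = GaussianNB().fit(region_features, region_labels)

    # Stage 2: scene classification from coded spatial relationships between
    # region pairs (e.g. 'borders', 'near', 'surrounds' encoded as integers).
    relation_codes = rng.integers(0, 4, size=(50, 3))
    scene_labels = rng.integers(0, 2, size=50)
    scene_model = CategoricalNB().fit(relation_codes, scene_labels)

    print(region_model.predict(region_features[:5]), scene_model.predict(relation_codes[:5]))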
{"title":"Learning Bayesian classifiers for a visual grammar","authors":"S. Aksoy, K. Koperski, C. Tusk, G. Marchisio, J. Tilton","doi":"10.1109/WARSD.2003.1295195","DOIUrl":"https://doi.org/10.1109/WARSD.2003.1295195","url":null,"abstract":"A challenging problem in image content extraction and classification is building a system that automatically learns high-level semantic interpretations of images. We describe a Bayesian framework for a visual grammar that aims to reduce the gap between low-level features and user semantics. Our approach includes learning prototypes of regions and their spatial relationships for scene classification. First, naive Bayes classifiers perform automatic fusion of features and learn models for region segmentation and classification using positive and negative examples for user-defined semantic land cover labels. Then, the system automatically learns how to distinguish the spatial relationships of these regions from training data and builds visual grammar models. Experiments using LANDSAT scenes show that the visual grammar enables creation of higher level classes that cannot be modeled by individual pixels or regions. Furthermore, learning of the classifiers requires only a few training examples.","PeriodicalId":395735,"journal":{"name":"IEEE Workshop on Advances in Techniques for Analysis of Remotely Sensed Data, 2003","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-10-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129089348","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cluster-space classification: a fast k-nearest neighbour classification for remote sensing hyperspectral data
Pub Date: 2003-10-27 | DOI: 10.1109/WARSD.2003.1295222
X. Jia, J. Richards
In this paper, a fast k-nearest neighbour (k-NN) algorithm is presented that combines k-NN with a cluster-space data representation. The algorithm is easier to implement, and classification time can be significantly reduced. Results from tests carried out with a Hyperion data set demonstrate that the simplification has little effect on classification performance, yet efficiency is greatly improved.
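One way to picture the speed-up (a rough stand-in, not necessarily the authors' cluster-space formulation): summarize the training set as cluster centroids with per-cluster class histograms, then classify each pixel with a single nearest-centroid lookup instead of a scan over all labelled samples.

    import numpy as np
    from sklearn.cluster import KMeans

    def fit_cluster_space(X, y, n_clusters=50, seed=0):
        # Summarize the labelled data: centroids plus per-cluster class counts.
        km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit(X)
        hist = np.zeros((n_clusters, int(y.max()) + 1))
        for c, label in zip(km.labels_, y):
            hist[c, label] += 1
        return km, hist

    def predict_cluster_space(km, hist, X):
        # Nearest-centroid lookup followed by a per-cluster majority vote;
        # far fewer distance computations than exhaustive k-NN.
        return hist[km.predict(X)].argmax(axis=1)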
{"title":"Cluster-space classification: a fast k-nearest neighbour classification for remote sensing hyperspectral data","authors":"X. Jia, J. Richards","doi":"10.1109/WARSD.2003.1295222","DOIUrl":"https://doi.org/10.1109/WARSD.2003.1295222","url":null,"abstract":"In this paper a fast k-nearest neighbour (k-NN) algorithm is presented which combines k-NN with a cluster-space data representation. Implementation of the algorithm is easier and classification time can be significantly reduced. Results from tests carried out with a Hyperion data set demonstrate that the simplification has little effect on classification performance and yet efficiency is greatly improved.","PeriodicalId":395735,"journal":{"name":"IEEE Workshop on Advances in Techniques for Analysis of Remotely Sensed Data, 2003","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-10-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130523951","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}