Pub Date: 2003-10-15 | DOI: 10.1109/AIPR.2003.1284255
Hongtao Du, H. Qi, Xiaoling Wang, R. Ramanath, W. Snyder
Although hyperspectral images provide abundant information about objects, their high dimensionality also substantially increases the computational burden. Dimensionality reduction offers one approach to Hyperspectral Image (HSI) analysis. Currently, there are two classes of methods for reducing dimensionality: band selection and feature extraction. In this paper, we present a band selection method based on Independent Component Analysis (ICA). Instead of transforming the original hyperspectral images, this method evaluates the weight matrix to observe how each band contributes to the ICA unmixing procedure. It compares the average absolute weight coefficients of individual spectral bands and selects the bands that carry more information. As a significant benefit, ICA-based band selection retains most physical features of the spectral profiles given only the observations of hyperspectral images. We compare this method with the ICA transformation and the Principal Component Analysis (PCA) transformation in terms of classification accuracy. The experimental results show that ICA-based band selection is more effective for dimensionality reduction in HSI analysis.
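The band-ranking step described in the abstract can be sketched in a few lines. This is a hypothetical illustration, not the authors' code: it assumes an ICA unmixing matrix `W` with one row per independent component and one column per spectral band, and ranks bands by the average absolute weight they contribute across components.

```python
def rank_bands(W, k):
    """Return indices of the k bands with the largest mean |weight|
    across the rows (components) of an ICA unmixing matrix W."""
    n_components = len(W)
    n_bands = len(W[0])
    scores = []
    for b in range(n_bands):
        avg_abs = sum(abs(row[b]) for row in W) / n_components
        scores.append((avg_abs, b))
    scores.sort(reverse=True)            # highest average weight first
    return [b for _, b in scores[:k]]

# Toy 3-component x 4-band unmixing matrix:
W = [[0.9, 0.1, 0.5, 0.0],
     [0.8, 0.2, 0.4, 0.1],
     [0.7, 0.0, 0.6, 0.1]]
print(rank_bands(W, 2))  # [0, 2] -- bands 0 and 2 dominate the unmixing
```

In practice `W` would come from an ICA implementation (e.g. FastICA) run on the flattened band vectors; the selected band indices then index directly into the original cube, which is why the physical spectral profiles are preserved.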
Title: Band selection using independent component analysis for hyperspectral image processing. In: 32nd Applied Imagery Pattern Recognition Workshop, 2003, Proceedings.
Pub Date: 2003-10-15 | DOI: 10.1109/AIPR.2003.1284248
G. Beach, C. Cohen, G. Moody, Martha Henry
To retain its technological superiority, the U.S. Army plans for the needs of future warfare. Future Combat Systems (FCS) is a major effort designed to meet this need, and it includes multiple automated-fire weapons. On current systems, a human typically enters information about each projectile loaded. This is a slow process that places the soldier and the weapon in danger. Cybernet (through funding by TACOM-ARDEC) has created a vision system that leverages multiple simple and mature image processing techniques to recognize the projectile type as it is loaded into the system's magazine. The system uses a combination of shape detection, color detection, and character identification, along with knowledge of the projectile (such as its CAD model, text location, and coloring) to identify the projectile. The system processes the data in real time, allowing the soldier to load projectiles as quickly as possible, and has been designed around a modular recognition framework.
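A modular framework that combines shape, color, and character cues might fuse per-module confidences as sketched below. This is purely illustrative — the module names, projectile types, and product-of-confidences rule are assumptions, not details from the paper.

```python
def fuse_cues(cue_scores):
    """Combine per-module confidence scores into one projectile-type decision.

    cue_scores: {module_name: {projectile_type: confidence in [0, 1]}}
    Returns the type whose confidences across all modules have the
    highest product (a simple independence-style fusion rule).
    """
    types = next(iter(cue_scores.values())).keys()
    best, best_score = None, -1.0
    for t in types:
        score = 1.0
        for module in cue_scores.values():
            score *= module.get(t, 0.0)   # a module that rules a type out zeroes it
        if score > best_score:
            best, best_score = t, score
    return best

# Hypothetical confidences from three recognition modules:
scores = {
    "shape": {"HE": 0.9, "smoke": 0.6},
    "color": {"HE": 0.8, "smoke": 0.7},
    "text":  {"HE": 0.95, "smoke": 0.1},
}
print(fuse_cues(scores))  # HE
```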
Title: Projectile identification system.
Pub Date: 2003-10-15 | DOI: 10.1109/AIPR.2003.1284256
P. Blake, Terry W. Brown
Simulated imagery is a useful adjunct to actual imagery collected from a sensor platform. Simulation allows control of multiple parameters and combinations of parameters that might otherwise be difficult to capture in an actual measurement, leading to a fuller understanding of processes and phenomenology under consideration. However, the complexity that exists in actual, measured imagery can be difficult to capture in simulation. Such complexity, coupled with the other natural ambiguities of measured data, makes it difficult to compare results achieved from algorithms applied to simulated imagery with algorithmic results achieved with actual data. We demonstrate the use of Sequential Quantitative Performance Assessment (SQPA) as a means of fusing results from simulated and actual imagery.
Title: Quantitative fusion of performance results from actual and simulated image data.
Pub Date: 2003-10-15 | DOI: 10.1109/AIPR.2003.1284250
R. Bonneau
Conventional phased arrays operate on narrow-bandwidth principles to achieve resolution when imaging buildings and other objects of interest. Unfortunately, such narrow-bandwidth methods do not allow sufficient resolution to reconstruct objects of interest in three dimensions at low frequencies and with small apertures. We propose a method that is computationally efficient and allows dynamic use of spectrum to achieve high-resolution three-dimensional reconstruction of objects from small or distributed apertures. This method also allows available spectrum bands to be used on a non-interference basis.
Title: 3-dimensional object reconstruction from frequency diverse RF systems.
Pub Date: 2003-10-15 | DOI: 10.1109/AIPR.2003.1284253
N. Bose
Due to limitations of hardware cost, size, and fabrication complexity, imaging systems such as CCD detector arrays and digital cameras often provide only multiple low-resolution (LR) degraded images. However, a high-resolution (HR) image is indispensable in many applications, including health diagnosis and monitoring, military surveillance, and terrain mapping by remote sensing. Other intriguing possibilities include substituting expensive high-resolution instruments such as scanning electron microscopes with their cruder, cheaper counterparts and then applying technical methods to increase the resolution to that obtainable with much more costly equipment. This paper presents a comparison of the popular approaches to attaining superresolution following image acquisition.
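One of the simplest superresolution approaches the survey family includes, shift-and-add, can be sketched in 1D. This sketch is an assumption-laden illustration (not from the paper): it assumes the LR frames are decimated copies of the HR signal with known integer subpixel offsets, so interleaving them onto the HR grid recovers the signal.

```python
def shift_and_add(frames, offsets, factor):
    """Fuse subpixel-shifted LR frames onto an HR grid by averaging
    every LR sample into the HR position it came from."""
    n_hr = len(frames[0]) * factor
    total = [0.0] * n_hr
    count = [0] * n_hr
    for frame, off in zip(frames, offsets):
        for i, v in enumerate(frame):
            pos = i * factor + off      # HR position of this LR sample
            total[pos] += v
            count[pos] += 1
    # Average where samples landed; gaps (count 0) stay at 0 here,
    # though a real method would interpolate them.
    return [t / c if c else 0.0 for t, c in zip(total, count)]

hr_true = [1, 2, 3, 4, 5, 6]
lr0 = hr_true[0::2]   # LR frame sampled at offset 0 -> [1, 3, 5]
lr1 = hr_true[1::2]   # LR frame sampled at offset 1 -> [2, 4, 6]
print(shift_and_add([lr0, lr1], [0, 1], 2))  # [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
```

Real superresolution additionally has to estimate the shifts, handle non-integer offsets, and deconvolve sensor blur, which is where the approaches compared in the paper differ.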
Title: Superresolution from image sequence.
Pub Date: 2003-10-15 | DOI: 10.1109/AIPR.2003.1284281
C. Dima, N. Vandapel, M. Hebert
This paper describes an approach that uses several levels of data fusion in the domain of autonomous off-road navigation. We focus on outdoor obstacle detection and present techniques that leverage data fusion and machine learning to increase the reliability of obstacle detection systems. We combine color and IR imagery with range information from a laser range finder. We show that, in addition to fusing data at the pixel level, performing high-level classifier fusion is beneficial in our domain. Our general approach is to use machine learning techniques to automatically derive effective models of the classes of interest (for example, obstacle and non-obstacle). We train classifiers on different subsets of the features extracted from our sensor suite and show how different classifier fusion schemes can be applied to obtain a multiple-classifier system that is more robust than any of the input classifiers. We present experimental results obtained on data collected with both the Experimental Unmanned Vehicle (XUV) and a CMU-developed robotic vehicle.
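The simplest classifier fusion scheme of the kind the abstract refers to is a majority vote over the labels emitted by the individual classifiers. A minimal sketch (illustrative only — the paper evaluates several fusion schemes, not necessarily this one):

```python
from collections import Counter

def majority_vote(predictions):
    """Fuse one sample's labels from several classifiers by majority vote.
    Ties resolve to the label seen first in the input."""
    return Counter(predictions).most_common(1)[0][0]

# Three classifiers (e.g. color-, IR-, and range-based) vote on a patch:
print(majority_vote(["obstacle", "obstacle", "clear"]))  # obstacle
```

Training each classifier on a different sensor-feature subset, as the paper does, is what makes the combined vote more robust than any single input classifier.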
Title: Sensor and classifier fusion for outdoor obstacle detection: an application of data fusion to autonomous off-road navigation.
Pub Date: 2003-10-15 | DOI: 10.1109/AIPR.2003.1284259
A. Rebmann, J. Butman
Heterogeneity of gray matter signal intensity can be demonstrated on some MR sequences, particularly FLAIR. Quantifying this heterogeneity is of interest, as it may distinguish among different cortical areas. Gray matter segmentation fails on FLAIR data due to the overlap of gray and white matter signal intensity. This overlap also compromises region-of-interest-based approaches. Although volume rendering can visualize some of these differences, it is nonquantitative, and the averaging of gray and white matter cannot be avoided. To overcome these obstacles, we obtained T1-weighted data in addition to FLAIR data. T1-weighted data provides strong gray/white contrast, allowing a cortical surface to be extracted. Volume-based registration of the FLAIR data set to the T1 data allows FLAIR signal intensity to be mapped onto the surface generated from the T1 dataset. This allows regional FLAIR signal intensity differences to be visualized and compared across subjects.
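The mapping step — looking up a registered FLAIR intensity at each vertex of the T1-derived surface — can be sketched as a volume lookup. This is a hypothetical simplification: it uses nearest-voxel sampling and assumes the vertices are already in the FLAIR volume's voxel coordinates, whereas a real pipeline would apply the registration transform and interpolate.

```python
def sample_surface_intensity(volume, vertices):
    """Nearest-voxel lookup of a registered intensity volume at each
    surface vertex; volume is indexed [z][y][x]."""
    samples = []
    for x, y, z in vertices:
        i, j, k = round(x), round(y), round(z)
        samples.append(volume[k][j][i])
    return samples

# 2x2x2 toy volume with intensities 0..7:
vol = [[[0, 1], [2, 3]],
       [[4, 5], [6, 7]]]
print(sample_surface_intensity(vol, [(0.1, 0.0, 0.0), (0.9, 1.0, 1.0)]))  # [0, 7]
```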
Title: Heterogeneity of MR signal intensity mapped onto brain surface models.
Pub Date: 2003-10-15 | DOI: 10.1109/AIPR.2003.1284265
P. Conrad, Mike Foedisch
We present a comparison of two methods for color-based road segmentation. The first was implemented using a neural network, while the second approach is based on support vector machines. A large number of training images were used, covering varying road conditions, including roads with snow, dirt or gravel surfaces, and asphalt. We experimented with grouping the training images by road condition and generating a separate model for each group; the system would automatically select the appropriate model for each novel image. Those results were compared with creating a single model from all images. In another set of experiments, we added the image coordinates of each point as an additional feature in the models. Finally, we compared the segmentation results and the efficiency of neural networks and support vector machines for each combination of feature sets and image groups.
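The feature construction the abstract describes — per-pixel color values, optionally augmented with the pixel's image coordinates — can be sketched as below. The normalization of the coordinates is an assumption for illustration; the paper does not specify its feature encoding.

```python
def pixel_features(image, use_coords=True):
    """Build per-pixel feature vectors [r, g, b], optionally appending
    the pixel's normalized (x, y) position as extra features."""
    h, w = len(image), len(image[0])
    feats = []
    for y in range(h):
        for x in range(w):
            r, g, b = image[y][x]
            f = [r, g, b]
            if use_coords:
                f += [x / max(w - 1, 1), y / max(h - 1, 1)]
            feats.append(f)
    return feats

img = [[(255, 0, 0), (0, 255, 0)],
       [(0, 0, 255), (128, 128, 128)]]
print(pixel_features(img)[3])  # [128, 128, 128, 1.0, 1.0]
```

Either classifier (an MLP or an SVM) would then be trained on these vectors with road / non-road labels; including the coordinates lets the model exploit the prior that road pixels concentrate in the lower part of the frame.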
Title: Performance evaluation of color based road detection using neural nets and support vector machines.
Pub Date: 2003-10-15 | DOI: 10.1109/AIPR.2003.1284257
K. Walli
This paper develops a technique for the registration of multisensor images that utilizes the Laplacian of Gaussian (LoG) filter to automatically determine semi-invariant ground control points (GCPs). These points are then related through point matching techniques and statistical analysis. Through the use of matrix transformations, multiple affine operations can be managed efficiently and stored in a composite transform. Wavelet theory is used to enable the multi-resolution analysis critical for multisensor image registration and predictive transformations. Multiple methods are discussed to test the accuracy of the resulting image registration. The benefits of this technique against parallax and moving objects within the scene are also highlighted. Finally, an example of 'wavelet sharpening' that preserves radiometric integrity is demonstrated.
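The LoG filter at the heart of the GCP step is a standard operator with a closed form; a discrete kernel can be generated directly from it. This sketch builds the kernel from the textbook formula LoG(x, y) = -(1/(pi*sigma^4)) * (1 - r^2/(2*sigma^2)) * exp(-r^2/(2*sigma^2)); the kernel size and sigma are illustrative choices, not values from the paper.

```python
import math

def log_kernel(size, sigma):
    """Discrete Laplacian-of-Gaussian kernel (size x size, size odd).
    Blob-like structures in the image respond strongly when convolved
    with this kernel, which is how candidate control points arise."""
    half = size // 2
    s2 = sigma * sigma
    kernel = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            r2 = x * x + y * y
            val = (-(1.0 / (math.pi * s2 * s2))
                   * (1.0 - r2 / (2.0 * s2))
                   * math.exp(-r2 / (2.0 * s2)))
            row.append(val)
        kernel.append(row)
    return kernel

k = log_kernel(3, 1.0)
print(round(k[1][1], 4))  # -0.3183, i.e. -1/pi at the center for sigma = 1
```

Convolving each band with such kernels at several sigmas, then keeping local extrema that persist across scales, yields the kind of semi-invariant points usable as GCPs across sensors.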
Title: Automated multisensor image registration.
Pub Date: 2003-10-15 | DOI: 10.1109/AIPR.2003.1284239
A. Poland, G. Withbroe, John C. Evans
Space weather research involves the study of the Sun and Earth from a systems viewpoint to improve the understanding and prediction of solar-terrestrial variability. There are a wide variety of solar-terrestrial imagery, spectroscopic measurements, and in situ space environmental data that can be exploited to improve our knowledge and understanding of the phenomena and processes involved in space weather.
Title: Space weather research: a major application of imagery and data fusion.