"Band selection using independent component analysis for hyperspectral image processing"
Hongtao Du, H. Qi, Xiaoling Wang, R. Ramanath, W. Snyder
Pub Date: 2003-10-15 | DOI: 10.1109/AIPR.2003.1284255
32nd Applied Imagery Pattern Recognition Workshop, 2003. Proceedings.
Although hyperspectral images provide abundant information about objects, their high dimensionality also substantially increases the computational burden. Dimensionality reduction is therefore a common first step in Hyperspectral Image (HSI) analysis, and two approaches are in current use: band selection and feature extraction. In this paper, we present a band selection method based on Independent Component Analysis (ICA). Instead of transforming the original hyperspectral images, the method examines the weight matrix to observe how much each band contributes to the ICA unmixing procedure: it compares the average absolute weight coefficients of the individual spectral bands and selects the bands that carry the most information. A significant benefit is that ICA-based band selection retains most physical features of the spectral profiles, given only the observations of the hyperspectral images. We compare this method with the ICA and Principal Component Analysis (PCA) transformations in terms of classification accuracy. The experimental results show that ICA-based band selection is more effective for dimensionality reduction in HSI analysis.
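The selection criterion described above can be sketched as follows. This is a minimal illustration, assuming the "average absolute weight coefficient" of a band is the column-wise mean of the absolute unmixing matrix produced by FastICA; the function name, component count, and band count are illustrative, not the authors' code.

```python
import numpy as np
from sklearn.decomposition import FastICA

def select_bands(cube, n_components=10, n_bands=30):
    """Rank spectral bands by their contribution to the ICA unmixing matrix.

    cube: (rows, cols, bands) hyperspectral image.
    Returns the indices of the n_bands highest-scoring bands.
    """
    rows, cols, bands = cube.shape
    X = cube.reshape(-1, bands).astype(float)         # pixels x bands observations
    ica = FastICA(n_components=n_components, random_state=0)
    ica.fit(X)
    W = ica.components_                               # (components, bands) unmixing weights
    score = np.abs(W).mean(axis=0)                    # average |weight| per band
    return np.argsort(score)[::-1][:n_bands]          # most informative bands first

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    cube = rng.random((16, 16, 64))                   # toy 64-band cube
    print(select_bands(cube, n_components=5, n_bands=8))
```

Note that, unlike a PCA or ICA transformation, the output is a subset of the original bands, which is why the physical meaning of the spectral profiles is preserved.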
"Projectile identification system"
G. Beach, C. Cohen, G. Moody, Martha Henry
Pub Date: 2003-10-15 | DOI: 10.1109/AIPR.2003.1284248
32nd Applied Imagery Pattern Recognition Workshop, 2003. Proceedings.
The U.S. Army plans for the needs of future warfare in order to retain its technological superiority, and Future Combat Systems (FCS) is a major effort designed to meet this need. FCS includes multiple automated fire weapons. On current systems, a human typically enters information about each projectile as it is loaded; this slow process places both the soldier and the weapon in danger. Cybernet (through funding by TACOM-ARDEC) has created a vision system that leverages several simple, mature image processing techniques to recognize the projectile type as it is loaded into the system's magazine. The system combines shape detection, color detection, and character identification with prior knowledge of the projectile (such as its CAD model, text location, and coloring) to identify it. The data is processed in real time, allowing the soldier to load projectiles as quickly as possible, and the system is built on a modular recognition framework.
"Quantitative fusion of performance results from actual and simulated image data"
P. Blake, Terry W. Brown
Pub Date: 2003-10-15 | DOI: 10.1109/AIPR.2003.1284256
32nd Applied Imagery Pattern Recognition Workshop, 2003. Proceedings.
Simulated imagery is a useful adjunct to actual imagery collected from a sensor platform. Simulation allows control of multiple parameters and combinations of parameters that might otherwise be difficult to capture in an actual measurement, leading to a fuller understanding of the processes and phenomenology under consideration. However, the complexity that exists in actual, measured imagery can be difficult to capture in simulation. Such complexity, coupled with the other natural ambiguities of measured data, makes it difficult to compare results achieved from algorithms applied to simulated imagery with algorithmic results achieved with actual data. We demonstrate the use of Sequential Quantitative Performance Assessment (SQPA) as a means of fusing results from simulated and actual imagery.
"3-dimensional object reconstruction from frequency diverse RF systems"
R. Bonneau
Pub Date: 2003-10-15 | DOI: 10.1109/AIPR.2003.1284250
32nd Applied Imagery Pattern Recognition Workshop, 2003. Proceedings.
Conventional phased arrays operate on narrow-bandwidth principles to achieve resolution when imaging buildings and other objects of interest. Unfortunately, such narrow-bandwidth methods do not allow sufficient resolution to reconstruct objects of interest in three dimensions at low frequencies and with small apertures. We propose a method that is computationally efficient and allows dynamic use of spectrum to achieve high-resolution three-dimensional reconstruction of objects from small or distributed apertures. This method also allows available spectrum bands to be used on a non-interference basis.
"Superresolution from image sequence"
N. Bose
Pub Date: 2003-10-15 | DOI: 10.1109/AIPR.2003.1284253
32nd Applied Imagery Pattern Recognition Workshop, 2003. Proceedings.
Due to limitations in hardware cost, size, and fabrication complexity, imaging systems such as CCD detector arrays and digital cameras often provide only multiple low-resolution (LR) degraded images. However, a high-resolution (HR) image is indispensable in many applications, including health diagnosis and monitoring, military surveillance, and terrain mapping by remote sensing. Other intriguing possibilities include substituting cruder, cheaper counterparts for expensive high-resolution instruments such as scanning electron microscopes, and then applying processing techniques to raise the resolution toward that obtainable with the much more costly equipment. This paper compares the popular approaches to attaining superresolution after image acquisition.
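One of the classical multi-frame approaches such comparisons cover is shift-and-add: interleave several subpixel-shifted LR frames onto a common HR grid. The sketch below is an idealized version assuming known integer subpixel shifts and an exact 2x sampling model, not any specific method from the paper.

```python
import numpy as np

def shift_and_add(frames, shifts, factor=2):
    """Fuse LR frames onto an HR grid using known subpixel shifts.

    frames: list of (h, w) LR images; shifts: (dy, dx) offsets in HR pixels.
    """
    h, w = frames[0].shape
    acc = np.zeros((h * factor, w * factor))
    cnt = np.zeros_like(acc)
    for img, (dy, dx) in zip(frames, shifts):
        acc[dy::factor, dx::factor] += img       # place samples at their HR positions
        cnt[dy::factor, dx::factor] += 1
    return acc / np.maximum(cnt, 1)              # average where samples overlap

if __name__ == "__main__":
    # With all four phase offsets at 2x, the HR image is recovered exactly.
    hr = np.arange(64.0).reshape(8, 8)
    offs = [(0, 0), (0, 1), (1, 0), (1, 1)]
    frames = [hr[dy::2, dx::2] for dy, dx in offs]
    print(np.allclose(shift_and_add(frames, offs), hr))  # prints True
```

Real sequences need registration to estimate the shifts and a deblurring step; this fragment only shows the fusion of the grid samples.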
"Heterogeneity of MR signal intensity mapped onto brain surface models"
A. Rebmann, J. Butman
Pub Date: 2003-10-15 | DOI: 10.1109/AIPR.2003.1284259
32nd Applied Imagery Pattern Recognition Workshop, 2003. Proceedings.
Heterogeneity of gray matter signal intensity can be demonstrated on some MR sequences, particularly FLAIR. Quantifying this heterogeneity is of interest because it may distinguish among different cortical areas. Gray matter segmentation fails on FLAIR data due to the overlap of gray and white matter signal intensities; this overlap also compromises region-of-interest-based approaches. Although volume rendering can visualize some of these differences, it is non-quantitative, and averaging of gray and white matter cannot be avoided. To overcome these obstacles we obtained T1-weighted data in addition to FLAIR data. The T1-weighted data provides strong gray/white contrast, allowing a cortical surface to be extracted. Volume-based registration of the FLAIR data set to the T1 data then allows the FLAIR signal intensities to be mapped onto the surface generated from the T1 dataset, so that regional FLAIR signal intensity differences can be visualized and compared across subjects.
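The final mapping step amounts to sampling the co-registered FLAIR volume at the voxel coordinates of the T1-derived surface vertices. A minimal nearest-voxel version is sketched below; the function name, array shapes, and sampling scheme are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np

def sample_on_surface(flair_vol, vertices_vox):
    """Assign each surface vertex the intensity of its nearest FLAIR voxel.

    flair_vol: (X, Y, Z) volume already registered to the T1 space.
    vertices_vox: (n, 3) vertex coordinates in voxel units of that space.
    """
    idx = np.rint(vertices_vox).astype(int)                 # round to nearest voxel
    idx = np.clip(idx, 0, np.array(flair_vol.shape) - 1)    # stay inside the volume
    return flair_vol[idx[:, 0], idx[:, 1], idx[:, 2]]       # per-vertex intensity

if __name__ == "__main__":
    vol = np.arange(27.0).reshape(3, 3, 3)
    verts = np.array([[0.2, 0.1, 0.0], [2.4, 2.6, 2.0]])
    print(sample_on_surface(vol, verts))                    # prints [ 0. 26.]
```

Trilinear interpolation (e.g. `scipy.ndimage.map_coordinates`) would give smoother per-vertex values; nearest-voxel sampling keeps the example dependency-free.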
"Sensor and classifier fusion for outdoor obstacle detection: an application of data fusion to autonomous off-road navigation"
C. Dima, N. Vandapel, M. Hebert
Pub Date: 2003-10-15 | DOI: 10.1109/AIPR.2003.1284281
32nd Applied Imagery Pattern Recognition Workshop, 2003. Proceedings.
This paper describes an approach that uses several levels of data fusion in the domain of autonomous off-road navigation. Focusing on outdoor obstacle detection, we present techniques that leverage data fusion and machine learning to increase the reliability of obstacle detection systems, combining color and IR imagery with range information from a laser range finder. We show that, in addition to fusing data at the pixel level, performing high-level classifier fusion is beneficial in our domain. Our general approach is to use machine learning techniques to automatically derive effective models of the classes of interest (for example, obstacle and non-obstacle). We train classifiers on different subsets of the features extracted from our sensor suite and show how different classifier fusion schemes can be applied to obtain a multiple-classifier system that is more robust than any of the classifiers presented as input. We present experimental results obtained on data collected with both the Experimental Unmanned Vehicle (XUV) and a CMU-developed robotic vehicle.
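The high-level fusion idea — train one classifier per sensor-specific feature subset, then combine their outputs — can be sketched with probability averaging. The classifiers, feature subsets, and fusion rule here are illustrative stand-ins, not the paper's configuration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

def fuse_predict(X, y, subsets, X_new):
    """Train one classifier per feature subset and average class posteriors.

    subsets: list of column-index lists, e.g. color features vs. range features.
    """
    models = [LogisticRegression(max_iter=1000),
              DecisionTreeClassifier(random_state=0)]
    probs = []
    for model, cols in zip(models, subsets):
        model.fit(X[:, cols], y)                     # per-sensor classifier
        probs.append(model.predict_proba(X_new[:, cols]))
    return np.mean(probs, axis=0).argmax(axis=1)     # fused decision

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 4))                    # cols 0-1: "color", cols 2-3: "range"
    y = (X[:, 0] + X[:, 2] > 0).astype(int)          # toy obstacle label needs both sensors
    print(fuse_predict(X, y, [[0, 1], [2, 3]], X)[:10])
```

Other fusion rules the literature uses (majority vote, stacking a meta-classifier on the per-model outputs) slot into the same structure by replacing the averaging line.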
"Performance evaluation of color based road detection using neural nets and support vector machines"
P. Conrad, Mike Foedisch
Pub Date: 2003-10-15 | DOI: 10.1109/AIPR.2003.1284265
32nd Applied Imagery Pattern Recognition Workshop, 2003. Proceedings.
We present a comparison of two methods for color-based road segmentation: the first is implemented with a neural network, the second with support vector machines. A large number of training images covering varying road conditions was used, including roads with snow, dirt, gravel, and asphalt surfaces. We experimented with grouping the training images by road condition and generating a separate model for each group, with the system automatically selecting the appropriate model for each novel image; those results were compared against a single model trained on all images. In another set of experiments, we added the image coordinates of each point as an additional feature in the models. Finally, we compared the segmentation accuracy and efficiency of the neural networks and support vector machines for each combination of feature sets and image groupings.
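The core experiment — the same per-pixel color features (optionally with image coordinates appended) fed to both a neural network and an SVM — can be sketched as below. The synthetic data, labeling rule, and model settings are assumptions for illustration only.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

def compare(X, y):
    """Fit an MLP and an RBF-kernel SVM on identical pixel features;
    return each model's training accuracy."""
    mlp = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
    svm = SVC(kernel="rbf")
    return mlp.fit(X, y).score(X, y), svm.fit(X, y).score(X, y)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    rgb = rng.random((300, 3))                       # per-pixel RGB features
    y = (rgb[:, 0] > rgb[:, 2]).astype(int)          # toy "road vs. non-road" rule
    xy = rng.random((300, 2))                        # optional pixel-coordinate features
    print(compare(rgb, y))                           # color only
    print(compare(np.hstack([rgb, xy]), y))          # color + coordinates
```

A fair version of the paper's evaluation would score on held-out pixels and also time `fit`/`predict`, since efficiency was one of the compared criteria.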
"Quantum image processing (QuIP)"
G. Beach, C. Lomont, C. Cohen
Pub Date: 2003-10-15 | DOI: 10.1109/AIPR.2003.1284246
32nd Applied Imagery Pattern Recognition Workshop, 2003. Proceedings.
Moore's law states that computing performance doubles roughly every 18 months. While this has held true for 40 years, it is widely believed that it will soon come to an end, and quantum computation offers a potential solution to this eventual failure. Researchers have shown that efficient quantum algorithms exist and can perform some calculations significantly faster than classical computers. Because quantum computers require very different algorithms than classical ones, the challenge of quantum computation is to develop efficient quantum algorithms. Cybernet is working with the Air Force Research Laboratory (AFRL) to create image processing algorithms for quantum computers. We have shown that existing quantum algorithms (such as Grover's algorithm) are applicable to image processing tasks, and we are continuing to identify other areas of image processing that can be improved through the application of quantum computing.
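Grover's algorithm — the quantum primitive the abstract cites — finds a marked item among N unsorted entries in about (pi/4)*sqrt(N) oracle calls instead of O(N). A classical simulation of its amplitude dynamics, here searching a toy "image" for a marked pixel value, illustrates the mechanism (this is an illustration of the standard algorithm, not the paper's implementation):

```python
import numpy as np

def grover_index(pixels, target):
    """Classically simulate Grover's search for the index of a marked value.

    pixels must have a power-of-two length so the state is n qubits' worth
    of amplitudes; assumes exactly one entry equals target.
    """
    n = len(pixels)
    state = np.full(n, 1 / np.sqrt(n))        # uniform superposition over indices
    oracle = np.where(pixels == target, -1.0, 1.0)
    for _ in range(int(np.pi / 4 * np.sqrt(n))):
        state *= oracle                       # oracle: flip the marked amplitude
        state = 2 * state.mean() - state      # diffusion: inversion about the mean
    return int(np.argmax(state ** 2))         # most probable measurement outcome

if __name__ == "__main__":
    pixels = np.array([7, 3, 9, 1, 4, 8, 2, 6])
    print(grover_index(pixels, 9))            # prints 2, the marked pixel's index
```

For N = 8 the loop runs twice and concentrates over 94% of the probability on the marked index, which is the quadratic-speedup behavior that makes the algorithm attractive for search-like image tasks.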
"Next generation IR focal plane arrays and applications"
J. Caulfield
Pub Date: 2003-10-15 | DOI: 10.1109/AIPR.2003.1284241
32nd Applied Imagery Pattern Recognition Workshop, 2003. Proceedings.
Raytheon Vision Systems (RVS) has invented and demonstrated a new class of advanced focal plane arrays (FPAs). These advanced FPAs are sometimes called 3rd-generation or "next-generation" FPAs because they integrate onto the FPA the ability to sense multiple IR spectral bands and to conduct image processing on the FPA readout integrated circuit (ROIC). These next-generation IRFPAs allow more functionality and the detection of a more diverse set of data than was previously possible with 2nd-generation FPAs. Examples and history of 3rd-generation FPAs are shown, including RVS' multispectral, uncooled, and adaptive sensors.