{"title":"Rapid Development of a Gunfire Detection Algorithm Using an Imagery Database","authors":"William Seisler, N. Terry, E. Williams","doi":"10.1109/AIPR.2006.31","DOIUrl":"https://doi.org/10.1109/AIPR.2006.31","url":null,"abstract":"Over the past few years, the Naval Research Laboratory (NRL) has been developing gunfire detection systems using infrared sensors. During the past year, the primary focus of this effort has been on algorithm performance improvements for gunfire detection from infrared imagery. A database of recordings of small arms fire and background clutter is being developed to allow lab testing of new algorithms. As the amount of data continues to grow, the testing analysis becomes lengthier. New tools and methods are being developed to reduce the post analysis time. Results of algorithm improvements for probability of detection and false alarm reduction through use of the database and tools will be presented.","PeriodicalId":375571,"journal":{"name":"35th IEEE Applied Imagery and Pattern Recognition Workshop (AIPR'06)","volume":"67 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129913247","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Model Analysis Geometry Imagery Correlation Tool Kit (MAGIC-TK) for Model Development and Image Analysis","authors":"T. Taczak, M. Rundquist, Colin P. Cahill","doi":"10.1109/AIPR.2006.26","DOIUrl":"https://doi.org/10.1109/AIPR.2006.26","url":null,"abstract":"The application of IR signature prediction codes in DoD has been predominantly in two areas: 1.) the development of total signature requirements under a broad set of environmental and operational conditions and 2.) the evaluation of signatures of vessels and signature treatments to ensure the specifications are met. As computing power and IR scene generation techniques have advanced, simulation capabilities have evolved to scene injection into real hardware systems. To capture the real world effects required to accurately analyze search and track algorithms, the fidelity of the complete IR scene has required improvement. New validation methodologies are required to evaluate the accuracy of advanced IR scene generation models. This paper will review some of the approaches incorporated into a new model validation tool that will be able to verify model inputs and quantitatively evaluate differences between measured and predicted imagery.","PeriodicalId":375571,"journal":{"name":"35th IEEE Applied Imagery and Pattern Recognition Workshop (AIPR'06)","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131441958","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Estimation of Estuary Phytoplankton using a Web-based Tool for Visualization of Hyper-spectral Images","authors":"V. J. Alarcon, J. V. D. Zwaag, R. Moorhead","doi":"10.1109/AIPR.2006.22","DOIUrl":"https://doi.org/10.1109/AIPR.2006.22","url":null,"abstract":"The development of Web-based tools for visualization and processing of hyper-spectral images has been slow. Memory and processing capabilities of personal computers may have precluded the development of Web-based tools. However, fast access to remote databases, increasing microprocessors' speed, and grid portals that provide interconnection between remote nodes sharing data and computing resources, make possible remote exploration and analysis of hyper-spectral data cubes. This paper presents a Web-based visualization tool for exploring moderate resolution imaging spectroradiometer (MODIS) data cubes. It provides capabilities for individual pixel's reflectance-spectra visualization, on-the-fly per-pixel calculation and visualization of chlorophyll-a and phytoplankton-carbon concentration values. The Web-based interface also generates normalized difference vegetation index images from the multi-spectral information contained in MODIS datasets. The tool is applied to estimate phytoplankton concentrations in the Saint Louis Bay estuary (Mississippi). Chlorophyll-a estimations produced by the Web-based tool compare well with in-situ measurements from a field survey performed during August 2001. Phytoplankton concentrations are calculated using those estimations of chlorophyll-a concentrations generated by the Web-based tool. The higher spatial resolution provided by the interface allowed estimating constituent concentrations at geographical locations near the coast.","PeriodicalId":375571,"journal":{"name":"35th IEEE Applied Imagery and Pattern Recognition Workshop (AIPR'06)","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126431959","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
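For context on the normalized difference vegetation index (NDVI) images mentioned in the abstract above: NDVI is the standard band ratio (NIR − red) / (NIR + red). A minimal sketch in Python follows; the function name and the epsilon guard against division by zero are mine, not part of the paper's Web-based tool.

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index: (NIR - RED) / (NIR + RED).

    `nir` and `red` are per-pixel reflectance arrays of the same shape;
    `eps` guards against division by zero over water or shadowed pixels.
    """
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)
```

Applied per pixel to the NIR and red bands of a MODIS scene, this yields the vegetation-index image the interface generates.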
{"title":"Gabor Wavelet Based Modular PCA Approach for Expression and Illumination Invariant Face Recognition","authors":"Neeharika Gudur, V. Asari","doi":"10.1109/AIPR.2006.24","DOIUrl":"https://doi.org/10.1109/AIPR.2006.24","url":null,"abstract":"A Gabor wavelet based modular PCA approach for face recognition is proposed in this paper. The proposed technique improves the efficiency of face recognition, under varying illumination and expression conditions for face images when compared to traditional PCA techniques. In this algorithm the face images are divided into smaller sub-images called modules and a series of Gabor wavelets at different scales and orientations are applied on these localized modules for feature extraction. A modified PCA approach is then applied for dimensionality reduction. Due to the extraction of localized features using Gabor wavelets, the proposed algorithm is expected to give improved recognition rate when compared to other traditional techniques. The performance of the proposed technique is evaluated under conditions of varying illumination, expression and variation in pose up to a certain range using standard face databases.","PeriodicalId":375571,"journal":{"name":"35th IEEE Applied Imagery and Pattern Recognition Workshop (AIPR'06)","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127889704","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
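The pipeline described in the abstract above (split the face into modules, filter each module with a bank of Gabor wavelets, then reduce dimensionality with PCA) can be sketched roughly as follows. This is an illustrative reconstruction, not the authors' code: the kernel parameters, the mean-magnitude pooling of filter responses, and all function names are assumptions.

```python
import numpy as np

def gabor_kernel(size, wavelength, theta, sigma):
    """Real part of a 2D Gabor filter at the given scale and orientation."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr ** 2 + yr ** 2) / (2 * sigma ** 2))
    carrier = np.cos(2 * np.pi * xr / wavelength)
    return envelope * carrier

def modular_gabor_features(image, module_size, kernels):
    """Split the image into non-overlapping modules and pool each module's
    Gabor response magnitude (per kernel) into one feature vector."""
    h, w = image.shape
    feats = []
    for i in range(0, h - module_size + 1, module_size):
        for j in range(0, w - module_size + 1, module_size):
            module = image[i:i + module_size, j:j + module_size]
            for k in kernels:
                # circular convolution via FFT, pooled to a scalar
                resp = np.abs(np.fft.ifft2(
                    np.fft.fft2(module) * np.fft.fft2(k, module.shape)))
                feats.append(resp.mean())
    return np.array(feats)

def pca_reduce(X, n_components):
    """Project feature vectors (rows of X) onto the top principal
    components, computed via SVD of the centered data."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T
```

Because each feature is pooled within a local module, illumination or expression changes confined to one region of the face perturb only that module's features, which is the intuition behind the modular approach.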
{"title":"Real-Time 3D Ladar Imaging","authors":"P. Cho, H. Anderson, R. Hatch, P. Ramaswami","doi":"10.1117/12.664904","DOIUrl":"https://doi.org/10.1117/12.664904","url":null,"abstract":"A prototype image processing system has recently been developed which generates, displays and analyzes three-dimensional ladar data in real time. It is based upon a suite of novel algorithms that transform raw ladar data into cleaned 3D images. These algorithms perform noise reduction, ground plane identification, detector response deconvolution and illumination pattern renormalization. The system also discriminates static from dynamic objects in a scene. In order to achieve real-time throughput, we have parallelized these algorithms on a Linux cluster. We demonstrate that multiprocessor software plus Blade hardware result in a compact, real-time imagery generation adjunct to an operating ladar.","PeriodicalId":375571,"journal":{"name":"35th IEEE Applied Imagery and Pattern Recognition Workshop (AIPR'06)","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-05-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123746503","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An Image Metric-Based ATR Performance Prediction Testbed","authors":"Scott K. Ralph, J. Irvine, M. Snorrason, Steve Vanstone","doi":"10.1109/AIPR.2006.13","DOIUrl":"https://doi.org/10.1109/AIPR.2006.13","url":null,"abstract":"Automatic target detection (ATD) systems process imagery to detect and locate targets in imagery in support of a variety of military missions. Accurate prediction of ATD performance would assist in system design and trade studies, collection management, and mission planning. A need exists for ATD performance prediction based exclusively on information available from the imagery and its associated metadata. We present a predictor based on image measures quantifying the intrinsic ATD difficulty on an image. The modeling effort consists of two phases: a learning phase, where image measures are computed for a set of test images, the ATD performance is measured, and a prediction model is developed; and a second phase to test and validate performance prediction. The learning phase produces a mapping, valid across various ATR algorithms, which is even applicable when no image truth is available (e.g., when evaluating denied area imagery). The testbed has plug-in capability to allow rapid evaluation of new ATR algorithms. The image measures employed in the model include: statistics derived from a constant false alarm rate (CFAR) processor, the power spectrum signature, and others. We present a performance predictor using a trained classifier ATD that was constructed using GENIE, a tool developed at Los Alamos National Laboratory. The paper concludes with a discussion of future research.","PeriodicalId":375571,"journal":{"name":"35th IEEE Applied Imagery and Pattern Recognition Workshop (AIPR'06)","volume":"53 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-10-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128506896","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
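One of the image measures named in the abstract above is a statistic derived from a constant false alarm rate (CFAR) processor. A minimal 2D cell-averaging CFAR sketch is shown below; the window sizes, scale factor, and function name are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def ca_cfar_2d(image, train=8, guard=2, scale=3.0):
    """Cell-averaging CFAR: flag pixels whose value exceeds `scale` times
    the mean of the surrounding training band (guard cells excluded)."""
    h, w = image.shape
    detections = np.zeros((h, w), dtype=bool)
    r = train + guard
    for i in range(r, h - r):
        for j in range(r, w - r):
            window = image[i - r:i + r + 1, j - r:j + r + 1].copy()
            # zero out the guard region plus the cell under test,
            # leaving only the outer training band
            window[train:-train, train:-train] = 0.0
            n_train = window.size - (2 * guard + 1) ** 2
            background = window.sum() / n_train
            detections[i, j] = image[i, j] > scale * background
    return detections
```

Statistics of the resulting detection map (for example, the count of CFAR exceedances in an image) are the kind of truth-free difficulty measure the abstract describes: they can be computed from the imagery alone, with no ground-truth target locations.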