Space weather research: a major application of imagery and data fusion
A. Poland, G. Withbroe, John C. Evans
Pub Date: 2003-10-15. DOI: 10.1109/AIPR.2003.1284239
Space weather research involves the study of the Sun and Earth from a systems viewpoint to improve the understanding and prediction of solar-terrestrial variability. A wide variety of solar-terrestrial imagery, spectroscopic measurements, and in situ space environment data can be exploited to improve our knowledge and understanding of the phenomena and processes involved in space weather.
{"title":"Space weather research: a major application of imagery and data fusion","authors":"A. Poland, G. Withbroe, John C. Evans","doi":"10.1109/AIPR.2003.1284239","DOIUrl":"https://doi.org/10.1109/AIPR.2003.1284239","url":null,"abstract":"Space weather research involves the study of the Sun and Earth from a systems viewpoint to improve the understanding and prediction of solar-terrestrial variability. There are a wide variety of solar-terrestrial imagery, spectroscopic measurements, and in situ space environmental data that can be exploited to improve our knowledge and understanding of the phenomena and processes involved in space weather.","PeriodicalId":176987,"journal":{"name":"32nd Applied Imagery Pattern Recognition Workshop, 2003. Proceedings.","volume":"126 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122399449","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Automated multisensor image registration
K. Walli
Pub Date: 2003-10-15. DOI: 10.1109/AIPR.2003.1284257
This paper develops a technique for registering multisensor images that uses the Laplacian of Gaussian (LoG) filter to automatically determine semi-invariant ground control points (GCPs). These points are then related through point-matching techniques and statistical analysis. Representing each alignment step as a matrix transformation lets multiple affine operations be managed efficiently and stored in a single composite transform. Wavelet theory enables the multi-resolution analysis critical for multisensor image registration and predictive transformations. Several methods for testing the accuracy of the resulting registration are discussed. The technique's benefits against parallax and moving objects within the scene are also highlighted. Finally, an example of 'wavelet sharpening' that preserves radiometric integrity is demonstrated.
{"title":"Automated multisensor image registration","authors":"K. Walli","doi":"10.1109/AIPR.2003.1284257","DOIUrl":"https://doi.org/10.1109/AIPR.2003.1284257","url":null,"abstract":"This paper develops a technique for the registration of multisensor images utilizing the Laplacian of Gaussian (LoG) filter to automatically determine semi-invariant ground control points (GCPs). These points are then related through the development of point matching techniques and statistical analysis. Through the use of matrix transformations, efficient management of multiple affine operations can be obtained and stored in a composite transform. Wavelet theory is used to enable the multi-resolution analysis critical for multisensor image registration and predictive transformations. Multiple methods have been discussed to test the accuracy of the resulting image registration. Benefits of this technique against parallax and moving objects within the scene has also been highlighted. Finally, an example of 'wavelet sharpening' has been demonstrated that preserves radiometric integrity.","PeriodicalId":176987,"journal":{"name":"32nd Applied Imagery Pattern Recognition Workshop, 2003. Proceedings.","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122134857","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Geo-spatial active visual surveillance on wireless networks
T. Boult
Pub Date: 2003-10-15. DOI: 10.1109/AIPR.2003.1284279
This paper reviews some of the history of automated visual surveillance, from the second- and third-generation video motion detection (VMD) days of the early 1990s to the current state of the art. It discusses the inherent limitations that resulted in nearly negligible performance gains throughout the 1990s and that still exist in commercially available systems. We then review an approach that overcomes these limitations: active visual surveillance with geo-spatial rules. Active visual surveillance combines data from computer-controlled pan/tilt/zoom (PTZ) units with state-of-the-art video detection and tracking to provide active assessment of potential targets in a cost-effective manner. This active assessment increases the number of pixels on target and provides a secondary viewpoint for data fusion, while still allowing coverage of a very large surveillance area. This active approach with multi-sensor fusion is not a new concept; it was developed as part of the DARPA Video Surveillance and Monitoring (VSAM) program in the late 1990s. While we have continued to expand upon it since then, no commercial video surveillance product before Guardian Solutions provided these important abilities. The core ideas in this paper address limitations of the original VSAM designs, briefly introducing our enhancements, including geo-spatial rules for wide-area multi-sensor fusion, and the key design issues involved in supporting wireless networks.
{"title":"Geo-spatial active visual surveillance on wireless networks","authors":"T. Boult","doi":"10.1109/AIPR.2003.1284279","DOIUrl":"https://doi.org/10.1109/AIPR.2003.1284279","url":null,"abstract":"This paper reviews some of the history of automated visual surveillance, from the second and third generation VMD days of the early 1990s, to the current state of the art. It discusses the inherent limitations that resulted in a nearly negligible \"increase\" in performance throughout the 1990s and still exist in commercially available systems. Then we review an approach that overcomes these limitations-active visual surveillance with geo-spatial rules. Active visual surveillance uses data from computer controlled Pan/Tilt/Zoom (PTZ) units combined with state of the art video detection and tracking to provide active assessment of potential targets in a cost effective manner. This active assessment allows an increase in the number of pixels on target and provides a secondary viewpoint for data fusion, while still allowing coverage of a very large surveillance area. This active approach and multi-sensor fusion, not a new concept, was developed as part of the DARPA Video Surveillance and Monitoring (VSAM) program in the late 90's. While we have continued to expand upon it since that time, there has been no commercial video surveillance, before Guardian Solutions, that provided these important abilities. The core ideas in this paper address limitations of the original VSAM designs, briefly introducing our enhancements including geo-spatial rules for wide area multi-sensor fusion, and key design issues to allow us to support wireless networks.","PeriodicalId":176987,"journal":{"name":"32nd Applied Imagery Pattern Recognition Workshop, 2003. Proceedings.","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117047979","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Defect detection on patterned jacquard fabric
Henry Y. T. Ngan, G. Pang, S. Yung, M. K. Ng
Pub Date: 2003-10-15. DOI: 10.1109/AIPR.2003.1284266
Techniques for defect detection on plain (unpatterned) fabrics are now well developed. This paper develops visual inspection methods for defect detection on patterned fabrics. A review of defect detection methods for patterned fabrics is given. Then a new method for patterned fabric inspection, called Golden Image Subtraction (GIS), is introduced. GIS is an efficient and fast method that can effectively segment defective regions on patterned fabric. An improved version of the GIS method using the wavelet transform is also given. These research results contribute to the development of an automated fabric inspection machine for the textile industry.
{"title":"Defect detection on patterned jacquard fabric","authors":"Henry Y. T. Ngan, G. Pang, S. Yung, M. K. Ng","doi":"10.1109/AIPR.2003.1284266","DOIUrl":"https://doi.org/10.1109/AIPR.2003.1284266","url":null,"abstract":"The techniques for defect detection on plain (unpatterned) fabrics have been well developed nowadays. This paper is on developing visual inspection methods for defect detection on patterned fabrics. A review on some defect detection methods on patterned fabrics is given. Then, a new method for patterned fabric inspection called Golden Image Subtraction (GIS) is introduced. GIS is an efficient and fast method, which can segment out the defective regions on patterned fabric effectively. An improved version of the GIS method using wavelet transform is also given. This research results contribute to the development of an automated fabric inspection machine for the textile industry.","PeriodicalId":176987,"journal":{"name":"32nd Applied Imagery Pattern Recognition Workshop, 2003. Proceedings.","volume":"190 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124213929","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The research of semantic content applied to image fusion
Yumei Miao, Yusong Miao
Pub Date: 2003-10-15. DOI: 10.1109/AIPR.2003.1284260
CT (computed tomography) has well-established diagnostic value for encephalic illness. Clinical doctors urgently need a good approach to this monomodality medical image fusion at acceptable accuracy, in order to compare a patient visually in normal and pathologic conditions, trace the development of a focus (lesion), determine the regimen, and so on. That is also the purpose of this paper. The usual approach merges images at the pixel level or feature level. In this paper, we develop a semantic-level fusion technique that matches semantic descriptions associated with images. Content-based semantic information can support image segmentation and similarity-matching image retrieval through prior knowledge. We then apply a weighted complex similarity retrieval algorithm (WK-NN) for the implementation. Finally, the integrated images with semantic information are presented.
{"title":"The research of semantic content applied to image fusion","authors":"Yumei Miao, Yusong Miao","doi":"10.1109/AIPR.2003.1284260","DOIUrl":"https://doi.org/10.1109/AIPR.2003.1284260","url":null,"abstract":"The diagnostic value of CT (Computed Tomography) checking for encephalic illness is affirmative. For clinical doctors, they are in urgent need of a good approach for this monomodality medical image fusion at an acceptable accuracy, in order to obtain some visual comparison about a patient in normal and pathologic conditions, tracing the development of focus, determining the regimen and so on. Thus is also the purpose of this paper. The usual method is merging images at pixel-level or feature-level. In this paper, we develop a semantic-level fusion technique that is matched with semantic descriptions associated to images. Content-based semantic information can be used on image segmentation and similarity matching image retrieval through prior-knowledge support. Then we apply a weighted complex similarity retrieval algorithm (WK-NN) to implement. Finally, the integrated images with semantic information are presented.","PeriodicalId":176987,"journal":{"name":"32nd Applied Imagery Pattern Recognition Workshop, 2003. Proceedings.","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121631742","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Dual band (MWIR/LWIR) hyperspectral imager
M. Hinnrichs, N. Gupta, A. Goldberg
Pub Date: 2003-10-15. DOI: 10.1109/AIPR.2003.1284252
A dual-band MWIR/LWIR hyperspectral imaging system with a single lens and a single focal plane array was demonstrated at the Army Research Laboratory in the spring of 2003. To our knowledge, this is the first time a single two-color focal plane array has been used for hyperspectral imaging. The ability of the IMSS diffractive optic to image both bands (MWIR/LWIR) simultaneously is what allows this new and innovative technique to work. The diffractive lens images the first order in the longwave infrared and the second order in the midwave infrared. Since the light is dispersed along the optical axis, rather than perpendicular to it, both bands are imaged in parallel. This paper reports on this work and demonstrates dual-band hyperspectral imagery.
{"title":"Dual band (MWIR/LWIR) hyperspectral imager","authors":"M. Hinnrichs, N. Gupta, A. Goldberg","doi":"10.1109/AIPR.2003.1284252","DOIUrl":"https://doi.org/10.1109/AIPR.2003.1284252","url":null,"abstract":"The demonstration of a dual band, MWIR/LWIR hyperspectral imaging system with a single lens and single focal plane array was performed at the Army Research Laboratory in the spring of 2003. To our knowledge this is the first time that a single two color focal plane array has been used for hyperspectral imaging. The ability of the IMSS diffractive optic to image both bands (MWIR/LWIR) simultaneously has allowed this new and innovative technique to work. The diffractive lens images the first order in the longwave infrared and the second order in the midwave infrared. Since the light is dispersed along the optical axis, as opposed to perpendicular to the optical axis, both color bands are imaged in parallel. This paper reports on this work showing a demonstration of a dual band hyperspectral image.","PeriodicalId":176987,"journal":{"name":"32nd Applied Imagery Pattern Recognition Workshop, 2003. Proceedings.","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114639232","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Quick response airborne deployment of VIPER muzzle flash detection and location system during DC sniper attacks
M. Pauli, M. C. Ertem, E. Heidhausen
Pub Date: 2003-10-15. DOI: 10.1109/AIPR.2003.1284275
The VIPER infrared muzzle flash detection system was deployed from a helicopter and an airship in response to the Washington, DC area sniper attacks in October 2002. The system consists of a midwave IR camera, used to detect muzzle flash, which cues a gimbaled visible-light camera to the detected event. The helicopter installation was done to prove that a manned airborne installation of the VIPER detection system would work. Within 36 hours of the request to deploy, the system had been modified, approved by the FAA inspector, and flown. Testing at the Ft. Meade rifle range showed that the helicopter installation worked at least as well as the ground-based system. Because of a helicopter's limited endurance, the system was then installed aboard a Navy-leased airship, flown at Elizabeth City, NC, and tested against live fire. These were the first flights of the airborne VIPER payload. It has since flown numerous times on helicopters and been tested against various guns, mortars, and artillery. The Naval Research Laboratory has demonstrated multiple payloads, each flying in manned helicopters and all controlled from a single ground station.
{"title":"Quick response airborne deployment of VIPER muzzle flash detection and location system during DC sniper attacks","authors":"M. Pauli, M. C. Ertem, E. Heidhausen","doi":"10.1109/AIPR.2003.1284275","DOIUrl":"https://doi.org/10.1109/AIPR.2003.1284275","url":null,"abstract":"The VIPER infrared muzzle flash detection system was deployed from a helicopter and an airship in response to the Washington, DC area sniper attacks in October 2002. The system consist of a midwave IR camera, which was used to detect muzzle flash and cue a visible light camera on a gimbal to the detected event. The helicopter installation was done to prove that a manned airborne installation of the VIPER detection system would work. Within 36 hours of the request to deploy the system, it had been modified, approved by the FAA inspector and flown. Testing at the Ft. Meade rifle range showed that in the helicopter installation the system worked at least as well as the ground based system. Because of the limited endurance that a helicopter allows, the system was then installed aboard a Navy leased airship. It was flown at Elizabeth City, NC and was tested against live fire. These were the first flights of the airborne VIPER payload. It has since been flown numerous times on helicopters and tested against various guns, mortars, and artillery. The Naval Research Laboratory has demonstrated multiple payloads, each of which flew in manned helicopters and all controlled from a single ground station.","PeriodicalId":176987,"journal":{"name":"32nd Applied Imagery Pattern Recognition Workshop, 2003. Proceedings.","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130379993","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Access control system with high level security using fingerprints
Youn-Hee Gil, Dosung Ahn, S. Pan, Yongwha Chung
Pub Date: 2003-10-15. DOI: 10.1109/AIPR.2003.1284278
Biometric-based applications promise to resolve numerous security hazards. As a method of preserving privacy and the security of sensitive information, biometrics has been studied and used for the past few decades. The fingerprint is one of the most widely used biometrics, and a number of fingerprint verification approaches have been proposed. However, fingerprint images acquired with current input devices, which have a small field of view, cover only a very limited area of the whole fingertip. Essential information required to distinguish fingerprints can therefore be missed or extracted falsely, and the limited, somewhat distorted information that is detected can reduce the accuracy of fingerprint verification systems. In systems that verify identity by comparing fingerprint features, extracting the correct feature information is critical. To deal with these problems, imperfect information can be compensated for using multiple impressions of the enrollee's fingerprints. In this paper, three additional fingerprint images are used in the enrollment phase of a fingerprint verification system. Our experiments on the FVC2002 databases show that enrollment using multiple impressions improves the performance of the whole fingerprint verification system.
{"title":"Access control system with high level security using fingerprints","authors":"Youn-Hee Gil, Dosung Ahn, S. Pan, Yongwha Chung","doi":"10.1109/AIPR.2003.1284278","DOIUrl":"https://doi.org/10.1109/AIPR.2003.1284278","url":null,"abstract":"Biometric based applications guarantee for resolving numerous security hazards. As a method of preserving of privacy and the security of sensitive information, biometrics has been studied and used for the past few decades. Fingerprint is one of the most widely used biometrics. A number of fingerprint verification approaches have been proposed until now. However, fingerprint images acquired using current fingerprint input devices that have small field of view are from just very limited areas of whole fingertips. Therefore, essential information required to distinguish fingerprints could be missed, or extracted falsely. The limited and somewhat distorted information are detected from them, which might reduce the accuracy of fingerprint verification systems. In the systems that verify the identity of two fingerprints using fingerprint features, it is critical to extract the correct feature information. In order to deal with these problems, compensation of imperfect information can be performed using multiple impressions of enrollee's fingerprints. In this paper, additional three fingerprint images are used in enrollment phase of fingerprint verification system. Our experiments using FVC 2002 databases show that the enrollment using multiple impressions improves the performance of the whole fingerprint verification system.","PeriodicalId":176987,"journal":{"name":"32nd Applied Imagery Pattern Recognition Workshop, 2003. Proceedings.","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132642505","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Children's understanding of imagery in picture books
Lori M. Levin
Pub Date: 2003-10-15. DOI: 10.1109/AIPR.2003.1284271
This discussion focuses on beginning readers’ perceptions and observations of picture book images they encounter in both school and home literacy environments. The data gathered from the subjects were organized in order to describe how visual literacy develops simultaneously with conventional literacy. Beginning with what research tells us about what strategic readers do in order to comprehend print, the current study seeks to understand if similar competencies are used when beginning readers view or “read” pictures in children’s books.
{"title":"Children's understanding of imagery in picture books","authors":"Lori M. Levin","doi":"10.1109/AIPR.2003.1284271","DOIUrl":"https://doi.org/10.1109/AIPR.2003.1284271","url":null,"abstract":"This discussion focuses on beginning readers’ perceptions and observations of picture book images they encounter in both school and home literacy environments. The data gathered from the subjects were organized in order to describe how visual literacy develops simultaneously with conventional literacy. Beginning with what research tells us about what strategic readers do in order to comprehend print, the current study seeks to understand if similar competencies are used when beginning readers view or “read” pictures in children’s books.","PeriodicalId":176987,"journal":{"name":"32nd Applied Imagery Pattern Recognition Workshop, 2003. Proceedings.","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128996705","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Fusing face and ECG for personal identification
S. Israel, W. T. Scruggs, W. Worek, J. Irvine
Pub Date: 2003-10-15. DOI: 10.1109/AIPR.2003.1284276
Single modality biometric identification systems exhibit performance that may not be adequate for many security applications. Face and fingerprint modalities dominate the biometric verification/identification field. However, both face and fingerprint can be compromised using counterfeit credentials. Previous research has demonstrated the use of the electrocardiogram (ECG) as a novel biometric. This paper explores the fusion of a traditional face recognition technique with ECG. System performance with multimodality fusion can be superior to reliance on a single biometric, but performance depends heavily on the fusion technique. In addition, a fusion-based system is more difficult to defeat, since an imposter must provide counterfeit credentials for both face and cardiovascular function.
{"title":"Fusing face and ECG for personal identification","authors":"S. Israel, W. T. Scruggs, W. Worek, J. Irvine","doi":"10.1109/AIPR.2003.1284276","DOIUrl":"https://doi.org/10.1109/AIPR.2003.1284276","url":null,"abstract":"Single modality biometric identification systems exhibit performance that may not be adequate for many security applications. Face and fingerprint modalities dominate the biometric verification/identification field. However, both face and fingerprint can be compromised using counterfeit credentials. Previous research has demonstrated the use of the electrocardiogram (ECG) as a novel biometric. This paper explores the fusion of a traditional face recognition technique with ECG. System performance with multimodality fusion can be superior to reliance on a single biometric, but performance depends heavily on the fusion technique. In addition, a fusion-based system is more difficult to defeat, since an imposter must provide counterfeit credentials for both face and cardiovascular function.","PeriodicalId":176987,"journal":{"name":"32nd Applied Imagery Pattern Recognition Workshop, 2003. Proceedings.","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129320740","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}