Optimal wavelet features for an infrared satellite precipitation estimate algorithm
Pub Date: 2010-10-01 | DOI: 10.1109/AIPR.2010.5759702
Majid Mahrooghy, V. Anantharaj, N. Younan, J. Aanstoos
A satellite precipitation estimation algorithm based on wavelet features is investigated to find the optimal wavelet features in terms of wavelet family and sliding window size. In this work, infrared satellite images along with ground gauge (radar-corrected) observations are used for rainfall retrieval. The goal of this work is to find an optimal wavelet transform that provides better features for cloud classification and rainfall estimation. Our approach involves the following four steps: 1) segmentation of infrared cloud images into patches; 2) feature extraction using a wavelet-based method; 3) clustering and classification of cloud patches using a neural network; and 4) dynamic application of brightness temperature (Tb) and rain-rate relationships derived from satellite observations. The results show that the Haar and Symlet wavelets with a 5×5 sliding window yield better estimation performance than other wavelet families and window sizes.
{"title":"Optimal wavelet features for an infrared satellite precipitation estimate algorithm","authors":"Majid Mahrooghy, V. Anantharaj, N. Younan, J. Aanstoos","doi":"10.1109/AIPR.2010.5759702","DOIUrl":"https://doi.org/10.1109/AIPR.2010.5759702","url":null,"abstract":"A satellite precipitation estimation algorithm based on wavelet features is investigated to find the optimal wavelet features in terms of wavelet family and sliding window size. In this work, the infrared satellite based images along with ground gauge (radar corrected) observations are used for the retrieval rainfall. The goal of this work is to find an optimal wavelet transform to represent better features for cloud classification and rainfall estimation. Our approach involves the following four steps: 1) segmentation of infrared cloud images into patches; 2) feature extraction using a wavelet-based method; 3) clustering and classification of cloud patches using neural network, and 4) dynamic application of brightness temperature (Tb) and rain rate relationships, derived using satellite observations. The results show that Haar and Symlet wavelets with sliding window size 5×5 have better estimate performance than other wavelet families and window sizes.","PeriodicalId":128378,"journal":{"name":"2010 IEEE 39th Applied Imagery Pattern Recognition Workshop (AIPR)","volume":"57 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124850836","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Vehicle load estimation from observation of vibration response
Pub Date: 2010-10-01 | DOI: 10.1109/AIPR.2010.5759718
P. Robertson, W. B. Coney, R. Bobrow
The suspension systems of production automobiles and trucks are designed to support the comfort and safety of human occupants. The response of these vehicles to the road surface is a function of vehicle loading. In this research we demonstrate the automatic monitoring of vehicle load using an optical sensor and a speed bump. This paper investigates the dynamics of vehicle response and describes the software developed to extract vibrational information from video.
{"title":"Vehicle load estimation from observation of vibration response","authors":"P. Robertson, W. B. Coney, R. Bobrow","doi":"10.1109/AIPR.2010.5759718","DOIUrl":"https://doi.org/10.1109/AIPR.2010.5759718","url":null,"abstract":"The suspension systems of production automobiles and trucks are designed to support the comfort and safety of human occupants. The response of these vehicles to the road surface is a function of vehicle loading. In this research we demonstrate the automatic monitoring of vehicle load using an optical sensor and a speed bump. This paper investigates the dynamics of vehicle response and describes the software developed to extract vibrational information from video.","PeriodicalId":128378,"journal":{"name":"2010 IEEE 39th Applied Imagery Pattern Recognition Workshop (AIPR)","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126773475","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pre-attentive detection of depth saliency using stereo vision
Pub Date: 2010-10-01 | DOI: 10.1109/AIPR.2010.5759692
M. Z. Aziz, B. Mertsching
Artificial vision systems require a quick estimate of depth for self-preservation and navigation through the environment. Following the selection strategy of biological vision, known as visual attention, can help accelerate the extraction of depth for important and relevant portions of a scene. Recent studies on depth perception in biological vision indicate that disparity is computed in the brain using object detection. The proposed method draws on these studies and determines the shift that objects undergo between the stereo frames using data about their borders. This enables efficient creation of a depth saliency map for artificial visual attention. In experiments, the proposed model successfully selected those locations in stereo scenes that are salient for human perception in terms of depth.
{"title":"Pre-attentive detection of depth saliency using stereo vision","authors":"M. Z. Aziz, B. Mertsching","doi":"10.1109/AIPR.2010.5759692","DOIUrl":"https://doi.org/10.1109/AIPR.2010.5759692","url":null,"abstract":"A quick estimation of depth is required by artificial vision systems for their self survival and navigation through the environment. Following the selection strategy of biological vision, known as visual attention, can help in accelerating extraction of depth for important and relevant portions of given scenes. Recent studies on depth perception in biological vision indicate that disparity is computed using object detection in the brain. The proposed method uses concepts from these studies and determines the shift that objects go through in the stereo frames using data regarding their borders. This enables efficient creation of depth saliency map for artificial visual attention. Results of the proposed model have shown success in selecting those locations from stereo scenes that are salient for human perception in terms of depth.","PeriodicalId":128378,"journal":{"name":"2010 IEEE 39th Applied Imagery Pattern Recognition Workshop (AIPR)","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126459524","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Classification of levees using polarimetric Synthetic Aperture Radar (SAR) imagery
Pub Date: 2010-10-01 | DOI: 10.1109/AIPR.2010.5759703
Lalitha Dabbiru, J. Aanstoos, N. Younan
The recent catastrophe caused by Hurricane Katrina underscores the importance of examining levees to improve the condition of those prone to failure during floods. On-site inspection of levees is costly and time-consuming, so efficient techniques based on remote sensing technologies are needed to identify levees that are more vulnerable to failure under flood loading. This research uses NASA JPL's Uninhabited Aerial Vehicle Synthetic Aperture Radar (UAVSAR) backscatter data for classification and analysis of earthen levees. The overall purpose of this research is to detect problem areas along the levee, such as through-seepage, sand boils, and slough slides; this paper focuses on the detection of slough slides. Since UAVSAR is a quad-polarized L-band (λ = 25 cm) radar, the radar signals penetrate into the soil, which aids in detecting soil property variations in the top layer. The methodology comprises three steps: first, the SAR image is decomposed into three scattering components using the Freeman-Durden decomposition algorithm; then, unsupervised classification is performed based on the polarimetric decomposition parameters entropy (H) and alpha (α); and finally, the image is reclassified using the Wishart classifier. A 3×3 coherency matrix is calculated for each pixel of the radar's compressed Stokes matrix multi-look backscatter data and is used to retrieve these parameters. Different scattering mechanisms, such as surface, dihedral, and volume scattering, are observed to distinguish different targets along the levee. The experimental results show that the Wishart classifier can be used to detect slough slides on levees.
{"title":"Classification of levees using polarimetric Synthetic Aperture Radar (SAR) imagery","authors":"Lalitha Dabbiru, J. Aanstoos, N. Younan","doi":"10.1109/AIPR.2010.5759703","DOIUrl":"https://doi.org/10.1109/AIPR.2010.5759703","url":null,"abstract":"The recent catastrophe caused by hurricane Katrina emphasizes the importance of examination of levees to improve the condition of those that are prone to failure during floods. On-site inspection of levees is costly and time-consuming, so there is a need to develop efficient techniques based on remote sensing technologies to identify levees that are more vulnerable to failure under flood loading. This research uses NASA JPL's Uninhabited Aerial Vehicle Synthetic Aperture Radar (UAVSAR) backscatter data for classification and analysis of earthen levees. The overall purpose of this research is to detect the problem areas along the levee such as through-seepage, sand boils and slough slides. This paper focuses on detection of slough slides. Since the UAVSAR is a quad-polarized L-band (λ = 25 cm) radar, the radar signals penetrate into the soil which aids in detecting soil property variations in the top layer. The research methodology comprises three steps: initially the SAR image is classified into three scattering components using the Freeman-Durden decomposition algorithm; then unsupervised classification is performed based on the polarimetric decomposition parameters: entropy (H) and alpha (α); and finally reclassified using the Wishart classifier. A 3×3 coherency matrix is calculated for each pixel of the radar's compressed Stokes matrix multi-look backscatter data and is used to retrieve these parameters. Different scattering mechanisms like surface scattering, dihedral scattering and volume scattering are observed to distinguish different targets along the levee. The experimental results show that the Wishart classifier can be used to detect slough slides on levees.","PeriodicalId":128378,"journal":{"name":"2010 IEEE 39th Applied Imagery Pattern Recognition Workshop (AIPR)","volume":"63 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130287836","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Successful design of biometric tests in a constrained environment
Pub Date: 2010-10-01 | DOI: 10.1109/AIPR.2010.5759713
V. Dvornychenko
The National Institute of Standards and Technology (NIST), with participation from the biometrics community, conducts evaluations of biometrics-based verification and identification systems. Among these, one of the most challenging is the automated matching of latent fingerprints, which involves several special difficulties. First, since participation in these tests is voluntary and at the expense of the participant, NIST must exercise moderation in what software, and how much, it requests. As a result, it may not be possible to design tests that cover and resolve all possible outcomes, and conclusions may have to be inferred from studies with limited results.
{"title":"Successful design of biometric tests in a constrained environment","authors":"V. Dvornychenko","doi":"10.1109/AIPR.2010.5759713","DOIUrl":"https://doi.org/10.1109/AIPR.2010.5759713","url":null,"abstract":"The National Institute of Standards and Technology (NIST), with participation of the biometrics community, conducts evaluations of biometrics-based verification and identification systems. Of these, one of the more challenging is that of automated matching of latent fingerprints. There are many special challenges involved. First, since participation in these tests is voluntary and at the expense of the participant, NIST needs to exercise moderation in what, and how much, software is requested. As a result, it may not be possible to design tests which cover and resolve all possible outcomes. Conclusions may have to be inferred from studies that have limited results.","PeriodicalId":128378,"journal":{"name":"2010 IEEE 39th Applied Imagery Pattern Recognition Workshop (AIPR)","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121865314","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Tracking cables in sonar and optical imagery
Pub Date: 2010-10-01 | DOI: 10.1109/AIPR.2010.5759686
J. Isaacs, R. Goroshin
The classical paradigm of line and curve detection in images, as prescribed by the Hough transform, breaks down in cluttered and noisy imagery. In this paper we present an upgraded and ultimately more robust approach to line detection. The classical approach is low-pass filtering, followed by edge detection, followed by application of the Hough transform; peaks in the Hough transform correspond to straight line segments in the image. In our approach we replace low-pass filtering with anisotropic diffusion; we replace edge detection with phase analysis of frequency components; and finally, lines corresponding to peaks in the Hough transform are statistically analyzed in the context of sampling distributions to reveal the most prominent and likely line segments (especially when the line thickness is known a priori). The technique is demonstrated on real and synthetic aperture sonar (SAS) imagery.
{"title":"Tracking cables in sonar and optical imagery","authors":"J. Isaacs, R. Goroshin","doi":"10.1109/AIPR.2010.5759686","DOIUrl":"https://doi.org/10.1109/AIPR.2010.5759686","url":null,"abstract":"The classical paradigm of line and curve detection in images, as prescribed by the Hough transform, breaks down in cluttered and noisy imagery. In this paper we present an \"upgraded\" and ultimately more robust approach to line detection in images. The classical approach to line detection in imagery is low-pass filtering, followed by edge detection, followed by the application of the Hough transform. Peaks in the Hough transform correspond to straight line segments in the image. In our approach we replace low pass filtering by anisotropic diffusion; we replace edge detection by phase analysis of frequency components; and finally, lines corresponding to peaks in the Hough transform are statistically analyzed to reveal the most prominent and likely line segments (especially if the line thickness is known a priori) in the context of sampling distributions. The technique is demonstrated on real and synthetic aperture sonar (SAS) imagery.","PeriodicalId":128378,"journal":{"name":"2010 IEEE 39th Applied Imagery Pattern Recognition Workshop (AIPR)","volume":"50 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127429273","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A manifold based methodology for color constancy
Pub Date: 2010-10-01 | DOI: 10.1109/AIPR.2010.5759707
A. Mathew, A. Alex, V. Asari
In this paper, we propose a manifold-based methodology for color constancy. We observe that the center-surround information of an image forms a manifold in color space, and the relationship between points on this manifold is modeled as a line. The human visual system is capable of learning these relationships; this is the basis of color constancy. For illumination correction, the image under the reference illumination is convolved with a wide Gaussian function to extract global illumination information. This global illumination information forms a manifold in color space, which the system learns as a line. An image under a different color perception forms a different manifold. To transform the color perception of a scene under a given illumination to the reference perception, the color relationships of the reference are applied to the new image by projecting its pixels onto the line representing the manifold of the reference color perception. The model can thus color-correct images with differing color perceptions to a learnt reference perception. Unlike other approaches, this method converges in a single step and is therefore faster.
{"title":"A manifold based methodology for color constancy","authors":"A. Mathew, A. Alex, V. Asari","doi":"10.1109/AIPR.2010.5759707","DOIUrl":"https://doi.org/10.1109/AIPR.2010.5759707","url":null,"abstract":"In this paper, we propose a manifold-based methodology for color constancy. It is observed that the center surround information of an image creates a manifold in color space. The relationship between the points in the manifold is modeled as a line. The human visual system is capable of learning these relationships. This is the basis of color constancy. In illumination correction, the image in the reference illumination is operated on with a wide Gaussian function to extract the global illumination information. The global illumination information creates a manifold in color space which is learnt by the system as a line. An image in a different color perception creates a different manifold in color space. To transform the color perception of a scene in a given illumination to the reference color perception, the color relationships in the reference color perception are applied on the new image. This is achieved by projecting the pixels in the new image to the line representing the manifold of reference color perception. This model can be used for color correction of images with different color perceptions to a learnt color perception. This method, unlike other approaches, has a single step convergence and hence is faster.","PeriodicalId":128378,"journal":{"name":"2010 IEEE 39th Applied Imagery Pattern Recognition Workshop (AIPR)","volume":"55 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122337286","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Visual attention based detection of signs of anthropogenic activities in satellite imagery
Pub Date: 2010-10-01 | DOI: 10.1109/AIPR.2010.5759693
A. Skurikhin
With the increasing deployment of satellite imaging systems, only a small fraction of the collected data can be subject to expert scrutiny. We present and evaluate a two-tier approach to broad-area search for signs of anthropogenic activity in high-resolution commercial satellite imagery. The method filters image information using semantically oriented interest points, combining Harris corner detection and spatial pyramid matching. The idea is that anthropogenic structures, such as rooftop outlines, fence corners, and road junctions, are locally arranged in specific angular relations to each other; they are often oriented at approximately right angles (known as the rectilinearity relation). Detecting rectilinear structures therefore highlights the regions most likely to contain anthropogenic activity. This is followed by supervised classification of the regions surrounding the detected corner points as anthropogenic versus natural scenes. We consider, in particular, a search for signs of anthropogenic activity in uncluttered areas.
{"title":"Visual attention based detection of signs of anthropogenic activities in satellite imagery","authors":"A. Skurikhin","doi":"10.1109/AIPR.2010.5759693","DOIUrl":"https://doi.org/10.1109/AIPR.2010.5759693","url":null,"abstract":"With increasing deployment of satellite imaging systems, only a small fraction of collected data can be subject to expert scrutiny. We present and evaluate a two-tier approach to broad area search for signs of anthropogenic activities in highresolution commercial satellite imagery. The method filters image information using semantically oriented interest points by combining Harris corner detection and spatial pyramid matching. The idea is that anthropogenic structures, such as rooftop outlines, fence corners, road junctions, are locally arranged in specific angular relations to each other. They are often oriented at approximately right angles to each other (which is known as rectilinearity relation). Detecting rectilinear structures provides an opportunity to highlight regions most likely to contain anthropogenic activity. This is followed by supervised classification of regions surrounding the detected corner points as anthropogenic vs. natural scenes. We consider, in particular, a search for signs of anthropogenic activities in uncluttered areas.","PeriodicalId":128378,"journal":{"name":"2010 IEEE 39th Applied Imagery Pattern Recognition Workshop (AIPR)","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122445568","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Performance evaluation of color image retrieval
Pub Date: 2010-10-01 | DOI: 10.1109/AIPR.2010.5759680
E. Mendi, Coskun Bayrak
In this paper, we investigate the capabilities of four approaches to image search for a content-based image retrieval (CBIR) system. The first two approaches compare images using color histograms in the RGB and HSV spaces, respectively. The other two are based on quantitative image fidelity measures, Mean Square Error (MSE) and the Structural Similarity Index (SSIM), which provide a degree of similarity between two images. The precision of each approach was evaluated on a public image database containing 1000 images, and the retrieval effectiveness of each method was measured.
{"title":"Performance evaluation of color image retrieval","authors":"E. Mendi, Coskun Bayrak","doi":"10.1109/AIPR.2010.5759680","DOIUrl":"https://doi.org/10.1109/AIPR.2010.5759680","url":null,"abstract":"In this paper, we have investigated the capabilities of 4 approaches for image search for a CBIR system. First two approaches are based on comparing the images using color histograms of RGB and HSV spaces, respectively. The other 2 approaches are based on two quantitative image fidelity measurements, Mean Square Error (MSE) and Structural Similarity Index (SSIM), which provide a degree of similarity between two images. The precision performances of approaches have been evaluated by using a public image database containing 1000 images. Finally effectiveness of retrieval has been measured for each method.","PeriodicalId":128378,"journal":{"name":"2010 IEEE 39th Applied Imagery Pattern Recognition Workshop (AIPR)","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133268888","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Motion imagery metadata standards assist in object and activity classification
Pub Date: 2010-10-01 | DOI: 10.1109/AIPR.2010.5759700
Darrell L. Young
Metadata is considered vital in making sense of ISR sensor data because it provides the context needed to interpret motion imagery. For example, metadata provides the fundamental information needed to associate the imagery with location and time. But, more than that, metadata provides information that can assist in automated video analysis. This paper describes some of the ways that metadata can be used to improve automated video processing.
{"title":"Motion imagery metadata standards assist in object and activity classification","authors":"Darrell L. Young","doi":"10.1109/AIPR.2010.5759700","DOIUrl":"https://doi.org/10.1109/AIPR.2010.5759700","url":null,"abstract":"Metadata is considered vital in making sense of ISR sensor data because it provides the context needed to interpret motion imagery. For example, metadata provides the fundamental information needed to associate the imagery with location and time. But, more than that, metadata provides information that can assist in automated video analysis. This paper describes some of the ways that metadata can be used to improve automated video processing.","PeriodicalId":128378,"journal":{"name":"2010 IEEE 39th Applied Imagery Pattern Recognition Workshop (AIPR)","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133771653","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}