
2010 IEEE 39th Applied Imagery Pattern Recognition Workshop (AIPR): Latest Publications

Optimal wavelet features for an infrared satellite precipitation estimate algorithm
Pub Date : 2010-10-01 DOI: 10.1109/AIPR.2010.5759702
Majid Mahrooghy, V. Anantharaj, N. Younan, J. Aanstoos
A satellite precipitation estimation algorithm based on wavelet features is investigated to find the optimal wavelet features in terms of wavelet family and sliding window size. In this work, infrared satellite images along with ground gauge (radar-corrected) observations are used for rainfall retrieval. The goal of this work is to find an optimal wavelet transform that yields better features for cloud classification and rainfall estimation. Our approach involves the following four steps: 1) segmentation of infrared cloud images into patches; 2) feature extraction using a wavelet-based method; 3) clustering and classification of cloud patches using a neural network; and 4) dynamic application of brightness temperature (Tb) and rain rate relationships derived from satellite observations. The results show that Haar and Symlet wavelets with a 5×5 sliding window yield better estimation performance than other wavelet families and window sizes.
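As a rough illustration of step 2, the sketch below computes sub-band energy features over sliding windows of an infrared cloud patch using PyWavelets. The patch values, the choice of energy features, and the window handling are assumptions made for illustration, not the paper's exact feature vector.

```python
# A minimal sketch of wavelet-based feature extraction, assuming PyWavelets.
# The feature layout is illustrative only.
import numpy as np
import pywt

def wavelet_features(patch, wavelet="haar", window=5):
    """Mean wavelet-energy features over sliding windows of an IR cloud patch."""
    feats = []
    half = window // 2
    # Single-level 2-D DWT of the brightness-temperature patch
    cA, (cH, cV, cD) = pywt.dwt2(patch.astype(float), wavelet)
    for band in (cA, cH, cV, cD):
        # Energy of each sub-band averaged over a window x window neighborhood
        padded = np.pad(band ** 2, half, mode="reflect")
        win_sum = sum(
            padded[i:i + band.shape[0], j:j + band.shape[1]]
            for i in range(window) for j in range(window)
        )
        feats.append(win_sum.mean() / window ** 2)
    return np.array(feats)

# Example: features for a synthetic 32x32 infrared cloud patch
patch = np.random.rand(32, 32) * 100 + 200   # placeholder Tb values in Kelvin
print(wavelet_features(patch, wavelet="sym2", window=5))
```

Looping this over wavelet families (e.g., "haar", "sym2") and window sizes mirrors the comparison reported in the abstract.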
Citations: 1
Vehicle load estimation from observation of vibration response
Pub Date : 2010-10-01 DOI: 10.1109/AIPR.2010.5759718
P. Robertson, W. B. Coney, R. Bobrow
The suspension systems of production automobiles and trucks are designed to support the comfort and safety of human occupants. The response of these vehicles to the road surface is a function of vehicle loading. In this research we demonstrate the automatic monitoring of vehicle load using an optical sensor and a speed bump. This paper investigates the dynamics of vehicle response and describes the software developed to extract vibrational information from video.
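A minimal sketch of one way to pull a vibration signature out of video, assuming a fixed camera and a hand-picked region of interest on the vehicle body; the file name, ROI, and processing below are placeholders rather than the paper's actual optical-sensor pipeline.

```python
# Illustrative sketch: track mean intensity of a body-panel ROI across frames
# and estimate the dominant oscillation frequency with an FFT.
import cv2
import numpy as np

def dominant_vibration_frequency(video_path, roi, fps=None):
    """Return the dominant oscillation frequency (Hz) of mean intensity in roi=(x, y, w, h)."""
    cap = cv2.VideoCapture(video_path)
    fps = fps or cap.get(cv2.CAP_PROP_FPS)
    x, y, w, h = roi
    trace = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        trace.append(gray[y:y + h, x:x + w].mean())
    cap.release()
    sig = np.asarray(trace) - np.mean(trace)       # remove DC component
    spectrum = np.abs(np.fft.rfft(sig))
    freqs = np.fft.rfftfreq(len(sig), d=1.0 / fps)
    return freqs[np.argmax(spectrum[1:]) + 1]      # skip the zero-frequency bin

# Hypothetical usage:
# print(dominant_vibration_frequency("bump_pass.avi", roi=(100, 200, 50, 50)))
```

Since the sprung-mass resonance generally drops as load increases, comparing this frequency against an unloaded baseline gives a coarse load estimate.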
Citations: 2
Pre-attentive detection of depth saliency using stereo vision
Pub Date : 2010-10-01 DOI: 10.1109/AIPR.2010.5759692
M. Z. Aziz, B. Mertsching
A quick estimation of depth is required by artificial vision systems for their survival and navigation through the environment. Following the selection strategy of biological vision, known as visual attention, can help accelerate depth extraction for important and relevant portions of given scenes. Recent studies on depth perception in biological vision indicate that disparity is computed using object detection in the brain. The proposed method uses concepts from these studies and determines the shift that objects undergo between the stereo frames using data about their borders. This enables efficient creation of a depth saliency map for artificial visual attention. Results of the proposed model show success in selecting those locations from stereo scenes that are salient for human perception in terms of depth.
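The paper derives object shifts from border information; as a rough stand-in, the sketch below uses OpenCV block matching on a rectified grayscale stereo pair and rescales the disparity map into a saliency map in [0, 1]. Parameter values are illustrative.

```python
# Rough stand-in for a depth saliency map, assuming rectified grayscale
# stereo pairs; nearer objects (larger disparity) get higher saliency.
import cv2
import numpy as np

def depth_saliency(left_gray, right_gray, num_disparities=64, block_size=15):
    stereo = cv2.StereoBM_create(numDisparities=num_disparities, blockSize=block_size)
    disparity = stereo.compute(left_gray, right_gray).astype(np.float32) / 16.0
    disparity[disparity < 0] = 0                       # suppress invalid matches
    saliency = disparity / (disparity.max() + 1e-6)    # normalize to [0, 1]
    return saliency

# Hypothetical usage:
# left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
# right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)
# sal = depth_saliency(left, right)
```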
Citations: 5
Classification of levees using polarimetric Synthetic Aperture Radar (SAR) imagery
Pub Date : 2010-10-01 DOI: 10.1109/AIPR.2010.5759703
Lalitha Dabbiru, J. Aanstoos, N. Younan
The recent catastrophe caused by Hurricane Katrina emphasizes the importance of examining levees to improve the condition of those that are prone to failure during floods. On-site inspection of levees is costly and time-consuming, so there is a need to develop efficient techniques based on remote sensing technologies to identify levees that are more vulnerable to failure under flood loading. This research uses NASA JPL's Uninhabited Aerial Vehicle Synthetic Aperture Radar (UAVSAR) backscatter data for classification and analysis of earthen levees. The overall purpose of this research is to detect problem areas along the levee such as through-seepage, sand boils, and slough slides. This paper focuses on detection of slough slides. Since the UAVSAR is a quad-polarized L-band (λ = 25 cm) radar, the radar signals penetrate into the soil, which aids in detecting soil property variations in the top layer. The research methodology comprises three steps: initially the SAR image is classified into three scattering components using the Freeman-Durden decomposition algorithm; then unsupervised classification is performed based on the polarimetric decomposition parameters entropy (H) and alpha (α); and finally the result is reclassified using the Wishart classifier. A 3×3 coherency matrix is calculated for each pixel of the radar's compressed Stokes matrix multi-look backscatter data and is used to retrieve these parameters. Different scattering mechanisms such as surface scattering, dihedral scattering, and volume scattering are observed to distinguish different targets along the levee. The experimental results show that the Wishart classifier can be used to detect slough slides on levees.
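A minimal sketch of the H/α computation for a single pixel, assuming a multi-looked Hermitian 3×3 coherency matrix T and following the standard Cloude-Pottier eigendecomposition; the Freeman-Durden and Wishart steps are not shown.

```python
# Cloude-Pottier H/alpha from one 3x3 coherency matrix (sketch).
import numpy as np

def h_alpha(T):
    """Return (entropy, mean alpha in degrees) for one Hermitian 3x3 coherency matrix."""
    eigvals, eigvecs = np.linalg.eigh(T)               # real eigenvalues, ascending
    eigvals = np.clip(eigvals.real, 1e-12, None)
    p = eigvals / eigvals.sum()                        # pseudo-probabilities
    entropy = -np.sum(p * np.log(p) / np.log(3))       # base-3 log, 0 <= H <= 1
    # alpha_i is the angle of the first component of each eigenvector
    alphas = np.degrees(np.arccos(np.abs(eigvecs[0, :])))
    mean_alpha = np.sum(p * alphas)
    return entropy, mean_alpha

# Example with a synthetic, surface-scattering-dominated coherency matrix
T = np.diag([0.95, 0.04, 0.01]).astype(complex)
print(h_alpha(T))   # low entropy, small alpha -> surface scattering
```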
Citations: 7
Successful design of biometric tests in a constrained environment
Pub Date : 2010-10-01 DOI: 10.1109/AIPR.2010.5759713
V. Dvornychenko
The National Institute of Standards and Technology (NIST), with participation of the biometrics community, conducts evaluations of biometrics-based verification and identification systems. Of these, one of the more challenging is that of automated matching of latent fingerprints. There are many special challenges involved. First, since participation in these tests is voluntary and at the expense of the participant, NIST needs to exercise moderation in what, and how much, software is requested. As a result, it may not be possible to design tests which cover and resolve all possible outcomes. Conclusions may have to be inferred from studies that have limited results.
Citations: 0
Tracking cables in sonar and optical imagery
Pub Date : 2010-10-01 DOI: 10.1109/AIPR.2010.5759686
J. Isaacs, R. Goroshin
The classical paradigm of line and curve detection in images, as prescribed by the Hough transform, breaks down in cluttered and noisy imagery. In this paper we present an "upgraded" and ultimately more robust approach to line detection in images. The classical approach to line detection in imagery is low-pass filtering, followed by edge detection, followed by the application of the Hough transform. Peaks in the Hough transform correspond to straight line segments in the image. In our approach we replace low-pass filtering by anisotropic diffusion; we replace edge detection by phase analysis of frequency components; and finally, lines corresponding to peaks in the Hough transform are statistically analyzed to reveal the most prominent and likely line segments (especially if the line thickness is known a priori) in the context of sampling distributions. The technique is demonstrated on real and synthetic aperture sonar (SAS) imagery.
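A minimal sketch of the front end under stated assumptions: a simple Perona-Malik diffusion in place of low-pass filtering, followed by Canny edge detection and an OpenCV Hough transform. The paper's phase-based edge analysis and statistical peak analysis are not reproduced here, and all parameter values are illustrative.

```python
# Perona-Malik anisotropic diffusion followed by Canny + Hough (sketch).
import cv2
import numpy as np

def perona_malik(img, n_iter=20, kappa=30.0, lam=0.2):
    """Simple 4-neighbor Perona-Malik diffusion on a float image."""
    u = img.astype(np.float32)
    for _ in range(n_iter):
        # Finite differences to the four neighbors
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # Edge-stopping conduction coefficients
        cn, cs = np.exp(-(dn / kappa) ** 2), np.exp(-(ds / kappa) ** 2)
        ce, cw = np.exp(-(de / kappa) ** 2), np.exp(-(dw / kappa) ** 2)
        u += lam * (cn * dn + cs * ds + ce * de + cw * dw)
    return u

def detect_lines(sonar_img):
    """Return Hough line candidates from an 8-bit-range grayscale sonar image."""
    smoothed = perona_malik(sonar_img)
    edges = cv2.Canny(np.uint8(np.clip(smoothed, 0, 255)), 50, 150)
    # rho/theta resolution and vote threshold are illustrative values
    return cv2.HoughLines(edges, 1, np.pi / 180, 120)
```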
Citations: 1
A manifold based methodology for color constancy
Pub Date : 2010-10-01 DOI: 10.1109/AIPR.2010.5759707
A. Mathew, A. Alex, V. Asari
In this paper, we propose a manifold-based methodology for color constancy. It is observed that the center-surround information of an image creates a manifold in color space. The relationship between the points in the manifold is modeled as a line. The human visual system is capable of learning these relationships; this is the basis of color constancy. In illumination correction, the image under the reference illumination is operated on with a wide Gaussian function to extract the global illumination information. The global illumination information creates a manifold in color space, which is learnt by the system as a line. An image with a different color perception creates a different manifold in color space. To transform the color perception of a scene under a given illumination to the reference color perception, the color relationships of the reference color perception are applied to the new image. This is achieved by projecting the pixels in the new image onto the line representing the manifold of the reference color perception. This model can be used to color-correct images with different color perceptions toward a learnt color perception. This method, unlike other approaches, converges in a single step and is hence faster.
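One rough reading of this idea, offered only as an illustration: fit the "line" by PCA to the Gaussian-blurred (global illumination) colors of the reference image, then correct a new image by projecting its pixels onto that line. The Gaussian width and the projection rule below are assumptions, not the paper's exact formulation.

```python
# Illustrative sketch: learn an illumination line from a reference image and
# project a new image's pixels onto it.
import cv2
import numpy as np

def learn_illumination_line(reference, sigma=30):
    """Fit a line (mean + principal direction) to the blurred colors of a 3-channel image."""
    blurred = cv2.GaussianBlur(reference.astype(np.float32), (0, 0), sigma)
    pts = blurred.reshape(-1, 3)
    mean = pts.mean(axis=0)
    # Principal direction of the global-illumination color manifold
    _, _, vt = np.linalg.svd(pts - mean, full_matrices=False)
    return mean, vt[0]

def project_to_line(image, mean, direction):
    """Project each pixel color onto the learnt line."""
    pts = image.reshape(-1, 3).astype(np.float32) - mean
    t = pts @ direction                      # coordinate along the learnt line
    corrected = np.outer(t, direction) + mean
    return np.clip(corrected, 0, 255).reshape(image.shape).astype(np.uint8)

# Hypothetical usage:
# ref = cv2.imread("reference.png"); new = cv2.imread("new_illumination.png")
# mean, direction = learn_illumination_line(ref)
# out = project_to_line(new, mean, direction)
```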
Citations: 3
Visual attention based detection of signs of anthropogenic activities in satellite imagery
Pub Date : 2010-10-01 DOI: 10.1109/AIPR.2010.5759693
A. Skurikhin
With increasing deployment of satellite imaging systems, only a small fraction of collected data can be subject to expert scrutiny. We present and evaluate a two-tier approach to broad-area search for signs of anthropogenic activities in high-resolution commercial satellite imagery. The method filters image information using semantically oriented interest points by combining Harris corner detection and spatial pyramid matching. The idea is that anthropogenic structures, such as rooftop outlines, fence corners, and road junctions, are locally arranged in specific angular relations to each other. They are often oriented at approximately right angles to each other (known as a rectilinearity relation). Detecting rectilinear structures provides an opportunity to highlight regions most likely to contain anthropogenic activity. This is followed by supervised classification of the regions surrounding the detected corner points as anthropogenic vs. natural scenes. We consider, in particular, a search for signs of anthropogenic activities in uncluttered areas.
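A minimal sketch of the interest-point filtering, assuming OpenCV: Harris corner detection to flag candidate locations, plus a crude rectilinearity cue from the local gradient-orientation histogram folded into 0-90 degrees. The spatial pyramid matching and the supervised scene classifier are not reproduced, and the thresholds are illustrative.

```python
# Harris corners plus a crude rectilinearity cue (sketch).
import cv2
import numpy as np

def candidate_corners(gray, quality=0.01):
    """Return (x, y) locations whose Harris response exceeds a fraction of the maximum."""
    response = cv2.cornerHarris(np.float32(gray), 2, 3, 0.04)   # blockSize=2, ksize=3, k=0.04
    ys, xs = np.where(response > quality * response.max())
    return list(zip(xs, ys))

def rectilinearity_score(gray_patch, bins=18):
    """Peakedness of the gradient-orientation histogram folded into 0-90 degrees."""
    gx = cv2.Sobel(gray_patch, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray_patch, cv2.CV_32F, 0, 1)
    angles = (np.degrees(np.arctan2(gy, gx)) % 90).ravel()   # right-angle pairs fold together
    hist, _ = np.histogram(angles, bins=bins, range=(0, 90),
                           weights=np.hypot(gx, gy).ravel())
    hist = hist / (hist.sum() + 1e-6)
    return hist.max()      # peaked histogram -> dominant right-angle structure
```

Patches around strong corners with a high rectilinearity score are the ones most likely to contain rooftops, fences, or road junctions.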
Citations: 4
Performance evaluation of color image retrieval
Pub Date : 2010-10-01 DOI: 10.1109/AIPR.2010.5759680
E. Mendi, Coskun Bayrak
In this paper, we investigate the capabilities of four approaches to image search for a CBIR system. The first two approaches compare images using color histograms in the RGB and HSV spaces, respectively. The other two approaches are based on two quantitative image fidelity measurements, Mean Square Error (MSE) and Structural Similarity Index (SSIM), which provide a degree of similarity between two images. The precision of each approach has been evaluated using a public image database containing 1000 images. Finally, the retrieval effectiveness of each method has been measured.
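A minimal sketch of two of the four similarity measures, assuming OpenCV and scikit-image: HSV histogram correlation and SSIM. The database layout and the precision computation over the 1000-image set are not shown.

```python
# HSV histogram correlation and SSIM as retrieval scores (sketch).
import cv2
import numpy as np
from skimage.metrics import structural_similarity

def hsv_histogram_similarity(img_a, img_b, bins=(8, 8, 8)):
    """Correlation between normalized 3-D HSV histograms of two BGR images."""
    def hist(img):
        hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
        h = cv2.calcHist([hsv], [0, 1, 2], None, list(bins), [0, 180, 0, 256, 0, 256])
        return cv2.normalize(h, h).flatten()
    return cv2.compareHist(hist(img_a), hist(img_b), cv2.HISTCMP_CORREL)

def ssim_similarity(img_a, img_b):
    """SSIM between grayscale versions of two images, resizing the second to match."""
    ga = cv2.cvtColor(img_a, cv2.COLOR_BGR2GRAY)
    gb = cv2.cvtColor(cv2.resize(img_b, ga.shape[::-1]), cv2.COLOR_BGR2GRAY)
    return structural_similarity(ga, gb)
```

Ranking the database by either score and counting relevant images among the top results yields the kind of precision figures the paper compares.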
Citations: 1
Motion imagery metadata standards assist in object and activity classification
Pub Date : 2010-10-01 DOI: 10.1109/AIPR.2010.5759700
Darrell L. Young
Metadata is considered vital in making sense of ISR sensor data because it provides the context needed to interpret motion imagery. For example, metadata provides the fundamental information needed to associate the imagery with location and time. But, more than that, metadata provides information that can assist in automated video analysis. This paper describes some of the ways that metadata can be used to improve automated video processing.
Citations: 1