
The First Asian Conference on Pattern Recognition: Latest Publications

Spatio-Temporal Interest Points Chain (STIPC) for activity recognition
Pub Date : 2011-11-01 DOI: 10.1109/ACPR.2011.6166581
Fei Yuan, Gui-Song Xia, H. Sahbi, V. Prinet
We present a novel feature, named the Spatio-Temporal Interest Points Chain (STIPC), for activity representation and recognition. This feature consists of a set of trackable spatio-temporal interest points that correspond to a series of discontinuous motions within the long-term motion of an object or its parts. With this chain feature, we not only capture the discriminative motion information that space-time interest-point-like features try to pursue, but also build the connections between them. Specifically, we first extract point trajectories from the image sequences, then partition the points on each trajectory into two different yet closely related kinds: discontinuous motion points and continuous motion points. We extract local space-time features around the discontinuous motion points and use a chain model to represent them. Furthermore, we introduce a chain descriptor to encode the temporal relationships between these interdependent local space-time features. Experimental results on challenging datasets show that our STIPC features improve on local space-time features and achieve state-of-the-art results.
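As a rough illustration of the trajectory-partitioning step described above, the sketch below labels the points of a single tracked trajectory as continuous or discontinuous motion points using a simple velocity-change criterion. The criterion and thresholds are assumptions made for illustration, not the values used in the paper.

```python
# Hypothetical sketch: split one point trajectory into continuous and
# discontinuous motion points using simple velocity-change heuristics.
import numpy as np

def label_discontinuous_points(traj, angle_thresh=np.pi / 4, speed_ratio_thresh=2.0):
    """traj: (T, 2) array of (x, y) positions of one tracked interest point.
    Returns a boolean mask over the T points, True where motion is discontinuous."""
    v = np.diff(traj, axis=0)                               # frame-to-frame displacements
    speed = np.linalg.norm(v, axis=1) + 1e-8
    cos_a = np.sum(v[1:] * v[:-1], axis=1) / (speed[1:] * speed[:-1])
    angle = np.arccos(np.clip(cos_a, -1.0, 1.0))            # turn between consecutive steps
    ratio = np.maximum(speed[1:], speed[:-1]) / np.minimum(speed[1:], speed[:-1])
    discontinuous = (angle > angle_thresh) | (ratio > speed_ratio_thresh)
    mask = np.zeros(len(traj), dtype=bool)
    mask[1:-1] = discontinuous                              # endpoints left as continuous
    return mask

# The ordered sequence of discontinuous points forms one chain; local
# space-time features would then be extracted around each of these points.
traj = np.cumsum(np.random.randn(50, 2), axis=0)            # toy trajectory
chain_points = traj[label_discontinuous_points(traj)]
```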
Citations: 6
Modeling spectral smoothness principle for monaural voiced speech separation
Pub Date : 2011-11-01 DOI: 10.1109/ACPR.2011.6166549
Wei Jiang, Wenju Liu, Pengfei Hu
The smoothness of the spectral envelope is a well-known attribute of clean speech. In this study, this principle is modeled through the oscillation degree of each time-frequency (T-F) unit and then incorporated into a computational auditory scene analysis (CASA) system for monaural voiced speech separation. Specifically, the oscillation degree of the autocorrelation function (ODACF) and of the envelope autocorrelation function (ODEACF) are extracted for each T-F unit and then utilized in T-F unit labeling. Experimental results indicate that target units and interference units are distinguished more effectively by incorporating the spectral smoothness principle than by using the harmonic principle alone, and clear segregation improvements are obtained.
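To make the oscillation-degree idea concrete, the sketch below computes a crude oscillation measure on the autocorrelation function of a T-F unit signal and labels smooth units as likely target-dominant. The measure and the threshold are illustrative assumptions; the paper's exact ODACF/ODEACF definitions are not reproduced here.

```python
import numpy as np

def autocorrelation(x):
    """Normalized autocorrelation of a 1-D T-F unit signal."""
    x = x - x.mean()
    acf = np.correlate(x, x, mode="full")[len(x) - 1:]
    return acf / (acf[0] + 1e-12)

def oscillation_degree(x):
    """Rough oscillation measure: mean absolute second difference of the ACF.
    Smooth (clean-speech-like) responses give small values."""
    return float(np.mean(np.abs(np.diff(autocorrelation(x), n=2))))

def label_tf_unit(unit_signal, threshold=0.05):
    # True -> likely target-dominant (smooth), False -> likely interference
    return oscillation_degree(unit_signal) < threshold
```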
Citations: 1
Sparse bilinear preserving projections
Pub Date : 2011-11-01 DOI: 10.1109/ACPR.2011.6166647
Zhihui Lai, Qingcai Chen, Zhong Jin
Linear dimensionality reduction techniques have attracted wide attention in the fields of computer vision and pattern recognition. In this paper, we propose a novel framework called Sparse Bilinear Preserving Projections (SBPP) for image feature extraction, generalizing image-based bilinear preserving projections to the sparse case. Different from popular bilinear projection techniques, the projections of SBPP are sparse, i.e., most elements in the projections are zero. In the proposed framework, we first use a local neighborhood graph to model the manifold structure of the data set, and then combine spectral analysis with L1-norm regression via the Elastic Net to iteratively learn the sparse bilinear projections, which optimally preserve the local geometric structure of the image manifold. Experiments on several databases show that SBPP is competitive with state-of-the-art techniques.
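The sketch below illustrates the general "graph spectral targets plus Elastic Net regression" recipe that SBPP builds on, in its plain one-sided linear form. The bilinear, iterative two-sided learning of SBPP is not reproduced, and the graph construction and regularization settings are assumptions.

```python
import numpy as np
from scipy.sparse.csgraph import laplacian
from sklearn.linear_model import ElasticNet
from sklearn.neighbors import kneighbors_graph

def sparse_spectral_projection(X, n_components=5, k=5, alpha=0.01, l1_ratio=0.5):
    """X: (n_samples, n_features). Returns a sparse projection matrix (n_features, n_components)."""
    A = kneighbors_graph(X, k, mode="connectivity", include_self=False)
    A = 0.5 * (A + A.T)                                   # symmetric neighborhood graph
    L = laplacian(A, normed=True).toarray()
    vals, vecs = np.linalg.eigh(L)                        # ascending eigenvalues
    Y = vecs[:, 1:n_components + 1]                       # skip the trivial constant eigenvector
    W = np.zeros((X.shape[1], n_components))
    for j in range(n_components):                         # one sparse regression per embedding dimension
        reg = ElasticNet(alpha=alpha, l1_ratio=l1_ratio, max_iter=5000)
        reg.fit(X, Y[:, j])
        W[:, j] = reg.coef_                               # many coefficients shrink to exactly zero
    return W
```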
Citations: 0
Matrix Exponential LPP for face recognition
Pub Date : 2011-11-01 DOI: 10.1109/ACPR.2011.6166706
Sujing Wang, Chengcheng Jia, Huiling Chen, Bo Wu, Chunguang Zhou
Face recognition plays an important role in computer vision. Recent research shows that high-dimensional face images lie on or close to a low-dimensional manifold. LPP is a widely used manifold-based dimensionality reduction technique, but it suffers from two problems: (1) the Small Sample Size problem, and (2) performance that is sensitive to the neighborhood size k. To address these problems, this paper proposes Matrix Exponential LPP. To avoid the singular matrix, the proposed algorithm introduces the matrix exponential to obtain more valuable information for LPP. Experiments were conducted on two face databases, Yale and Georgia Tech, and the results show that the proposed algorithm performs better than LPP.
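A minimal sketch of the matrix-exponential idea, assuming a standard LPP graph built from a k-nearest-neighbor adjacency: the usual LPP matrices are exponentiated, which makes both sides of the generalized eigenproblem full rank and sidesteps the small-sample-size singularity. The weighting scheme, neighborhood size, and preprocessing are assumptions, not the paper's exact choices.

```python
import numpy as np
from scipy.linalg import eigh, expm
from sklearn.neighbors import kneighbors_graph

def matrix_exponential_lpp(X, n_components=10, k=5):
    """X: (n_samples, n_features). Returns a projection matrix (n_features, n_components).
    In practice X is usually normalized or PCA-reduced first so the exponentials stay well scaled."""
    W = kneighbors_graph(X, k, mode="connectivity", include_self=False).toarray()
    W = np.maximum(W, W.T)                       # symmetric 0/1 adjacency
    D = np.diag(W.sum(axis=1))
    L = D - W                                    # graph Laplacian
    S_l = X.T @ L @ X                            # LPP "locality" matrix
    S_d = X.T @ D @ X                            # LPP "degree" matrix
    # Exponentials of symmetric matrices are symmetric positive definite, so the
    # generalized eigenproblem below is well posed even when X.T @ X is singular.
    vals, vecs = eigh(expm(S_l), expm(S_d))
    return vecs[:, :n_components]                # directions with the smallest eigenvalues
```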
Citations: 1
3D LIDAR-based ground segmentation
Pub Date : 2011-11-01 DOI: 10.1109/ACPR.2011.6166587
Tongtong Chen, Bin Dai, Daxue Liu, Bo Zhang, Qixu Liu
Obtaining a comprehensive model of large and complex ground is typically crucial for autonomous driving in both urban and countryside environments. This paper presents an improved ground segmentation method for 3D LIDAR point clouds. Our approach builds on a polar grid map that is divided into sectors; a 1D Gaussian process (GP) regression model and the Incremental Sample Consensus (INSAC) algorithm are then used to extract the ground in every sector. Experiments are carried out on an autonomous vehicle in different outdoor scenes, and the results are compared with those of an existing method. We show that our method achieves more promising performance.
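The sketch below gives a much-simplified version of the sector-wise pipeline: points are binned into polar sectors, a 1D Gaussian process is fitted to range versus height of low seed points, and points near the GP prediction are iteratively accepted as ground, a crude stand-in for the INSAC loop. The seeding rule, kernel, and thresholds are assumptions for illustration.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def ground_mask(points, n_sectors=64, seed_height=0.3, accept_tol=0.2, n_iter=3):
    """points: (N, 3) array of x, y, z coordinates. Returns a boolean ground mask."""
    x, y, z = points.T
    angle = np.arctan2(y, x)
    rng = np.hypot(x, y)
    sector = ((angle + np.pi) / (2 * np.pi) * n_sectors).astype(int) % n_sectors
    mask = np.zeros(len(points), dtype=bool)
    for s in range(n_sectors):
        idx = np.where(sector == s)[0]
        if len(idx) < 10:
            continue
        r, h = rng[idx], z[idx]
        ground = h < (h.min() + seed_height)               # initial low seed points
        gp = GaussianProcessRegressor(kernel=RBF(10.0) + WhiteKernel(0.01))
        for _ in range(n_iter):
            if ground.sum() < 3:
                break
            gp.fit(r[ground, None], h[ground])             # 1D GP: height as a function of range
            pred = gp.predict(r[:, None])
            ground = np.abs(h - pred) < accept_tol          # grow the consensus set
        mask[idx[ground]] = True
    return mask
```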
Citations: 37
Contextual Constrained Independent Component Analysis based foreground detection for indoor surveillance
Pub Date : 2011-11-01 DOI: 10.1109/ACPR.2011.6166656
Zhong Zhang, Baihua Xiao, Chunheng Wang, Wen Zhou, Shuang Liu
Recently, Independent Component Analysis based foreground detection has been proposed for indoor surveillance applications where the foreground tends to move slowly or remain still. However, such methods often produce fragmented foreground objects. In this paper, we propose a novel foreground detection method named Contextual Constrained Independent Component Analysis (CCICA) to tackle this problem. In our method, contextual constraints, which express the similarity relationships among neighboring pixels, are explicitly added to the optimization objective function. In this way, the obtained de-mixing matrix can produce a complete foreground compared with the previous ICA model. In addition, our method is robust to indoor illumination changes and features a high processing speed. Two sets of image sequences involving room lights switching on/off and doors opening/closing are tested. The experimental results clearly demonstrate an improvement over the basic ICA model and the image difference method.
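For reference, the sketch below shows only the baseline ICA foreground-separation step that CCICA extends: a background frame and the current frame are treated as two mixtures, and FastICA estimates independent components, one of which carries the foreground. The contextual-constraint term that CCICA adds to the objective is not implemented here.

```python
import numpy as np
from sklearn.decomposition import FastICA

def ica_foreground(background, frame):
    """background, frame: 2-D grayscale arrays of equal shape.
    Returns a foreground map (same shape; sign and scale are arbitrary)."""
    h, w = frame.shape
    X = np.vstack([background.ravel(), frame.ravel()]).astype(float)   # 2 mixtures x N pixels
    ica = FastICA(n_components=2, whiten="unit-variance", random_state=0)
    S = ica.fit_transform(X.T).T                                       # 2 independent components
    # Take the component least correlated with the background as the foreground.
    corr = [abs(np.corrcoef(s, X[0])[0, 1]) for s in S]
    return S[int(np.argmin(corr))].reshape(h, w)
```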
Citations: 5
Discriminative model selection for Gaussian mixture models for classification
Pub Date : 2011-11-01 DOI: 10.1109/ACPR.2011.6166658
Xiao-Hua Liu, Cheng-Lin Liu
The Gaussian mixture model (GMM) has been widely used in pattern recognition problems for clustering and probability density estimation. Given the number of mixture components (the model order), the parameters of a GMM can be estimated by the EM algorithm; model order selection, however, remains an open problem. For classification purposes, we propose a discriminative model selection method that optimizes the orders of all classes. Starting from GMMs initialized in some way, the orders of all classes are adjusted heuristically to improve the cross-validated classification accuracy. The model orders selected in this discriminative way are expected to give higher generalization accuracy than class-wise model selection. Our experimental results on several UCI datasets demonstrate the superior classification performance of the proposed method.
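A sketch of one plausible greedy search of the kind described: start every class at order one and repeatedly try incrementing a single class's order, keeping the change only when the cross-validated accuracy of the resulting GMM Bayes classifier improves. The search schedule, covariance type, and stopping rule are assumptions, not the paper's exact procedure.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.model_selection import StratifiedKFold

def gmm_bayes_cv_accuracy(X, y, orders, n_splits=3):
    """Cross-validated accuracy of a GMM Bayes classifier with per-class orders."""
    classes = np.unique(y)
    accs = []
    for tr, te in StratifiedKFold(n_splits, shuffle=True, random_state=0).split(X, y):
        scores = np.zeros((len(te), len(classes)))
        for i, c in enumerate(classes):
            Xc = X[tr][y[tr] == c]
            gmm = GaussianMixture(orders[c], covariance_type="diag", random_state=0).fit(Xc)
            scores[:, i] = gmm.score_samples(X[te]) + np.log(len(Xc) / len(tr))  # log-likelihood + log prior
        accs.append(np.mean(classes[scores.argmax(axis=1)] == y[te]))
    return float(np.mean(accs))

def select_orders(X, y, max_order=8):
    """Greedy discriminative selection of the per-class model orders."""
    classes = np.unique(y)
    orders = {c: 1 for c in classes}
    best = gmm_bayes_cv_accuracy(X, y, orders)
    improved = True
    while improved:
        improved = False
        for c in classes:
            if orders[c] >= max_order:
                continue
            trial = {**orders, c: orders[c] + 1}
            score = gmm_bayes_cv_accuracy(X, y, trial)
            if score > best:                      # keep only accuracy-improving increments
                best, orders, improved = score, trial, True
    return orders, best
```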
Citations: 3
Silhouette extraction based on time-series statistical modeling and k-means clustering
Pub Date : 2011-11-01 DOI: 10.1109/ACPR.2011.6166672
A. Hamad, N. Tsumura
This paper proposes a simple and robust method to detect and extract silhouettes from the video sequence of a static camera based on a background subtraction technique. The proposed method analyses the pixel history as a time series of observations. A robust motion detection technique based on kernel density estimation is presented. Two consecutive stages of the k-means clustering algorithm are utilized to identify the most reliable background regions and decrease false positives. A pixel- and object-based updating mechanism is presented to cope with challenges such as gradual and sudden illumination changes, ghost appearance, and non-stationary background objects. Experimental results show the efficiency and robustness of the proposed method for detecting and extracting silhouettes in outdoor and indoor environments.
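As a concrete illustration of the kernel-density-estimation step, the sketch below keeps a short history of frames per pixel and marks a pixel as foreground when the estimated density of its current intensity under the background samples is low. The bandwidth and threshold are assumptions, and the two-stage k-means refinement and updating mechanism from the paper are not reproduced.

```python
import numpy as np

def kde_foreground_mask(history, frame, bandwidth=10.0, threshold=1e-3):
    """history: (K, H, W) stack of recent grayscale frames; frame: (H, W).
    Returns a boolean foreground mask."""
    diff = frame[None, :, :].astype(float) - history.astype(float)    # (K, H, W)
    kernels = np.exp(-0.5 * (diff / bandwidth) ** 2)                  # Gaussian kernel per background sample
    density = kernels.mean(axis=0) / (bandwidth * np.sqrt(2.0 * np.pi))
    return density < threshold                                        # low background density -> motion
```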
Citations: 1
Detecting multiple symmetries with extended SIFT
Pub Date : 2011-11-01 DOI: 10.1109/ACPR.2011.6166683
Qian Chen, Haiyuan Wu, H. Taki
This paper describes an effective method for detecting multiple symmetric objects in an image. A "pseudo-affine invariant SIFT" is used to detect symmetric feature pairs in perspective images. Candidate symmetry axes are estimated from every two symmetric feature pairs, and the axis supported by the most symmetric feature pairs is selected as the most relevant symmetry axis of a symmetric object. The symmetric feature pairs supporting this axis are then used to detect other symmetry axes of the same symmetric object. After eliminating the pairs that support already detected axes, this procedure is applied repeatedly to the remaining symmetric feature pairs so that all symmetric objects in the image are detected. The effectiveness of the method has been confirmed through several experiments using real images and common image databases.
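The sketch below illustrates the voting idea in isolation: each matched symmetric pair proposes an axis (the perpendicular bisector of the segment joining the two points), votes are accumulated over (angle, offset) bins, and the best-supported bin gives the dominant symmetry axis. The pseudo-affine-invariant SIFT matching itself is not reproduced, and the bin sizes are assumptions.

```python
import numpy as np

def dominant_symmetry_axis(pairs, angle_bins=180, rho_bins=200, rho_max=1000.0):
    """pairs: (N, 2, 2) array of matched point pairs ((x1, y1), (x2, y2)).
    Returns (theta, rho) of the best-supported axis  x*cos(theta) + y*sin(theta) = rho."""
    p1, p2 = pairs[:, 0].astype(float), pairs[:, 1].astype(float)
    mid = 0.5 * (p1 + p2)                               # the axis passes through the midpoint
    d = p2 - p1
    theta = np.arctan2(d[:, 1], d[:, 0]) % np.pi        # axis normal lies along the pair direction
    rho = mid[:, 0] * np.cos(theta) + mid[:, 1] * np.sin(theta)
    hist, t_edges, r_edges = np.histogram2d(
        theta, rho, bins=[angle_bins, rho_bins],
        range=[[0.0, np.pi], [-rho_max, rho_max]])
    i, j = np.unravel_index(hist.argmax(), hist.shape)   # bin with the most supporting pairs
    return 0.5 * (t_edges[i] + t_edges[i + 1]), 0.5 * (r_edges[j] + r_edges[j + 1])
```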
Citations: 0
Interesting region detection in aerial video using Bayesian topic models
Pub Date : 2011-11-01 DOI: 10.1109/acpr.2011.6166550
Jiewei Wang, Yunhong Wang, Zhaoxiang Zhang
{"title":"Interesting region detection in aerial video using Bayesian topic models","authors":"Jiewei Wang, Yunhong Wang, Zhaoxiang Zhang","doi":"10.1109/acpr.2011.6166550","DOIUrl":"https://doi.org/10.1109/acpr.2011.6166550","url":null,"abstract":"","PeriodicalId":287232,"journal":{"name":"The First Asian Conference on Pattern Recognition","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130860536","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1