
The First Asian Conference on Pattern Recognition: Latest Publications

Spatio-Temporal Interest Points Chain (STIPC) for activity recognition
Pub Date : 2011-11-01 DOI: 10.1109/ACPR.2011.6166581
Fei Yuan, Gui-Song Xia, H. Sahbi, V. Prinet
We present a novel feature, named Spatio-Temporal Interest Points Chain (STIPC), for activity representation and recognition. This new feature consists of a set of trackable spatio-temporal interest points, which correspond to a series of discontinuous motions within the long-term motion of an object or one of its parts. With this chain feature, we not only capture the discriminative motion information that space-time interest point-like features try to capture, but also build the connections between them. Specifically, we first extract point trajectories from the image sequences, then partition the points on each trajectory into two different yet closely related kinds: discontinuous motion points and continuous motion points. We extract local space-time features around the discontinuous motion points and use a chain model to represent them. Furthermore, we introduce a chain descriptor to encode the temporal relationships between these interdependent local space-time features. Experimental results on challenging datasets show that our STIPC feature improves on local space-time features and achieves state-of-the-art results.
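As an illustration of the trajectory-partitioning step described in the abstract, the sketch below splits a point trajectory into discontinuous and continuous motion points and chains the discontinuous ones in temporal order. The turning-angle criterion and the `angle_thresh` value are illustrative assumptions; the paper's exact discontinuity measure and chain descriptor are not specified in the abstract.

```python
import numpy as np

def partition_trajectory(traj, angle_thresh=np.deg2rad(45.0)):
    """traj: (T, 2) array of point positions over T frames.
    Returns indices of discontinuous and continuous motion points."""
    disp = np.diff(traj, axis=0)                      # displacement between frames
    norms = np.linalg.norm(disp, axis=1) + 1e-8
    cosang = np.sum(disp[:-1] * disp[1:], axis=1) / (norms[:-1] * norms[1:])
    angles = np.arccos(np.clip(cosang, -1.0, 1.0))    # turning angle at each interior point
    interior = np.arange(1, len(traj) - 1)
    discontinuous = interior[angles > angle_thresh]
    continuous = interior[angles <= angle_thresh]
    return discontinuous, continuous

def build_chain(traj, discontinuous_idx):
    """Order the discontinuous motion points in time to form a STIPC-like chain;
    in the full method each chain node would carry a local space-time descriptor."""
    return [(int(i), traj[i]) for i in sorted(discontinuous_idx)]

if __name__ == "__main__":
    t = np.linspace(0, 4 * np.pi, 120)
    traj = np.stack([t, np.sign(np.sin(t)) * t], axis=1)   # a zig-zag toy trajectory
    disc, cont = partition_trajectory(traj)
    chain = build_chain(traj, disc)
    print(f"{len(disc)} discontinuous points -> chain of length {len(chain)}")
```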
Citations: 6
Modeling spectral smoothness principle for monaural voiced speech separation
Pub Date : 2011-11-01 DOI: 10.1109/ACPR.2011.6166549
Wei Jiang, Wenju Liu, Pengfei Hu
The smoothness of the spectral envelope is a well-known attribute of clean speech. In this study, this principle is modeled through the oscillation degree of each time-frequency (T-F) unit and then incorporated into a computational auditory scene analysis (CASA) system for monaural voiced speech separation. Specifically, the oscillation degrees of the autocorrelation function (ODACF) and of the envelope autocorrelation function (ODEACF) are extracted for each T-F unit and then utilized in T-F unit labeling. Experimental results indicate that target units and interference units are distinguished more effectively by incorporating the spectral smoothness principle than by using the harmonic principle alone, and clear segregation improvements are obtained.
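A rough sketch of the oscillation-degree idea is given below, assuming each T-F unit is represented by a short frame of its filtered signal. The abstract does not define ODACF/ODEACF precisely, so the oscillation degree is approximated here as the mean absolute second difference of the normalized autocorrelation function (smoother, more periodic units score lower), and `envelope` is a crude stand-in for the unit's amplitude envelope.

```python
import numpy as np

def autocorrelation(x):
    x = x - x.mean()
    acf = np.correlate(x, x, mode="full")[len(x) - 1:]
    return acf / (acf[0] + 1e-12)                 # normalized ACF, acf[0] == 1

def oscillation_degree(x):
    acf = autocorrelation(x)
    return float(np.mean(np.abs(np.diff(acf, n=2))))   # roughness of the ACF

def envelope(x, width=8):
    kernel = np.ones(width) / width               # crude rectify-and-smooth envelope
    return np.convolve(np.abs(x), kernel, mode="same")

if __name__ == "__main__":
    fs = 16000
    t = np.arange(0, 0.02, 1.0 / fs)
    voiced = np.sin(2 * np.pi * 200 * t) + 0.5 * np.sin(2 * np.pi * 400 * t)
    noise = np.random.default_rng(0).normal(size=t.size)
    print("ODACF  voiced vs noise:", oscillation_degree(voiced), oscillation_degree(noise))
    print("ODEACF voiced vs noise:", oscillation_degree(envelope(voiced)),
          oscillation_degree(envelope(noise)))
```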
Citations: 1
Sparse bilinear preserving projections
Pub Date : 2011-11-01 DOI: 10.1109/ACPR.2011.6166647
Zhihui Lai, Qingcai Chen, Zhong Jin
Linear dimensionality reduction techniques have attracted wide attention in the fields of computer vision and pattern recognition. In this paper, we propose a novel framework called Sparse Bilinear Preserving Projections (SBPP) for image feature extraction. We generalize image-based bilinear preserving projections to the sparse case for feature extraction. Unlike popular bilinear projection techniques, the projections of SBPP are sparse, i.e. most elements in the projections are zeros. In the proposed framework, we first use a local neighborhood graph to model the manifold structure of the data set, and then combine spectral analysis with L1-norm regression using the Elastic Net to iteratively learn the sparse bilinear projections, which optimally preserve the local geometric structure of the image manifold. Experiments on several databases show that SBPP is competitive with some state-of-the-art techniques.
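The core sparsification step can be sketched as follows for one side of the bilinear projection: a dense locality-preserving direction is obtained by spectral analysis of a neighborhood graph, and Elastic Net regression of the projected coordinates back onto the data yields a sparse approximation. This is a simplified, one-sided illustration; the graph parameters, `alpha`, and `l1_ratio` are assumptions, and the full method alternates between the left and right projections.

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.linear_model import ElasticNet

def lpp_direction(X, n_neighbors=5, heat=1.0):
    """Dense locality-preserving direction for a row-sample matrix X (n x d)."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.zeros_like(d2)
    nn = np.argsort(d2, axis=1)[:, 1:n_neighbors + 1]
    for i, js in enumerate(nn):
        W[i, js] = np.exp(-d2[i, js] / heat)
    W = np.maximum(W, W.T)                       # symmetric affinity graph
    D = np.diag(W.sum(1))
    L = D - W
    A = X.T @ L @ X
    B = X.T @ D @ X + 1e-6 * np.eye(X.shape[1])  # small ridge keeps B positive definite
    vals, vecs = eigh(A, B)
    return vecs[:, 0]                            # direction with smallest locality cost

def sparsify(X, direction, alpha=0.01, l1_ratio=0.7):
    """Elastic Net regression of the projected coordinates back onto X."""
    y = X @ direction
    model = ElasticNet(alpha=alpha, l1_ratio=l1_ratio, max_iter=10000).fit(X, y)
    return model.coef_

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(60, 20))
    dense = lpp_direction(X)
    sparse = sparsify(X, dense)
    print("non-zero coefficients:", int(np.count_nonzero(np.abs(sparse) > 1e-8)),
          "of", sparse.size)
```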
Citations: 0
Matrix Exponential LPP for face recognition
Pub Date : 2011-11-01 DOI: 10.1109/ACPR.2011.6166706
Sujing Wang, Chengcheng Jia, Huiling Chen, Bo Wu, Chunguang Zhou
Face recognition plays an important role in computer vision. Recent research shows that high-dimensional face images lie on or close to a low-dimensional manifold. LPP is a widely used manifold-based dimensionality reduction technique, but it suffers from two problems: (1) the small sample size problem; (2) its performance is sensitive to the neighborhood size k. To address these problems, this paper proposes Matrix Exponential LPP. To avoid the singular matrix, the proposed algorithm introduces the matrix exponential to obtain more valuable information for LPP. Experiments were conducted on two face databases, Yale and Georgia Tech, and the results show that the proposed algorithm performs better than LPP.
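A minimal sketch of the matrix-exponential idea follows: the LPP scatter matrices are replaced by their matrix exponentials, which are always full rank, so the generalized eigenproblem remains solvable when there are fewer samples than dimensions. The binary k-nearest-neighbour graph and the data scaling are illustrative simplifications rather than the paper's settings.

```python
import numpy as np
from scipy.linalg import expm, eigh

def melpp(X, n_components=2, n_neighbors=3):
    """X: d x n data matrix (columns are samples). Returns a d x n_components projection."""
    n = X.shape[1]
    d2 = ((X.T[:, None, :] - X.T[None, :, :]) ** 2).sum(-1)
    W = np.zeros((n, n))
    nn = np.argsort(d2, axis=1)[:, 1:n_neighbors + 1]
    for i, js in enumerate(nn):
        W[i, js] = 1.0                       # binary k-NN weights for simplicity
    W = np.maximum(W, W.T)
    D = np.diag(W.sum(1))
    L = D - W
    S_l = expm(X @ L @ X.T)                  # exponential of the locality scatter
    S_d = expm(X @ D @ X.T)                  # exponential of the degree-weighted scatter
    vals, vecs = eigh(S_l, S_d)              # both are full rank, so this is well posed
    return vecs[:, :n_components]            # smallest eigenvectors preserve locality

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = 0.1 * rng.normal(size=(50, 20))      # 50-D data, only 20 samples (small sample size)
    P = melpp(X)
    print("projection shape:", P.shape)      # (50, 2)
```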
Citations: 1
Discriminative model selection for Gaussian mixture models for classification
Pub Date : 2011-11-01 DOI: 10.1109/ACPR.2011.6166658
Xiao-Hua Liu, Cheng-Lin Liu
The Gaussian mixture model (GMM) has been widely used in pattern recognition problems for clustering and probability density estimation. Given the number of mixture components (the model order), the parameters of a GMM can be estimated by the EM algorithm. The model order selection, however, remains an open problem. For classification purposes, we propose a discriminative model selection method to optimize the orders of all classes. Starting from GMMs initialized in some way, the orders of all classes are adjusted heuristically to improve the cross-validated classification accuracy. The model orders selected in this discriminative way are expected to give higher generalization accuracy than classwise model selection. Our experimental results on several UCI datasets demonstrate the superior classification performance of the proposed method.
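The order-selection heuristic can be sketched as a greedy search over per-class component counts that keeps a change only when the cross-validated accuracy improves. The greedy schedule, the diagonal covariance, and the fold count below are illustrative assumptions; the paper's exact heuristic may differ.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.mixture import GaussianMixture
from sklearn.model_selection import StratifiedKFold

def cv_accuracy(X, y, orders, n_splits=3, seed=0):
    """Cross-validated accuracy of a Bayes classifier built from per-class GMMs."""
    classes, accs = np.unique(y), []
    for tr, te in StratifiedKFold(n_splits, shuffle=True, random_state=seed).split(X, y):
        log_post = []
        for c in classes:
            gmm = GaussianMixture(orders[c], covariance_type="diag",
                                  reg_covar=1e-3, random_state=seed)
            gmm.fit(X[tr][y[tr] == c])
            log_post.append(gmm.score_samples(X[te]) + np.log(np.mean(y[tr] == c)))
        pred = classes[np.argmax(np.stack(log_post, axis=1), axis=1)]
        accs.append(np.mean(pred == y[te]))
    return float(np.mean(accs))

def select_orders(X, y, max_order=5):
    orders = {c: 1 for c in np.unique(y)}        # start from single Gaussians
    best = cv_accuracy(X, y, orders)
    improved = True
    while improved:
        improved = False
        for c in list(orders):                   # try raising one class order at a time
            if orders[c] >= max_order:
                continue
            trial = dict(orders)
            trial[c] = orders[c] + 1
            acc = cv_accuracy(X, y, trial)
            if acc > best:
                best, orders, improved = acc, trial, True
    return orders, best

if __name__ == "__main__":
    X, y = load_iris(return_X_y=True)
    orders, acc = select_orders(X, y)
    print("selected orders:", orders, "cv accuracy:", round(acc, 3))
```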
Citations: 3
A new approach of color image quantization based on Normalized Cut algorithm
Pub Date : 2011-11-01 DOI: 10.1109/ACPR.2011.6166589
Jin Zhang, Yonghong Song, Yuanlin Zhang, Xiaobing Wang
This paper presents a novel color quantization method based on the Normalized Cut clustering algorithm, which generates a quantized image with minimum loss of information and maximum compression ratio, benefiting the storage and transmission of color images. The new method uses a deformed Median Cut algorithm as a coarse partition of color pixels in the RGB color space, and then takes the average color of each partition as the representative color of a node to construct a condensed graph. By employing the Normalized Cut clustering algorithm, we obtain a palette with a specified number of colors and then reconstruct the quantized image. Experiments on commonly used test images demonstrate that our method is very competitive with state-of-the-art color quantization methods in terms of image quality, compression ratio and computation time.
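The two-stage pipeline can be sketched as follows, with a plain Median Cut for the coarse partition and scikit-learn's SpectralClustering, a normalized-cut style spectral relaxation, standing in for the paper's Normalized Cut step; the "deformed" Median Cut, the graph construction, and the parameter values are not reproduced here.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def median_cut(pixels, n_boxes):
    """Coarse partition of (N, 3) float colours into at most n_boxes boxes."""
    boxes = [pixels]
    while len(boxes) < n_boxes:
        spreads = [np.ptp(b, axis=0).max() for b in boxes]
        i = int(np.argmax(spreads))
        if spreads[i] == 0.0:                        # nothing left worth splitting
            break
        box = boxes.pop(i)
        ch = int(np.argmax(np.ptp(box, axis=0)))     # split along the widest channel
        med = np.median(box[:, ch])
        lo, hi = box[box[:, ch] <= med], box[box[:, ch] > med]
        if len(hi) == 0:                             # degenerate split, use a strict threshold
            lo, hi = box[box[:, ch] < med], box[box[:, ch] >= med]
        boxes += [lo, hi]
    return np.array([b.mean(axis=0) for b in boxes])

def quantize(pixels, n_colors=16, n_boxes=64, seed=0):
    reps = median_cut(pixels.astype(float), n_boxes)             # condensed-graph nodes
    labels = SpectralClustering(n_clusters=n_colors, affinity="rbf",
                                gamma=1e-4, random_state=seed).fit_predict(reps)
    palette = np.array([reps[labels == k].mean(axis=0) if np.any(labels == k)
                        else reps.mean(axis=0) for k in range(n_colors)])
    nearest = np.argmin(((pixels[:, None, :] - palette[None]) ** 2).sum(-1), axis=1)
    return palette, nearest

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.integers(0, 256, size=(64, 64, 3))                 # stand-in for a real image
    palette, idx = quantize(img.reshape(-1, 3))
    print("palette shape:", palette.shape, "quantized pixel labels:", idx.shape)
```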
Citations: 2
3D LIDAR-based ground segmentation
Pub Date : 2011-11-01 DOI: 10.1109/ACPR.2011.6166587
Tongtong Chen, Bin Dai, Daxue Liu, Bo Zhang, Qixu Liu
Obtaining a comprehensive model of large and complex ground is typically crucial for autonomous driving in both urban and countryside environments. This paper presents an improved ground segmentation method for 3D LIDAR point clouds. Our approach builds on a polar grid map, which is divided into sectors; then a 1D Gaussian process (GP) regression model and the Incremental Sample Consensus (INSAC) algorithm are used to extract the ground for every sector. Experiments are carried out on an autonomous vehicle in different outdoor scenes, and the results are compared to those of an existing method. We show that our method achieves more promising performance.
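The per-sector processing can be sketched as below: points are binned into azimuth sectors, and within each sector a 1D Gaussian process of height versus range is grown INSAC-style from low, nearby seed points. The seed rule, the inlier tolerance, and the kernel parameters are illustrative assumptions rather than the paper's settings.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def segment_sector(points, seed_h=0.3, tol=0.2, iters=5):
    """points: (N, 3) x, y, z for one sector. Returns a boolean ground mask."""
    r = np.hypot(points[:, 0], points[:, 1])
    z = points[:, 2]
    ground = (r < np.percentile(r, 20)) & (z < seed_h)       # seeds: near and low
    if ground.sum() < 3:
        return ground
    gp = GaussianProcessRegressor(RBF(10.0) + WhiteKernel(0.01), normalize_y=True)
    for _ in range(iters):
        gp.fit(r[ground].reshape(-1, 1), z[ground])          # 1D GP: height vs range
        mu, sd = gp.predict(r.reshape(-1, 1), return_std=True)
        new_ground = np.abs(z - mu) < (tol + 2.0 * sd)       # INSAC-style inlier test
        if new_ground.sum() < 3 or np.array_equal(new_ground, ground):
            break
        ground = new_ground
    return ground

def segment_ground(points, n_sectors=16):
    azimuth = np.arctan2(points[:, 1], points[:, 0])
    sector = ((azimuth + np.pi) / (2 * np.pi) * n_sectors).astype(int) % n_sectors
    mask = np.zeros(len(points), dtype=bool)
    for s in range(n_sectors):
        idx = np.where(sector == s)[0]
        if len(idx):
            mask[idx] = segment_sector(points[idx])
    return mask

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    xy = rng.uniform(-30, 30, size=(3000, 2))
    ground_z = 0.02 * np.hypot(xy[:, 0], xy[:, 1]) + rng.normal(0, 0.05, 3000)
    obstacle = rng.random(3000) < 0.1                        # 10% raised obstacle returns
    z = np.where(obstacle, ground_z + rng.uniform(0.5, 2.0, 3000), ground_z)
    mask = segment_ground(np.column_stack([xy, z]))
    print("ground points:", int(mask.sum()), "of", len(z))
```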
Citations: 37
Classification based character segmentation guided by Fast-Hessian-Affine regions
Pub Date : 2011-11-01 DOI: 10.1109/ACPR.2011.6166546
Takahiro Ota, T. Wada
This paper presents a method for fast and accurate character localization for OCR (Optical Character Reader). We previously proposed an acceleration framework for arbitrary classifiers, classifier molding, for real-time verification of characters printed by an Industrial Ink Jet Printer (IIJP). In this framework, the behavior of an accurate but slow character classifier is learnt by a linear regression tree. The resulting classifier is up to 1,500 times faster than the original one but is not fast enough for real-time pyramidal scanning of VGA images, which is necessary for scale-free character recognition. To solve this problem, we also proposed CCS (Classification based Character Segmentation). This method finds the character arrangement that maximizes the sum of the likelihoods of character regions, assuming that all characters are horizontally aligned at almost regular intervals. This assumption is not always true, even for characters printed by an IIJP. To address this, we extend the idea of CCS to arbitrarily located characters. Our method first generates character-region candidates based on local elliptical regions, named Fast-Hessian-Affine regions, and then finds the most likely character arrangement. Through experiments, we confirmed that our method quickly and accurately recognizes non-uniformly arranged characters.
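The arrangement search in the original CCS setting can be sketched as a small dynamic program: candidate character regions scored by a classifier are chained left to right while their spacing stays close to a nominal pitch, maximizing the summed likelihood. The pitch and tolerance below are hypothetical; the extension in this paper relaxes the regular-interval assumption using Fast-Hessian-Affine regions.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    x: float        # horizontal centre of the candidate region
    score: float    # classifier likelihood of the region being a character

def best_arrangement(cands, pitch=20.0, tol=5.0):
    """Chain candidates left to right with near-regular spacing, maximizing total score."""
    cands = sorted(cands, key=lambda c: c.x)
    n = len(cands)
    best = [c.score for c in cands]      # best chain score ending at candidate i
    prev = [-1] * n
    for i in range(n):
        for j in range(i):
            if abs((cands[i].x - cands[j].x) - pitch) <= tol:   # near-regular interval
                if best[j] + cands[i].score > best[i]:
                    best[i], prev[i] = best[j] + cands[i].score, j
    i = max(range(n), key=lambda k: best[k])
    chain = []
    while i != -1:
        chain.append(cands[i])
        i = prev[i]
    return list(reversed(chain)), max(best)

if __name__ == "__main__":
    cands = [Candidate(x, s) for x, s in
             [(0, 0.9), (8, 0.3), (21, 0.8), (40, 0.7), (62, 0.85), (70, 0.2)]]
    chain, score = best_arrangement(cands)
    print("chosen centres:", [c.x for c in chain], "total likelihood:", round(score, 2))
```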
Citations: 2
Detecting multiple symmetries with extended SIFT
Pub Date : 2011-11-01 DOI: 10.1109/ACPR.2011.6166683
Qian Chen, Haiyuan Wu, H. Taki
This paper describes an effective method for detecting multiple symmetric objects in an image. A "pseudo-affine invariant SIFT" is used for detecting symmetric feature pairs in perspective images. Candidate symmetric axes are estimated from every two symmetric feature pairs, and the one supported by the most symmetric feature pairs is detected as the most relevant symmetric axis of a symmetric object. The symmetric feature pairs supporting that axis are then used to detect other symmetric axes in the same symmetric object. This procedure is applied repeatedly to the symmetric feature pairs, after eliminating the ones that support already detected symmetric axes, in order to detect all symmetric objects in the image. The effectiveness of the method has been confirmed through several experiments using real images and common image databases.
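The axis-voting and greedy removal steps can be sketched as follows, assuming the symmetric feature pairs have already been matched (the pseudo-affine invariant SIFT matching itself is not reproduced). Each pair votes for its perpendicular bisector in a (theta, rho) accumulator; the strongest axis is kept, its supporting pairs are removed, and the process repeats. Bin sizes and the support threshold are illustrative.

```python
import numpy as np

def axis_of_pair(p, q):
    """Perpendicular bisector of segment pq as (theta, rho): x*cos(t) + y*sin(t) = rho."""
    mid = (p + q) / 2.0
    d = q - p
    theta = np.arctan2(d[1], d[0]) % np.pi       # the axis normal points along pq
    rho = mid[0] * np.cos(theta) + mid[1] * np.sin(theta)
    return float(theta), float(rho)

def detect_axes(pairs, theta_bin=np.pi / 36, rho_bin=5.0, min_support=3):
    remaining = [(np.asarray(p, float), np.asarray(q, float)) for p, q in pairs]
    axes = []
    while remaining:
        votes = {}
        for k, (p, q) in enumerate(remaining):
            t, r = axis_of_pair(p, q)
            key = (int(t / theta_bin), int(round(r / rho_bin)))
            votes.setdefault(key, []).append((k, t, r))
        key, support = max(votes.items(), key=lambda kv: len(kv[1]))
        if len(support) < min_support:           # no axis with enough supporting pairs left
            break
        axes.append((float(np.mean([t for _, t, _ in support])),
                     float(np.mean([r for _, _, r in support])),
                     len(support)))
        used = {k for k, _, _ in support}        # greedy removal of supporting pairs
        remaining = [pr for k, pr in enumerate(remaining) if k not in used]
    return axes                                  # list of (theta, rho, support count)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    xs, ys = rng.uniform(10, 40, 8), rng.uniform(-50, 50, 8)
    mirrored = [((x, y), (-x, y)) for x, y in zip(xs, ys)]   # symmetric about the line x = 0
    print(detect_axes(mirrored))                             # one axis: theta ~ 0, rho ~ 0
```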
Citations: 0
Interesting region detection in aerial video using Bayesian topic models
Pub Date : 2011-11-01 DOI: 10.1109/acpr.2011.6166550
Jiewei Wang, Yunhong Wang, Zhaoxiang Zhang
{"title":"Interesting region detection in aerial video using Bayesian topic models","authors":"Jiewei Wang, Yunhong Wang, Zhaoxiang Zhang","doi":"10.1109/acpr.2011.6166550","DOIUrl":"https://doi.org/10.1109/acpr.2011.6166550","url":null,"abstract":"","PeriodicalId":287232,"journal":{"name":"The First Asian Conference on Pattern Recognition","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130860536","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1