
The First Asian Conference on Pattern Recognition: Latest Publications

Interclass visual similarity based visual vocabulary learning
Pub Date: 2011-11-01 DOI: 10.1109/ACPR.2011.6166597
Guangming Chang, Chunfen Yuan, Weiming Hu
Visual vocabularies are now widely used in many video analysis tasks, such as event detection, video retrieval and video classification. In most approaches the vocabulary is based solely on statistics of visual features and is generated by clustering; little attention has been paid to the interclass similarity among different events or actions. In this paper, we present a novel approach that mines interclass visual similarity statistically and then uses it to supervise the generation of the visual vocabulary. We construct a measurement of interclass similarity, embed the similarity into the Euclidean distance, and use the refined distance to generate the visual vocabulary iteratively. Experiments on the Weizmann and KTH datasets show that our approach outperforms the traditional vocabulary-based approach by about 5%.
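The abstract's key step, embedding interclass similarity into the Euclidean distance before clustering, can be illustrated with a minimal numpy sketch. The confusion-matrix construction, the scaling factor lam, and the choice to inflate distances between frequently confused classes are assumptions for illustration, not the authors' exact formulation:

    import numpy as np

    def interclass_similarity(confusion):
        # One plausible statistic: row-normalise a class-confusion matrix and
        # symmetrise it, so classes that are often mistaken for each other
        # receive a high similarity score.
        P = confusion / confusion.sum(axis=1, keepdims=True)
        return 0.5 * (P + P.T)

    def refined_distance(xi, xj, ci, cj, S, lam=0.5):
        # Euclidean distance inflated for descriptors drawn from a highly
        # similar (easily confused) class pair, so that iterative clustering
        # is pushed to place them in different visual words.
        return np.linalg.norm(xi - xj) * (1.0 + lam * S[ci, cj] * (ci != cj))

    S = interclass_similarity(np.array([[8., 2.], [3., 7.]]))
    print(refined_distance(np.zeros(5), np.ones(5), ci=0, cj=1, S=S))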
Citations: 0
Elliptical symmetric distribution based maximal margin classification for hyperspectral imagery
Pub Date: 2011-11-01 DOI: 10.1109/ACPR.2011.6166571
Lin He, Z. Yu, Z. Gu, Yuanqing Li
It has been verified that hyperspectral data is statistically characterized by an elliptical symmetric distribution. Accordingly, we introduce ellipsoidal discriminant boundaries and present an elliptical symmetric distribution based maximal margin (ESD-MM) classifier for hyperspectral classification. In this method, the elliptical symmetric distribution (ESD) characteristic of hyperspectral data is combined with the maximal margin rule. This strategy enables the ESD-MM classifier to achieve good performance, especially when it follows dimensionality reduction. Experimental results on real Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) data demonstrate that the ESD-MM classifier performs better than the commonly used Bayes classifier, Fisher linear discriminant (FLD) and linear support vector machine (SVM).
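The elliptical-symmetric-distribution assumption amounts to class-conditional densities that depend on the data only through a Mahalanobis distance, which yields ellipsoidal decision boundaries. A minimal sketch of that ellipsoidal rule follows (the maximal-margin refinement and any dimensionality reduction are omitted; the regularisation constant is illustrative):

    import numpy as np

    def fit_ellipsoids(X, y):
        # Per-class mean and inverse covariance; under an ESD assumption the
        # class-conditional density is a function of Mahalanobis distance alone.
        params = {}
        for c in np.unique(y):
            Xc = X[y == c]
            mu = Xc.mean(axis=0)
            cov = np.cov(Xc, rowvar=False) + 1e-6 * np.eye(X.shape[1])
            params[c] = (mu, np.linalg.inv(cov))
        return params

    def classify(x, params):
        # Assign x to the class whose ellipsoid it falls closest to,
        # i.e. the smallest squared Mahalanobis distance.
        d = {c: (x - mu) @ P @ (x - mu) for c, (mu, P) in params.items()}
        return min(d, key=d.get)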
Citations: 0
Towards exaggerated image stereotypes
Pub Date: 2011-11-01 DOI: 10.1109/ACPR.2011.6166569
Cheng Chen, F. Lauze, C. Igel, Aasa Feragen, M. Loog, M. Nielsen
Given a training set of images and a binary classifier, we introduce the notion of an exaggerated image stereotype for some image class of interest, which emphasizes/exaggerates the characteristic patterns in an image and visualizes which visual information the classification relies on. This is useful for gaining insight into the classification mechanism. The exaggerated image stereotype results from a proper trade-off between classification accuracy and the likelihood of being generated from the class of interest. This is achieved by optimizing an objective function consisting of a discriminative term based on the classification result and a generative term based on the assumed class distribution. We use this idea with Fisher's Linear Discriminant rule, assuming a multivariate normal distribution for the samples within a class. The proposed framework has been applied to handwritten digit data, illustrating the specific features that differentiate digits. It is then applied to a face dataset using an Active Appearance Model (AAM), where male face stereotypes are evolved from initial female faces.
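The objective described, a discriminative term from the Fisher LDA score plus a generative term from the class's Gaussian log-likelihood, can be maximised by plain gradient ascent. A sketch under those stated assumptions (step size, trade-off lam and iteration count are illustrative):

    import numpy as np

    def exaggerate(x, w, mu, prec, lam=0.1, step=0.05, iters=200):
        # Gradient ascent on J(x) = w.x + lam * log N(x; mu, Sigma):
        # the discriminative term pushes x along the LDA direction w, while
        # the generative term keeps x likely under the target class's
        # Gaussian (gradient of the log-density is -prec @ (x - mu)).
        x = x.astype(float).copy()
        for _ in range(iters):
            x += step * (w - lam * (prec @ (x - mu)))
        return x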
Citations: 2
What is happening in a still picture?
Pub Date: 2011-11-01 DOI: 10.1109/ACPR.2011.6166555
Piji Li, Jun Ma
We consider the problem of automatically generating concise sentences to describe still pictures. We treat objects in images (nouns in sentences) as hidden information of actions (verbs). The sentence generation problem can therefore be transformed into action detection and scene classification problems. We employ Latent Multiple Kernel Learning (L-MKL) to learn action detectors from "Exemplarlets", and utilize MKL to learn scene classifiers. The image features employed include the distribution of edges, dense visual words and feature descriptors at different levels of a spatial pyramid. For a new image we can detect the action using a sliding-window detector learnt via L-MKL, predict the scene the action happened in, and build ⟨action, scene⟩ tuples. Finally, these tuples are translated into concise sentences according to a previously defined grammar template. We show both classification and sentence-generation results on our newly collected dataset of six actions, and demonstrate improved performance over existing methods.
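The final templating step is straightforward once an ⟨action, scene⟩ tuple is available; the sentence pattern below is invented for illustration, and the detector and scene-classifier outputs are assumed:

    def to_sentence(action, scene):
        # Fill a fixed grammar-template slot pattern with the detected tuple.
        return f"Someone is {action} in a {scene}."

    print(to_sentence("riding a horse", "field"))  # -> Someone is riding a horse in a field.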
Citations: 16
Saliency based natural image understanding
Pub Date: 2011-11-01 DOI: 10.1109/ACPR.2011.6166648
Qingshan Li, Yue Zhou, Lei Xu
This paper presents a novel method for natural image understanding. We first improve saliency detection for the purpose of image segmentation. Graph cuts are then used to find the globally optimal segmentation of an N-dimensional image. After that, we adopt a supervised learning scheme to classify the scene type of the image. The main advantages of our method are as follows. First, we revise the existing sparse saliency model to better suit image segmentation. Second, we propose a new color modeling method for the GrabCut segmentation process. Finally, we combine object-level top-down information with low-level image cues to distinguish the type of image. Experiments show that our proposed scheme obtains performance comparable to other approaches.
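The segmentation step can be sketched with OpenCV's GrabCut seeded from a saliency map, with high-saliency pixels marked as probable foreground and low-saliency pixels as definite background. The thresholds are illustrative, and the paper's revised saliency model and new colour model are not reproduced here:

    import numpy as np
    import cv2

    def saliency_grabcut(img_bgr, saliency, lo=0.3, hi=0.7, iters=5):
        # saliency: float map in [0, 1], same height/width as img_bgr.
        mask = np.full(saliency.shape, cv2.GC_PR_BGD, np.uint8)
        mask[saliency > hi] = cv2.GC_PR_FGD   # probably object
        mask[saliency < lo] = cv2.GC_BGD      # definitely background
        bgd = np.zeros((1, 65), np.float64)
        fgd = np.zeros((1, 65), np.float64)
        cv2.grabCut(img_bgr, mask, None, bgd, fgd, iters, cv2.GC_INIT_WITH_MASK)
        # Return a binary foreground mask.
        return np.isin(mask, (cv2.GC_FGD, cv2.GC_PR_FGD)).astype(np.uint8)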
Citations: 0
An improvement of the weight scheme of AdaBoost in the presence of noisy data
Pub Date: 2011-11-01 DOI: 10.1109/ACPR.2011.6166557
Shihai Wang, Geng Li
The first strand of this research is concerned with the classification noise issue. Classification noise (wrong labeling) is a further consequence of the difficulty of accurately labeling real training data. To efficiently reduce the negative influence of noisy samples, we propose a new weight scheme for the Boosting algorithm based on a nonlinear model with a local proximity assumption. The effectiveness of our method has been evaluated on a set of University of California Irvine Machine Learning Repository (UCI) [1] benchmarks. We report promising results.
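One plausible reading of a nonlinear weight scheme with a local proximity assumption is to damp AdaBoost's exponential update for samples whose nearest neighbours mostly carry the opposite label, since such samples are likely mislabelled. A sketch under that assumption (the damping form and k are illustrative, not the paper's exact scheme):

    import numpy as np

    def damped_update(w, y, h, X, alpha, k=5):
        # Standard AdaBoost sets w <- w * exp(-alpha * y * h).  Here the
        # exponent is scaled by the fraction of each sample's k nearest
        # neighbours sharing its label: a sample whose neighbourhood mostly
        # disagrees with it is treated as probably mislabelled, so its
        # weight is prevented from exploding.
        d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        np.fill_diagonal(d2, np.inf)
        nn = np.argsort(d2, axis=1)[:, :k]
        agree = (y[nn] == y[:, None]).mean(axis=1)   # in [0, 1]
        w_new = w * np.exp(-alpha * y * h * agree)
        return w_new / w_new.sum()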
Citations: 0
Fusion of features and classifiers for off-line handwritten signature verification
Pub Date: 2011-11-01 DOI: 10.1109/ACPR.2011.6166701
Juan Hu, Youbin Chen
A method for writer-independent off-line handwritten signature verification based on grey-level feature extraction and the Real AdaBoost algorithm is proposed. First, global and local features are used simultaneously: texture information such as the co-occurrence matrix and local binary patterns is analyzed and used as features. Second, Support Vector Machines (SVMs) and the squared Mahalanobis distance classifier are introduced. Finally, the Real AdaBoost algorithm is applied. Experiments on the public signature database GPDS Corpus show that the proposed method achieves an FRR of 5.64% and an FAR of 5.37%, the best results so far compared with other published work.
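The texture features named in the abstract, grey-level co-occurrence statistics and local binary patterns, can be extracted with scikit-image as below (spelled greycomatrix/greycoprops in older releases). The chosen offsets, properties and histogram size are illustrative, and the SVM, Mahalanobis and Real AdaBoost stages are omitted:

    import numpy as np
    from skimage.feature import graycomatrix, graycoprops, local_binary_pattern

    def texture_features(gray_u8):
        # gray_u8: a uint8 grey-level signature image.
        glcm = graycomatrix(gray_u8, distances=[1], angles=[0, np.pi / 2],
                            levels=256, symmetric=True, normed=True)
        glcm_feats = np.hstack([graycoprops(glcm, p).ravel()
                                for p in ("contrast", "homogeneity", "energy")])
        # Uniform LBP with P=8 yields codes 0..9, hence the 10-bin histogram.
        lbp = local_binary_pattern(gray_u8, P=8, R=1, method="uniform")
        hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
        return np.hstack([glcm_feats, hist])  # fused global + local descriptor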
Citations: 2
A local learning based Image-To-Class distance for image classification
Pub Date: 2011-11-01 DOI: 10.1109/ACPR.2011.6166577
Xinyuan Cai, Baihua Xiao, Chunheng Wang, Rongguo Zhang
The Image-To-Class distance was first proposed in the Naive-Bayes Nearest-Neighbor (NBNN) classifier. NBNN is a feature-based image classifier that can achieve impressive classification accuracy. However, its performance relies heavily on a large number of training samples; with few training samples, performance degrades. The goal of this paper is to address this issue. Our main contribution is a robust Image-To-Class distance obtained by local learning. We define the patch-to-class distance as the distance from an input patch to its nearest neighbor in a class, reconstructed in the local manifold space; the image-to-class distance is then the sum of the patch-to-class distances. Furthermore, we use a large-margin metric learning framework to obtain a proper Mahalanobis metric for each class. We evaluate the proposed method on four benchmark datasets: Caltech, Corel, Scene13, and Graz. The results show that our Image-To-Class distance is more robust than NBNN and Optimal-NBNN, and that, combined with the learned per-class metric, our method achieves significant improvement over previously reported results on these datasets.
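The patch-to-class distance described, the residual of reconstructing each patch from its nearest neighbours within one class, can be sketched as follows (k and the regulariser are illustrative, and the learned per-class Mahalanobis metric is omitted):

    import numpy as np

    def image_to_class(patches, class_feats, k=5, reg=1e-3):
        # Sum over patches of the residual when each patch is reconstructed
        # from its k nearest neighbours in the class: a local-manifold
        # reading of the patch-to-class distance.
        total = 0.0
        for p in patches:
            d2 = ((class_feats - p) ** 2).sum(axis=1)
            nbrs = class_feats[np.argsort(d2)[:k]]     # (k, d) neighbours
            G = nbrs @ nbrs.T + reg * np.eye(k)        # regularised Gram matrix
            w = np.linalg.solve(G, nbrs @ p)           # least-squares weights
            total += np.linalg.norm(p - w @ nbrs)      # reconstruction residual
        return total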
Citations: 3
View sequence generation for view-based outdoor navigation
Pub Date: 2011-11-01 DOI: 10.1109/ACPR.2011.6166691
Y. Kaneko, J. Miura
This paper describes a method of generating new view sequences for view-based outdoor navigation. View-based navigation approaches have been shown to be effective, but have the drawback that a view sequence for the route to be navigated is needed beforehand. This is an issue especially for navigation in an open space where numerous potential routes exist; it is almost impossible to capture view sequences for all of the routes by actually traversing them. We therefore develop a method of generating a view sequence for an arbitrary route from an omnidirectional view sequence taken along a limited movement. The method is based on visual-odometry-based map generation and image-to-image morphing using homography. The effectiveness of the method is validated by view-based localization experiments.
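The morphing component can be sketched with OpenCV: estimate a homography from matched points, warp one view part-way toward the other, and cross-fade. Linearly blending the homography with the identity is a crude stand-in for the paper's morphing, and the correspondences ptsA/ptsB are assumed to come from feature matching:

    import numpy as np
    import cv2

    def morph_views(imgA, imgB, ptsA, ptsB, t=0.5):
        # ptsA, ptsB: (N, 2) float arrays of matched points, N >= 4.
        H, _ = cv2.findHomography(ptsA, ptsB, cv2.RANSAC, 5.0)
        Ht = (1 - t) * np.eye(3) + t * H        # interpolated warp (approximation)
        h, w = imgB.shape[:2]
        warped = cv2.warpPerspective(imgA, Ht, (w, h))
        return cv2.addWeighted(warped, 1 - t, imgB, t, 0)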
Citations: 5
Identical object segmentation through level sets with similarity constraint
Pub Date: 2011-11-01 DOI: 10.1109/ACPR.2011.6166609
Hongbin Xie, Gang Zeng, Rui Gan, H. Zha
Unsupervised identical-object segmentation remains a challenging problem in vision research due to the difficulty of obtaining high-level structural knowledge about the scene. In this paper, we present a level-set algorithm with a novel similarity constraint term for segmenting identical objects. The key component of the proposed algorithm is to embed the similarity constraint into the curve evolution, so that the evolving speed is high in regions of similar appearance and low in areas with distinct content. The algorithm starts from a pair of seed matches (e.g. SIFT) and evolves a small initial circle into large similar regions under the similarity constraint. The similarity constraint is tied to local alignment, under the assumption that the warp between identical objects is an affine transformation. The correct warp aligns the identical objects and promotes the growth of similar regions. Alignment and expansion alternate until the curve reaches the boundaries of the similar objects. Experiments on real images validate the efficiency and effectiveness of the proposed algorithm.
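The similarity-modulated evolution can be illustrated with a toy level-set update in which the front's expansion speed is the local appearance-similarity score, so the region grows quickly where the image resembles the seed match and stalls elsewhere. Curvature and the alternating affine-alignment step are omitted:

    import numpy as np

    def evolve(phi, sim, steps=100, dt=0.4):
        # Front propagation phi_t = -F * |grad phi| with expansion speed
        # F = sim; the interior is the region where phi < 0, and subtracting
        # F * |grad phi| moves the front outward faster where sim is high.
        for _ in range(steps):
            gy, gx = np.gradient(phi)
            phi = phi - dt * sim * np.sqrt(gx ** 2 + gy ** 2)
        return phi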
Citations: 0