Towards exaggerated image stereotypes
Pub Date : 2011-11-01; DOI: 10.1109/ACPR.2011.6166569
Cheng Chen, F. Lauze, C. Igel, Aasa Feragen, M. Loog, M. Nielsen
Given a training set of images and a binary classifier, we introduce the notion of an exaggerated image stereotype for an image class of interest, which emphasizes/exaggerates the characteristic patterns in an image and visualizes which visual information the classification relies on. This is useful for gaining insight into the classification mechanism. An exaggerated image stereotype results from a proper trade-off between classification accuracy and the likelihood of being generated from the class of interest. This trade-off is achieved by optimizing an objective function consisting of a discriminative term, based on the classification result, and a generative term, based on the assumed class distribution. We instantiate this idea with Fisher's linear discriminant rule and assume a multivariate normal distribution for samples within a class. The proposed framework is first applied to handwritten digit data, illustrating the specific features that differentiate digits, and then to a face dataset using an Active Appearance Model (AAM), where male face stereotypes are evolved from initial female faces.
{"title":"Towards exaggerated image stereotypes","authors":"Cheng Chen, F. Lauze, C. Igel, Aasa Feragen, M. Loog, M. Nielsen","doi":"10.1109/ACPR.2011.6166569","DOIUrl":"https://doi.org/10.1109/ACPR.2011.6166569","url":null,"abstract":"Given a training set of images and a binary classifier, we introduce the notion of an exaggerated image stereotype for some image class of interest, which emphasizes/exaggerates the characteristic patterns in an image and visualizes which visual information the classification relies on. This is useful for gaining insight into the classification mechanism. The exaggerated image stereotypes results in a proper trade-off between classification accuracy and likelihood of being generated from the class of interest. This is done by optimizing an objective function which consists of a discriminative term based on the classification result, and a generative term based on the assumption of the class distribution. We use this idea with Fisher's Linear Discriminant rule, and assume a multivariate normal distribution for samples within a class. The proposed framework has been applied on handwritten digit data, illustrating specific features differentiating digits. Then it is applied to a face dataset using Active Appearance Model (AAM), where male faces stereotypes are evolved from initial female faces.","PeriodicalId":287232,"journal":{"name":"The First Asian Conference on Pattern Recognition","volume":"272 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122837763","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Human centric object detection in highly crowded scenes
Pub Date : 2011-11-01; DOI: 10.1109/ACPR.2011.6166674
Genquan Duan, H. Ai, Takayoshi Yamashita, S. Lao
In this paper, we propose to detect human-centric objects, including face, head-shoulder, upper body, left body, right body, and whole body, which provide essential information for locating humans in highly crowded scenes. In the literature, approaches to multi-class object detection either treat each class independently, learning and applying its classifier separately, or treat all classes as a whole, learning individual classifiers over shared features and detecting by stepwise subdivision. In contrast, we consider two issues: the similarities and differences between classes, and the semantic relations among them. Our main idea is to first predict class labels quickly using a Salient Patch Model (SPM), and then detect accurately using the detectors of the predicted classes, where a proposed Semantic Relation Model (SRM) captures the relations among classes for efficient inference. SPM and SRM address these two issues respectively. Experiments on challenging real-world datasets demonstrate that our approach achieves significant performance improvements.
{"title":"Human centric object detection in highly crowded scenes","authors":"Genquan Duan, H. Ai, Takayoshi Yamashita, S. Lao","doi":"10.1109/ACPR.2011.6166674","DOIUrl":"https://doi.org/10.1109/ACPR.2011.6166674","url":null,"abstract":"In this paper, we propose to detect human centric objects, including face, head shoulder, upper body, left body, right body and whole body, which can provide essential information to locate humans in highly crowed scenes. In the literature, the approaches to detect multi-class objects are either taking each class independently to learn and apply its classifier successively or taking all classes as a whole to learn individual classifier based on sharing features and to detect by step-by-step dividing. Different from these works, we consider two issues, one is the similarities and discriminations of different classes and the other is the semantic relations among them. Our main idea is to predict class labels quickly using a Salient Patch Model (SPM) first, and then do detection accurately using detectors of predicted classes in which a Semantic Relation Model (SRM) is proposed to capture relations among classes for efficient inferences. SPM and SRM are designed for these two issues respectively. Experiments on challenging real-world datasets demonstrate that our proposed approach can achieve significant performance improvements.","PeriodicalId":287232,"journal":{"name":"The First Asian Conference on Pattern Recognition","volume":"113 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124747145","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Quadratic-chi similarity metric learning for histogram feature
Pub Date : 2011-11-01; DOI: 10.1109/ACPR.2011.6166698
Xinyuan Cai, Baihua Xiao, Chunheng Wang, Rongguo Zhang
Histogram features such as SIFT, HOG, and LBP are widely used in modern computer vision algorithms. According to [18], the chi-square distance is an effective measure for comparing histogram features. In this paper, we propose a new method, Quadratic-Chi Similarity Metric Learning (QCSML), for histogram features. The main contribution of this paper is a metric learning method based on the chi-square distance, in contrast with traditional Mahalanobis-distance metric learning methods. The use of quadratic-chi similarity leads to an effective learning algorithm. Our method is tested on SIFT features for face identification and compared with a state-of-the-art metric learning method (LDML) on the benchmark Labeled Faces in the Wild (LFW) dataset. Experimental results show that our method achieves clear performance gains over LDML.
{"title":"Quadratic-chi similarity metric learning for histogram feature","authors":"Xinyuan Cai, Baihua Xiao, Chunheng Wang, Rongguo Zhang","doi":"10.1109/ACPR.2011.6166698","DOIUrl":"https://doi.org/10.1109/ACPR.2011.6166698","url":null,"abstract":"Histogram features, such as SIFT, HOG, LBP et al, are widely used in modern computer vision algorithms. According to [18], chi-square distance is an effective measure for comparing histogram features. In this paper, we propose a new method, named the Quadric-chi similarity metric learning (QCSML) for histogram features. The main contribution of this paper is that we propose a new metric learning method based on chi-square distance, in contrast with traditional Mahalanobis distance metric learning methods. The use of quadric-chi similarity in our method leads to an effective learning algorithm. Our method is tested on SIFT features for face identification, and compared with the state-of-art metric learning method (LDML) on the benchmark dataset, the Labeled Faces in the Wild (LFW). Experimental results show that our method can achieve clear performance gains over LDML.","PeriodicalId":287232,"journal":{"name":"The First Asian Conference on Pattern Recognition","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124818472","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A local learning based Image-To-Class distance for image classification
Pub Date : 2011-11-01; DOI: 10.1109/ACPR.2011.6166577
Xinyuan Cai, Baihua Xiao, Chunheng Wang, Rongguo Zhang
The Image-To-Class distance was first proposed in the Naive-Bayes Nearest-Neighbor (NBNN) classifier. NBNN is a feature-based image classifier that can achieve impressive classification accuracy; however, its performance relies heavily on a large number of training samples and degrades when few are available. The goal of this paper is to address this issue. Our main contribution is a robust Image-to-Class distance based on local learning. We define the patch-to-class distance as the distance from an input patch to its nearest neighbor in a class, reconstructed in the local manifold space; the image-to-class distance is then the sum of the patch-to-class distances. Furthermore, we use a large-margin metric learning framework to obtain a proper Mahalanobis metric for each class. We evaluate the proposed method on four benchmark datasets: Caltech, Corel, Scene13, and Graz. The results show that our Image-To-Class distance is more robust than NBNN and Optimal-NBNN, and that, combined with the learned per-class metric, our method achieves significant improvements over previously reported results on these datasets.
{"title":"A local learning based Image-To-Class distance for image classification","authors":"Xinyuan Cai, Baihua Xiao, Chunheng Wang, Rongguo Zhang","doi":"10.1109/ACPR.2011.6166577","DOIUrl":"https://doi.org/10.1109/ACPR.2011.6166577","url":null,"abstract":"Image-To-Class distance is first proposed in Naive-Bayes Nearest-Neighbor. NBNN is a feature-based image classifier, and can achieve impressive classification accuracy. However, the performance of NBNN relies heavily on the large number of training samples. If using small number of training samples, the performance will degrade. The goal of this paper is to address this issue. The main contribution of this paper is that we propose a robust Image-to-Class distance by local learning. We define the patch-to-class distance as the distance between the input patch to its nearest neighbor in one class, which is reconstructed in the local manifold space; and then our image-to-class distance is the sum of patch-to-class distance. Furthermore, we take advantage of large-margin metric learning framework to obtain a proper Mahalanobis metric for each class. We evaluate the proposed method on four benchmark datasets: Caltech, Corel, Scene13, and Graz. The results show that our defined Image-To-Class Distance is more robust than NBNN and Optimal-NBNN, and by combining with the learned metric for each class, our method can achieve significant improvement over previous reported results on these datasets.","PeriodicalId":287232,"journal":{"name":"The First Asian Conference on Pattern Recognition","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124944082","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Face recognition with continuous occlusion using partially iteratively reweighted sparse coding
Pub Date : 2011-11-01; DOI: 10.1109/ACPR.2011.6166617
Xiao-Xin Li, D. Dai, Xiao-Fei Zhang, Chuan-Xian Ren
Partially occluded faces are common in real-world automatic face recognition. Existing methods, such as sparse error correction with Markov random fields, correntropy-based sparse representation, and robust sparse coding, are all based on error correction, which relies on perfect reconstruction of the occluded facial image and limits recognition rates, especially when the occluded regions are large. Recognition rates can be improved by detecting the occluded portions and excluding them from further classification. Based on a magnitude-order measure, we propose an effective occlusion detection algorithm called Partially Iteratively Reweighted Sparse Coding (PIRSC). Compared to state-of-the-art methods, our PIRSC-based classifier greatly improves the face recognition rate, especially when the occlusion percentage is large.
{"title":"Face recognition with continuous occlusion using partially iteratively reweighted sparse coding","authors":"Xiao-Xin Li, D. Dai, Xiao-Fei Zhang, Chuan-Xian Ren","doi":"10.1109/ACPR.2011.6166617","DOIUrl":"https://doi.org/10.1109/ACPR.2011.6166617","url":null,"abstract":"Partially occluded faces are common in automatic face recognition in the real world. Existing methods, such as sparse error correction with Markov random fields, correntropy-based sparse representation and robust sparse coding, are all based on error correction, which relies on the perfect reconstruction of the occluded facial image and limits their recognition rates especially when the occluded regions are large. It helps to enhance recognition rates if we can detect the occluded portions and exclude them from further classification. Based on a magnitude order measure, we propose an innovative effective occlusion detection algorithm, called Partially Iteratively Reweighted Sparse Coding (PIRSC). Compared to the state-of-the-art methods, our PIRSC based classifier greatly improve the face recognition rate especially when the occlusion percentage is large.","PeriodicalId":287232,"journal":{"name":"The First Asian Conference on Pattern Recognition","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122093256","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Fusion of features and classifiers for off-line handwritten signature verification
Pub Date : 2011-11-01; DOI: 10.1109/ACPR.2011.6166701
Juan Hu, Youbin Chen
We propose a method for writer-independent off-line handwritten signature verification based on grey-level feature extraction and the Real AdaBoost algorithm. First, global and local features are used simultaneously: texture information such as the grey-level co-occurrence matrix and local binary patterns is analyzed and used as features. Second, Support Vector Machines (SVMs) and a squared-Mahalanobis-distance classifier are introduced. Finally, the Real AdaBoost algorithm is applied to fuse them. Experiments on the public GPDS signature corpus yield an FRR of 5.64% and an FAR of 5.37%, the best results so far compared with other published work.
{"title":"Fusion of features and classifiers for off-line handwritten signature verification","authors":"Juan Hu, Youbin Chen","doi":"10.1109/ACPR.2011.6166701","DOIUrl":"https://doi.org/10.1109/ACPR.2011.6166701","url":null,"abstract":"A method for writer-independent off-line handwritten signature verification based on grey level feature extraction and Real Adaboost algorithm is proposed. Firstly, both global and local features are used simultaneously. The texture information such as co-occurrence matrix and local binary pattern are analyzed and used as features. Secondly, Support Vector Machines (SVMs) and the squared Mahalanobis distance classifier are introduced. Finally, Real Adaboost algorithm is applied. Experiments on the public signature database GPDS Corpus show that our proposed method has achieved the FRR 5.64% and the FAR 5.37% which are the best so far compared with other published results.","PeriodicalId":287232,"journal":{"name":"The First Asian Conference on Pattern Recognition","volume":"99 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126536137","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An improvment of weight scheme on adaBoost in the presence of noisy data
Pub Date : 2011-11-01; DOI: 10.1109/ACPR.2011.6166557
Shihai Wang, Geng Li
The first strand of this research concerns classification noise. Classification noise (wrong labeling) is a consequence of the difficulty of accurately labeling real training data. To efficiently reduce the negative influence of noisy samples, we propose a new weighting scheme for the boosting algorithm based on a nonlinear model with a local proximity assumption. The effectiveness of our method has been evaluated on a set of benchmarks from the University of California, Irvine Machine Learning Repository (UCI) [1], with promising results.
{"title":"An improvment of weight scheme on adaBoost in the presence of noisy data","authors":"Shihai Wang, Geng Li","doi":"10.1109/ACPR.2011.6166557","DOIUrl":"https://doi.org/10.1109/ACPR.2011.6166557","url":null,"abstract":"The first strand of this research is concerned with the classification noise issue. Classification noise, (worry labeling), is a further consequence of the difficulties in accurately labeling the real training data. For efficient reduction of the negative influence produced by noisy samples, we propose a new weight scheme with a nonlinear model with the local proximity assumption for the Boosting algorithm. The effectiveness of our method has been evaluated by using a set of University of California Irvine Machine Learning Repository (UCI) [1] benchmarks. We report promising results.","PeriodicalId":287232,"journal":{"name":"The First Asian Conference on Pattern Recognition","volume":"50 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126556984","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
What is happening in a still picture?
Pub Date : 2011-11-01; DOI: 10.1109/ACPR.2011.6166555
Piji Li, Jun Ma
We consider the problem of automatically generating concise sentences to describe still pictures. We treat objects in images (nouns in sentences) as hidden information of actions (verbs); the sentence generation problem can therefore be transformed into action detection and scene classification problems. We employ Latent Multiple Kernel Learning (L-MKL) to learn action detectors from "Exemplarlets", and use MKL to learn scene classifiers. The image features include the distribution of edges, dense visual words, and feature descriptors at different levels of a spatial pyramid. For a new image, we detect the action using a sliding-window detector learnt via L-MKL, predict the scene in which the action happens, and build ⟨action, scene⟩ tuples. Finally, these tuples are translated into concise sentences according to a predefined grammar template. We show both classification and sentence generation results on our newly collected dataset of six actions and demonstrate improved performance over existing methods.
{"title":"What is happening in a still picture?","authors":"Piji Li, Jun Ma","doi":"10.1109/ACPR.2011.6166555","DOIUrl":"https://doi.org/10.1109/ACPR.2011.6166555","url":null,"abstract":"We consider the problem of generating concise sentences to describe still pictures automatically. We treat objects in images (nouns in sentences) as hidden information of actions (verbs). Therefore, the sentence generation problem can be transformed into action detection and scene classification problems. We employ Latent Multiple Kernel Learning (L-MKL) to learn the action detectors from “Exemplarlets”, and utilize MKL to learn the scene classifiers. The image features employed include distribution of edges, dense visual words and feature descriptors at different levels of spatial pyramid. For a new image we can detect the action using a sliding-window detector learnt via L-MKL, predict the scene the action happened in and build haction, scenei tuples. Finally, these tuples will be translated into concise sentences according to previously defined grammar template. We show both the classification and sentence generating results on our newly collected dataset of six actions as well as demonstrate improved performance over existing methods.","PeriodicalId":287232,"journal":{"name":"The First Asian Conference on Pattern Recognition","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127903171","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Identical object segmentation through level sets with similarity constraint
Pub Date : 2011-11-01; DOI: 10.1109/ACPR.2011.6166609
Hongbin Xie, Gang Zeng, Rui Gan, H. Zha
Unsupervised identical object segmentation remains a challenging problem in vision research due to the difficulty of obtaining high-level structural knowledge about the scene. In this paper, we present a level-set algorithm with a novel similarity constraint term for identical object segmentation. The key component is embedding the similarity constraint into the curve evolution: the evolving speed is high in regions of similar appearance and low in areas with distinct content. The algorithm starts from a pair of seed matches (e.g., SIFT) and evolves a small initial circle into large similar regions under the similarity constraint. The constraint relies on local alignment, under the assumption that the warp between identical objects is an affine transformation; the correct warp aligns the identical objects and promotes the growth of the similar regions. Alignment and expansion alternate until the curve reaches the boundaries of the similar objects. Experiments on real images validate the efficiency and effectiveness of the proposed algorithm.
{"title":"Identical object segmentation through level sets with similarity constraint","authors":"Hongbin Xie, Gang Zeng, Rui Gan, H. Zha","doi":"10.1109/ACPR.2011.6166609","DOIUrl":"https://doi.org/10.1109/ACPR.2011.6166609","url":null,"abstract":"Unsupervised identical object segmentation remains a challenging problem in vision research due to the difficulties in obtaining high-level structural knowledge about the scene. In this paper, we present an algorithm based on level set with a novel similarity constraint term for identical objects segmentation. The key component of the proposed algorithm is to embed the similarity constraint into curve evolution, where the evolving speed is high in regions of similar appearance and becomes low in areas with distinct contents. The algorithm starts with a pair of seed matches (e.g. SIFT) and evolve the small initial circle to form large similar regions under the similarity constraint. The similarity constraint is related to local alignment with assumption that the warp between identical objects is affine transformation. The right warp aligns the identical objects and promotes the similar regions growth. The alignment and expansion alternate until the curve reaches the boundaries of similar objects. Real experiments validates the efficiency and effectiveness of the proposed algorithm.","PeriodicalId":287232,"journal":{"name":"The First Asian Conference on Pattern Recognition","volume":"88 3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123523327","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Robust hemorrhage detection in diabetic retinopathy image
Pub Date : 2011-11-01; DOI: 10.1109/ACPR.2011.6166529
Dongbo Zhang, Xiong Li, Xingyu Shang, Yao Yi, Yaonan Wang
To improve the robustness of hemorrhage lesion detection in diabetic retinopathy images, we propose an algorithm based on background estimation and vessel exclusion. Candidate hemorrhages are located using background estimation and the Mahalanobis distance; then, based on shape analysis, vessel exclusion is performed to remove non-hemorrhage pixels. Experimental results show that our method effectively reduces the false negatives that arise from inaccurate vessel structure.
{"title":"Robust hemorrhage detection in diabetic retinopathy image","authors":"Dongbo Zhang, Xiong Li, Xingyu Shang, Yao Yi, Yaonan Wang","doi":"10.1109/ACPR.2011.6166529","DOIUrl":"https://doi.org/10.1109/ACPR.2011.6166529","url":null,"abstract":"To improve the robust performance to detect hemorrhage lesions in diabetic retinopathy image, a background estimation and vessel exclusion based algorithm is proposed in this paper. Candidate hemorrhages are located by background estimation and Mahalanobis distance, and then on the basis of shape analysis, vessel exclusion is conducted to remove non hemorrhage pixels. Experiments results show that the performance of our method is effective to reduce the false negative results arise from inaccurate vessel structure.","PeriodicalId":287232,"journal":{"name":"The First Asian Conference on Pattern Recognition","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123835987","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}