Pub Date: 2011-11-01 | DOI: 10.1109/ACPR.2011.6166625
W. Lin, Chung-Lin Huang, Shih-Chung Hsu, Hung-Wei Lin, Hau-Wei Wang
Markerless vision-based capture of human motion parameters is widely applied in human-machine interfaces. However, it faces two problems: high-dimensional parameter estimation and self-occlusion. Here, we propose a 3-D human model with structural, kinematic, and temporal constraints to track a walking human subject from any viewing direction. Our method modifies the Annealed Particle Filter (APF) by applying a pre-trained spatial correlation map and a temporal constraint to estimate the motion parameters of the walking subject. In the experiments, we demonstrate that the proposed method requires less computation time and generates more accurate results.
Title: A vision-based walking motion parameters capturing system
Venue: The First Asian Conference on Pattern Recognition
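The annealing idea behind the APF can be sketched in a few lines: each layer sharpens the observation weights and resamples, so the particle set concentrates on likelihood peaks without searching the full high-dimensional pose space at once. The following is a minimal generic sketch; the toy likelihood, pose dimension, and annealing schedule are illustrative assumptions, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(0)

def annealed_particle_filter(observe_ll, n_particles=200, n_layers=3,
                             dim=2, init_scale=1.0, noise_scale=0.3):
    """One frame of an annealed particle search (generic sketch).

    observe_ll(x) returns the log-likelihood of pose vector x.
    Each annealing layer raises beta toward 1, sharpening the weights,
    then resamples and diffuses with shrinking noise."""
    particles = rng.normal(0.0, init_scale, size=(n_particles, dim))
    for layer in range(n_layers):
        beta = (layer + 1) / n_layers                      # annealing schedule
        logw = beta * np.array([observe_ll(p) for p in particles])
        w = np.exp(logw - logw.max())
        w /= w.sum()
        idx = rng.choice(n_particles, size=n_particles, p=w)  # resample
        spread = noise_scale * (1.0 - layer / n_layers + 0.1)
        particles = particles[idx] + rng.normal(0.0, spread,
                                                size=(n_particles, dim))
    return particles.mean(axis=0)

# toy log-likelihood peaked at pose (1.0, -0.5)
target = np.array([1.0, -0.5])
est = annealed_particle_filter(lambda x: -10.0 * np.sum((x - target) ** 2))
```

With a fixed seed and a sharply peaked toy likelihood, the particle mean lands close to the peak after only three layers.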
Pub Date: 2011-11-01 | DOI: 10.1109/ACPR.2011.6166611
Gang Zeng, Rui Gan, H. Zha
Like many natural and man-made objects, buildings contain repeating elements. Repetition is an important cue for many applications and can be partial, approximate, or both. This paper presents a robust and accurate building facade interpretation algorithm that processes a single input image and efficiently discovers and extracts the repeating elements (e.g. windows) without any prior knowledge of their shape, intensity, or structure. The method locally registers certain key regions in pairs and uses these matches to accumulate evidence for averaged templates. The templates are determined via the graph-theoretic concept of the minimum spanning tree (MST) and via mutual information (MI). Based on the templates, the repeating elements are finally extracted from the input image. Real-scene examples demonstrate the ability of the proposed algorithm to capture important high-level information about the structure of a building facade, which in turn can support further processing operations, including compression, segmentation, editing, and reconstruction.
Title: Building facade interpretation exploiting repetition and mixed templates
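The MST grouping step can be illustrated with a tiny sketch: given pairwise dissimilarities between candidate regions, a minimum spanning tree links each region to its closest relatives, and cutting long edges separates the repeating elements from outliers. This toy example (Prim's algorithm on a dense matrix; the data are made up) only illustrates the graph-theoretic idea, not the paper's registration pipeline:

```python
import numpy as np

def prim_mst(dist):
    """Prim's algorithm on a dense symmetric dissimilarity matrix.
    Returns the MST as a list of (i, j, weight) edges."""
    n = len(dist)
    in_tree = {0}
    edges = []
    while len(in_tree) < n:
        best = None
        for i in in_tree:
            for j in range(n):
                if j not in in_tree and (best is None or dist[i, j] < best[2]):
                    best = (i, j, dist[i, j])
        edges.append(best)
        in_tree.add(best[1])
    return edges

# four candidate regions: three mutually similar, one outlier
pos = np.array([0.0, 1.0, 2.0, 10.0])
dist = np.abs(pos[:, None] - pos[None, :])
mst = prim_mst(dist)
# cutting the single long edge (weight 8) isolates the outlier region
weights = sorted(w for _, _, w in mst)
```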
Pub Date: 2011-11-01 | DOI: 10.1109/ACPR.2011.6166659
Xiaoyuan Jing, Chao Lan, Min Li, Yong-Fang Yao, D. Zhang, Jing-yu Yang
Feature extraction is an important research topic in pattern recognition. The class-specific approach recasts a traditional multi-class feature extraction and recognition task into several binary problems, and therefore inevitably introduces class imbalance: the minority class is the specific class, and the majority class consists of all the other classes. However, the discriminative information in binary problems is usually limited, and imbalanced data may degrade recognition performance. To address these problems, we propose two novel approaches for learning discriminant features from imbalanced data, named class-balanced discrimination (CBD) and orthogonal CBD (OCBD). For a specific class, we select a reduced counterpart class whose samples are nearest to those of the specific class, and further divide it into smaller subsets, each the same size as the specific class, to achieve balance. Each subset is then combined with the minority class, and linear discriminant analysis (LDA) is performed on the pair to extract discriminative vectors. To further remove redundant information, we impose an orthogonality constraint on the discriminant vectors extracted from correlated classes. Experimental results on three public image databases demonstrate that the proposed approaches outperform several related image feature extraction and recognition methods.
Title: Class-imbalance learning based discriminant analysis
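The balancing step can be sketched as follows: the majority ("counterpart") class is split into subsets the size of the minority class, and a two-class Fisher discriminant is computed for each balanced pair. This is a simplified illustration of the CBD idea; random splits and a basic Fisher solver stand in for the paper's nearest-sample selection:

```python
import numpy as np

rng = np.random.default_rng(1)

def fisher_direction(X0, X1):
    """Two-class Fisher/LDA direction: w = Sw^{-1} (m1 - m0)."""
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = ((len(X0) - 1) * np.cov(X0, rowvar=False)
          + (len(X1) - 1) * np.cov(X1, rowvar=False))
    # small ridge keeps the solve stable for near-singular scatter
    return np.linalg.solve(Sw + 1e-6 * np.eye(Sw.shape[0]), m1 - m0)

def class_balanced_directions(X_minor, X_major):
    """Split the majority class into minority-sized subsets and run
    two-class LDA on each balanced pair: one discriminant per subset."""
    k = len(X_minor)
    order = rng.permutation(len(X_major))
    return np.array([
        fisher_direction(X_minor, X_major[order[s:s + k]])
        for s in range(0, len(X_major) - k + 1, k)
    ])

# toy data: 20 minority samples vs 60 majority samples in 3-D
X_minor = rng.normal(0.0, 1.0, size=(20, 3))
X_major = rng.normal(2.0, 1.0, size=(60, 3))
dirs = class_balanced_directions(X_minor, X_major)
```

With 60 majority and 20 minority samples, the split yields three balanced pairs and hence three discriminant vectors.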
Pub Date: 2011-11-01 | DOI: 10.1109/ACPR.2011.6166551
Jianyun Liu, Yunhong Wang, Zhaoxiang Zhang, Yi Mo
Classification of moving objects in traffic-scene videos has become a hot topic in recent years; classifying moving objects into pedestrians, motor vehicles, non-motor vehicles, etc. is of significant value to intelligent traffic systems. Traditional machine learning approaches assume that source-scene and target-scene objects share the same distribution, which does not hold in most cases. Under this circumstance, a large amount of manual labeling of target-scene data is needed, which is time- and labor-consuming. In this paper, we introduce TrAdaBoost, a transfer learning algorithm, to bridge the gap between source and target scenes. During training, TrAdaBoost makes full use of the source-scene data most similar to the target-scene data, so that only a small number of labeled target-scene samples suffice to improve performance significantly. The features used for classification are Histogram of Oriented Gradients (HOG) features of appearance-based instances. The experimental results show that the transfer learning method clearly outperforms traditional machine learning algorithms.
Title: Multi-view moving objects classification via transfer learning
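The core of TrAdaBoost is its asymmetric weight update: source examples the weak learner misclassifies are down-weighted (they look unlike the target scene), while misclassified target examples are up-weighted as in AdaBoost. A single round of that update can be sketched like this; it is a simplified illustration following Dai et al.'s update rules, not the paper's full training loop:

```python
import numpy as np

def tradaboost_update(w, y, pred, n_source, n_rounds):
    """One round of TrAdaBoost weight updates.
    w: current weights; y, pred: true labels and weak-learner predictions;
    the first n_source examples come from the source domain."""
    w = w.copy()
    miss = (y != pred).astype(float)
    # weighted error is measured on the target portion only
    wt = w[n_source:]
    eps = float(np.sum(wt * miss[n_source:]) / np.sum(wt))
    eps = min(eps, 0.499)                         # keep the update well-defined
    beta_t = eps / (1.0 - eps)                    # AdaBoost-style target base
    beta_s = 1.0 / (1.0 + np.sqrt(2.0 * np.log(n_source) / n_rounds))
    w[:n_source] *= beta_s ** miss[:n_source]     # down-weight bad source data
    w[n_source:] *= beta_t ** (-miss[n_source:])  # up-weight hard target data
    return w / w.sum()

# 4 source + 4 target examples, uniform weights, one mistake in each portion
y    = np.array([0, 0, 1, 1, 0, 0, 1, 1])
pred = np.array([0, 1, 1, 1, 0, 1, 1, 1])
w = tradaboost_update(np.full(8, 1 / 8), y, pred, n_source=4, n_rounds=10)
```

After the round, the misclassified source example carries less weight than its correctly classified peers, while the misclassified target example carries more.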
Pub Date: 2011-11-01 | DOI: 10.1109/ACPR.2011.6166707
Wuxia Zhang, Yuan Yuan, Xuelong Li, Pingkun Yan
Image segmentation plays a critical role in medical imaging applications, yet it remains a challenging problem due to the complex shapes and complicated textures of structures in medical images. Model-based methods have been widely used for medical image segmentation because a priori knowledge can be incorporated. Accurate shape-prior estimation is one of the major factors affecting the accuracy of model-based segmentation methods. This paper proposes a novel statistical shape modeling method that estimates a target-oriented shape prior by applying a constraint derived from the intrinsic structure of the training shape set. The proposed shape model is incorporated into a deformable-model-based framework for image segmentation. Experimental results show that the proposed method achieves more accurate segmentation than existing methods.
Title: Target-oriented shape modeling with structure constraint for image segmentation
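A conventional statistical shape model, against which a target-oriented prior can be contrasted, is the PCA point-distribution model: a mean shape plus a few principal modes of variation, with new shapes constrained to the model subspace. A minimal sketch on toy landmark vectors (this illustrates the standard PCA model, not the paper's structure constraint):

```python
import numpy as np

def build_shape_model(shapes, n_modes=1):
    """Point-distribution model: mean shape plus top PCA modes.
    shapes: (n_shapes, n_coords) array of aligned landmark vectors."""
    mean = shapes.mean(axis=0)
    U, S, Vt = np.linalg.svd(shapes - mean, full_matrices=False)
    return mean, Vt[:n_modes]

def project_to_model(shape, mean, modes):
    """Constrain a shape to the model subspace (acting as the prior)."""
    b = modes @ (shape - mean)
    return mean + modes.T @ b

# toy training set: every shape is the mean plus one mode of variation
mode = np.array([1.0, 0.0, -1.0, 0.0]) / np.sqrt(2.0)
coeffs = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
shapes = 5.0 + np.outer(coeffs, mode)          # mean shape is all-5s
mean, modes = build_shape_model(shapes, n_modes=1)
recon = project_to_model(shapes[0], mean, modes)
```

Because the toy shapes lie exactly in a one-mode subspace, projection reconstructs them exactly; real landmark data would be approximated by the retained modes.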
Pub Date: 2011-11-01 | DOI: 10.1109/ACPR.2011.6166698
Histogram features such as SIFT, HOG, and LBP are widely used in modern computer vision algorithms. According to [18], the chi-square distance is an effective measure for comparing histogram features. In this paper, we propose a new method for histogram features, named Quadratic-chi similarity metric learning (QCSML). The main contribution of this paper is a metric learning method based on the chi-square distance, in contrast with traditional Mahalanobis distance metric learning methods. The use of the quadratic-chi similarity leads to an effective learning algorithm. Our method is tested on SIFT features for face identification and compared with a state-of-the-art metric learning method (LDML) on the benchmark Labeled Faces in the Wild (LFW) dataset. Experimental results show that our method achieves clear performance gains over LDML.
Title: Quadratic-chi similarity metric learning for histogram feature
Authors: Xinyuan Cai, Baihua Xiao, Chunheng Wang, Rongguo Zhang
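The chi-square distance the paper builds on is simple to state for two L1-normalized histograms; a minimal sketch:

```python
import numpy as np

def chi2_distance(h1, h2, eps=1e-10):
    """Chi-square distance between two histograms.
    Bins where both histograms are empty contribute nothing (eps guard)."""
    return 0.5 * float(np.sum((h1 - h2) ** 2 / (h1 + h2 + eps)))

h1 = np.array([0.5, 0.5])
h2 = np.array([1.0, 0.0])
d = chi2_distance(h1, h2)   # 0.5 * (0.25/1.5 + 0.25/0.5) = 1/3
```

The per-bin denominator makes the measure sensitive to relative rather than absolute differences, which is why it suits sparse histogram features like SIFT and HOG.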
Pub Date: 2011-11-01 | DOI: 10.1109/ACPR.2011.6166617
Xiao-Xin Li, D. Dai, Xiao-Fei Zhang, Chuan-Xian Ren
Partially occluded faces are common in real-world automatic face recognition. Existing methods, such as sparse error correction with Markov random fields, correntropy-based sparse representation, and robust sparse coding, are all based on error correction, which relies on perfect reconstruction of the occluded facial image and limits recognition rates, especially when the occluded regions are large. Recognition rates can be enhanced if the occluded portions are detected and excluded from further classification. Based on a magnitude-order measure, we propose an effective occlusion detection algorithm called Partially Iteratively Reweighted Sparse Coding (PIRSC). Compared with state-of-the-art methods, our PIRSC-based classifier greatly improves the face recognition rate, especially when the occlusion percentage is large.
Title: Face recognition with continuous occlusion using partially iteratively reweighted sparse coding
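The "iteratively reweighted" mechanism can be illustrated with generic iteratively reweighted least squares: entries with large residuals (e.g. occluded pixels) receive small weights and effectively drop out of the reconstruction. This is a sketch of the general IRLS idea under an assumed Cauchy-style weight function, not the paper's sparse-coding formulation:

```python
import numpy as np

rng = np.random.default_rng(2)

def irls_fit(A, y, n_iter=15, c=1.0):
    """Iteratively reweighted least squares with Cauchy-style weights:
    w_i = 1 / (1 + (r_i / c)^2), so outlying residuals are suppressed."""
    w = np.ones(len(y))
    for _ in range(n_iter):
        sw = np.sqrt(w)
        x, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)
        r = y - A @ x
        w = 1.0 / (1.0 + (r / c) ** 2)
    return x, w

# clean linear data with a block of gross 'occlusion' errors
A = rng.normal(size=(60, 3))
x_true = np.array([1.0, -2.0, 0.5])
y = A @ x_true
y[:8] += 10.0                     # first 8 entries heavily corrupted
x_hat, w = irls_fit(A, y)
```

After a few iterations the corrupted entries carry near-zero weight, and the fit is driven by the clean entries alone.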
Pub Date: 2011-11-01 | DOI: 10.1109/ACPR.2011.6166674
Genquan Duan, H. Ai, Takayoshi Yamashita, S. Lao
In this paper, we propose to detect human-centric objects, including the face, head-shoulder, upper body, left body, right body, and whole body, which provide essential information for locating humans in highly crowded scenes. In the literature, approaches to multi-class object detection either treat each class independently, learning and applying its classifier in turn, or treat all classes as a whole, learning individual classifiers over shared features and detecting by step-by-step division. Unlike these works, we consider two issues: the similarities and distinctions among classes, and the semantic relations among them. Our main idea is to first predict class labels quickly using a Salient Patch Model (SPM), and then detect accurately using the detectors of the predicted classes, in which a Semantic Relation Model (SRM) captures relations among classes for efficient inference. SPM and SRM address these two issues respectively. Experiments on challenging real-world datasets demonstrate that the proposed approach achieves significant performance improvements.
Title: Human centric object detection in highly crowded scenes
Pub Date: 2011-11-01 | DOI: 10.1109/ACPR.2011.6166669
Mingyang Jiang, Chunxiao Li, Zirui Deng, Jufu Feng, Liwei Wang
We propose an error-learning model for image classification. Motivated by the observation that classifiers trained on local grid regions of images are often biased, i.e., they make many classification errors, we present a two-level combined model that learns useful classification information from these errors based on Bayes' rule. We give a theoretical analysis showing that this error-learning model effectively corrects the classification errors made by the local-region classifiers. We conduct extensive experiments on benchmark image classification datasets and obtain promising results.
Title: Learning from error: A two-level combined model for image classification
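The Bayes-rule correction at the heart of such an error-learning scheme can be sketched directly: from a confusion matrix estimated on held-out data, compute P(true class | predicted class) and use it to reinterpret a biased classifier's outputs. The numbers below are illustrative, not from the paper:

```python
import numpy as np

def posterior_from_confusion(conf_counts, prior=None):
    """Given confusion counts (rows = true class, cols = predicted class),
    return the column-stochastic matrix P(true | predicted) via Bayes' rule."""
    like = conf_counts / conf_counts.sum(axis=1, keepdims=True)  # P(pred|true)
    if prior is None:
        prior = np.full(len(conf_counts), 1.0 / len(conf_counts))
    joint = like * prior[:, None]                                # P(true, pred)
    return joint / joint.sum(axis=0, keepdims=True)

# a biased local-region classifier: class 1 is often mistaken for class 0
conf = np.array([[8.0, 2.0],
                 [4.0, 6.0]])
post = posterior_from_confusion(conf)
```

Here a prediction of class 0 only implies the true class is 0 with probability 2/3, so downstream combination can discount it accordingly.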
Pub Date: 2011-11-01 | DOI: 10.1109/ACPR.2011.6166686
Jingwen Li, Lei Huang, Chang-ping Liu
People counting is a challenging task that has attracted much attention in video surveillance. In this paper, we present an efficient self-learning people counting system that counts the exact number of people in a region of interest. The system, based on a bag-of-features model, can effectively detect pedestrians, some of whom are usually treated as background because they are static or move slowly. The system also selects pedestrian and non-pedestrian samples automatically and updates the classifier in real time, making it better suited to a specific scene. Experimental results on a practical public dataset, the CASIA Pedestrian Counting Dataset, show that the proposed system is robust and accurate.
Title: An efficient self-learning people counting system