
2008 Chinese Conference on Pattern Recognition: Latest Publications

A Novel Facial Appearance Descriptor Based on Local Binary Pattern
Pub Date : 2008-10-31 DOI: 10.1109/CCPR.2008.58
Shihu Zhu, Jufu Feng
One of the key challenges in face recognition is finding efficient and discriminative facial appearance descriptors that are resistant to large variations in illumination, pose, facial expression, ageing, face misalignment and other changes. In this paper, we propose a novel facial appearance descriptor based on the local binary pattern (LBP), which offers several advantages: (1) it is more discriminative; (2) it is not sensitive to variations in illumination, pose, facial expression, ageing or face misalignment; (3) it can be computed very efficiently and its feature sets are low-dimensional. Experiments on the FERET database show that the proposed operator significantly outperforms other feature descriptors.
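For readers unfamiliar with the underlying operator, the sketch below illustrates the basic 3x3 LBP code computation and a block-histogram descriptor in NumPy. It is a generic LBP baseline, not the enhanced descriptor proposed in the paper; the 7x7 grid, the 256-bin histograms and the names lbp_codes/lbp_descriptor are illustrative assumptions.

```python
import numpy as np

def lbp_codes(img):
    """8-neighbour LBP codes for the interior pixels of a grayscale image."""
    c = img[1:-1, 1:-1]
    # Neighbour offsets in clockwise order; each is compared with the centre pixel.
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros(c.shape, dtype=np.int32)
    h, w = img.shape
    for bit, (dy, dx) in enumerate(shifts):
        neigh = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes += (neigh >= c).astype(np.int32) << bit
    return codes

def lbp_descriptor(img, grid=(7, 7)):
    """Concatenate per-block LBP histograms into a single appearance descriptor."""
    codes = lbp_codes(img)
    hists = []
    for rows in np.array_split(codes, grid[0], axis=0):
        for cell in np.array_split(rows, grid[1], axis=1):
            hist, _ = np.histogram(cell, bins=256, range=(0, 256))
            hists.append(hist / max(hist.sum(), 1))
    return np.concatenate(hists)

face = np.random.randint(0, 256, (112, 96)).astype(np.uint8)
print(lbp_descriptor(face).shape)   # (7 * 7 * 256,) = (12544,)
```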
Citations: 8
A Robust Video Watermarking Algorithm Resistant to Geometry Transformation Attacks Based on Background
Pub Date : 2008-10-31 DOI: 10.1109/CCPR.2008.70
L. Pang, Yiquan Wu
How to resist geometry transformation attacks effectively has become a focus of digital watermarking research. In this paper, a blind video watermarking algorithm resistant to geometry transformation attacks is proposed. In the embedding scheme, video shot segmentation is applied first. Then the background of a video segment within the shot is extracted by independent component analysis (ICA). Finally, the background image is decomposed by the nonsubsampled contourlet transform (NSCT) and a meaningful watermark is embedded into the lowpass subband. In the extraction scheme, the watermarked video segment in the shot is first analyzed by ICA to extract the background image carrying the watermark information. Then the watermarked segment and the other frames in the shot are analyzed by ICA to extract the background image without watermark information. Finally, both background images are decomposed by NSCT and the watermark is recovered by comparing their lowpass subbands. Experimental results show that the algorithm enables the watermark to resist geometry transformation attacks effectively while preserving the visual quality of the video, and that it is also sufficiently robust against other attacks.
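As a rough, simplified illustration of the embedding side of such a pipeline, the sketch below estimates a background image from a frame stack with scikit-learn's FastICA and embeds watermark bits into a lowpass subband. A single-level Haar DWT from PyWavelets stands in for the nonsubsampled contourlet transform, and the frame count, the component-selection heuristic, the strength alpha and the +/-1 bit mapping are assumptions, not the authors' exact scheme.

```python
import numpy as np
import pywt                                   # PyWavelets
from sklearn.decomposition import FastICA

def extract_background(frames):
    """Estimate a background image from one video segment via ICA.

    frames: array of shape (n_frames, height, width).
    """
    n, h, w = frames.shape
    X = frames.reshape(n, -1).astype(float)
    ica = FastICA(n_components=min(n, 4), random_state=0)
    S = ica.fit_transform(X.T).T              # one spatial component per row
    # Heuristic: take the most energetic component as the (static) background.
    bg = S[np.argmax(np.abs(S).sum(axis=1))]
    return bg.reshape(h, w)

def embed_watermark(background, bits, alpha=2.0):
    """Embed +/-1 watermark bits into the lowpass subband of a Haar DWT."""
    cA, detail = pywt.dwt2(background, 'haar')
    low = cA.copy().ravel()
    low[:len(bits)] += alpha * (2 * np.asarray(bits) - 1)
    return pywt.idwt2((low.reshape(cA.shape), detail), 'haar')

frames = np.random.rand(8, 64, 64)            # stand-in for a video segment
marked = embed_watermark(extract_background(frames), bits=[1, 0, 1, 1])
print(marked.shape)                           # (64, 64)
```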
Citations: 0
Two-Dimensional Inverse FDA for Face Recognition
Pub Date : 2008-10-31 DOI: 10.1109/CCPR.2008.51
Wankou Yang, Hui Yan, Jun Yin, Jingyu Yang
In this paper, we propose a two-dimensional inverse Fisher discriminant analysis (2DIFDA) method for feature extraction and face recognition. The method combines the ideas of two-dimensional principal component analysis and inverse FDA, and it directly extracts the optimal projective vectors from 2D image matrices rather than from image vectors, based on the inverse Fisher discriminant criterion. Experiments on the FERET face database show that the new method outperforms PCA, 2DPCA, Fisherfaces and inverse Fisher discriminant analysis.
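The sketch below conveys the general 2D idea of pulling projection vectors directly from image matrices. It uses the ordinary Fisher criterion for illustration; the paper's inverse Fisher criterion arranges the scatter matrices differently, and the function two_d_fda, the ridge term and the chosen dimensionality are assumptions.

```python
import numpy as np

def two_d_fda(images, labels, n_dims=5):
    """images: (n, h, w) array; labels: (n,). Returns a (w, n_dims) projection matrix."""
    images = np.asarray(images, dtype=float)
    labels = np.asarray(labels)
    mean_all = images.mean(axis=0)
    w = images.shape[2]
    Sb = np.zeros((w, w))                     # image between-class scatter
    Sw = np.zeros((w, w))                     # image within-class scatter
    for c in np.unique(labels):
        cls = images[labels == c]
        mean_c = cls.mean(axis=0)
        d = mean_c - mean_all
        Sb += len(cls) * d.T @ d
        for img in cls:
            e = img - mean_c
            Sw += e.T @ e
    # Fisher-style generalized eigenproblem; the small ridge keeps Sw invertible.
    vals, vecs = np.linalg.eig(np.linalg.solve(Sw + 1e-6 * np.eye(w), Sb))
    order = np.argsort(vals.real)[::-1]
    return vecs[:, order[:n_dims]].real       # project 2-D images with images @ W

X = np.random.rand(20, 32, 28)                # 20 toy "images" from 4 classes
y = np.repeat(np.arange(4), 5)
W = two_d_fda(X, y)
print((X @ W).shape)                          # (20, 32, 5)
```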
Citations: 4
Phone-Level Mispronunciation Detection for Computer-Assisted Language Learning
Pub Date : 2008-10-31 DOI: 10.1109/CCPR.2008.83
Xin Feng, Lan Wang
This paper presents a mispronunciation detection system that uses automatic speech recognition to detect phone-level mispronunciations in the speech of Cantonese learners of English. Our approach extends a target pronunciation lexicon with phonetic confusions that may lead to pronunciation errors, producing an extended pronunciation lexicon that contains both the target pronunciation of each word and its pronunciation variants. Viterbi decoding is then run with the extended pronunciation lexicon to detect phone-level mispronunciations in learners' speech. The paper introduces a data-driven approach that performs automatic phone recognition on the Cantonese learners' speech and analyzes the recognition errors to derive the possible phonetic confusions. Because rule-based generation produces many implausible mispronunciations, we also present a method to automatically prune the extended pronunciation lexicon. Experimental results show that the pruned extended pronunciation lexicon detects phone-level mispronunciations better than the fully extended lexicon.
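A toy sketch of the lexicon-extension and pruning steps is given below, assuming a minimal {word: phone list} lexicon and a hypothetical confusion table; the HMM phone recognizer and Viterbi decoder used in the actual system are omitted, and the variant counts are invented.

```python
from itertools import product

# Hypothetical canonical lexicon and confusion table (canonical -> likely substitutes).
lexicon = {"this": ["dh", "ih", "s"], "three": ["th", "r", "iy"]}
confusions = {"dh": ["d"], "th": ["f", "s"], "r": ["w"]}

def extend_lexicon(lexicon, confusions):
    """Generate every pronunciation variant implied by the confusion table."""
    extended = {}
    for word, phones in lexicon.items():
        options = [[p] + confusions.get(p, []) for p in phones]
        extended[word] = [list(v) for v in product(*options)]
    return extended

def prune(extended, counts, min_count=2):
    """Keep the canonical form plus variants seen often enough in learner data."""
    pruned = {}
    for word, variants in extended.items():
        keep = [variants[0]]                  # canonical pronunciation is always kept
        keep += [v for v in variants[1:]
                 if counts.get((word, tuple(v)), 0) >= min_count]
        pruned[word] = keep
    return pruned

ext = extend_lexicon(lexicon, confusions)
print(len(ext["three"]))                      # 6 variants: (1+2) * (1+1) * 1
counts = {("three", ("f", "r", "iy")): 3}     # assumed variant frequencies from data
print([" ".join(v) for v in prune(ext, counts)["three"]])   # ['th r iy', 'f r iy']
```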
Citations: 0
Accurate Eye Localization under Large Illumination and Expression Variations with Enhanced Pictorial Model
Pub Date : 2008-10-31 DOI: 10.1109/CCPR.2008.25
F. Song, Xiaoyang Tan, Songcan Chen
As the first step of face normalization, accurate eye localization is of fundamental importance to the performance of face recognition systems. One of the most classical approaches is the pictorial model, in which the appearance model and shape constraints are optimized jointly. However, under extreme illumination changes and large expression variations, the simple Gaussian appearance model and the localization-based shape constraints used in the pictorial model cannot handle the complex appearance and structural changes that appear in the given face image. In this paper, we enhance the pictorial model by combining illumination preprocessing, robust image descriptors, a probabilistic SVM and an improved structural model that is invariant to scale, rotation and other transforms. Experimental results on the CAS-PEAL dataset demonstrate that the proposed model can accurately localize eyes despite large illumination and expression variations in face images.
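The sketch below conveys the basic pictorial-model search: candidate eye locations are scored by an appearance term plus a Gaussian shape term on the eye-pair geometry. The score maps, the expected inter-ocular distance expected_dx and the spread sigma are placeholders, not the enhanced model described in the paper.

```python
import numpy as np

def localize_eyes(left_scores, right_scores, expected_dx=30.0, sigma=5.0, top_k=20):
    """left_scores / right_scores: 2-D appearance score maps (e.g. probabilistic SVM outputs)."""
    def top_candidates(scores):
        idx = np.argsort(scores, axis=None)[-top_k:]
        ys, xs = np.unravel_index(idx, scores.shape)
        return list(zip(ys, xs, scores[ys, xs]))

    best, best_score = None, -np.inf
    for ly, lx, ls in top_candidates(left_scores):
        for ry, rx, rs in top_candidates(right_scores):
            dx, dy = rx - lx, ry - ly
            # Shape term: eyes roughly horizontal and at the expected distance.
            shape = -((dx - expected_dx) ** 2 + dy ** 2) / (2 * sigma ** 2)
            score = ls + rs + shape
            if score > best_score:
                best, best_score = ((int(ly), int(lx)), (int(ry), int(rx))), score
    return best

left_map = np.random.rand(60, 60)             # stand-ins for learned appearance scores
right_map = np.random.rand(60, 60)
print(localize_eyes(left_map, right_map))     # ((ly, lx), (ry, rx))
```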
Citations: 0
Nearest Feature Line: A Tangent Approximation
Pub Date : 2008-10-31 DOI: 10.1109/CCPR.2008.22
R. He, Meng Ao, Shi-ming Xiang, S.Z. Li
Nearest feature line (NFL) (S.Z. Li and J. Lu, 1999) is an efficient yet simple classification method for pattern recognition. This paper presents a theoretical analysis and interpretation of NFL from the perspective of manifold analysis, and explains the geometric nature of NFL-based similarity measures. It is shown that NFL, nearest feature plane (NFP) and nearest feature space (NFS) are special cases of tangent approximation. Under the manifold assumption, we introduce localized NFL (LNFL) and nearest feature spline (NFB) to further enhance classification ability and reduce computational complexity. LNFL extends NFL's Euclidean distance to a manifold distance, while for NFB feature lines are constructed along with a manifold's variation defined on a tangent bundle. The proposed methods are validated on a synthetic dataset and two standard face recognition databases (FRGC version 2 and FERET). Experimental results illustrate their efficiency and effectiveness.
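For reference, the sketch below implements the original NFL classification rule (point-to-feature-line distance over all within-class prototype pairs); the localized and spline extensions introduced in the paper are not reproduced, and the toy data and function name nfl_classify are illustrative.

```python
import numpy as np
from itertools import combinations

def nfl_classify(x, prototypes, labels):
    """prototypes: (n, d) array; labels: (n,). Returns the predicted label for x."""
    best_label, best_dist = None, np.inf
    for c in np.unique(labels):
        pts = prototypes[labels == c]
        for xi, xj in combinations(pts, 2):
            d = xj - xi
            # Project x onto the feature line through prototypes xi and xj.
            t = np.dot(x - xi, d) / np.dot(d, d)
            foot = xi + t * d
            dist = np.linalg.norm(x - foot)
            if dist < best_dist:
                best_label, best_dist = c, dist
    return best_label

X = np.vstack([np.random.randn(5, 3), np.random.randn(5, 3) + 4.0])   # two toy classes
y = np.array([0] * 5 + [1] * 5)
print(nfl_classify(np.full(3, 4.0), X, y))    # almost certainly class 1
```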
Citations: 8
Simplified Intelligence Single Particle Optimization Based Neural Network for Digit Recognition
Pub Date : 2008-10-31 DOI: 10.1109/CCPR.2008.74
Jiarui Zhou, Z. Ji, L. Shen
To overcome the drawback of over-dependence on input parameters in intelligence single particle optimization (ISPO), an improved algorithm, called simplified intelligence single particle optimization (SISPO), is proposed in this paper. While maintaining performance similar to ISPO, SISPO requires no special parameter settings. The proposed SISPO was successfully applied to train a neural network classifier for digit recognition. Experimental results demonstrate that the proposed training algorithm, the simplified intelligence single particle optimization neural network (SISPONN), achieves lower training and test errors than traditional BP algorithms such as gradient methods.
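Since the exact (S)ISPO update rules are not given in the abstract, the sketch below only conveys the general idea of derivative-free, single-particle weight search for a small network: one coordinate of the flattened weight vector is perturbed at a time and the move is kept only if the loss decreases. The network size, the perturbation scale step and the random data are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(w, X, n_hidden=8, n_classes=10):
    """Tiny one-hidden-layer network; w is a flat parameter vector."""
    d = X.shape[1]
    W1 = w[:d * n_hidden].reshape(d, n_hidden)
    W2 = w[d * n_hidden:].reshape(n_hidden, n_classes)
    return np.tanh(X @ W1) @ W2

def single_particle_train(X, Y, n_hidden=8, iters=2000, step=0.1):
    d, k = X.shape[1], Y.shape[1]
    w = rng.normal(scale=0.1, size=d * n_hidden + n_hidden * k)
    best = np.mean((forward(w, X, n_hidden, k) - Y) ** 2)
    for _ in range(iters):
        i = rng.integers(len(w))              # perturb one coordinate at a time
        trial = w.copy()
        trial[i] += rng.normal(scale=step)
        loss = np.mean((forward(trial, X, n_hidden, k) - Y) ** 2)
        if loss < best:                       # keep only improving moves
            w, best = trial, loss
    return w, best

X = rng.random((100, 64))                     # e.g. flattened 8x8 digit images
Y = np.eye(10)[rng.integers(10, size=100)]    # one-hot digit labels
weights, final_loss = single_particle_train(X, Y)
print(round(final_loss, 4))
```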
Citations: 21
Local Maximal Marginal Embedding with Application to Face Recognition
Pub Date : 2008-10-31 DOI: 10.1109/CCPR.2008.49
Cairong Zhao, Zhihui Lai, Yuelei Sui, Yi Chen
Many problems in information processing involve some form of dimensionality reduction. This paper develops a new approach to dimensionality reduction of high-dimensional data, called local maximal marginal (interclass) embedding (LMME), for manifold learning and pattern recognition. LMME can be seen as a linear approach within a multi-manifold learning framework that integrates neighborhood and class-relation information. LMME characterizes the local maximal marginal scatter as well as the local intraclass compactness, seeking a projection that maximizes the local maximal margin and minimizes the local intraclass scatter. This property makes LMME more powerful than the most up-to-date method, marginal Fisher analysis (MFA), while maintaining all the advantages of MFA. The proposed algorithm is applied to face recognition and is evaluated on the Yale, AR and ORL face image databases. The experimental results show that LMME consistently outperforms PCA, LDA and MFA, owing to its locally discriminating nature. This demonstrates that LMME is an effective method for face recognition.
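The sketch below shows a generic marginal-Fisher-style graph embedding: a local within-class graph and a local between-class graph are built from nearest neighbours, and a projection is obtained from a generalized eigenproblem. The neighbourhood sizes k_within/k_between, the binary weights and the ridge term are illustrative choices, not the exact LMME construction.

```python
import numpy as np
from scipy.spatial.distance import cdist

def local_margin_embedding(X, y, k_within=3, k_between=5, n_dims=2):
    """X: (n, d) samples; y: (n,) labels. Returns a (d, n_dims) projection matrix."""
    n, d = X.shape
    D = cdist(X, X)
    W_w = np.zeros((n, n))                    # local within-class (compactness) graph
    W_b = np.zeros((n, n))                    # local between-class (margin) graph
    for i in range(n):
        same = np.where(y == y[i])[0]
        diff = np.where(y != y[i])[0]
        for j in same[np.argsort(D[i, same])[1:k_within + 1]]:
            W_w[i, j] = W_w[j, i] = 1.0
        for j in diff[np.argsort(D[i, diff])[:k_between]]:
            W_b[i, j] = W_b[j, i] = 1.0
    L_w = np.diag(W_w.sum(1)) - W_w           # graph Laplacians
    L_b = np.diag(W_b.sum(1)) - W_b
    # Maximize the local margin scatter relative to the local within-class scatter.
    M = np.linalg.solve(X.T @ L_w @ X + 1e-6 * np.eye(d), X.T @ L_b @ X)
    vals, vecs = np.linalg.eig(M)
    order = np.argsort(vals.real)[::-1]
    return vecs[:, order[:n_dims]].real

X = np.random.rand(40, 10)
y = np.repeat(np.arange(4), 10)
P = local_margin_embedding(X, y)
print((X @ P).shape)                          # (40, 2)
```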
Citations: 6
A Text Feature Selection Algorithm Based on Improved TFIDF
Pub Date : 2008-10-31 DOI: 10.1109/CCPR.2008.87
Cheng-San Yang, Xingshi He
In Chinese text categorization systems, most classifiers use the vector space model (VSM), in which all document attributes form a high-dimensional feature space, and this high dimensionality is the bottleneck of categorization. TFIDF is a common method for weighting the terms in a document. The method is simple, but it does not consider the unbalanced distribution of terms among classes. This paper analyzes the TFIDF feature selection algorithm in depth and proposes a new TFIDF feature selection method based on Gini index theory. Experimental results show that the method is effective in improving the accuracy of text categorization.
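As a small illustration of the Gini-index idea, the sketch below scores each term by the sum of squared class-conditional proportions of the documents containing it and keeps the highest-scoring terms; how the paper actually folds this score into the TFIDF weight is not reproduced, and the toy corpus is invented.

```python
from collections import Counter

# A tiny labelled corpus; "report" occurs in both classes, the other terms in only one.
docs = [("sports", "ball match team report"), ("sports", "team win match"),
        ("finance", "stock market price report"), ("finance", "market stock fund")]

def gini_scores(docs):
    """Score each term by the sum of squared class-conditional document proportions."""
    term_class = {}
    for label, text in docs:
        for term in set(text.split()):
            term_class.setdefault(term, Counter())[label] += 1
    scores = {}
    for term, counts in term_class.items():
        total = sum(counts.values())
        scores[term] = sum((c / total) ** 2 for c in counts.values())
    return scores

def select_features(docs, top_n=5):
    scores = gini_scores(docs)
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

scores = gini_scores(docs)
print(round(scores["stock"], 2), round(scores["report"], 2))   # 1.0 (class-pure) vs 0.5 (spread)
print(select_features(docs))                  # class-pure terms rank above "report"
```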
Citations: 2
Manifold-Based Supervised Feature Extraction and Face Recognition
Pub Date : 2008-10-31 DOI: 10.1109/CCPR.2008.16
Caikou Chen, Cao Li, Jing-yu Yang
Unsupervised discriminant projection (UDP) works well for face recognition, but it does not make full use of the class information of the training samples, which is useful for classification. Linear discriminant analysis (LDA) is a classical face recognition method that is effective for classification but cannot discover the nonlinear structure of the samples. This paper develops a manifold-based supervised feature extraction method that combines the manifold learning method UDP with class-label information. It seeks a projection that maximizes the nonlocal scatter while minimizing the local scatter and the within-class scatter. The method not only finds the intrinsic low-dimensional nonlinear representation of the original high-dimensional data, but is also effective for classification. Experimental results on the Yale face image database show that the proposed method outperforms UDP and LDA.
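The sketch below illustrates the combined criterion described above: maximize the nonlocal scatter while minimizing the local and within-class scatter, solved as a generalized eigenproblem. The neighbourhood size k, the equal weighting of the two penalty terms and the ridge term are assumptions for illustration.

```python
import numpy as np
from scipy.spatial.distance import cdist

def supervised_udp(X, y, k=5, n_dims=2):
    """X: (n, d) samples; y: (n,) labels. Returns a (d, n_dims) projection matrix."""
    n, d = X.shape
    D = cdist(X, X)
    # H[i, j] = 1 when j is among the k nearest neighbours of i (a "local" pair).
    H = np.zeros((n, n))
    for i in range(n):
        H[i, np.argsort(D[i])[1:k + 1]] = 1.0
    H = np.maximum(H, H.T)
    S_L = np.zeros((d, d))                    # local scatter
    S_N = np.zeros((d, d))                    # nonlocal scatter
    S_W = np.zeros((d, d))                    # within-class scatter
    for i in range(n):
        for j in range(n):
            diff = (X[i] - X[j])[:, None]
            target = S_L if H[i, j] else S_N
            target += diff @ diff.T           # in-place update of the chosen scatter
        e = (X[i] - X[y == y[i]].mean(axis=0))[:, None]
        S_W += e @ e.T
    # Maximize nonlocal scatter against local plus within-class scatter.
    M = np.linalg.solve(S_L + S_W + 1e-6 * np.eye(d), S_N)
    vals, vecs = np.linalg.eig(M)
    return vecs[:, np.argsort(vals.real)[::-1][:n_dims]].real

X = np.random.rand(30, 8)
y = np.repeat(np.arange(3), 10)
print((X @ supervised_udp(X, y)).shape)       # (30, 2)
```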
Citations: 0