
7th International Conference on Automatic Face and Gesture Recognition (FGR06): Latest Publications

Component-based robust face detection using AdaBoost and decision tree
K. Ichikawa, T. Mita, O. Hori
We present a robust frontal face detection method that identifies face positions in images by combining the results of a low-resolution whole-face classifier and individual face-part classifiers. Our approach is to use face-part information and to change the identification strategy based on the results from the individual face-part classifiers. These classifiers are implemented with AdaBoost. Moreover, we propose a novel decision-tree method to improve the performance of face detectors on occluded faces. The proposed decision tree distinguishes partially occluded faces based on the results from the individual classifiers. Preliminary experiments on a test sample set containing non-occluded and occluded faces indicated that our method achieved better results than conventional methods. Further experiments on general images also showed better results.
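To make the two-stage strategy concrete, here is a minimal sketch (not the authors' implementation) in which per-part AdaBoost classifiers produce scores and a shallow decision tree makes the final face/non-face decision, so the tree can route around components suppressed by occlusion. The part names, feature dimensions, and data are synthetic stand-ins for the paper's image features:

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
parts = ["whole_face", "eyes", "nose", "mouth"]

# Synthetic training data: 200 windows x 16 features per part, binary labels.
X = {p: rng.normal(size=(200, 16)) for p in parts}
y = rng.integers(0, 2, size=200)          # 1 = face, 0 = non-face

# Stage 1: one AdaBoost classifier per facial component.
part_clfs = {p: AdaBoostClassifier(n_estimators=50).fit(X[p], y) for p in parts}

# Stage 2: a decision tree over the per-part scores makes the final decision,
# which lets it discount components whose scores are suppressed by occlusion.
scores = np.column_stack([part_clfs[p].decision_function(X[p]) for p in parts])
arbiter = DecisionTreeClassifier(max_depth=3).fit(scores, y)

test_scores = np.column_stack(
    [part_clfs[p].decision_function(rng.normal(size=(5, 16))) for p in parts])
print(arbiter.predict(test_scores))       # final face / non-face decisions
```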
{"title":"Component-based robust face detection using AdaBoost and decision tree","authors":"K. Ichikawa, T. Mita, O. Hori","doi":"10.1109/FGR.2006.33","DOIUrl":"https://doi.org/10.1109/FGR.2006.33","url":null,"abstract":"We present a robust frontal face detection method that enables the identification of face positions in images by combining the results of a low-resolution whole face and individual face parts classifiers. Our approach is to use face parts information and change the identification strategy based on the results from individual face parts classifiers. These classifiers were implemented based on AdaBoost. Moreover, we propose a novel method based on a decision tree to improve performance of face detectors for occluded faces. The proposed decision tree method distinguishes partially occluded faces based on the results from the individual classifies. Preliminarily experiments on a test sample set containing non-occluded faces and occluded faces indicated that our method achieved better results than conventional methods. Actual experimental results containing general images also showed better results","PeriodicalId":109260,"journal":{"name":"7th International Conference on Automatic Face and Gesture Recognition (FGR06)","volume":"47 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128168321","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 17
Multi-view face recognition by nonlinear dimensionality reduction and generalized linear models
B. Raytchev, Ikushi Yoda, K. Sakaue
In this paper we propose a new general framework for real-time multi-view face recognition in real-world conditions, based on a novel nonlinear dimensionality reduction method, IsoScale, and generalized linear models (GLMs). Multi-view face sequences of freely moving people are obtained from several stereo cameras installed in an ordinary room, and IsoScale is used to map the faces into a low-dimensional space in which the manifold structure of the view-varied faces is preserved but the face classes are forced to be linearly separable. Then, a GLM-based linear map is learnt between the low-dimensional face representation and the classes, providing posterior probabilities of class membership for the test faces. The benefits of the proposed method are illustrated in a typical HCI application.
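As a rough analogue of this pipeline, the sketch below substitutes scikit-learn's standard Isomap for the paper's IsoScale variant and multinomial logistic regression for the GLM stage; the face data and identity labels are synthetic:

```python
import numpy as np
from sklearn.manifold import Isomap
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 64))            # 300 face images, 64-d features
y = rng.integers(0, 5, size=300)          # 5 identities

# Nonlinear dimensionality reduction that preserves manifold structure.
embed = Isomap(n_neighbors=10, n_components=8)
Z = embed.fit_transform(X)

# GLM stage: a linear map from the embedding to class posteriors.
glm = LogisticRegression(max_iter=1000).fit(Z, y)
probs = glm.predict_proba(embed.transform(rng.normal(size=(3, 64))))
print(probs.round(3))                     # posterior probability per identity
```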
{"title":"Multi-view face recognition by nonlinear dimensionality reduction and generalized linear models","authors":"B. Raytchev, Ikushi Yoda, K. Sakaue","doi":"10.1109/FGR.2006.82","DOIUrl":"https://doi.org/10.1109/FGR.2006.82","url":null,"abstract":"In this paper we propose a new general framework for real-time multi-view face recognition in real-world conditions, based on a novel nonlinear dimensionality reduction method IsoScale and generalized linear models (GLMs). Multi-view face sequences of freely moving people are obtained from several stereo cameras installed in an ordinary room, and IsoScale is used to map the faces into a low-dimensional space where the manifold structure of the view-varied faces is preserved, but the face classes are forced to be linearly separable. Then, a GLM-based linear map is learnt between the low-dimensional face representation and the classes, providing posterior probabilities of class membership for the test faces. The benefits of the proposed method are illustrated in a typical HCl application","PeriodicalId":109260,"journal":{"name":"7th International Conference on Automatic Face and Gesture Recognition (FGR06)","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114401125","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 7
Bayesian classification of task-oriented actions based on stochastic context-free grammar
Masanobu Yamamoto, Humikazu Mitomi, F. Fujiwara, Taisuke Sato
This paper proposes a new approach for the recognition of task-oriented actions based on stochastic context-free grammar (SCFG). We focus on actions in the Japanese tea ceremony, where each action can be described by a context-free grammar; our aim is to recognize the actions in the tea service. The existing SCFG approach consists of generating a symbolic string, parsing it, and recognition. The symbolic string often includes uncertainty, so the parsing process must recover from errors introduced at the entry stage. This paper proposes a segmentation method that is as error-free as possible, segmenting an action into a string of finer actions. This method, based on the acceleration of the body motion, can produce the fine action corresponding to a terminal symbol with little error. After the sequence of fine actions is translated into a set of symbolic strings, SCFG-based parsing of this set leaves only a small number of strings to be derived. Among the remaining strings, a Bayesian classifier answers the action name with maximum posterior probability. By giving one SCFG rule multiple probabilities, a single SCFG can recognize multiple actions.
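A toy version of the SCFG parsing step, assuming a tiny invented grammar over hypothetical action symbols (not the paper's tea-ceremony grammar), can be written with NLTK's probabilistic Viterbi parser:

```python
import nltk

# A small stochastic context-free grammar over terminal action symbols.
grammar = nltk.PCFG.fromstring("""
    S -> PICK POUR PLACE [0.6] | PICK PLACE [0.4]
    PICK  -> 'reach' 'grasp'   [1.0]
    POUR  -> 'tilt' 'hold'     [0.7] | 'tilt' [0.3]
    PLACE -> 'lower' 'release' [1.0]
""")
parser = nltk.ViterbiParser(grammar)

# A segmented action, i.e. a string of fine-action terminal symbols.
tokens = ['reach', 'grasp', 'tilt', 'hold', 'lower', 'release']
for tree in parser.parse(tokens):
    print(tree.prob())   # derivation likelihood used in the Bayesian step
    tree.pretty_print()
```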
{"title":"Bayesian classification of task-oriented actions based on stochastic context-free grammar","authors":"Masanobu Yamamoto, Humikazu Mitomi, F. Fujiwara, Taisuke Sato","doi":"10.1109/FGR.2006.28","DOIUrl":"https://doi.org/10.1109/FGR.2006.28","url":null,"abstract":"This paper proposes a new approach for recognition of task-oriented actions based on stochastic context-free grammar (SCFG). Our attention puts on actions in the Japanese tea ceremony, where the action can be described by context-free grammar. Our aim is to recognize the action in the tea services. Existing SCFG approach consists of generating symbolic string, parsing it and recognition. The symbolic string often includes uncertainty. Therefore, the parsing process needs to recover the errors at the entry process. This paper proposes a segmentation method errorless as much as possible to segment an action into a string of finer actions. This method, based on an acceleration of the body motion, can produce the fine action corresponding to a terminal symbol with little error. After translating the sequence of fine actions into a set of symbolic strings, SCFG-based parsing of this set leaves small number of ones to be derived. Among the remaining strings, Bayesian classifier answers the action name with a maximum posterior probability. Giving one SCFG rule the multiple probabilities, one SCFG can recognize multiple actions","PeriodicalId":109260,"journal":{"name":"7th International Conference on Automatic Face and Gesture Recognition (FGR06)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129348123","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 42
Color face recognition by hypercomplex Gabor analysis
Creed F. Jones, A. L. Abbott
This paper explores the extraction of features from color imagery for recognition tasks, especially face recognition. The well-known Gabor filter, which is typically defined as a complex function, is extended to the hypercomplex (quaternion) domain. Several proposed modes of this extension are discussed, and a preferred formulation is selected. To quantify the effectiveness of this novel filter for color-based feature extraction, an elastic graph implementation for human face recognition has been extended to color images, and the performance of the corresponding monochromatic and color recognition systems has been compared. Our experiments have shown an improvement of 3% to 17% in recognition accuracy over the analysis of monochromatic images using complex Gabor filters.
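For orientation, the sketch below builds an ordinary complex Gabor kernel and applies it per RGB channel; the paper's hypercomplex formulation, which replaces the complex exponential with a quaternion-valued one, is not reproduced here. The kernel parameters and test image are arbitrary:

```python
import numpy as np
from scipy.signal import convolve2d

def gabor_kernel(size=21, sigma=4.0, theta=0.0, freq=0.15):
    """Complex Gabor: a Gaussian envelope times a complex sinusoid."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return envelope * np.exp(2j * np.pi * freq * xr)

kernel = gabor_kernel()
image = np.random.rand(64, 64, 3)      # stand-in RGB face patch

# Magnitude of the filter response, computed independently per channel.
responses = np.stack(
    [np.abs(convolve2d(image[:, :, c], kernel, mode='same')) for c in range(3)],
    axis=-1)
print(responses.shape)                 # (64, 64, 3) color feature maps
```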
{"title":"Color face recognition by hypercomplex Gabor analysis","authors":"Creed F. Jones, A. L. Abbott","doi":"10.1109/FGR.2006.30","DOIUrl":"https://doi.org/10.1109/FGR.2006.30","url":null,"abstract":"This paper explores the extraction of features from color imagery for recognition tasks, especially face recognition. The well-known Gabor filter, which is typically defined as a complex function, has been extended to the hypercomplex (quaternion) domain. Several proposed modes of this extension are discussed, and a preferred formulation is selected. To quantify the effectiveness of this novel filter for color-based feature extraction, an elastic graph implementation for human face recognition has been extended to color images, and performance of the corresponding monochromatic and color recognition systems have been compared. Our experiments have shown an improvement of 3% to 17% in recognition accuracy over the analysis of monochromatic images using complex Gabor filters","PeriodicalId":109260,"journal":{"name":"7th International Conference on Automatic Face and Gesture Recognition (FGR06)","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132971287","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 54
Face recognition with image sets using hierarchically extracted exemplars from appearance manifolds
Wei-liang Fan, D. Yeung
An unsupervised nonparametric approach is proposed to automatically extract representative face samples (exemplars) from a video sequence or an image set for multiple-shot face recognition. Motivated by a nonlinear dimensionality reduction algorithm called Isomap, we use local neighborhood information to approximate the geodesic distances between face images. A hierarchical agglomerative clustering (HAC) algorithm is then applied to group similar faces together based on the estimated geodesic distances, which approximate their locations on the appearance manifold. We define the exemplars as cluster centers for template matching at the subsequent testing stage. The final recognition is the outcome of a majority voting scheme that combines the decisions from all the individual frames in the test set. Experimental results on a 40-subject video database demonstrate the effectiveness and flexibility of our proposed method.
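A compact sketch of the exemplar-extraction idea, under the assumption that geodesic distances can be approximated by shortest paths over a k-NN graph and that a cluster medoid serves as its exemplar (the face features and cluster count are invented):

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph
from scipy.sparse.csgraph import shortest_path
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

rng = np.random.default_rng(2)
faces = rng.normal(size=(60, 32))                 # one subject's face set

# Geodesic distance estimate: Dijkstra shortest paths over the k-NN graph.
knn = kneighbors_graph(faces, n_neighbors=6, mode='distance')
geo = shortest_path(knn, method='D', directed=False)

# HAC on the condensed geodesic distance matrix, cut into 4 clusters.
labels = fcluster(linkage(squareform(geo, checks=False), method='average'),
                  t=4, criterion='maxclust')

# Exemplar = medoid of each cluster (minimum summed geodesic distance).
for k in np.unique(labels):
    idx = np.where(labels == k)[0]
    medoid = idx[np.argmin(geo[np.ix_(idx, idx)].sum(axis=1))]
    print(f"cluster {k}: exemplar frame {medoid}")
```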
{"title":"Face recognition with image sets using hierarchically extracted exemplars from appearance manifolds","authors":"Wei-liang Fan, D. Yeung","doi":"10.1109/FGR.2006.47","DOIUrl":"https://doi.org/10.1109/FGR.2006.47","url":null,"abstract":"An unsupervised nonparametric approach is proposed to automatically extract representative face samples (exemplars) from a video sequence or an image set for multiple-shot face recognition. Motivated by a nonlinear dimensionality reduction algorithm called Isomap, we use local neighborhood information to approximate the geodesic distances between face images. A hierarchical agglomerative clustering (HAC) algorithm is then applied to group similar faces together based on the estimated geodesic distances which approximate their locations on the appearance manifold. We define the exemplars as cluster centers for template matching at the subsequent testing stage. The final recognition is the outcome of a majority voting scheme which combines the decisions from all the individual frames in the test set. Experimental results on a 40-subject video database demonstrate the effectiveness and flexibility of our proposed method","PeriodicalId":109260,"journal":{"name":"7th International Conference on Automatic Face and Gesture Recognition (FGR06)","volume":"91 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133884706","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 24
Gait recognition by two-stage principal component analysis
Sandhitsu R. Das, Robert C. Wilson, M. Lazarewicz, L. Finkel
We describe a methodology for the classification of gait (walk, run, jog, etc.) and the recognition of individuals based on gait, using two successive stages of principal component analysis (PCA) on kinematic data. In psychophysical studies, we have found that observers are sensitive to specific "motion features" that characterize human gait. These spatiotemporal motion features closely correspond to the first few principal components (PCs) of the kinematic data. The first few PCs provide a representation of an individual gait as a trajectory along a low-dimensional manifold in PC space. A second stage of PCA captures the variability in the shape of this manifold across individuals or gaits. This simple eigenspace-based analysis is capable of accurate classification across subjects.
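The two-stage structure can be illustrated directly on synthetic joint-angle sequences, with invented dimensions (10 subjects, 100 time steps, 20 kinematic channels):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)
sequences = rng.normal(size=(10, 100, 20))   # synthetic kinematic data

# Stage 1: shared PCA over all frames -> per-subject trajectory in PC space.
stage1 = PCA(n_components=4).fit(sequences.reshape(-1, 20))
trajectories = np.stack([stage1.transform(s) for s in sequences])  # (10,100,4)

# Stage 2: PCA across subjects on the flattened trajectories, capturing
# the manifold-shape variability used for classification.
stage2 = PCA(n_components=3).fit(trajectories.reshape(10, -1))
signatures = stage2.transform(trajectories.reshape(10, -1))
print(signatures.shape)   # (10, 3) per-subject gait signatures
```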
{"title":"Gait recognition by two-stage principal component analysis","authors":"Sandhitsu R. Das, Robert C. Wilson, M. Lazarewicz, L. Finkel","doi":"10.1109/FGR.2006.56","DOIUrl":"https://doi.org/10.1109/FGR.2006.56","url":null,"abstract":"We describe a methodology for classification of gait (walk, run, jog, etc.) and recognition of individuals based on gait using two successive stages of principal component analysis (PCA) on kinematic data. In psychophysical studies, we have found that observers are sensitive to specific \"motion features\" that characterize human gait. These spatiotemporal motion features closely correspond to the first few principal components (PC) of the kinematic data. The first few PCs provide a representation of an individual gait as trajectory along a low-dimensional manifold in PC space. A second stage of PCA captures variability in the shape of this manifold across individuals or gaits. This simple eigenspace based analysis is capable of accurate classification across subjects","PeriodicalId":109260,"journal":{"name":"7th International Conference on Automatic Face and Gesture Recognition (FGR06)","volume":"106 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133975333","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 13
Expanding Training Set for Chinese Sign Language Recognition
Chunli Wang, Xilin Chen, Wen Gao
In sign language recognition, one of the problems is collecting enough training data; almost all of the statistical methods used in sign language recognition suffer from it. Inspired by the crossover operation of genetic algorithms, this paper presents a method to expand a Chinese sign language (CSL) database by re-sampling from existing sign samples. Two original samples of the same sign are regarded as parents, which reproduce children by crossover. To verify the validity of the proposed method, experiments are carried out on a vocabulary of 2435 gestures in Chinese sign language. Each gesture has 4 samples: three are used as the original generation, these three original samples and their offspring construct the training set, and the remaining sample is used for testing. The experimental results show that the new samples generated by the proposed method are effective.
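A minimal sketch of the crossover operation on two aligned feature sequences of the same sign; real sign samples would first need temporal alignment, which is not shown:

```python
import numpy as np

def crossover(parent_a, parent_b, rng):
    """Single-point crossover on equal-length feature sequences (T x D)."""
    cut = rng.integers(1, len(parent_a))          # avoid empty segments
    child1 = np.vstack([parent_a[:cut], parent_b[cut:]])
    child2 = np.vstack([parent_b[:cut], parent_a[cut:]])
    return child1, child2

rng = np.random.default_rng(4)
a = rng.normal(size=(30, 12))   # sample 1 of a sign: 30 frames, 12 features
b = rng.normal(size=(30, 12))   # sample 2 of the same sign
c1, c2 = crossover(a, b, rng)
print(c1.shape, c2.shape)       # two new training samples, each (30, 12)
```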
{"title":"Expanding Training Set for Chinese Sign Language Recognition","authors":"Chunli Wang, Xilin Chen, Wen Gao","doi":"10.1109/FGR.2006.39","DOIUrl":"https://doi.org/10.1109/FGR.2006.39","url":null,"abstract":"In sign language recognition, one of the problems is to collect enough training data. Almost all of the statistical methods used in sign language recognition suffer from this problem. Inspired by the crossover of genetic algorithms, this paper presents a method to expand Chinese sign language (CSL) database through re-sampling from existing sign samples. Two original samples of the same sign are regarded as parents. They can reproduce their children by crossover. To verify the validity of the proposed method, some experiments are carried out on a vocabulary of 2435 gestures in Chinese sign language. Each gesture has 4 samples. Three samples are used to be the original generation. These three original samples and their offspring are used to construct the training set, and the remaining sample is used for test. The experimental results show that the new samples generated by the proposed method are effective","PeriodicalId":109260,"journal":{"name":"7th International Conference on Automatic Face and Gesture Recognition (FGR06)","volume":"69 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123677269","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 19
Self correcting tracking for articulated objects
M. Caglar, N. Lobo
Hand detection and tracking play important roles in human-computer interaction (HCI) applications, as well as in surveillance. We propose a self-initializing and self-correcting tracking technique that is robust to different skin colors, illumination, and shadow irregularities. Self-initialization is achieved from a detector that has a relatively high false-positive rate. The detected hands are then tracked backward and forward in time using mean-shift trackers initialized at each hand, to find the candidate tracks for possible objects in the test sequence. Observed tracks are merged and weighted to find the real trajectories. Simple actions can be inferred by extracting each object from the scene and interpreting its location within each frame. Extraction is possible using the color histograms of the objects built during the detection phase. We apply the technique to simple hand tracking with good results, without the need to train for skin color.
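The per-hand mean-shift step can be sketched with OpenCV's built-in histogram back-projection and meanShift; the frames and the detected window below are synthetic stand-ins for a real detector and video stream:

```python
import numpy as np
import cv2

rng = np.random.default_rng(5)
frame0 = rng.integers(0, 255, size=(240, 320, 3), dtype=np.uint8)
frame1 = frame0.copy()                       # stand-in "next frame"
window = (100, 80, 40, 40)                   # x, y, w, h from the detector

# Hue histogram of the tracked region: the detection-phase color model.
x, y, w, h = window
hsv0 = cv2.cvtColor(frame0[y:y+h, x:x+w], cv2.COLOR_BGR2HSV)
hist = cv2.calcHist([hsv0], [0], None, [16], [0, 180])
cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)

# Back-project the model onto the next frame and run one mean-shift update.
hsv1 = cv2.cvtColor(frame1, cv2.COLOR_BGR2HSV)
backproj = cv2.calcBackProject([hsv1], [0], hist, [0, 180], 1)
criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
_, window = cv2.meanShift(backproj, window, criteria)
print(window)                                # updated track window
```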
{"title":"Self correcting tracking for articulated objects","authors":"M. Caglar, N. Lobo","doi":"10.1109/FGR.2006.100","DOIUrl":"https://doi.org/10.1109/FGR.2006.100","url":null,"abstract":"Hand detection and tracking play important roles in human computer interaction (HCI) applications, as well as surveillance. We propose a self initializing and self correcting tracking technique that is robust to different skin color, illumination and shadow irregularities. Self initialization is achieved from a detector that has relatively high false positive rate. The detected hands are then tracked backwards and forward in time using mean shift trackers initialized at each hand to find the candidate tracks for possible objects in the test sequence. Observed tracks are merged and weighed to find the real trajectories. Simple actions can be inferred extracting each object from the scene and interpreting their locations within each frame. Extraction is possible using the color histograms of the objects built during the detection phase. We apply the technique here to simple hand tracking with good results, without the need for training for skin color","PeriodicalId":109260,"journal":{"name":"7th International Conference on Automatic Face and Gesture Recognition (FGR06)","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128160985","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
Face recognition by projection-based 3D normalization and shading subspace orthogonalization
Tatsuo Kozakaya, Osamu Yamaguchi
This paper describes a new face recognition method using projection-based 3D normalization and shading subspace orthogonalization under variations in facial pose and illumination. The proposed method does not need any reconstruction or relighting of a personalized 3D model; it thus avoids these troublesome problems, and the recognition process can be carried out rapidly. The facial size and pose, including out-of-plane rotation, can be normalized to a generic 3D model from one still image, and the input subspace is generated from perturbed cropped patterns in order to absorb localization errors. Furthermore, by exploiting the fact that a normalized pattern is fitted to the generic 3D model, illumination-robust features are extracted through shading subspace orthogonalization. Evaluation experiments on several databases show the effectiveness of our method under various facial poses and illuminations.
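One hypothetical reading of the projection step: fit an affine camera that maps a generic 3D model's landmarks onto detected 2D landmarks, after which any model vertex can be projected into the image to sample a pose-normalized texture. The landmark coordinates below are invented for illustration:

```python
import numpy as np

model3d = np.array([[0, 0, 0], [30, 0, 0], [15, 20, 5],
                    [10, 40, 0], [20, 40, 0]], float)   # generic model (mm)
img2d = np.array([[52, 60], [88, 58], [70, 83],
                  [64, 108], [77, 107]], float)         # detected landmarks

# Affine camera P (2x4) with img2d ~ P @ [X Y Z 1]^T, by least squares.
A = np.hstack([model3d, np.ones((5, 1))])
P, *_ = np.linalg.lstsq(A, img2d, rcond=None)
P = P.T                                                 # shape (2, 4)

# Map a model vertex into the image to sample normalized texture.
vertex = np.array([15, 10, 8, 1.0])
print(P @ vertex)                                       # image coordinates
```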
{"title":"Face recognition by projection-based 3D normalization and shading subspace orthogonalization","authors":"Tatsuo Kozakaya, Osamu Yamaguchi","doi":"10.1109/FGR.2006.43","DOIUrl":"https://doi.org/10.1109/FGR.2006.43","url":null,"abstract":"This paper describes a new face recognition method using a projection-based 3D normalization and a shading subspace orthogonalization under variation in facial pose and illumination. The proposed method does not need any reconstruction and reillumination for a personalized 3D model, thus it can avoid these troublesome problems and the recognition process can be done rapidly. The facial size and pose including out of plane rotation can be normalized to a generic 3D model from one still image and the input subspace is generated by perturbed cropped patterns in order to absorb the localization errors. Furthermore, by exploiting the fact that a normalized pattern is fitted to the generic 3D model, illumination robust features are extracted through the shading subspace orthogonalization. Evaluation experiments are performed using several databases and the results show the effectiveness of our method under various facial poses and illuminations","PeriodicalId":109260,"journal":{"name":"7th International Conference on Automatic Face and Gesture Recognition (FGR06)","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114426664","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 11
Head and facial action tracking: comparison of two robust approaches
R. Hérault, F. Davoine, Yves Grandvalet
In this work, we present a method that can simultaneously track 3D head movements and facial actions, such as lip and eyebrow movements, in a video sequence. In a baseline framework, an adaptive appearance model is estimated online from a monocular video sequence. This method uses a 3D model of the face and an adaptive facial texture model. We then consider and compare two improved models that increase robustness to occlusions. The first uses robust statistics to downweight hidden regions and outlier pixels; the second uses mixture models, which provide better handling of occlusions. Experiments demonstrate the benefit of the two robust models, which are compared under various occlusions.
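The robust-statistics idea in the first variant can be illustrated with Huber-style re-weighting of appearance residuals, so occluded (outlier) pixels contribute less to the tracking update; the tuning constant and the simulated occlusion below are arbitrary choices for the example:

```python
import numpy as np

rng = np.random.default_rng(6)
model = rng.random(500)                  # appearance-model intensities
obs = model + rng.normal(0, 0.02, 500)   # observed patch
obs[:60] = rng.random(60)                # simulated occlusion (outliers)

residual = obs - model
# Robust scale estimate via the median absolute deviation (MAD).
scale = 1.4826 * np.median(np.abs(residual - np.median(residual)))
k = 1.345 * scale                        # Huber tuning constant
w = np.where(np.abs(residual) <= k, 1.0, k / np.abs(residual))

print(w[:60].mean(), w[60:].mean())      # occluded pixels get low weight
```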
{"title":"Head and facial action tracking: comparison of two robust approaches","authors":"R. Hérault, F. Davoine, Yves Grandvalet","doi":"10.1109/FGR.2006.63","DOIUrl":"https://doi.org/10.1109/FGR.2006.63","url":null,"abstract":"In this work, we address a method that is able to track simultaneously 3D head movements and facial actions like lip and eyebrow movements in a video sequence. In a baseline framework, an adaptive appearance model is estimated online by the knowledge of a monocular video sequence. This method uses a 3D model of the face and a facial adaptive texture model. Then, we consider and compare two improved models in order to increase robustness to occlusions. First, we use robust statistics in order to downweight the hidden regions or outlier pixels. In a second approach, mixture models provides better integration of occlusions. Experiments demonstrate the benefit of the two robust models. The latter are compared under various occlusions","PeriodicalId":109260,"journal":{"name":"7th International Conference on Automatic Face and Gesture Recognition (FGR06)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130957840","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2