
Latest publications: 7th International Conference on Automatic Face and Gesture Recognition (FGR06)

Evaluating error functions for robust active appearance models
B. Theobald, I. Matthews, Simon Baker
Active appearance models (AAMs) are generative parametric models commonly used to track faces in video sequences. A limitation of AAMs is that they are not robust to occlusion. A recent extension reformulated the search as an iteratively re-weighted least-squares problem. In this paper we focus on the choice of error function for use in a robust AAM search. We evaluate eight error functions using two performance metrics: accuracy of occlusion detection and fitting robustness. We show that for any reasonable error function the performance in terms of occlusion detection is the same. However, this does not mean that fitting performance is the same. We describe experiments for measuring fitting robustness on images containing real occlusion. The best approach assumes the residuals at each pixel are Gaussian distributed, then estimates the parameters of the distribution from images that do not contain occlusion. In each iteration of the search, the error image is used to sample these distributions to obtain the pixel weights.
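The Gaussian per-pixel weighting the abstract favours can be sketched as follows. The helper names `fit_residual_model` and `pixel_weights` are hypothetical, and using the unnormalised likelihood directly as the weight is an assumption about how the fitted distributions feed the re-weighted least-squares step:

```python
import numpy as np

def fit_residual_model(residual_stack):
    """Fit a per-pixel Gaussian to AAM fitting residuals collected from
    occlusion-free images (residual_stack: n_images x n_pixels)."""
    mu = residual_stack.mean(axis=0)
    sigma = residual_stack.std(axis=0) + 1e-8  # guard against zero variance
    return mu, sigma

def pixel_weights(error_image, mu, sigma):
    """Weight each pixel by how typical its residual is under the trained
    Gaussian; improbable residuals (likely occlusion) get weights near 0."""
    z = (error_image - mu) / sigma
    return np.exp(-0.5 * z ** 2)  # unnormalised likelihood in (0, 1]
```

In an iteratively re-weighted least-squares search, these weights would scale each pixel's contribution to the parameter update at every iteration.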
DOI: 10.1109/FGR.2006.38 · Published 2006-04-10
Citations: 49
Estimation of Anthropomeasures from a Single Calibrated Camera
Chiraz BenAbdelkader, L. Davis
We are interested in the recovery of anthropometric dimensions of the human body from calibrated monocular sequences, and their use in multi-target tracking across multiple cameras and in identification of individual people. In this paper, we focus on two specific anthropomeasures that are relatively easy to estimate from low-resolution images: stature and shoulder breadth. Precise average estimates are obtained for each anthropomeasure by combining measurements from multiple frames in the sequence. Our contribution is two-fold: (i) a novel technique for automatic and passive estimation of shoulder breadth, based on modelling the shoulders as an ellipse, and (ii) a novel method for increasing the accuracy of the mean estimates of both anthropomeasures. The latter is based on the observation that the major sources of measurement error are landmark localization in the 2D image and 3D modelling error, and that both of these are correlated with gait phase and body orientation with respect to the camera. Consequently, estimation error can be significantly reduced via appropriate selection or control of these two variables.
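A minimal sketch of the error-reduction idea: average per-frame estimates, but keep only frames near a favourable gait phase. The phase window and the helper name are illustrative assumptions, not the paper's actual selection criterion:

```python
import numpy as np

def combine_measurements(values, gait_phase, phase_window=(0.45, 0.55)):
    """Average per-frame anthropomeasure estimates (e.g. stature in cm),
    keeping only frames whose gait phase falls in a window where 2D landmark
    localization and 3D modelling errors are assumed smallest; falls back
    to all frames if none qualify. Returns (mean, frames_used)."""
    values, phase = np.asarray(values, float), np.asarray(gait_phase, float)
    keep = (phase >= phase_window[0]) & (phase <= phase_window[1])
    selected = values[keep] if keep.any() else values
    return float(selected.mean()), int(keep.sum())
```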
DOI: 10.1109/FGR.2006.37 · Published 2006-04-10
Citations: 29
Preliminary Face Recognition Grand Challenge Results
P. Phillips, P. Flynn, W. T. Scruggs, K. Bowyer, W. Worek
The goal of the face recognition grand challenge (FRGC) is to improve the performance of face recognition algorithms by an order of magnitude over the best results in the face recognition vendor test (FRVT) 2002. The FRGC is designed to achieve this performance goal by presenting to researchers a six-experiment challenge problem along with a data corpus of 50,000 images. The data consists of 3D scans and high-resolution still imagery taken under controlled and uncontrolled conditions. This paper presents preliminary results of the FRGC for all six experiments. The preliminary results indicate that significant progress has been made towards achieving the stated goals.
DOI: 10.1109/FGR.2006.87 · Published 2006-04-10
Citations: 202
Reliable and fast tracking of faces under varying pose
Tao Yang, S. Li, Q. Pan, Jing Li, Chunhui Zhao
This paper presents a system that is able to track multiple faces under varying pose (tilted and rotated) reliably in real time. The system consists of two interactive modules. The first module performs detection of faces subject to rotation. The second performs online-learning-based face tracking. A mechanism for switching between the two modules is embedded in the system to automatically decide the best strategy for reliable tracking. The mechanism enables a smooth transition between the detection and tracking modules when one of them gives no results or unreliable results. Results demonstrate that the system can reliably track multiple faces in real time against complex backgrounds under out-of-plane rotation, tilting of up to 90 degrees, fast nonlinear motion, partial occlusion, large scale changes, and camera motion.
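The switching mechanism can be illustrated with a small state machine; `detect` and `track` are placeholder callables returning (faces, confidence), and the single confidence threshold is an assumed switching rule, not the paper's exact criterion:

```python
def track_faces(frames, detect, track, min_conf=0.5):
    """Run detection until a confident result appears, then hand off to the
    tracker; fall back to detection whenever tracker confidence drops."""
    mode, state, results = "detect", None, []
    for frame in frames:
        if mode == "detect":
            faces, conf = detect(frame)
            if conf >= min_conf:
                state, mode = faces, "track"
        else:
            faces, conf = track(frame, state)
            if conf < min_conf:          # tracker unreliable: switch back
                mode, faces = "detect", state
            else:
                state = faces
        results.append(faces)
    return results
```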
DOI: 10.1109/FGR.2006.92 · Published 2006-04-10
Citations: 27
Gait tracking and recognition using person-dependent dynamic shape model
Chan-Su Lee, A. Elgammal
The characteristics of 2D shape deformation in human motion contain rich information for human identification and pose estimation. In this paper, we introduce a framework for simultaneous gait tracking and recognition using a person-dependent global shape deformation model. Person-dependent global shape deformations are modeled using a nonlinear generative model with kinematic manifold embedding and kernel mapping. The kinematic manifold is used as a common low-dimensional representation of body pose dynamics across different people. Shape style, as well as geometric transformation and body pose, are estimated within a Bayesian framework using the generative model of global shape deformation. Experimental results show person-dependent synthesis of global shape deformation, gait recognition from extracted silhouettes using style parameters, and simultaneous gait tracking and recognition from image edges.
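The kernel-mapping ingredient, a regression from the low-dimensional kinematic embedding to observed shapes, can be sketched with RBF features and ridge-regularised least squares; the centers and width are assumed chosen beforehand, and this illustrates only the idea, not the paper's exact formulation:

```python
import numpy as np

def fit_rbf_mapping(Z, Y, centers, width, ridge=1e-6):
    """Learn a mapping from embedding points Z (n x d) to shape vectors
    Y (n x D) through RBF kernel features, solved via the regularised
    normal equations."""
    K = np.exp(-((Z[:, None, :] - centers[None]) ** 2).sum(-1) / (2 * width ** 2))
    return np.linalg.solve(K.T @ K + ridge * np.eye(len(centers)), K.T @ Y)

def map_to_shape(z, centers, width, W):
    """Generate a shape vector for a single embedding point z of shape (d,)."""
    k = np.exp(-((z - centers) ** 2).sum(-1) / (2 * width ** 2))
    return k @ W
```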
DOI: 10.1109/FGR.2006.58 · Published 2006-04-10
Citations: 19
Robust Spotting of Key Gestures from Whole Body Motion Sequence
Hee-Deok Yang, A-Yeon Park, Seong-Whan Lee
Robust gesture recognition in video requires segmentation of the meaningful gestures from a whole body gesture sequence. This is a challenging problem because it is not straightforward to describe and model meaningless gesture patterns. This paper presents a new method for simultaneous spotting and recognition of whole body key gestures. A human subject is first described by a set of features encoding the angular relations between a dozen body parts in 3D. A feature vector is then mapped to a codeword of gesture HMMs. In order to spot key gestures accurately, a method for designing a garbage gesture model is proposed: a model reduction that merges similar states based on data-dependent statistics and relative entropy. This model provides an effective mechanism for qualifying or disqualifying gestural motions. The proposed method has been tested with samples from 20 persons and 80 synthetic data sequences. It achieved a reliability rate of 94.8% in the spotting task and a recognition rate of 97.4% on isolated gestures.
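The state-merging step behind the garbage model can be illustrated for one-dimensional Gaussian output densities: merge any pair whose symmetric relative entropy (KL divergence) falls below a threshold. The threshold value and the greedy moment-matched merge are illustrative assumptions, not the paper's exact reduction:

```python
import math

def kl_gauss(m1, v1, m2, v2):
    """KL divergence KL(N(m1, v1) || N(m2, v2)) for 1-D Gaussians."""
    return 0.5 * (v1 / v2 + (m2 - m1) ** 2 / v2 - 1.0 + math.log(v2 / v1))

def merge_states(states, threshold=0.1):
    """Greedily merge (mean, var, weight) states whose symmetric KL
    divergence is below the threshold, using a moment-matched merge."""
    merged = []
    for m, v, w in states:
        for i, (m2, v2, w2) in enumerate(merged):
            if kl_gauss(m, v, m2, v2) + kl_gauss(m2, v2, m, v) < threshold:
                wt = w + w2
                mm = (w * m + w2 * m2) / wt
                vv = (w * (v + (m - mm) ** 2) + w2 * (v2 + (m2 - mm) ** 2)) / wt
                merged[i] = (mm, vv, wt)
                break
        else:
            merged.append((m, v, w))
    return merged
```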
DOI: 10.1109/FGR.2006.99 · Published 2006-04-10
Citations: 19
Using Feature Combination and Statistical Resampling for Accurate Face Recognition Based on Frequency Domain Representation of Facial Asymmetry
S. Mitra, M. Savvides
This paper explores the efficiency of facial asymmetry in face identification tasks using a frequency domain representation. Satisfactory results are obtained for two different tasks, namely human identification under extreme expression variations and expression classification, using a PCA-type classifier on a database of 55 individuals, which establishes the robustness of these measures to intra-personal distortions. Furthermore, we demonstrate that these results can be improved significantly by simple means such as feature set combination and statistical resampling methods like bagging and the random subspace method (RSM) using the same PCA-type base classifier; in some cases this even attains perfect classification with 100% accuracy. Moreover, both methods require few additional resources (computing time and power), so they are useful for practical applications and help establish the effectiveness of the frequency domain representation of facial asymmetry in automatic identification tasks.
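The random subspace method can be sketched as follows; the nearest-class-mean base learner stands in for the paper's PCA-type classifier, and all names and parameters are illustrative:

```python
import numpy as np

def rsm_predict(X_train, y_train, X_test, n_learners=10, subspace=0.5, seed=0):
    """Random subspace ensemble: each learner sees a random subset of the
    features; per-learner predictions are combined by majority vote."""
    rng = np.random.default_rng(seed)
    d = X_train.shape[1]
    k = max(1, int(subspace * d))
    votes = []
    for _ in range(n_learners):
        idx = rng.choice(d, size=k, replace=False)   # random feature subset
        Xtr, Xte = X_train[:, idx], X_test[:, idx]
        # nearest-class-mean classifier as a stand-in for the PCA-type learner
        classes = np.unique(y_train)
        means = np.stack([Xtr[y_train == c].mean(axis=0) for c in classes])
        dist = ((Xte[:, None, :] - means[None]) ** 2).sum(-1)
        votes.append(classes[dist.argmin(axis=1)])
    votes = np.stack(votes)
    return np.array([np.bincount(col).argmax() for col in votes.T])
```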
DOI: 10.1109/FGR.2006.109 · Published 2006-04-10
Citations: 0
Automatic Skin Segmentation for Gesture Recognition Combining Region and Support Vector Machine Active Learning
Junwei Han, G. Awad, Alistair Sutherland, Hai Wu
Skin segmentation is the cornerstone of many applications such as gesture recognition, face detection, and objectionable image filtering. In this paper, we address the skin segmentation problem for gesture recognition. Initially, given a gesture video sequence, a generic skin model is applied to the first few frames to automatically collect training data. Then, an SVM classifier based on active learning is used to identify skin pixels. Finally, the results are improved by incorporating region segmentation. The proposed algorithm is fully automatic and adapts to different signers. We have tested our approach on the ECHO database. Compared with existing algorithms, our method achieves better performance.
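The bootstrapping step can be sketched as below. The RGB rule plays the role of the "generic skin model"; it is one of several published heuristics and may differ from the paper's model, and the SVM training itself is omitted:

```python
import numpy as np

def generic_skin_mask(rgb):
    """A common RGB skin heuristic: bright, red-dominant pixels."""
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    return ((r > 95) & (g > 40) & (b > 20) &
            (r - np.minimum(g, b) > 15) & (r > g) & (r > b))

def collect_training_pixels(frames, n_per_class=1000, seed=0):
    """Pool skin / non-skin pixels from the first frames, labelled by the
    generic model, to train a per-signer classifier downstream."""
    rng = np.random.default_rng(seed)
    pix = np.concatenate([f.reshape(-1, 3) for f in frames])
    lab = np.concatenate([generic_skin_mask(f).ravel() for f in frames])
    def sample(mask):
        idx = np.flatnonzero(mask)
        return pix[rng.choice(idx, size=min(n_per_class, idx.size), replace=False)]
    return sample(lab), sample(~lab)  # (skin pixels, non-skin pixels)
```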
DOI: 10.1109/FGR.2006.27 · Published 2006-04-10
Citations: 55
Learning to identify facial expression during detection using Markov decision process
Ramana Isukapalli, A. Elgammal, R. Greiner
While there has been a great deal of research on face detection and recognition, there has been very limited work on identifying the expression on a face. Many current face detection methods use a Viola-Jones style "cascade" of AdaBoost-based classifiers to detect faces. We demonstrate that faces with similar expression form "clusters" in a "classifier space" defined by the real-valued outcomes of these classifiers on the images, and we address the task of using these classifiers to classify a new image into the appropriate cluster (expression). We formulate this as a Markov decision process and use dynamic programming to find an optimal policy: a decision tree whose internal nodes each correspond to some classifier, whose arcs correspond to ranges of classifier values, and whose leaf nodes each correspond to a specific facial expression, augmented with a sequence of additional classifiers. We present empirical results demonstrating that our system accurately determines the expression on a face during detection.
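The "clusters in classifier space" observation can be illustrated with a plain k-means over the real-valued cascade outputs; this shows only the clustering step, not the MDP policy search:

```python
import numpy as np

def kmeans(scores, k, iters=50, seed=0):
    """Cluster score vectors (n_faces x n_classifiers) so that faces with
    similar expression, which produce similar classifier outputs, group
    together. Returns (assignments, centers)."""
    rng = np.random.default_rng(seed)
    centers = scores[rng.choice(len(scores), size=k, replace=False)]
    for _ in range(iters):
        d = ((scores[:, None, :] - centers[None]) ** 2).sum(-1)
        assign = d.argmin(1)
        for j in range(k):
            if (assign == j).any():          # keep old center if cluster empties
                centers[j] = scores[assign == j].mean(0)
    return assign, centers
```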
DOI: 10.1109/FGR.2006.71 · Published 2006-04-10
Citations: 3
Fully Automatic Facial Action Recognition in Spontaneous Behavior
M. Bartlett, G. Littlewort, M. Frank, C. Lainscsek, Ian R. Fasel, J. Movellan
We present results on a user-independent, fully automatic system for real-time recognition of facial actions from the Facial Action Coding System (FACS). The system automatically detects frontal faces in the video stream and codes each frame with respect to 20 action units. We present preliminary results on a task of facial action detection in spontaneous expressions during discourse. Support vector machines and AdaBoost classifiers are compared. For both classifiers, the output margin predicts action unit intensity.
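The margin-as-intensity observation can be illustrated with a toy linear SVM; the subgradient trainer below is a stand-in for the paper's detectors, and `au_intensity` is a hypothetical helper returning the signed distance to the decision boundary:

```python
import numpy as np

def train_linear_svm(X, y, lr=0.01, lam=0.01, epochs=500):
    """Toy linear SVM trained by stochastic subgradient descent on the
    hinge loss; labels y must be in {-1, +1}."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (xi @ w + b) < 1:      # margin violated: hinge gradient
                w += lr * (yi * xi - lam * w)
                b += lr * yi
            else:                          # only regularisation shrinkage
                w -= lr * lam * w
    return w, b

def au_intensity(w, b, x):
    """Signed distance of x to the decision boundary: the 'output margin'
    that the paper reports correlates with action unit intensity."""
    return float((x @ w + b) / np.linalg.norm(w))
```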
DOI: 10.1109/FGR.2006.55 · Published 2006-04-10
Citations: 325