Latest publications — 7th International Conference on Automatic Face and Gesture Recognition (FGR06)

Framework for a portable gesture interface
Sébastien Wagner, B. Alefs, C. Picus
Gesture recognition is a valuable extension for interaction with portable devices. This paper presents a framework for interaction by hand gestures using a head-mounted camera system. The framework includes automatic activation using AdaBoost hand detection, tracking of chromatic and luminance color modes based on adaptive mean shift, and pose recognition using template matching of the polar histogram. The system achieves a 95% detection rate and 96% classification accuracy at real-time processing, for a non-static camera setup and a cluttered background.
DOI: 10.1109/FGR.2006.54 | Published: 2006-04-10
Cited by: 17
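The tracker in this entry relies on adaptive mean shift over color modes. A minimal sketch of the core mean-shift step applied to a 2-D probability map (e.g. a skin-color back-projection); the function name and the toy blob are illustrative, not from the paper:

```python
import numpy as np

def mean_shift(prob, cx, cy, w, h, iters=20, eps=0.5):
    """Shift a w x h window over a 2-D probability map until its
    weighted centroid converges (the basic step inside
    CAMShift-style color trackers)."""
    H, W = prob.shape
    for _ in range(iters):
        x0, x1 = int(max(cx - w // 2, 0)), int(min(cx + w // 2 + 1, W))
        y0, y1 = int(max(cy - h // 2, 0)), int(min(cy + h // 2 + 1, H))
        win = prob[y0:y1, x0:x1]
        m = win.sum()
        if m == 0:                      # window fell off the target
            break
        ys, xs = np.mgrid[y0:y1, x0:x1]
        nx = (xs * win).sum() / m       # weighted centroid
        ny = (ys * win).sum() / m
        converged = abs(nx - cx) < eps and abs(ny - cy) < eps
        cx, cy = nx, ny
        if converged:
            break
    return cx, cy

# toy example: a blob of "skin probability" centered at (x=30, y=40)
prob = np.zeros((64, 64))
prob[38:43, 28:33] = 1.0
cx, cy = mean_shift(prob, 20, 20, 40, 40)
```

In a real tracker the probability map would be recomputed per frame from the tracked color model, and the window size adapted as in CAMShift.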
Graph embedded analysis for head pose estimation
Yun Fu, Thomas S. Huang
Head pose is an important vision cue for scene interpretation and human-computer interaction. To determine the head pose, one may consider the low-dimensional manifold structure of the face view points in image space. In this paper, we present an appearance-based strategy for head pose estimation using supervised graph embedding (GE) analysis. Thinking globally and fitting locally, we first construct the neighborhood-weighted graph in the sense of supervised LLE. The unified projection is calculated in a closed-form solution based on the GE linearization. We then project new data (face view images) into the embedded low-dimensional subspace with the identical projection. The head pose is finally estimated by K-nearest neighbor classification. We test the proposed method on 18,100 USF face view images. Experimental results show that, even using a very small training set (e.g. 10 subjects), GE achieves higher head pose estimation accuracy with more efficient dimensionality reduction than the existing methods.
DOI: 10.1109/FGR.2006.60 | Published: 2006-04-10
Cited by: 127
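The final stage described here projects images with a learned linear map and classifies by nearest-neighbor voting. A sketch of that step, with a hypothetical projection matrix `P` standing in for the graph-embedding projection the paper actually learns:

```python
import numpy as np

def knn_predict(train_X, train_y, probe, P, k=3):
    """Project gallery and probe with linear projection P, then
    classify the probe by majority vote among its k nearest
    neighbours in the embedded subspace."""
    Z = train_X @ P                    # embed gallery samples
    z = probe @ P                      # embed the probe
    d = np.linalg.norm(Z - z, axis=1)  # Euclidean distances
    idx = np.argsort(d)[:k]
    vals, counts = np.unique(train_y[idx], return_counts=True)
    return vals[np.argmax(counts)]

# toy data: two well-separated "pose" clusters in a 10-D space
rng = np.random.default_rng(0)
P = np.eye(10)[:, :3]                  # stand-in projection to 3-D
X0 = rng.normal(0.0, 0.1, (20, 10))    # pose class 0
X1 = rng.normal(2.0, 0.1, (20, 10))    # pose class 1
X = np.vstack([X0, X1])
y = np.array([0] * 20 + [1] * 20)
pred = knn_predict(X, y, X1[0] + 0.01, P)
```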
A multiview face identification model with no geometric constraints
Jerry Jun Yokono, T. Poggio
Face identification systems relying on local descriptors are increasingly used because of their perceived robustness with respect to occlusions and to global geometrical deformations. Descriptors of this type - based on a set of oriented Gaussian derivative filters - are used in our identification system. In this paper, we explore a pose-invariant multiview face identification system that does not use explicit geometrical information. The basic idea of the approach is to find discriminant features to describe a face across different views. A boosting procedure is used to select features out of a large pool of local features collected from the positive training examples. We describe experiments on well-known, though small, face databases with excellent recognition rates.
DOI: 10.1109/FGR.2006.12 | Published: 2006-04-10
Cited by: 13
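Boosted feature selection of the kind used here repeatedly picks the single decision stump with the lowest weighted error over the current sample weights. A sketch of one such selection round (the data and thresholds are illustrative only):

```python
import numpy as np

def best_stump(F, y, w):
    """One boosting round: exhaustively pick the (feature, threshold,
    polarity) decision stump with the lowest weighted error.
    F is (n_samples, n_features); y in {-1,+1}; w sums to 1."""
    best = (None, None, None, np.inf)   # (feat, thresh, sign, err)
    for j in range(F.shape[1]):
        for t in np.unique(F[:, j]):
            for s in (1, -1):
                pred = np.where(F[:, j] > t, s, -s)
                err = w[pred != y].sum()
                if err < best[3]:
                    best = (j, t, s, err)
    return best

# toy data whose label depends only on feature 2
rng = np.random.default_rng(2)
F = rng.normal(size=(40, 6))
y = np.where(F[:, 2] > 0.1, 1, -1)
w = np.full(40, 1.0 / 40)
feat, thresh, sign, err = best_stump(F, y, w)
```

In full AdaBoost the selected stump's error would then set its vote weight, and the sample weights `w` would be re-weighted toward the misclassified examples before the next round.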
Hand Posture Classification and Recognition using the Modified Census Transform
Agnès Just, Yann Rodriguez, S. Marcel
Developing new techniques for human-computer interaction is very challenging. Vision-based techniques have the advantage of being unobtrusive, and hands are a natural device that can be used for more intuitive interfaces. But in order to use hands for interaction, it is necessary to be able to recognize them in images. In this paper, we propose to apply to the hand posture classification and recognition tasks an approach that has been successfully used for face detection (B. Froba and A. Ernst, 2004). The features are based on the modified census transform and are illumination invariant. For the classification and recognition processes, a simple linear classifier is trained using a set of feature lookup-tables. The database used for the experiments is a benchmark database in the field of posture recognition. Two protocols have been defined. We provide results following these two protocols for both the classification and recognition tasks. Results are very encouraging.
DOI: 10.1109/FGR.2006.62 | Published: 2006-04-10
Cited by: 111
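The modified census transform compares each pixel of a 3x3 neighborhood against the neighborhood mean (rather than against the center pixel, as in the plain census transform), yielding a 9-bit code per interior pixel that is invariant to affine illumination changes. A vectorized sketch:

```python
import numpy as np

def mct(img):
    """Modified census transform: a 9-bit code per interior pixel,
    built by comparing each 3x3 neighbour to the neighbourhood mean.
    Invariant to gain/bias illumination changes."""
    img = img.astype(float)
    H, W = img.shape
    # 9 shifted views of the image, one per neighbourhood position
    shifts = [img[dy:H - 2 + dy, dx:W - 2 + dx]
              for dy in range(3) for dx in range(3)]
    stack = np.stack(shifts)                 # (9, H-2, W-2)
    mean = stack.mean(axis=0)                # neighbourhood means
    bits = (stack > mean).astype(np.int32)   # one bit per neighbour
    weights = (1 << np.arange(9)).reshape(9, 1, 1)
    return (bits * weights).sum(axis=0)      # codes in [0, 511]

rng = np.random.default_rng(0)
img = rng.random((5, 5))
codes = mct(img)
codes_lit = mct(2.0 * img + 7.0)  # same scene, brighter lighting
```

The codes are unchanged under the brightness change, which is the property the paper exploits for illumination-robust features.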
AAM derived face representations for robust facial action recognition
S. Lucey, I. Matthews, Changbo Hu, Z. Ambadar, F. D. L. Torre, J. Cohn
In this paper, we present results on experiments employing active appearance model (AAM) derived facial representations for the task of facial action recognition. Experimental results demonstrate the benefit of AAM-derived representations on a spontaneous AU database containing "real-world" variation. Additionally, we explore a number of normalization methods for these representations which increase facial action recognition performance.
DOI: 10.1109/FGR.2006.17 | Published: 2006-04-10
Cited by: 123
A 3D facial expression database for facial behavior research
L. Yin, Xiaozhou Wei, Yi Sun, Jun Wang, Matthew J. Rosato
Traditionally, human facial expressions have been studied using either 2D static images or 2D video sequences. The 2D-based analysis is incapable of handling large pose variations. Although 3D modeling techniques have been extensively used for 3D face recognition and 3D face animation, barely any research on 3D facial expression recognition using 3D range data has been reported. A primary factor preventing such research is the lack of a publicly available 3D facial expression database. In this paper, we present a newly developed 3D facial expression database, which includes both prototypical 3D facial expression shapes and 2D facial textures of 2,500 models from 100 subjects. This is the first attempt at making a 3D facial expression database available to the research community, with the ultimate goal of fostering research on affective computing and increasing the general understanding of facial behavior and the fine 3D structure inherent in human facial expressions. The new database can be a valuable resource for algorithm assessment, comparison and evaluation.
DOI: 10.1109/FGR.2006.6 | Published: 2006-04-10
Cited by: 1285
Facial Expression Classification using Gabor and Log-Gabor Filters
Nectarios Rose
Facial expression classification has achieved good results in the past using manually extracted facial points convolved with Gabor filters. In this paper, classification performance was tested on feature vectors composed of facial points convolved with Gabor and log-Gabor filters, as well as with whole-image pixel representations of static facial images. Principal component analysis was performed on these feature vectors, and classification accuracies were compared using linear discriminant analysis. Experiments carried out on two databases show comparable performance between Gabor and log-Gabor filters, with a classification accuracy of around 85%. This was achieved on low-resolution images, without the need to precisely locate facial points on each face image.
DOI: 10.1109/FGR.2006.49 | Published: 2006-04-10
Cited by: 68
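A log-Gabor filter is defined in the frequency domain as a Gaussian on a logarithmic frequency axis, G(f) = exp(-(ln(f/f0))^2 / (2 ln(sigma/f0)^2)), and unlike a standard Gabor filter it has zero DC response by construction. A radial-only sketch (the parameter values and function names are illustrative, not from the paper):

```python
import numpy as np

def log_gabor_radial(size, f0=0.1, sigma_ratio=0.55):
    """Radial component of a log-Gabor filter, built directly in
    the frequency domain. sigma_ratio is sigma/f0; the DC bin is
    explicitly zeroed, so the filter has no DC response."""
    fy = np.fft.fftfreq(size)
    fx = np.fft.fftfreq(size)
    FX, FY = np.meshgrid(fx, fy)
    f = np.sqrt(FX ** 2 + FY ** 2)     # radial frequency
    f[0, 0] = 1.0                      # avoid log(0); fixed below
    G = np.exp(-(np.log(f / f0)) ** 2 / (2 * np.log(sigma_ratio) ** 2))
    G[0, 0] = 0.0                      # zero DC gain
    return G

def filter_image(img, G):
    """Apply the frequency-domain filter; returns a complex response."""
    return np.fft.ifft2(np.fft.fft2(img) * G)

G = log_gabor_radial(64)
resp = filter_image(np.ones((64, 64)), G)  # constant image -> zero response
```

A full log-Gabor bank would multiply this radial term by an angular Gaussian per orientation; the feature vectors in the paper are built from such filter responses.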
Multi-scale primal feature based facial expression modeling and identification
L. Yin, Xiaozhou Wei
In this paper, we present our newly developed facial expression modeling system for expression analysis and identification. Given a face image at a frontal view, a realistic facial model is created using our extended topographic analysis and model instantiation approach. Our facial expression modeling system consists of two major components: (1) facial feature representation using coarse-to-fine multiscale topographic primitive features, and (2) an adaptive generic model individualization process based on the primal facial surface feature context. The algorithms have been tested using both static images and facial expression sequences. The usefulness of the generated expression models is validated by our 3D facial expression analysis algorithm. The accuracy of the generated expression model is evaluated by comparing the generated models with range models obtained by a 3D digitizer.
DOI: 10.1109/FGR.2006.80 | Published: 2006-04-10
Cited by: 9
Local Linear Regression (LLR) for Pose Invariant Face Recognition
Xiujuan Chai, S. Shan, Xilin Chen, Wen Gao
The variation of facial appearance due to viewpoint (pose) degrades face recognition systems considerably, which is well known as one of the bottlenecks in face recognition. One possible solution is generating a virtual frontal view from any given non-frontal view to obtain a virtual gallery/probe face. By formulating this kind of solution as a prediction problem, this paper proposes a simple but efficient novel local linear regression (LLR) method, which can generate the virtual frontal view from a given non-frontal face image. The proposed LLR is inspired by the observation that corresponding local facial regions of a frontal and non-frontal view pair satisfy the linear assumption much better than the whole face region. This can be explained easily by the fact that a 3D face shape is composed of many local planar surfaces, which naturally satisfy a linear model under imaging projection. In LLR, we simply partition the whole non-frontal face image into multiple local patches and apply linear regression to each patch to predict its virtual frontal patch. Compared with other methods, the experimental results on the CMU PIE database show the distinct advantage of the proposed method.
DOI: 10.1109/FGR.2006.73 | Published: 2006-04-10
Cited by: 32
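The LLR idea reduces pose normalization to one least-squares linear map per local patch. A sketch on synthetic patch vectors; the ground-truth map `T` exists only to generate toy data, and the function names are illustrative:

```python
import numpy as np

def fit_patch_regressor(X_nonfrontal, X_frontal):
    """Least-squares linear map from non-frontal patch vectors to
    frontal ones (one such regressor per local patch in the LLR
    scheme). Rows are training pairs; a bias column handles offsets."""
    A = np.hstack([X_nonfrontal, np.ones((len(X_nonfrontal), 1))])
    W, *_ = np.linalg.lstsq(A, X_frontal, rcond=None)
    return W

def predict_frontal(W, x_nonfrontal):
    """Predict the virtual frontal patch for one non-frontal patch."""
    return np.append(x_nonfrontal, 1.0) @ W

# toy data: non-frontal patches of dim 8 map linearly to frontal dim 5
rng = np.random.default_rng(1)
T = rng.normal(size=(8, 5))            # hidden ground-truth map
Xn = rng.normal(size=(100, 8))         # non-frontal patch vectors
Xf = Xn @ T + 0.5                      # their frontal counterparts
W = fit_patch_regressor(Xn, Xf)
xhat = predict_frontal(W, Xn[0])
```

In the full method each patch of the probe image gets its own `W`, and the predicted frontal patches are stitched back into a virtual frontal face.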
Towards Automatic Body Language Annotation
P. Chippendale
This paper describes a real-time system developed for the derivation of low-level visual cues targeted at the recognition of simple hand, head and body gestures. A novel, adaptive background subtraction technique is presented together with a tool for monitoring repetitive movements, e.g. fidgeting. To monitor subtle body movements in an unconstrained environment, active cameras with pan, tilt and zoom capabilities must be employed to track an individual's actions more closely. This paper then explores a means of detecting small- and large-scale human activity within images produced from active cameras that may be reoriented during monitoring.
DOI: 10.1109/FGR.2006.105 | Published: 2006-04-10
Cited by: 33
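Adaptive background subtraction of the kind mentioned here is often built on a per-pixel running average. A minimal sketch of that baseline (the paper's actual scheme is more elaborate; names and thresholds are illustrative):

```python
import numpy as np

def update_background(bg, frame, alpha=0.05):
    """Per-pixel running-average background model:
    bg <- (1 - alpha) * bg + alpha * frame."""
    return (1 - alpha) * bg + alpha * frame

def foreground_mask(bg, frame, thresh=25.0):
    """Pixels that differ from the background model beyond a
    threshold are flagged as foreground."""
    return np.abs(frame - bg) > thresh

# toy frame: a bright 10x10 object appears on a flat background
bg = np.full((48, 64), 100.0)
frame = bg.copy()
frame[10:20, 10:20] = 200.0
mask = foreground_mask(bg, frame)      # object pixels flagged
bg = update_background(bg, frame)      # object slowly absorbed
```

The slow absorption is the "adaptive" part: static scene changes eventually join the background, while fast movers stay foreground.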