
7th International Conference on Automatic Face and Gesture Recognition (FGR06): Latest Publications

Weighted Gabor features in unitary space for face recognition
Yong Gao, Yangsheng Wang, Xinshan Zhu, Xuetao Feng, Xiaoxu Zhou
Gabor-filter-based features, with their good properties of space-frequency localization and orientation selectivity, appear to be among the most effective features for face recognition at present. In this paper, we propose weighted Gabor complex features that combine Gabor magnitude and phase features in unitary space. The weights are determined according to the recognition rates of the magnitude and phase features. Meanwhile, the subspace-based algorithms PCA and LDA are generalized to unitary space, and a rarely used distance measure, the unitary-space cosine distance, is adopted for the unitary-subspace-based recognition algorithms. Using the generalized subspace algorithms, the proposed weighted Gabor complex features (WGCF) produce better recognition results than either Gabor magnitude or Gabor phase features alone. Experiments on the FERET database show good results, comparable to the best reported in the literature.
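A minimal sketch of the two ingredients named above, assuming a generic Gabor response rather than the paper's exact filter bank: magnitude and phase maps are fused into one complex-valued vector with illustrative weights (w_mag and w_phase stand in for the recognition-rate-derived weights), and vectors are compared with a cosine distance defined through the Hermitian inner product of unitary space.

```python
import numpy as np

def weighted_gabor_complex_features(magnitude, phase, w_mag=0.7, w_phase=0.3):
    """Combine Gabor magnitude and phase responses into one complex feature
    vector. The weights are assumed to reflect the relative recognition rates
    of the two cues, as the abstract describes; the values here are placeholders."""
    mag = w_mag * (magnitude / (np.linalg.norm(magnitude) + 1e-12))
    pha = w_phase * np.exp(1j * phase) / np.sqrt(phase.size)
    return np.concatenate([mag.ravel().astype(complex), pha.ravel()])

def unitary_cosine_distance(x, y):
    """Cosine distance in unitary space: uses the Hermitian inner product
    x^H y (np.vdot conjugates its first argument) and takes its magnitude,
    so the measure is invariant to a global phase factor."""
    num = np.abs(np.vdot(x, y))
    den = np.linalg.norm(x) * np.linalg.norm(y) + 1e-12
    return 1.0 - num / den

# toy usage: two nearly identical "Gabor response" maps
rng = np.random.default_rng(0)
m1, p1 = rng.random((8, 8)), rng.uniform(-np.pi, np.pi, (8, 8))
m2, p2 = m1 + 0.01 * rng.random((8, 8)), p1 + 0.01
f1 = weighted_gabor_complex_features(m1, p1)
f2 = weighted_gabor_complex_features(m2, p2)
print(unitary_cosine_distance(f1, f2))
```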
{"title":"Weighted Gabor features in unitary space for face recognition","authors":"Yong Gao, Yangsheng Wang, Xinshan Zhu, Xuetao Feng, Xiaoxu Zhou","doi":"10.1109/FGR.2006.111","DOIUrl":"https://doi.org/10.1109/FGR.2006.111","url":null,"abstract":"Gabor filters based features, with their good properties of space-frequency localization and orientation selectivity, seem to be the most effective features for face recognition currently. In this paper, we propose a kind of weighted Gabor complex features which combining Gabor magnitude and phase features in unitary space. Its weights are determined according to recognition rates of magnitude and phase features. Meanwhile, subspace based algorithms, PCA and LDA, are generalized into unitary space, and a rarely used distance measure, unitary space cosine distance, is adopted for unitary subspace based recognition algorithms. Using the generalized subspace algorithms our proposed weighted Gabor complex features (WGCF) produce better recognition result than either Gabor magnitude or Gabor phase features. Experiments on FERET database show good results comparable to the best one reported in literature","PeriodicalId":109260,"journal":{"name":"7th International Conference on Automatic Face and Gesture Recognition (FGR06)","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130994261","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 9
Face Alignment with Unified Subspace Optimization of Active Statistical Models
Ming Zhao, Tat-Seng Chua
Active statistical models, including active shape models and active appearance models, are very powerful for face alignment. They are composed of two parts: the subspace model(s) and the search process. Although these two parts are closely correlated, existing work has treated them separately and has not considered how to optimize them as a whole. Another problem with the subspace models is that their two kinds of parameters (the number of components and the constraints on the components) are also treated separately, so they are not jointly optimized. To tackle these two problems, a unified subspace optimization method is proposed. The method unifies two aspects: (1) the statistical model and the search process, so that the subspace models are optimized according to the search procedure; and (2) the number of components and the constraints, which are modelled in a unified way so that they can be optimized jointly. Experimental results demonstrate that our method can effectively find the optimal subspace model and significantly improve performance.
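The joint treatment of the two subspace parameters can be illustrated with a toy experiment: a PCA shape model is rebuilt for each candidate number of components, coefficients are clipped to the usual plus/minus k*sqrt(lambda) range, and the pair (n_components, k) is scored together. The criterion here is plain reconstruction error on held-out shapes, standing in for the paper's search-driven criterion; all names, grids, and data are illustrative.

```python
import numpy as np
from itertools import product

def fit_shape_subspace(train_shapes, n_components):
    """PCA shape model: mean plus leading right-singular vectors of the
    centered shape matrix (rows are concatenated landmark coordinates)."""
    mean = train_shapes.mean(axis=0)
    _, s, vt = np.linalg.svd(train_shapes - mean, full_matrices=False)
    eigvals = (s ** 2) / max(len(train_shapes) - 1, 1)
    return mean, vt[:n_components], eigvals[:n_components]

def project_with_constraints(shape, mean, basis, eigvals, k):
    """Project onto the subspace and clip each coefficient to the usual
    ASM plausibility range of +/- k * sqrt(lambda_i)."""
    b = np.clip(basis @ (shape - mean), -k * np.sqrt(eigvals), k * np.sqrt(eigvals))
    return mean + basis.T @ b

def joint_tune(train_shapes, val_shapes, component_grid, k_grid):
    """Score (n_components, k) pairs together on held-out shapes."""
    best = None
    for n, k in product(component_grid, k_grid):
        mean, basis, eigvals = fit_shape_subspace(train_shapes, n)
        err = np.mean([np.linalg.norm(s - project_with_constraints(s, mean, basis, eigvals, k))
                       for s in val_shapes])
        if best is None or err < best[0]:
            best = (err, n, k)
    return best

rng = np.random.default_rng(1)
shapes = rng.normal(size=(80, 2 * 20))          # 80 synthetic 20-landmark shapes
print(joint_tune(shapes[:60], shapes[60:], [5, 10, 20], [1.0, 2.0, 3.0]))
```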
{"title":"Face Alignment with Unified Subspace Optimization of Active Statistical Models","authors":"Ming Zhao, Tat-Seng Chua","doi":"10.1109/FGR.2006.40","DOIUrl":"https://doi.org/10.1109/FGR.2006.40","url":null,"abstract":"Active statistical models including active shape models and active appearance models are very powerful for face alignment. They are composed of two parts: the subspace model(s) and the search process. While these two parts are closely correlated, existing efforts treated them separately and had not considered how to optimize them overall. Another problem with the subspace model(s) is that the two kinds of parameters of subspaces (the number of components and the constraints on the components) are also treated separately. So they are not jointly optimized. To tackle these two problems, an unified subspace optimization method is proposed. This method is composed of two unification aspects: (I) unification of the statistical model and the search process: the subspace models are optimized according to the search procedure; (2) unification of the number of components and the constraints: the two kinds of parameters are modelled in an unified way, such that they can be optimized jointly. Experimental results demonstrate that our method can effectively find the optimal subspace model and significantly improve the performance","PeriodicalId":109260,"journal":{"name":"7th International Conference on Automatic Face and Gesture Recognition (FGR06)","volume":"128 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129625743","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
Face classification based on Shannon wavelet kernel and modified Fisher criterion
Wensheng Chen, P. Yuen, Jian Huang, J. Lai
This paper addresses the nonlinear feature extraction and small sample size (S3) problems in face recognition. In the sample feature space, the distribution of face images is nonlinear because of complex variations in pose, illumination and facial expression, so the performance of classical linear methods such as Fisher discriminant analysis (FDA) degrades. To overcome the pose and illumination problems, a Shannon wavelet kernel is constructed and used for nonlinear feature extraction. Based on a modified Fisher criterion, a simultaneous diagonalization technique is exploited to deal with the S3 problem, which often arises in FDA-based methods. A Shannon wavelet kernel based subspace Fisher discriminant (SWK-SFD) method is then developed. The proposed approach not only overcomes some drawbacks of existing FDA-based algorithms but also has favorable computational complexity. Two databases, the FERET and CMU PIE face databases, are selected for evaluation. Compared with existing FDA-based methods, the proposed method gives superior results.
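The following sketch shows one standard way to build a Shannon wavelet kernel (a product of the mother wavelet over coordinates) and a binary kernel Fisher discriminant with a ridge term as a stand-in for the modified criterion; the paper's exact kernel parameters, criterion, and simultaneous-diagonalization procedure are not reproduced here, so treat this as an illustrative assumption.

```python
import numpy as np

def shannon_wavelet(t):
    """Real Shannon mother wavelet written with numpy's normalized sinc
    (np.sinc(x) = sin(pi x)/(pi x)): psi(t) = 2 sinc(2t) - sinc(t)."""
    return 2.0 * np.sinc(2.0 * t) - np.sinc(t)

def shannon_wavelet_kernel(X, Y, a=1.0):
    """Product-form wavelet kernel K(x, y) = prod_i psi((x_i - y_i) / a);
    the dilation 'a' and the exact form used in the paper may differ."""
    diff = X[:, None, :] - Y[None, :, :]
    return np.prod(shannon_wavelet(diff / a), axis=2)

def kernel_fisher_direction(K, labels, mu=1e-3):
    """Binary kernel Fisher discriminant with a ridge term (a simple proxy
    for a small-sample-size-aware criterion). Returns expansion
    coefficients alpha over the training points."""
    n = K.shape[0]
    idx0, idx1 = np.where(labels == 0)[0], np.where(labels == 1)[0]
    m0, m1 = K[:, idx0].mean(axis=1), K[:, idx1].mean(axis=1)
    N = np.zeros((n, n))
    for idx in (idx0, idx1):
        Kc = K[:, idx]
        Kc = Kc - Kc.mean(axis=1, keepdims=True)   # within-class centering
        N += Kc @ Kc.T
    return np.linalg.solve(N + mu * np.eye(n), m1 - m0)

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 1, (20, 5)), rng.normal(1.5, 1, (20, 5))])
y = np.array([0] * 20 + [1] * 20)
K = shannon_wavelet_kernel(X, X, a=2.0)
alpha = kernel_fisher_direction(K, y)
scores = K @ alpha            # projections onto the discriminant direction
print(scores[:3], scores[-3:])
```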
{"title":"Face classification based on Shannon wavelet kernel and modified Fisher criterion","authors":"Wensheng Chen, P. Yuen, Jian Huang, J. Lai","doi":"10.1109/FGR.2006.41","DOIUrl":"https://doi.org/10.1109/FGR.2006.41","url":null,"abstract":"This paper addresses nonlinear feature extraction and small sample size (S3) problems in face recognition. In sample feature space, the distribution of face images is nonlinear because of complex variations in pose, illumination and face expression. The performance of classical linear method, such as Fisher discriminant analysis (FDA), will degrade. To overcome pose and illumination problems, Shannon wavelet kernel is constructed and utilized for nonlinear feature extraction. Based on a modified Fisher criterion, simultaneous diagonalization technique is exploited to deal with S3 problem, which often occurs in FDA based methods. Shannon wavelet kernel based subspace Fisher discriminant (SWK-SFD) method is then developed in this paper. The proposed approach not only overcomes some drawbacks of existing FDA based algorithms, but also has good computational complexity. Two databases, namely FERET and CMU PIE face databases, are selected for evaluation. Comparing with the existing PDA-based methods, the proposed method gives superior results","PeriodicalId":109260,"journal":{"name":"7th International Conference on Automatic Face and Gesture Recognition (FGR06)","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122703490","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 15
Multi-template ASM Method for feature points detection of facial image with diverse expressions
Ying Li, J. Lai, P. Yuen
This paper proposes a multi-template ASM algorithm for facial feature point detection under the nonlinear shape variation of facial images with diverse expressions. By adding texture information, adopting an asymmetric sampling strategy for the feature points on the outer contour of the face, building multiple templates, and integrating local ASM with global ASM, the proposed multi-template ASM algorithm outperforms the traditional single-template ASM in our experiments.
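A bare-bones sketch of the multi-template idea, under the assumption that each template is a (mean, basis) shape subspace built from one expression cluster: a probe shape is assigned to whichever template reconstructs it with the smallest residual. Texture information and the local/global ASM integration described above are omitted.

```python
import numpy as np

def best_template_fit(shape, templates):
    """Pick the expression-specific shape model whose subspace explains the
    observed landmarks best (smallest reconstruction residual). Each
    template is a (mean, basis) pair; this is only the selection step, not
    the full multi-template ASM search."""
    errors = []
    for mean, basis in templates:
        b = basis @ (shape - mean)
        errors.append(np.linalg.norm(shape - (mean + basis.T @ b)))
    return int(np.argmin(errors)), errors

rng = np.random.default_rng(8)
templates = []
for _ in range(3):                      # e.g. neutral / smile / surprise clusters
    cluster = rng.normal(size=(40, 2 * 30))
    mean = cluster.mean(axis=0)
    _, _, vt = np.linalg.svd(cluster - mean, full_matrices=False)
    templates.append((mean, vt[:8]))

probe = templates[1][0] + 0.05 * rng.normal(size=60)   # near the second cluster
print(best_template_fit(probe, templates)[0])
```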
{"title":"Multi-template ASM Method for feature points detection of facial image with diverse expressions","authors":"Ying Li, J. Lai, P. Yuen","doi":"10.1109/FGR.2006.81","DOIUrl":"https://doi.org/10.1109/FGR.2006.81","url":null,"abstract":"This paper proposed a multi-template ASM algorithm addressing facial feature points detection under nonlinear shape variation of facial images with various kinds of expression. By adding texture information, adopting asymmetric sampling strategy for the feature points on outer contour of face, building multiple templates and integrating local ASM and global ASM, experimental results show that the proposed multi-template ASM algorithm outperforms traditional single template ASM","PeriodicalId":109260,"journal":{"name":"7th International Conference on Automatic Face and Gesture Recognition (FGR06)","volume":"26 2","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120873309","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 14
Joint spatial and frequency domain motion analysis
N. Ahuja, A. Briassouli
Traditionally, motion estimation and segmentation have been performed mostly in the spatial domain, i.e., using the luminance information in the video sequence. The frequency-domain representation offers an alternative, rich source of motion information that has been used only to a limited extent in the past, and on relatively simple problems such as image registration. We review our work over the last few years on an approach to video motion analysis that combines spatial and Fourier domain information. We review our methods for (1) basic (translation and rotation) motion estimation and segmentation for multiple moving objects, with constant as well as time-varying velocities; and (2) more complicated motions, such as periodic motion and periodic motion superposed on translation. The joint-space analysis leads to more compact and computationally efficient solutions than existing techniques.
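As a concrete example of the frequency-domain side of such analysis, the classical phase-correlation estimator recovers a pure translation from the normalized cross-power spectrum of two frames. This illustrates only the Fourier-domain principle, not the authors' full joint spatial-frequency method.

```python
import numpy as np

def phase_correlation(frame_a, frame_b):
    """Estimate the integer translation between two frames: the normalized
    cross-power spectrum of a shifted pair is a pure phase ramp whose
    inverse FFT peaks at the shift."""
    Fa, Fb = np.fft.fft2(frame_a), np.fft.fft2(frame_b)
    cross = Fa * np.conj(Fb)
    cross /= np.abs(cross) + 1e-12          # keep only the phase
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = frame_a.shape
    if dy > h // 2:                          # wrap large indices to negative shifts
        dy -= h
    if dx > w // 2:
        dx -= w
    return dy, dx

rng = np.random.default_rng(3)
img = rng.random((64, 64))
shifted = np.roll(np.roll(img, 5, axis=0), -3, axis=1)   # shift by (+5, -3)
print(phase_correlation(shifted, img))
```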
{"title":"Joint spatial and frequency domain motion analysis","authors":"N. Ahuja, A. Briassouli","doi":"10.1109/FGR.2006.68","DOIUrl":"https://doi.org/10.1109/FGR.2006.68","url":null,"abstract":"Traditionally, motion estimation and segmentation have been performed mostly in the spatial domain, i.e., using the luminance information in the video sequence. Frequency domain representation offers an alternative, rich source of motion information, which has been used to a very limited extent in the past, and on relatively simple problems such as image registration. We review our work during the last few years on an approach to video motion analysis that combines spatial and Fourier domain information. We review our methods for (1) basic (translation and rotation) motion estimation and segmentation, for multiple moving objects, with constant as well as time varying velocities; and (2) more complicated motions, such as periodic motion, and periodic motion superposed on translation. The joint space analysis leads to more compact and computationally efficient solutions than existing techniques","PeriodicalId":109260,"journal":{"name":"7th International Conference on Automatic Face and Gesture Recognition (FGR06)","volume":"45 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126256636","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
Incremental kernel SVD for face recognition with image sets
Tat-Jun Chin, K. Schindler, D. Suter
Non-linear subspaces derived using kernel methods have been found to be superior to linear subspaces in modeling or classification tasks for several visual phenomena. Such kernel methods include kernel PCA, kernel DA, kernel SVD and kernel QR. Since incremental computation algorithms for these methods do not yet exist, their practicality on large datasets or for online video processing is limited. We propose an approximate incremental kernel SVD algorithm for computer vision applications that require estimation of non-linear subspaces, specifically face recognition by matching image sets obtained through long-term observations or video recordings. We extend a well-known linear subspace updating algorithm to the nonlinear case by utilizing the kernel trick, and apply a reduced set construction method to produce sparse expressions for the derived subspace basis, so as to maintain constant processing speed and memory usage. Experimental results demonstrate the effectiveness of the proposed method.
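The linear-space building block that the kernel version rests on can be sketched as an incremental thin SVD in the style of Brand's update; folding each batch into the current basis keeps the rank bounded. The kernel-trick extension and the reduced-set sparsification are not shown, and the result is an approximation of the batch SVD rather than an exact match.

```python
import numpy as np

def incremental_svd_update(U, S, new_cols, rank):
    """Fold a batch of new column vectors into an existing rank-'rank'
    basis (U, S). On the first call (U is None) a plain truncated SVD is
    used; afterwards only a small (rank + batch)-sized SVD is needed."""
    if U is None:
        U, S, _ = np.linalg.svd(new_cols, full_matrices=False)
        return U[:, :rank], S[:rank]
    proj = U.T @ new_cols                  # components inside the current basis
    resid = new_cols - U @ proj            # components orthogonal to it
    Q, R = np.linalg.qr(resid)
    K = np.block([[np.diag(S), proj],
                  [np.zeros((R.shape[0], S.size)), R]])
    Uk, Sk, _ = np.linalg.svd(K, full_matrices=False)
    U_new = np.hstack([U, Q]) @ Uk
    return U_new[:, :rank], Sk[:rank]

rng = np.random.default_rng(4)
data = rng.normal(size=(100, 60))          # 100-dim vectors, 60 samples
U, S = None, None
for start in range(0, 60, 10):             # feed the data in batches of 10
    U, S = incremental_svd_update(U, S, data[:, start:start + 10], rank=5)

# compare the incremental (approximate) spectrum against a batch SVD
print(np.round(S, 3))
print(np.round(np.linalg.svd(data, compute_uv=False)[:5], 3))
```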
{"title":"Incremental kernel SVD for face recognition with image sets","authors":"Tat-Jun Chin, K. Schindler, D. Suter","doi":"10.1109/FGR.2006.67","DOIUrl":"https://doi.org/10.1109/FGR.2006.67","url":null,"abstract":"Non-linear subspaces derived using kernel methods have been found to be superior compared to linear subspaces in modeling or classification tasks of several visual phenomena. Such kernel methods include kernel PCA, kernel DA, kernel SVD and kernel QR. Since incremental computation algorithms for these methods do not exist yet, the practicality of these methods on large datasets or online video processing is minimal. We propose an approximate incremental kernel SVD algorithm for computer vision applications that require estimation of non-linear subspaces, specifically face recognition by matching image sets obtained through long-term observations or video recordings. We extend a well-known linear subspace updating algorithm to the nonlinear case by utilizing the kernel trick, and apply a reduced set construction method to produce sparse expressions for the derived subspace basis so as to maintain constant processing speed and memory usage. Experimental results demonstrate the effectiveness of the proposed method","PeriodicalId":109260,"journal":{"name":"7th International Conference on Automatic Face and Gesture Recognition (FGR06)","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125083482","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 62
Fast learning for customizable head pose recognition in robotic wheelchair control
C. Bauckhage, Thomas Käster, Andrei M. Rotenstein
In the PLAYBOT project, we aim to assist disabled children at play. To this end, we are developing a semi-autonomous robotic wheelchair. It is equipped with several visual sensors and a robotic manipulator, and thus conveniently extends the innate capabilities of a disabled child. In addition to a touch screen, the child may control the wheelchair using simple head movements. Since control based on head posture requires reliable face detection and head pose recognition, we need a robust technique that can effortlessly be tailored to individual users. In this paper, we present a multilinear classification algorithm for fast and reliable face detection. It trains within seconds and thus can easily be customized to the home environment of a disabled child. Subsequent head pose recognition is done using support vector machines. Experimental results show that this two-stage approach to head-pose-based robotic wheelchair control is fast and very robust.
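A small sketch of the second stage under stated assumptions: given some fixed-length feature per detected face (random placeholders below) and a coarse set of pose classes that could be mapped to wheelchair commands, an RBF-kernel SVM is trained and evaluated. The feature representation, class set and hyperparameters are assumptions, not the paper's.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Stand-in data: each detected face crop is reduced to a 32-dim feature
# vector (here random numbers made separable per class); labels are coarse
# pose classes. Both the features and the class set are illustrative.
rng = np.random.default_rng(5)
POSES = ["neutral", "left", "right", "up", "down"]
X = rng.normal(size=(250, 32))
y = rng.integers(0, len(POSES), size=250)
X += np.eye(len(POSES), 32)[y] * 3.0       # inject class-dependent structure

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
clf.fit(X[:200], y[:200])
print("held-out accuracy:", clf.score(X[200:], y[200:]))
print("predicted pose:", POSES[int(clf.predict(X[200:201])[0])])
```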
{"title":"Fast learning for customizable head pose recognition in robotic wheelchair control","authors":"C. Bauckhage, Thomas Käster, Andrei M. Rotenstein","doi":"10.1109/FGR.2006.52","DOIUrl":"https://doi.org/10.1109/FGR.2006.52","url":null,"abstract":"In the PLAYBOT project, we aim at assisting disabled children at play. To this end, we are developing a semi autonomous robotic wheelchair. It is equipped with several visual sensors and a robotic manipulator and thus conveniently enhances the innate capabilities of a disabled child. In addition to a touch screen, the child may control the wheelchair using simple head movements. As control based on head posture requires reliable face detection and head pose recognition, we are in need of a robust technique that may effortlessly be tailored to individual users. In this paper, we present a multilinear classification algorithm for fast and reliable face detection. It trains within seconds and thus can easily be customized to the home environment of a disabled child. Subsequent head pose recognition is done using support vector machines. Experimental results show that this two stage approach to head pose-based robotic wheelchair control performs fast and very robust","PeriodicalId":109260,"journal":{"name":"7th International Conference on Automatic Face and Gesture Recognition (FGR06)","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115855519","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 8
Automatic impression transformation of faces in 3D shape - a perceptual comparison with processing on 2D images
Yuhya Okada, Masataka Ozu, T. Sakurai, Mitsuharu Inaba, S. Akamatsu
This paper describes an attempt to transform a 3D model of a person's face to produce an intended change of impression. The 3D shape and surface texture of faces are represented by high-dimensional vectors automatically extracted from 3D data captured by a range finder, and variations among a set of faces are coded by applying principal component analysis. The relationship between the coded representation and the attribute of faces along a given impression dimension is analyzed to obtain an impression transfer vector. We propose a method that uses this impression transfer vector to manipulate 3D faces in order to transform impressions. Experimental results on the transformation of gender impressions confirmed the superiority of manipulating the 3D information of faces over a previous approach using only 2D face information.
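A compact sketch of the pipeline described above, with synthetic stand-ins for the range-finder data and impression ratings: faces are coded by PCA, an impression transfer vector is obtained here by regressing the ratings on the PCA coefficients (one simple realization of the attribute analysis, not necessarily the paper's), and a face is moved along that direction before reconstruction.

```python
import numpy as np

def pca_fit(face_vectors, n_components):
    """PCA over stacked shape+texture vectors (rows are faces)."""
    mean = face_vectors.mean(axis=0)
    _, _, vt = np.linalg.svd(face_vectors - mean, full_matrices=False)
    return mean, vt[:n_components]

def impression_transfer_vector(coeffs, ratings):
    """Direction in PCA coefficient space along which an impression rating
    changes: a least-squares regression of ratings on coefficients."""
    A = np.hstack([coeffs, np.ones((len(coeffs), 1))])
    w, *_ = np.linalg.lstsq(A, ratings, rcond=None)
    return w[:-1]

def transform_impression(face, mean, basis, transfer, strength):
    """Move a face along the transfer direction and reconstruct it."""
    c = basis @ (face - mean) + strength * transfer
    return mean + basis.T @ c

rng = np.random.default_rng(6)
faces = rng.normal(size=(50, 300))                     # 50 faces, 300-dim shape+texture
ratings = faces[:, 0] * 0.5 + rng.normal(0, 0.1, 50)   # synthetic impression scores
mean, basis = pca_fit(faces, 10)
coeffs = (faces - mean) @ basis.T
transfer = impression_transfer_vector(coeffs, ratings)
new_face = transform_impression(faces[0], mean, basis, transfer, strength=2.0)
print(new_face.shape)
```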
{"title":"Automatic impression transformation of faces in 3D shape - a perceptual comparison with processing on 2D images","authors":"Yuhya Okada, Masataka Ozu, T. Sakurai, Mitsuharu Inaba, S. Akamatsu","doi":"10.1109/FGR.2006.26","DOIUrl":"https://doi.org/10.1109/FGR.2006.26","url":null,"abstract":"This paper describes an attempt to transform a 3D model of a person's face to produce an intended change of impression. 3D shape and surface texture of faces are represented by high-dimensional vectors automatically extracted from the 3D data captured by a range finder, and variations among a set of faces are coded by applying principal component analysis. The relationship between the coded representation and the attribute of faces along a given impression dimension is analyzed to obtain an impression transfer vector. Here, we propose a method using this impression transfer vector to manipulate 3D faces in order to transform impressions. Experimental results on transformation of gender impressions confirmed the superiority of manipulating the 3D information of faces over a previous approach using only 2D face information","PeriodicalId":109260,"journal":{"name":"7th International Conference on Automatic Face and Gesture Recognition (FGR06)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115652068","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
Accurate Head Pose Tracking in Low Resolution Video
J. Tu, Thomas S. Huang, Hai Tao
Estimating 3D head poses accurately in low-resolution video is a challenging vision task because it is difficult to find a continuous one-to-one mapping from a person-independent low-resolution visual representation to head pose parameters. We propose to track head poses by modeling the shape-free facial textures acquired from the video with subspace learning techniques. In particular, we model the facial appearance variations online by an incremental weighted PCA subspace with a forgetting mechanism, and we perform the tracking in an annealed particle filtering framework. Experiments show that the tracking accuracy of our approach outperforms previous visual face tracking algorithms, especially in low-resolution videos.
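A simple stand-in for the online appearance model: an exponentially forgetting mean and covariance whose leading eigenvectors serve as the subspace, with the reconstruction error usable as a particle-filter likelihood. The paper's incremental weighted PCA update is more refined; the forgetting factor and other constants below are illustrative.

```python
import numpy as np

class ForgettingPCA:
    """Appearance subspace maintained online with an exponential forgetting
    factor, so that old observations are down-weighted and the basis tracks
    gradual appearance change."""

    def __init__(self, dim, n_components, forgetting=0.95):
        self.mean = np.zeros(dim)
        self.cov = np.zeros((dim, dim))
        self.f = forgetting
        self.k = n_components

    def update(self, x):
        # exponentially weighted mean and covariance
        self.mean = self.f * self.mean + (1 - self.f) * x
        d = x - self.mean
        self.cov = self.f * self.cov + (1 - self.f) * np.outer(d, d)

    def basis(self):
        vals, vecs = np.linalg.eigh(self.cov)
        return vecs[:, np.argsort(vals)[::-1][: self.k]]

    def reconstruction_error(self, x):
        # distance from the subspace; in a tracker each particle's warped
        # texture would be scored this way
        B = self.basis()
        d = x - self.mean
        return float(np.linalg.norm(d - B @ (B.T @ d)))

rng = np.random.default_rng(7)
model = ForgettingPCA(dim=64, n_components=4)
for t in range(200):                      # slowly drifting 1-D "texture" signal
    texture = np.sin(np.linspace(0, 3, 64) + 0.01 * t) + 0.05 * rng.normal(size=64)
    model.update(texture)
print(model.reconstruction_error(texture))
```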
{"title":"Accurate Head Pose Tracking in Low Resolution Video","authors":"J. Tu, Thomas S. Huang, Hai Tao","doi":"10.1109/FGR.2006.19","DOIUrl":"https://doi.org/10.1109/FGR.2006.19","url":null,"abstract":"Estimating 3D head poses accurately in low resolution video is a challenging vision task because it is difficult to find continuous one-to-one mapping from person-independent low resolution visual representation to head pose parameters. We propose to track head poses by modeling the shape-free facial textures acquired from the video with subspace learning techniques. In particular, we propose to model the facial appearance variations online by incremental weighted PCA subspace with forgetting mechanism, and we do the tracking in an annealed particle filtering framework. Experiments show that, the tracking accuracy of our approach outperforms past visual face tracking algorithms especially in low resolution videos","PeriodicalId":109260,"journal":{"name":"7th International Conference on Automatic Face and Gesture Recognition (FGR06)","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129587071","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 45
Dance posture recognition using wide-baseline orthogonal stereo cameras
Feng Guo, G. Qian
In this paper, a robust 3D dance posture recognition system using two cameras is proposed. A pair of wide-baseline video cameras with approximately orthogonal viewing directions is used to reduce pose recognition ambiguities. Silhouettes extracted from these two views are represented using Gaussian mixture models (GMMs) and used as features for recognition. A relevance vector machine (RVM) is deployed for robust pose recognition. The proposed system is trained using synthesized silhouettes created with animation software and motion capture data. Experimental results on synthetic and real images illustrate that the proposed approach can recognize 3D postures effectively. In addition, the system is easy to set up, with no need for precise camera calibration.
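The silhouette-to-GMM step can be sketched with scikit-learn's GaussianMixture, flattening the fitted parameters into a fixed-length descriptor; the component count, normalization and feature layout here are assumptions, and the RVM classifier (not available in scikit-learn) is omitted.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def silhouette_gmm_features(mask, n_components=4, seed=0):
    """Fit a GMM to the (y, x) coordinates of foreground pixels and flatten
    its parameters into a fixed-length descriptor. In the full system one
    such descriptor per orthogonal view would be concatenated and passed to
    an RVM classifier."""
    pts = np.column_stack(np.nonzero(mask)).astype(float)
    pts /= mask.shape[0]                       # crude scale normalization
    gmm = GaussianMixture(n_components=n_components, covariance_type="full",
                          random_state=seed).fit(pts)
    order = np.argsort(gmm.means_[:, 0])       # fix component order for comparability
    return np.concatenate([gmm.weights_[order],
                           gmm.means_[order].ravel(),
                           gmm.covariances_[order].ravel()])

# toy silhouette: a filled ellipse standing in for a body mask
yy, xx = np.mgrid[0:120, 0:80]
mask = ((yy - 60) / 50.0) ** 2 + ((xx - 40) / 20.0) ** 2 < 1.0
print(silhouette_gmm_features(mask).shape)
```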
{"title":"Dance posture recognition using wide-baseline orthogonal stereo cameras","authors":"Feng Guo, G. Qian","doi":"10.1109/FGR.2006.35","DOIUrl":"https://doi.org/10.1109/FGR.2006.35","url":null,"abstract":"In this paper, a robust 3D dance posture recognition system using two cameras is proposed. A pair of wide-baseline video cameras with approximately orthogonal looking directions is used to reduce pose recognition ambiguities. Silhouettes extracted from these two views are represented using Gaussian mixture models (GMM) and used as features for recognition. Relevance vector machine (RVM) is deployed for robust pose recognition. The proposed system is trained using synthesized silhouettes created using animation software and motion capture data. The experimental results on synthetic and real images illustrate that the proposed approach can recognize 3D postures effectively. In addition, the system is easy to set up without any need of precise camera calibration","PeriodicalId":109260,"journal":{"name":"7th International Conference on Automatic Face and Gesture Recognition (FGR06)","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115023365","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 31