
Proceedings Fourth IEEE International Conference on Automatic Face and Gesture Recognition (Cat. No. PR00580): Latest Publications

Towards automatic face identification robust to ageing variation
A. Lanitis, C. Taylor
A large number of high-performance automatic face recognition systems have been reported in the literature. Many of them are robust to within-class appearance variation of subjects, such as variation in expression, lighting and pose. However, most of the face identification systems developed are sensitive to changes in the age of individuals. We present experimental results to prove that the performance of automatic face recognition systems depends on the age difference of subjects between the training and test images. We also demonstrate that automatic age simulation techniques can be used for designing face recognition systems robust to ageing variation. In this context, the perceived age of the subjects in the training and test images is modified before the training and classification procedures, so that ageing variation is eliminated. Experimental results demonstrate that the performance of our face recognition system can be improved significantly when this approach is adopted.
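The core idea is a normalise-then-recognise pipeline: every gallery and probe image is mapped to a common perceived age before enrolment and matching. The sketch below illustrates that pipeline only; `simulate_age` is a hypothetical placeholder (the paper's actual age simulation re-renders facial appearance with a statistical ageing model), and the nearest-neighbour matcher stands in for whatever classifier the system uses.

```python
import numpy as np

TARGET_AGE = 30  # common "perceived age" that every image is mapped to

def simulate_age(face_vec, current_age, target_age=TARGET_AGE):
    # Hypothetical stand-in for the paper's automatic age simulation. The real
    # system re-renders facial shape/texture at the target age; here we only
    # blend toward the vector mean so the pipeline runs end to end.
    w = min(abs(target_age - current_age) / 50.0, 1.0)
    return (1.0 - w) * face_vec + w * face_vec.mean()

def enrol(gallery):
    # gallery: list of (person_id, face_vector, age_at_capture)
    return [(pid, simulate_age(vec, age)) for pid, vec, age in gallery]

def identify(probe_vec, probe_age, enrolled):
    # Age-normalise the probe, then match by nearest neighbour.
    q = simulate_age(probe_vec, probe_age)
    return min(enrolled, key=lambda item: np.linalg.norm(q - item[1]))[0]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    gallery = [("alice", rng.normal(size=64), 22),
               ("bob", rng.normal(size=64), 45)]
    enrolled = enrol(gallery)
    probe = gallery[0][1] + 0.1 * rng.normal(size=64)  # "alice", photographed years later
    print(identify(probe, 35, enrolled))
```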
{"title":"Towards automatic face identification robust to ageing variation","authors":"A. Lanitis, C. Taylor","doi":"10.1109/AFGR.2000.840664","DOIUrl":"https://doi.org/10.1109/AFGR.2000.840664","url":null,"abstract":"A large number of high-performance automatic face recognition systems have been reported in the literature. Many of them are robust to within class appearance variation of subjects such as variation in expression, lighting of subjects such as variation in expression, lighting and pose. However, most of the face identification systems developed are sensitive to changes in the age of individuals. We present experimental results to prove that the performance of automatic face recognition systems depends on the age difference of subjects between the training and test images. We also demonstrate that automatic age simulation techniques can be used for designing face recognition systems, robust to ageing variation. In this context, the perceived age of the subjects in the training and test images is modified before the training and classification procedures, so that ageing variation is eliminated. Experimental results demonstrate that the performance of our face recognition system can be improved significantly, when this approach is adopted.","PeriodicalId":360065,"journal":{"name":"Proceedings Fourth IEEE International Conference on Automatic Face and Gesture Recognition (Cat. No. PR00580)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133792677","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 40
Real-time stereo tracking for head pose and gaze estimation
R. Newman, Y. Matsumoto, S. Rougeaux, A. Zelinsky
Computer systems which analyse human face/head motion have attracted significant attention recently as there are a number of interesting and useful applications. Not least among these is the goal of tracking the head in real time. A useful extension of this problem is to estimate the subject's gaze point in addition to his/her head pose. This paper describes a real-time stereo vision system which determines the head pose and gaze direction of a human subject. Its accuracy makes it useful for a number of applications including human/computer interaction, consumer research and ergonomic assessment.
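The abstract gives the system description but not the estimation details, so the following is only an illustrative sketch of one standard building block such a system could use: once a few facial features (eye corners, nostrils, mouth corners) are triangulated in 3D by the stereo rig, the head pose can be recovered as the least-squares rigid transform (Kabsch algorithm) aligning a 3D head model to those measurements. The feature layout and numbers below are invented.

```python
import numpy as np

def rigid_pose(model_pts, observed_pts):
    # Least-squares rigid transform (Kabsch) mapping 3D head-model feature
    # points onto their stereo-triangulated positions.
    # Returns R, t with observed ~= R @ model + t.
    mc = model_pts.mean(axis=0)
    oc = observed_pts.mean(axis=0)
    H = (model_pts - mc).T @ (observed_pts - oc)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = oc - R @ mc
    return R, t

if __name__ == "__main__":
    # Invented head-model features (cm): eye corners, nose tip, mouth centre.
    model = np.array([[0, 0, 0], [6, 0, 0], [3, -4, 2], [3, -8, 0]], float)
    yaw = np.deg2rad(20)
    R_true = np.array([[np.cos(yaw), 0, np.sin(yaw)],
                       [0, 1, 0],
                       [-np.sin(yaw), 0, np.cos(yaw)]])
    observed = model @ R_true.T + np.array([10.0, 5.0, 60.0])  # simulated triangulation
    R, t = rigid_pose(model, observed)
    print(np.round(R, 3), np.round(t, 2))
```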
{"title":"Real-time stereo tracking for head pose and gaze estimation","authors":"R. Newman, Y. Matsumoto, S. Rougeaux, A. Zelinsky","doi":"10.1109/AFGR.2000.840622","DOIUrl":"https://doi.org/10.1109/AFGR.2000.840622","url":null,"abstract":"Computer systems which analyse human face/head motion have attracted significant attention recently as there are a number of interesting and useful applications. Not least among these is the goal of tracking the head in real time. A useful extension of this problem is to estimate the subject's gaze point in addition to his/her head pose. This paper describes a real-time stereo vision system which determines the head pose and gaze direction of a human subject. Its accuracy makes it useful for a number of applications including human/computer interaction, consumer research and ergonomic assessment.","PeriodicalId":360065,"journal":{"name":"Proceedings Fourth IEEE International Conference on Automatic Face and Gesture Recognition (Cat. No. PR00580)","volume":"364 10","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114011035","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 149
SFS based view synthesis for robust face recognition
Wenyi Zhao, R. Chellappa
Sensitivity to variations in pose is a challenging problem in face recognition using appearance-based methods. More specifically, the appearance of a face changes dramatically when viewing and/or lighting directions change. Various approaches have been proposed to solve this difficult problem. They can be broadly divided into three classes: (1) multiple image-based methods where multiple images of various poses per person are available; (2) hybrid methods where multiple example images are available during learning but only one database image per person is available during recognition; and (3) single image-based methods where no example-based learning is carried out. We present a method that comes under class 3. This method, based on shape-from-shading (SFS), improves the performance of a face recognition system in handling variations due to pose and illumination via image synthesis.
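A minimal sketch of the synthesis half of such an approach, under the usual SFS assumption of a Lambertian surface: given the albedo and per-pixel surface normals that shape-from-shading would recover, a virtual view under a new light direction is rendered as albedo times the clamped dot product of normal and light. The flat test patch below is only for demonstration; it is not the paper's implementation.

```python
import numpy as np

def lambertian_render(albedo, normals, light_dir):
    # Re-render a face under a new point light, assuming a Lambertian surface:
    # I(x, y) = albedo(x, y) * max(0, n(x, y) . l).
    # `normals` is H x W x 3 (unit vectors), `albedo` is H x W.
    l = np.asarray(light_dir, float)
    l = l / np.linalg.norm(l)
    shading = np.clip(np.einsum("hwc,c->hw", normals, l), 0.0, None)
    return albedo * shading

if __name__ == "__main__":
    h, w = 4, 4
    normals = np.zeros((h, w, 3))
    normals[..., 2] = 1.0                                    # flat patch facing the camera
    albedo = np.full((h, w), 0.8)
    frontal = lambertian_render(albedo, normals, [0, 0, 1])  # light along the viewing axis
    raking = lambertian_render(albedo, normals, [1, 0, 1])   # light moved to the side
    print(frontal[0, 0], raking[0, 0])
```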
{"title":"SFS based view synthesis for robust face recognition","authors":"Wenyi Zhao, R. Chellappa","doi":"10.1109/AFGR.2000.840648","DOIUrl":"https://doi.org/10.1109/AFGR.2000.840648","url":null,"abstract":"Sensitivity to variations in pose is a challenging problem in face recognition using appearance-based methods. More specifically, the appearance of a face changes dramatically when viewing and/or lighting directions change. Various approaches have been proposed to solve this difficult problem. They can be broadly divided into three classes: (1) multiple image-based methods where multiple images of various poses per person are available; (2) hybrid methods where multiple example images are available during learning but only one database image per person is available during recognition; and (3) single image-based methods where no example-based learning is carried out. We present a method that comes under class 3. This method, based on shape-from-shading (SFS), improves the performance of a face recognition system in handling variations due to pose and illumination via image synthesis.","PeriodicalId":360065,"journal":{"name":"Proceedings Fourth IEEE International Conference on Automatic Face and Gesture Recognition (Cat. No. PR00580)","volume":"64 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115161927","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 153
Viewpoint-invariant learning and detection of human heads
Markus Weber, W. Einhäuser, M. Welling, P. Perona
We present a method to learn models of human heads for the purpose of detection from different viewing angles. We focus on a model where objects are represented as constellations of rigid features (parts). Variability is represented by a joint probability density function (PDF) on the shape of the constellation. In the first stage, the method automatically identifies distinctive features in the training set using an interest operator followed by vector quantization. The set of model parameters, including the shape PDF, is then learned using expectation maximization. Experiments show good generalization performance to novel viewpoints and unseen faces. Performance is above 90% correct with less than 1 s computation time per image.
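The recognition-time core of a constellation model is scoring candidate part arrangements under the learned joint shape density. The sketch below assumes the mean and covariance of the Gaussian shape PDF are already available (the paper learns them with expectation maximization together with part appearance models, which are omitted here), and simply picks the candidate constellation with the highest shape log-likelihood.

```python
import numpy as np

def shape_log_likelihood(candidate_xy, mean_shape, cov_shape):
    # Log-density of a candidate constellation under a joint Gaussian shape
    # model; `candidate_xy` is (num_parts, 2) and the shape vector stacks all
    # part coordinates. Mean/covariance are assumed given (learned by EM in
    # the paper).
    x = candidate_xy.ravel() - mean_shape
    k = x.size
    _, logdet = np.linalg.slogdet(cov_shape)
    return -0.5 * (x @ np.linalg.solve(cov_shape, x) + logdet + k * np.log(2 * np.pi))

def best_constellation(candidates, mean_shape, cov_shape):
    # Pick the highest-likelihood assignment among candidate constellations.
    scores = [shape_log_likelihood(c, mean_shape, cov_shape) for c in candidates]
    return int(np.argmax(scores)), max(scores)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    mean_shape = np.array([10, 10, 30, 10, 20, 25], float)  # 3 parts: left eye, right eye, mouth
    cov_shape = np.eye(6) * 4.0
    good = mean_shape.reshape(3, 2) + rng.normal(scale=1.0, size=(3, 2))
    bad = rng.uniform(0, 40, size=(3, 2))
    idx, score = best_constellation([bad, good], mean_shape, cov_shape)
    print(idx, round(score, 2))  # the perturbed true shape should win
```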
{"title":"Viewpoint-invariant learning and detection of human heads","authors":"Markus Weber, W. Einhäuser, M. Welling, P. Perona","doi":"10.1109/AFGR.2000.840607","DOIUrl":"https://doi.org/10.1109/AFGR.2000.840607","url":null,"abstract":"We present a method to learn models of human heads for the purpose of detection from different viewing angles. We focus on a model where objects are represented as constellations of rigid features (parts). Variability is represented by a joint probability density function (PDF) on the shape of the constellation. In the first stage, the method automatically identifies distinctive features in the training set using an interest operator followed by vector quantization. The set of model parameters, including the shape PDF, is then learned using expectation maximization. Experiments show good generalization performance to novel viewpoints and unseen faces. Performance is above 90% correct with less than 1 s computation time per image.","PeriodicalId":360065,"journal":{"name":"Proceedings Fourth IEEE International Conference on Automatic Face and Gesture Recognition (Cat. No. PR00580)","volume":"114 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115450556","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 60
Memory-based face recognition for visitor identification
T. Sim, R. Sukthankar, M. D. Mullin, S. Baluja
We show that a simple, memory-based technique for appearance-based face recognition, motivated by the real-world task of visitor identification, can outperform more sophisticated algorithms that use principal components analysis (PCA) and neural networks. This technique is closely related to correlation templates; however, we show that the use of novel similarity measures greatly improves performance. We also show that augmenting the memory base with additional, synthetic face images results in further improvements in performance. Results of extensive empirical testing on two standard face recognition datasets are presented, and direct comparisons with published work show that our algorithm achieves comparable (or superior) results. Our system is incorporated into an automated visitor identification system that has been operating successfully in an outdoor environment since January 1999.
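A minimal sketch of the memory-based scheme: store every training face as-is and identify a probe by its most similar stored face. Plain zero-mean normalised correlation is used as the similarity here; the paper's gains come from more refined similarity measures and from augmenting the memory with synthetic views, neither of which is reproduced below.

```python
import numpy as np

def normalised_correlation(a, b):
    # Zero-mean normalised correlation between two flattened face images.
    # A simple stand-in for the paper's similarity measures.
    a = a - a.mean()
    b = b - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def identify(probe, memory):
    # memory: list of (label, stored_face). Return the most similar face's label.
    return max(memory, key=lambda item: normalised_correlation(probe, item[1]))[0]

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    faces = {name: rng.normal(size=256) for name in ["ann", "ben", "cho"]}
    memory = list(faces.items())
    probe = faces["ben"] * 1.3 + 0.2 * rng.normal(size=256)  # brighter, noisy view of ben
    print(identify(probe, memory))
```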
{"title":"Memory-based face recognition for visitor identification","authors":"T. Sim, R. Sukthankar, M. D. Mullin, S. Baluja","doi":"10.1109/AFGR.2000.840637","DOIUrl":"https://doi.org/10.1109/AFGR.2000.840637","url":null,"abstract":"We show that a simple, memory-based technique for appearance-based face recognition, motivated by the real-world task of visitor identification, can outperform more sophisticated algorithms that use principal components analysis (PCA) and neural networks. This technique is closely related to correlation templates; however, we show that the use of novel similarity measures greatly improves performance. We also show that augmenting the memory base with additional, synthetic face images results in further improvements in performance. Results of extensive empirical testing on two standard face recognition datasets are presented, and direct comparisons with published work show that our algorithm achieves comparable (or superior) results. Our system is incorporated into an automated visitor identification system that has been operating successfully in an outdoor environment since January 1999.","PeriodicalId":360065,"journal":{"name":"Proceedings Fourth IEEE International Conference on Automatic Face and Gesture Recognition (Cat. No. PR00580)","volume":"218 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123336054","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 87
A probabilistic sensor for the perception of activities
Olivier Chomat, J. Crowley
This paper presents a new technique for the perception of activities using a statistical description of spatio-temporal properties. With this approach, the probability of an activity in a spatio-temporal image sequence is computed by applying Bayes' rule to the joint statistics of the responses of motion energy receptive fields. A set of motion energy receptive fields is designed in order to sample the power spectrum of a moving texture. Their structure relates to the spatio-temporal energy models of Adelson and Bergen, where measures of local visual motion information are extracted by comparing the outputs of a triad of Gabor energy filters. Then the probability density function required for Bayes' rule is estimated for each class of activity by computing multi-dimensional histograms from the outputs of the set of receptive fields. The perception of activities is achieved according to Bayes' rule. The result at a given time is the map of the conditional probabilities that each pixel belongs to an activity of the training set. The approach is validated with experiments on the perception of the activities of walking persons in a visual surveillance scenario. Results are robust to changes in illumination conditions, to occlusions and to changes in texture.
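A small sketch of the probabilistic machinery, with made-up filter responses standing in for the motion energy receptive fields: class-conditional densities p(response | activity) are estimated as normalised multi-dimensional histograms of the filter outputs, and a per-pixel posterior over activities then follows from Bayes' rule.

```python
import numpy as np

def fit_histograms(responses_by_class, bins, value_range):
    # Estimate class-conditional PDFs p(response | activity) as normalised
    # multi-dimensional histograms of receptive-field outputs.
    # responses_by_class: {activity: array of shape (num_samples, num_filters)}.
    pdfs = {}
    for activity, samples in responses_by_class.items():
        hist, edges = np.histogramdd(samples, bins=bins, range=value_range)
        pdfs[activity] = ((hist + 1.0) / (hist.sum() + hist.size), edges)  # Laplace smoothing
    return pdfs

def posterior(response, pdfs, priors):
    # Bayes' rule over activities for one pixel's receptive-field response vector.
    post = {}
    for activity, (hist, edges) in pdfs.items():
        idx = tuple(np.clip(np.searchsorted(e, v) - 1, 0, len(e) - 2)
                    for v, e in zip(response, edges))
        post[activity] = hist[idx] * priors[activity]
    z = sum(post.values())
    return {a: p / z for a, p in post.items()}

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    data = {"walking": rng.normal(1.0, 0.3, size=(500, 2)),    # two toy filter channels
            "standing": rng.normal(0.0, 0.3, size=(500, 2))}
    pdfs = fit_histograms(data, bins=8, value_range=[(-1.5, 2.5)] * 2)
    print(posterior(np.array([1.1, 0.9]), pdfs, {"walking": 0.5, "standing": 0.5}))
```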
{"title":"A probabilistic sensor for the perception of activities","authors":"Olivier Chomat, J. Crowley","doi":"10.1109/AFGR.2000.840652","DOIUrl":"https://doi.org/10.1109/AFGR.2000.840652","url":null,"abstract":"This paper presents a new technique for the perception of activities using a statistical description of spatio-temporal properties. With this approach, the probability of an activity in a spatio-temporal image sequence is computed by applying a Bayes rule to the joint statistics of the responses of motion energy receptive fields. A set of motion energy receptive fields is designed in order to sample the power spectrum of a moving texture. Their structure relates to the spatio-temporal energy models of Adelson and Bergen where measures of local visual motion information are extracted comparing the outputs of triad of Gabor energy filters. Then the probability density function required for the Bayes rule is estimated for each class of activity by computing multi-dimensional histograms from the outputs from the set of receptive fields. The perception of activities is achieved according to the Bayes rule. The result at a given time is the map of the conditional probabilities that each pixel belongs to an activity of the training set. The approach is validated with experiments in the perception of activities of walking persons in a visual surveillance scenario. Results are robust to changes in illumination conditions, to occlusions and to changes in texture.","PeriodicalId":360065,"journal":{"name":"Proceedings Fourth IEEE International Conference on Automatic Face and Gesture Recognition (Cat. No. PR00580)","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123673421","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 14
Constraint-conscious smoothing framework for the recovery of 3D articulated motion from image sequences
Hiroyuki Segawa, H. Shioya, N. Hiraki, T. Totsuka
3D articulated motion is recovered from image sequences by relying on a recursive smoothing framework. In conventional recursive filtering frameworks, the filter may misestimate the state due to degenerated observation. To cope with this problem, we take into account knowledge about the limitations of the state-space. Our novel estimation framework relies on the combination of a smoothing algorithm with a "constraint-conscious" enhanced Kalman filter. The technique is shown to be effective for the recovery of experimental 3D articulated motions, making it a good candidate for marker-less motion capture applications.
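The sketch below shows one simple way to make a recursive estimator "constraint-conscious": run an ordinary Kalman predict/update step and then project the state estimate back into the admissible region defined by joint limits. This clamping is only a stand-in for the paper's constrained smoothing framework, and the one-joint model and numbers are invented.

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    # One predict/update cycle of a linear Kalman filter.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

def clamp_to_joint_limits(x, lower, upper):
    # Constraint-conscious correction: keep the estimated state inside its
    # admissible range after each update (a simple stand-in for the paper's
    # constrained estimator).
    return np.clip(x, lower, upper)

if __name__ == "__main__":
    # State: [elbow_angle, angular_velocity]; observation: noisy angle only.
    dt = 1.0 / 30.0
    F = np.array([[1.0, dt], [0.0, 1.0]])
    H = np.array([[1.0, 0.0]])
    Q = np.eye(2) * 1e-4
    R = np.array([[0.05]])
    x, P = np.array([0.2, 0.0]), np.eye(2)
    lower, upper = np.array([0.0, -5.0]), np.array([2.6, 5.0])  # elbow cannot hyper-extend
    for z in [0.4, 0.9, 1.6, 3.4]:                              # last reading is degenerate
        x, P = kalman_step(x, P, np.array([z]), F, H, Q, R)
        x = clamp_to_joint_limits(x, lower, upper)
        print(np.round(x, 3))
```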
{"title":"Constraint-conscious smoothing framework for the recovery of 3D articulated motion from image sequences","authors":"Hiroyuki Segawa, H. Shioya, N. Hiraki, T. Totsuka","doi":"10.1109/AFGR.2000.840677","DOIUrl":"https://doi.org/10.1109/AFGR.2000.840677","url":null,"abstract":"3D articulated motion is recovered from image sequences by relying on a recursive smoothing framework. In conventional recursive filtering frameworks, the filter may misestimate the state due to degenerated observation. To cope with this problem, we take into account knowledge about the limitations of the state-space. Our novel estimation framework relies on the combination of a smoothing algorithm with a \"constraint-conscious\" enhanced Kalman filter. The technique is shown to be effective for the recovery of experimental 3D articulated motions, making it a good candidate for marker-less motion capture applications.","PeriodicalId":360065,"journal":{"name":"Proceedings Fourth IEEE International Conference on Automatic Face and Gesture Recognition (Cat. No. PR00580)","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122796489","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 10
Face shape extraction and recognition using 3D morphing and distance mapping
Chongzhen Zhang, F. Cohen
We describe a novel approach for creating a 3D face structure from multiple image views of a human face taken at a priori unknown poses by appropriately morphing a generic 3D face. A 3D cubic explicit polynomial is used to morph a generic face into the specific face structure. This allows the creation of a database of 3D faces that is used in identifying a person (in the database) from one or more arbitrary image view(s). The estimation of a person's 3D face and its recognition from the database of faces is achieved through the use of a distance map metric. The use of this metric avoids either resorting to the formidable task of establishing feature point correspondences in the image views, or, even more severely, relying on the extremely view-sensitive image intensity (texture). Experimental results are shown for images of real faces, and excellent results are obtained.
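A minimal sketch of the two ingredients named in the abstract, with invented data: the generic head is deformed by a full 3D cubic polynomial (20 monomial terms per coordinate), and two vertex sets are compared with a simple symmetric nearest-point distance used here in place of the paper's distance-map metric.

```python
import numpy as np

def cubic_monomials(v):
    # All monomials of x, y, z up to total degree 3 for one vertex (20 terms).
    x, y, z = v
    return np.array([1, x, y, z,
                     x*x, x*y, x*z, y*y, y*z, z*z,
                     x**3, x*x*y, x*x*z, x*y*y, x*y*z, x*z*z,
                     y**3, y*y*z, y*z*z, z**3])

def morph(generic_vertices, coeffs):
    # Displace each vertex of the generic head by a 3D cubic polynomial.
    # `coeffs` is (20, 3): one column of polynomial coefficients per axis.
    basis = np.array([cubic_monomials(v) for v in generic_vertices])  # (N, 20)
    return generic_vertices + basis @ coeffs

def mean_nearest_distance(a, b):
    # Simple symmetric point-set distance, used here in place of the paper's
    # distance-map metric for comparing a probe surface with a database face.
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    generic = rng.uniform(-1, 1, size=(50, 3))      # stand-in for a generic face mesh
    coeffs = np.zeros((20, 3))
    coeffs[1, 0] = 0.1                              # slight stretch along x
    specific = morph(generic, coeffs)
    print(round(mean_nearest_distance(generic, specific), 4))
```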
{"title":"Face shape extraction and recognition using 3D morphing and distance mapping","authors":"Chongzhen Zhang, F. Cohen","doi":"10.1109/AFGR.2000.840608","DOIUrl":"https://doi.org/10.1109/AFGR.2000.840608","url":null,"abstract":"We describe a novel approach for creating a 3D face structure from multiple image views of a human face taken at a priori unknown poses by appropriately morphing a generic 3D face. A 3D cubic explicit polynomial is used to morph a generic face into the specific face structure. This allows the creation of a database of 3D faces that is used in identifying a person (in the database) from one or more arbitrary image view(s). The estimation of a 3D person's face and its recognition from the database of faces is achieved through the use of a distance map metric. The use of this metric avoids either resorting to the formidable task of establishing feature point correspondences in the image views, or even more severely, relying on the extremely view-sensitive image intensity (texture). Experimental results are shown for images of real faces, and excellent results are obtained.","PeriodicalId":360065,"journal":{"name":"Proceedings Fourth IEEE International Conference on Automatic Face and Gesture Recognition (Cat. No. PR00580)","volume":"372 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126030961","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6
Hand gesture recognition using input-output hidden Markov models
S. Marcel, O. Bernier, J. Viallet, D. Collobert
A new hand gesture recognition method based on input-output hidden Markov models is presented. This method deals with the dynamic aspects of gestures. Gestures are extracted from a sequence of video images by tracking the skin-color blobs corresponding to the hand into a body-face space centered on the face of the user. Our goal is to recognize two classes of gestures: deictic and symbolic.
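A toy sketch of the recognition step with discrete symbols: each gesture class gets its own input-output HMM, whose state transitions depend on the current input symbol, and a tracked hand trajectory is assigned to the class whose model gives it the highest likelihood. The paper's IOHMMs condition on continuous inputs and are trained from data; the tiny hand-set tables below are only for illustration.

```python
import numpy as np

def iohmm_log_likelihood(inputs, outputs, trans, emit, init):
    # Forward pass of a discrete input-output HMM.
    # trans[u] is the state-transition matrix used when the input symbol is u;
    # emit[u][:, y] gives P(output = y | state, input = u).
    alpha = init * emit[inputs[0]][:, outputs[0]]
    for u, y in zip(inputs[1:], outputs[1:]):
        alpha = (alpha @ trans[u]) * emit[u][:, y]
    return np.log(alpha.sum() + 1e-300)

def classify(inputs, outputs, models):
    # Pick the gesture class (e.g. 'deictic' vs 'symbolic') whose IOHMM
    # explains the observed hand-blob trajectory best.
    return max(models, key=lambda c: iohmm_log_likelihood(inputs, outputs, *models[c]))

if __name__ == "__main__":
    # Two toy IOHMMs over 2 states, 2 input symbols, 2 output symbols.
    trans_a = np.array([[[0.9, 0.1], [0.2, 0.8]], [[0.5, 0.5], [0.5, 0.5]]])
    trans_b = np.array([[[0.1, 0.9], [0.8, 0.2]], [[0.5, 0.5], [0.5, 0.5]]])
    emit = np.array([[[0.8, 0.2], [0.3, 0.7]], [[0.6, 0.4], [0.4, 0.6]]])
    init = np.array([0.5, 0.5])
    models = {"deictic": (trans_a, emit, init), "symbolic": (trans_b, emit, init)}
    inputs, outputs = [0, 0, 1, 0], [0, 0, 1, 1]
    print(classify(inputs, outputs, models))
```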
{"title":"Hand gesture recognition using input-output hidden Markov models","authors":"S. Marcel, O. Bernier, J. Viallet, D. Collobert","doi":"10.1109/AFGR.2000.840674","DOIUrl":"https://doi.org/10.1109/AFGR.2000.840674","url":null,"abstract":"A new hand gesture recognition method based on input-output hidden Markov models is presented. This method deals with the dynamic aspects of gestures. Gestures are extracted from a sequence of video images by tracking the skin-color blobs corresponding to the hand into a body-face space centered on the face of the user. Our goal is to recognize two classes of gestures: deictic and symbolic.","PeriodicalId":360065,"journal":{"name":"Proceedings Fourth IEEE International Conference on Automatic Face and Gesture Recognition (Cat. No. PR00580)","volume":"139 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116369884","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 148
Virtual view face image synthesis using 3D spring-based face model from a single image
G. Feng, P. Yuen, J. Lai
It is known that 2D views of a person can be synthesised if the 3D face model of that person is available. This paper proposes a new method, called the 3D spring-based face model (SBFM), to determine the precise face model of a person with different poses and facial expressions from a single image. The SBFM combines the concepts of a generic 3D face model in computer graphics and a deformable template in computer vision. Face image databases from the MIT AI laboratory and Yale University are used to test our proposed method and the results are encouraging.
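A minimal sketch of the spring idea, in 2D and with invented stiffness values: "data" springs pull selected model vertices toward landmarks observed in the image, while "shape" springs between connected vertices resist deviation from the generic model's rest lengths, and the mesh is relaxed iteratively. The real SBFM operates on a full generic 3D face model; this is not the paper's formulation.

```python
import numpy as np

def relax_springs(vertices, edges, targets, target_idx,
                  k_data=0.3, k_shape=0.2, iters=200):
    # Deform a generic mesh toward observed landmarks with springs.
    # Data springs pull the vertices listed in target_idx toward image-derived
    # targets; shape springs pull connected vertices toward their rest lengths.
    v = vertices.copy()
    rest = {e: np.linalg.norm(vertices[e[0]] - vertices[e[1]]) for e in edges}
    for _ in range(iters):
        force = np.zeros_like(v)
        for i, t in zip(target_idx, targets):          # data term
            force[i] += k_data * (t - v[i])
        for (a, b) in edges:                           # shape-preserving term
            d = v[b] - v[a]
            stretch = np.linalg.norm(d) - rest[(a, b)]
            direction = d / (np.linalg.norm(d) + 1e-12)
            force[a] += k_shape * stretch * direction
            force[b] -= k_shape * stretch * direction
        v += force
    return v

if __name__ == "__main__":
    generic = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 1.0]])  # tiny 2D "mesh"
    edges = [(0, 1), (1, 2), (0, 2)]
    # Observed landmark: the apex sits higher on this particular face.
    fitted = relax_springs(generic, edges,
                           targets=[np.array([0.5, 1.4])], target_idx=[2])
    print(np.round(fitted, 2))
```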
{"title":"Virtual view face image synthesis using 3D spring-based face model from a single image","authors":"G. Feng, P. Yuen, J. Lai","doi":"10.1109/AFGR.2000.840685","DOIUrl":"https://doi.org/10.1109/AFGR.2000.840685","url":null,"abstract":"It is known that 2D views of a person can be synthesised if the face 3D model of that person is available. This paper proposes a new method, called 3D spring-based face model (SBFM), to determine the precise face model of a person with different poses and facial expressions from a single image. The SBFM combines the concepts of generic 3D face model in computer graphics and deformable template in computer vision. Face image databases from MIT AI laboratory and Yale University are used to test our proposed method and the results are encouraging.","PeriodicalId":360065,"journal":{"name":"Proceedings Fourth IEEE International Conference on Automatic Face and Gesture Recognition (Cat. No. PR00580)","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133362127","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 8