
Proceedings Fourth IEEE International Conference on Automatic Face and Gesture Recognition (Cat. No. PR00580): Latest Publications

Human head tracking using adaptive appearance models with a fixed-viewpoint pan-tilt-zoom camera
K. Yachi, T. Wada, T. Matsuyama
We propose a method for detecting and tracking a human head in real time from an image sequence. The proposed method has three advantages: (1) we employ a fixed-viewpoint pan-tilt-zoom camera to acquire image sequences; with this camera, we eliminate the variations in head appearance caused by camera rotations about the viewpoint; (2) we prepare a variety of contour models of head appearance and relate them to the camera parameters, which allows us to adaptively select the model that handles variations in head appearance due to human activities; (3) we use the model parameters obtained by detecting the head in the previous image to estimate those to be fitted in the current image, which reduces the computation required for head detection. Accordingly, both detection accuracy and computation time are improved, and robust head detection and tracking are realized in near real time. Experimental results in real situations show the effectiveness of our method.
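A minimal sketch of the third point above (initializing the current frame's fit from the previous frame's parameters) follows. The elliptical contour parameterization, the edge-based score, and the brute-force local search are illustrative assumptions, not the contour models or estimation procedure used in the paper.

```python
import numpy as np

def contour_points(params, n=64):
    """Sample points on a hypothetical elliptical head contour (cx, cy, a, b)."""
    cx, cy, a, b = params
    t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    return np.stack([cx + a * np.cos(t), cy + b * np.sin(t)], axis=1)

def contour_score(edge_map, params):
    """Total edge strength along the hypothesized contour."""
    pts = np.round(contour_points(params)).astype(int)
    h, w = edge_map.shape
    xs = np.clip(pts[:, 0], 0, w - 1)
    ys = np.clip(pts[:, 1], 0, h - 1)
    return float(edge_map[ys, xs].sum())

def track_step(edge_map, prev_params, radius=6):
    """Refine the previous frame's parameters with a small local search,
    standing in for the paper's predictive estimation step."""
    cx0, cy0, a0, b0 = prev_params
    best, best_score = prev_params, -np.inf
    for dx in range(-radius, radius + 1, 2):
        for dy in range(-radius, radius + 1, 2):
            cand = (cx0 + dx, cy0 + dy, a0, b0)
            score = contour_score(edge_map, cand)
            if score > best_score:
                best, best_score = cand, score
    return best
```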
{"title":"Human head tracking using adaptive appearance models with a fixed-viewpoint pan-tilt-zoom camera","authors":"K. Yachi, T. Wada, T. Matsuyama","doi":"10.1109/AFGR.2000.840626","DOIUrl":"https://doi.org/10.1109/AFGR.2000.840626","url":null,"abstract":"We propose a method for detecting and tracking a human head in real time from an image sequence. The proposed method has three advantages: (1) we employ a fixed-viewpoint pan-tilt-zoom camera to acquire image sequences; with the camera, we eliminate the variations in the head appearance due to camera rotations with respect to the viewpoint; (2) we prepare a variety of contour models of the head appearances and relate them to the camera parameters; this allows us to adaptively select the model to deal with the variations in the head appearance due to human activities; (3) we use the model parameters obtained by detecting the head in the previous image to estimate those to be fitted in the current image; this estimation facilitates computational time for the head detection. Accordingly, the accuracy of the detection and required computational time are both improved and, at the same time, the robust head detection and tracking are realized in almost real time. Experimental results in the real situation show the effectiveness of our method.","PeriodicalId":360065,"journal":{"name":"Proceedings Fourth IEEE International Conference on Automatic Face and Gesture Recognition (Cat. No. PR00580)","volume":"54 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127577239","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 20
Improving face tracking with 2D template warping
R. Kjeldsen, Aya Aner
Tracking the face of a computer user as they look at various parts of the screen is a fundamental tool for a variety of perceptual user interface applications. The authors have developed a simple but surprisingly robust tracking algorithm based on template matching and applied it successfully. This paper describes extensions to that algorithm which improve performance at large facial rotation angles. The method is based on pre-distorting a single training template using 2D image transformations to simulate 3D facial rotations. It avoids many of the problems associated with using a complex 3D head model, is robust to variations in the environment, and is well suited to practical applications in typical computing environments.
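The pre-distortion idea can be sketched as follows: warp one training template with simple 2D transforms and keep the best normalized template-matching response. The specific warps (in-plane rotation plus horizontal compression about the centre), the parameter values, and the assumption of grayscale uint8 images are illustrative, not the transformations used by the authors.

```python
import cv2
import numpy as np

def warped_templates(template, angles=(-20, 0, 20), x_scales=(0.7, 0.85, 1.0)):
    """Pre-distort one training template with simple 2D warps: in-plane
    rotation plus horizontal compression about the centre, used here as a
    rough stand-in for out-of-plane (yaw) rotation."""
    h, w = template.shape[:2]
    out = []
    for ang in angles:
        rot = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), ang, 1.0)
        rotated = cv2.warpAffine(template, rot, (w, h))
        for sx in x_scales:
            squeeze = np.float32([[sx, 0.0, (1.0 - sx) * w / 2.0],
                                  [0.0, 1.0, 0.0]])
            out.append(cv2.warpAffine(rotated, squeeze, (w, h)))
    return out

def best_match(frame_gray, templates):
    """Location and score of the best-matching pre-distorted template."""
    best_loc, best_score = None, -1.0
    for t in templates:
        res = cv2.matchTemplate(frame_gray, t, cv2.TM_CCOEFF_NORMED)
        _, score, _, loc = cv2.minMaxLoc(res)
        if score > best_score:
            best_loc, best_score = loc, score
    return best_loc, best_score
```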
{"title":"Improving face tracking with 2D template warping","authors":"R. Kjeldsen, Aya Aner","doi":"10.1109/AFGR.2000.840623","DOIUrl":"https://doi.org/10.1109/AFGR.2000.840623","url":null,"abstract":"Tracking the face of a computer user as he looks at various parts of the screen is a fundamental tool for a variety of perceptual user interface applications. The authors have developed a simple but surprisingly robust tracking algorithm based on template matching and applied it successfully. This paper describes extensions to that algorithm, which improves performance at large facial rotation angles. The method is based on pre-distorting the single training template using 2D image transformations to simulate 3D facial rotations. The method avoids many of the problems associated with using a complex 3D head model. It is robust to variations in the environment and well-suited to use in practical applications in typical computing environments.","PeriodicalId":360065,"journal":{"name":"Proceedings Fourth IEEE International Conference on Automatic Face and Gesture Recognition (Cat. No. PR00580)","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126320685","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 8
Fast tracking of hands and fingertips in infrared images for augmented desk interface
Yoichi Sato, Yoshinori Kobayashi, H. Koike
We introduce a fast and robust method for tracking the positions of the centers and fingertips of both the right and left hands. Our method makes use of infrared camera images for reliable detection of a user's hands, and uses a template matching strategy for finding fingertips. This method is an essential part of our augmented desk interface, in which a user can, with natural hand gestures, simultaneously manipulate both physical objects and electronically projected objects on a desk, e.g., a textbook and related WWW pages. Previous tracking methods, typically based on color segmentation or background subtraction, simply do not perform well in this type of application because the observed color of human skin and of the image background may change significantly due to the projection of various objects onto the desk. In contrast, our proposed method was shown to be effective even in such a challenging situation through demonstration in our augmented desk interface. This paper describes the details of our tracking method as well as typical applications in our augmented desk interface.
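A rough sketch of this style of pipeline, assuming a bright-skin single-channel infrared image, OpenCV, and a hypothetical circular fingertip template; the threshold values and matching cutoff are illustrative, not values from the paper.

```python
import cv2
import numpy as np

def find_hands(ir_frame, skin_thresh=200, min_area=2000):
    """Binarize an infrared frame (warm skin appears bright) and return
    hand blobs with their centres."""
    _, mask = cv2.threshold(ir_frame, skin_thresh, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    hands = []
    for c in contours:
        if cv2.contourArea(c) < min_area:
            continue
        m = cv2.moments(c)
        centre = (m["m10"] / m["m00"], m["m01"] / m["m00"])
        hands.append((c, centre))
    return hands, mask

def find_fingertips(mask, hand_contour, tip_template):
    """Search for fingertip-like responses inside the hand's bounding box
    with normalized template matching (a small circular tip template is
    assumed)."""
    x, y, w, h = cv2.boundingRect(hand_contour)
    roi = mask[y:y + h, x:x + w]
    res = cv2.matchTemplate(roi, tip_template, cv2.TM_CCORR_NORMED)
    ys, xs = np.where(res > 0.9)
    return [(x + int(px), y + int(py)) for px, py in zip(xs, ys)]
```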
{"title":"Fast tracking of hands and fingertips in infrared images for augmented desk interface","authors":"Yoichi Sato, Yoshinori Kobayashi, H. Koike","doi":"10.1109/AFGR.2000.840675","DOIUrl":"https://doi.org/10.1109/AFGR.2000.840675","url":null,"abstract":"We introduce a fast and robust method for tracking positions of the centers and the fingertips of both right and left hands. Our method makes use of infrared camera images for reliable detection of a user's hands, and uses a template matching strategy for finding fingertips. This method is an essential part of our augmented desk interface in which a user can, with natural hand gestures, simultaneously manipulate both physical objects and electronically projected objects on a desk, e.g., a textbook and related WWW pages. Previous tracking methods which are typically based on color segmentation or background subtraction simply do not perform well in this type of application because an observed color of human skin and image backgrounds may change significantly due to protection of various objects onto a desk. In contrast, our proposed method was shown to be effective even in such a challenging situation through demonstration in our augmented desk interface. This paper describes the details of our tracking method as well as typical applications in our augmented desk interface.","PeriodicalId":360065,"journal":{"name":"Proceedings Fourth IEEE International Conference on Automatic Face and Gesture Recognition (Cat. No. PR00580)","volume":"120 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132029725","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 174
Performance assessment of face-verification based access control system
G. V. Wheeler, P. Courtney, Tim Cootes, C. Taylor
In recent years there has been much progress in the development of facial recognition systems. The FERET series of tests reported the black-box performance of several such systems working on stored face images. Much less effort has been spent studying the behaviour of systems under realistic conditions of use. We describe and analyse the results of a trial of a door access control system based on a model-based approach. The trial consisted of 10 registered users making over 200 accesses during a two-week period. We describe the internal failure modes and performance characteristics of the system, identify inter- and intra-person dependencies, and make recommendations for future work.
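The paper reports a usage trial rather than a new algorithm, but the standard quantities behind such an assessment (false accept and false reject rates over a threshold sweep) can be sketched as below; the scores and thresholds are made up for illustration.

```python
import numpy as np

def far_frr(genuine_scores, impostor_scores, thresholds):
    """False accept / false reject rates over a sweep of decision thresholds.
    Higher score is assumed to mean a better match."""
    genuine = np.asarray(genuine_scores, dtype=float)
    impostor = np.asarray(impostor_scores, dtype=float)
    far = np.array([(impostor >= t).mean() for t in thresholds])  # impostors accepted
    frr = np.array([(genuine < t).mean() for t in thresholds])    # genuine users rejected
    return far, frr

# Toy usage with made-up verification scores:
thresholds = np.linspace(0.0, 1.0, 101)
far, frr = far_frr([0.81, 0.92, 0.77, 0.88], [0.35, 0.52, 0.41, 0.60], thresholds)
eer_idx = np.argmin(np.abs(far - frr))   # rough equal-error-rate operating point
```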
{"title":"Performance assessment of face-verification based access control system","authors":"G. V. Wheeler, P. Courtney, Tim Cootes, C. Taylor","doi":"10.1109/AFGR.2000.840638","DOIUrl":"https://doi.org/10.1109/AFGR.2000.840638","url":null,"abstract":"In recent years there has been much progress in the development of facial recognition systems. The FERET series of tests reported the black box performance of several such systems working on stored face images. Much less effort has been spent in studying the behaviour of systems under realistic conditions of use. We describe and analyse the result of a trial of a door access control system based on a model-based approach. The trial consisted of 10 registered users making over 200 accesses during a 2 week period. We describe the internal failure modes and the performance characteristics of the system, identify inter- and intra-person dependencies and make recommendations for future work.","PeriodicalId":360065,"journal":{"name":"Proceedings Fourth IEEE International Conference on Automatic Face and Gesture Recognition (Cat. No. PR00580)","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132000681","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
A method for recognizing a sequence of sign language words represented in a Japanese sign language sentence
H. Sagawa, M. Takeuchi
To automatically interpret Japanese sign language (JSL), the recognition of signed words must be more accurate and the effects of extraneous gestures removed. We describe the parameters and the algorithms used to accomplish this. We experimented with 200 JSL sentences and demonstrated that recognition performance could be considerably improved.
{"title":"A method for recognizing a sequence of sign language words represented in a Japanese sign language sentence","authors":"H. Sagawa, M. Takeuchi","doi":"10.1109/AFGR.2000.840671","DOIUrl":"https://doi.org/10.1109/AFGR.2000.840671","url":null,"abstract":"To automatically interpret Japanese sign language (JSL), the recognition of signed words must be more accurate and the effects of extraneous gestures removed. We describe the parameters and the algorithms used to accomplish this. We experimented with 200 JSL sentences and demonstrated that recognition performance could be considerably improved.","PeriodicalId":360065,"journal":{"name":"Proceedings Fourth IEEE International Conference on Automatic Face and Gesture Recognition (Cat. No. PR00580)","volume":"9 2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115477641","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 68
Human body postures from trinocular camera images
Shoichiro Iwasawa, J. Ohya, Kazuhiko Takahashi, T. Sakaguchi, S. Morishima, K. Ebihara
This paper proposes a new real-time method for estimating human postures in 3D from trinocular images. In this method, upper-body orientation detection and a heuristic contour analysis are performed on the human silhouettes extracted from the trinocular images so that representative points such as the top of the head can be located. The major joint positions are estimated with a learning procedure based on a genetic algorithm. 3D coordinates of the representative points and joints are then obtained from two of the views, selected by evaluating the appropriateness of the three views. The proposed method, implemented on a personal computer, runs in real time. Experimental results show high estimation accuracy and the effectiveness of the view selection process.
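The final step, recovering 3D coordinates from the two most appropriate views, can be sketched with standard two-view triangulation. The camera matrices, 2D points and per-view appropriateness scores are assumed inputs, and the selection rule is an illustrative simplification of the paper's view evaluation.

```python
import cv2
import numpy as np

def triangulate_best_pair(projections, points_2d, scores):
    """Recover 3D coordinates of representative points from the two views
    judged most appropriate. `projections` are 3x4 camera matrices for the
    three views, `points_2d` the matching 2xN image points per view, and
    `scores` a per-view appropriateness value."""
    # keep the two views with the highest appropriateness score
    i, j = np.argsort(scores)[-2:]
    pts_h = cv2.triangulatePoints(projections[i], projections[j],
                                  points_2d[i], points_2d[j])
    return (pts_h[:3] / pts_h[3]).T   # N x 3 Euclidean coordinates
```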
{"title":"Human body postures from trinocular camera images","authors":"Shoichiro Iwasawa, J. Ohya, Kazuhiko Takahashi, T. Sakaguchi, S. Morishima, K. Ebihara","doi":"10.1109/AFGR.2000.840654","DOIUrl":"https://doi.org/10.1109/AFGR.2000.840654","url":null,"abstract":"This paper proposes a new real-time method for estimating human postures in 3D from trinocular images. In this method, an upper body orientation detection and a heuristic contour analysis are performed on the human silhouettes extracted from the trinocular images so that representative points such as the top of the head can be located. The major joint positions are estimated based on a genetic algorithm-based learning procedure. 3D coordinates of the representative points and joints are then obtained from the two views by evaluating the appropriateness of the three views. The proposed method implemented on a personal computer runs in real-time. Experimental results show high estimation accuracies and the effectiveness of the view selection process.","PeriodicalId":360065,"journal":{"name":"Proceedings Fourth IEEE International Conference on Automatic Face and Gesture Recognition (Cat. No. PR00580)","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121932724","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 28
Dynamic facial caricaturing system based on the gaze direction of gallery
K. Murakami, M. Tominaga, H. Koshimizu
Facial caricaturing is the process of representing a human visual impression on paper or other media. It should be discussed from multiple viewpoints covering the three relations among the model, the caricaturist and the gallery. Furthermore, some kind of interactive mechanism is required between the caricaturist and the gallery. We propose a dynamic caricaturing system. In our system, the use of an in-betweening method realizes the generation mechanism from the caricaturist to the gallery and, conversely, the use of eye-camera vision realizes the feedback mechanism from the gallery to the caricaturist. This is an original and unique point of our system. The gallery wears a head-mounted eye camera, and the system reflects the visual characteristics of the gallery directly in the facial caricatures. After observing the image of the model and analyzing the gaze direction and distribution, the system deforms characteristic and impressive facial parts more strongly than other, less impressive parts, and generates a caricature especially suited to that gallery. We demonstrate experimentally the effectiveness of this method for integrating these viewpoints.
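One way to picture the gaze-weighted deformation is to exaggerate a face's deviation from a mean face more strongly where the viewer's gaze dwelt longer. The landmark representation, the linear weighting and the gain below are illustrative assumptions, not the deformation actually used in the system.

```python
import numpy as np

def gaze_weighted_caricature(face_pts, mean_pts, dwell_time, gain=1.5):
    """Exaggerate a face's deviation from a mean face, with landmarks the
    viewer looked at longer exaggerated more. `face_pts` and `mean_pts` are
    N x 2 landmark arrays, `dwell_time` the per-landmark gaze dwell."""
    dwell = np.asarray(dwell_time, dtype=float)
    weights = 1.0 + gain * dwell / (dwell.max() + 1e-9)   # longer gaze -> stronger
    return np.asarray(mean_pts) + weights[:, None] * (np.asarray(face_pts) - np.asarray(mean_pts))
```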
{"title":"Dynamic facial caricaturing system based on the gaze direction of gallery","authors":"K. Murakami, M. Tominaga, H. Koshimizu","doi":"10.1109/AFGR.2000.840624","DOIUrl":"https://doi.org/10.1109/AFGR.2000.840624","url":null,"abstract":"Facial caricaturing is a representation process of human visual impression onto paper or other media. Facial caricaturing should be discussed from multiple viewpoints of three relations among the model, the caricaturist and the gallery. Furthermore, some kinds of interactive mechanism should be required between the caricaturist and the gallery. We propose a dynamic caricaturing system. In our system the utilization of an in-betweening method realizes the generation mechanism from the caricaturist to the gallery, and on the contrary, the utilization of eye-camera vision realizes the feedback mechanism from the gallery to the caricaturist. This is an original and unique point of our system. The gallery mounts an eye-camera on the head, and the system reflects visual characteristics of the gallery directly onto the works of facial caricature. After observing the image of the model and analyzing the gaze direction and distribution, the system deforms some characteristic and impressive facial parts more strongly than other non-impressive facial parts, and generates the caricature which is suited especially for the gallery. We demonstrate experimentally the effectivity of this method to integrate these kinds of viewpoints.","PeriodicalId":360065,"journal":{"name":"Proceedings Fourth IEEE International Conference on Automatic Face and Gesture Recognition (Cat. No. PR00580)","volume":"176 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132249744","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
Self-organized integration of adaptive visual cues for face tracking
J. Triesch, C. Malsburg
A mechanism for the self-organized integration of different adaptive cues is proposed. In democratic integration the cues agree on a result and each cue adapts towards the result agreed upon. A technical formulation of this scheme is employed in a face tracking system. The self-organized adaptivity lends itself to suppression and recalibration of discordant cues. Experiments show that the system is robust to sudden changes in the environment as long as the changes disrupt only a minority of cues at the same time, although all cues may be affected in the long run.
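A compact sketch of one integration-and-adaptation step in this spirit, assuming each cue produces a 2D saliency map; the quality measure and time constant are illustrative simplifications of the published formulation.

```python
import numpy as np

def democratic_integration_step(saliency_maps, weights, tau=10.0):
    """Fuse the cues' 2D saliency maps with the current reliabilities, take
    the best position as the agreed result, then relax each reliability
    towards how well that cue supports the result."""
    weights = np.asarray(weights, dtype=float)
    fused = sum(w * m for w, m in zip(weights, saliency_maps))
    result = np.unravel_index(np.argmax(fused), fused.shape)

    # quality of a cue: its normalized response at the agreed position
    qualities = np.array([m[result] / (m.max() + 1e-9) for m in saliency_maps])
    qualities = qualities / (qualities.sum() + 1e-9)

    # first-order dynamics pulling reliabilities towards current qualities
    new_weights = weights + (qualities - weights) / tau
    return result, new_weights / new_weights.sum()
```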
{"title":"Self-organized integration of adaptive visual cues for face tracking","authors":"J. Triesch, C. Malsburg","doi":"10.1109/AFGR.2000.840619","DOIUrl":"https://doi.org/10.1109/AFGR.2000.840619","url":null,"abstract":"A mechanism for the self-organized integration of different adaptive cues is proposed. In democratic integration the cues agree on a result and each cue adapts towards the result agreed upon. A technical formulation of this scheme is employed in a face tracking system. The self-organized adaptivity lends itself to suppression and recalibration of discordant cues. Experiments show that the system is robust to sudden changes in the environment as long as the changes disrupt only a minority of cues at the same time, although all cues may be affected in the long run.","PeriodicalId":360065,"journal":{"name":"Proceedings Fourth IEEE International Conference on Automatic Face and Gesture Recognition (Cat. No. PR00580)","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130769558","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 74
An algorithm for real-time stereo vision implementation of head pose and gaze direction measurement
Y. Matsumoto, A. Zelinsky
To build smart human interfaces, it is necessary for a system to know a user's intention and point of attention. Since the motion of a person's head pose and gaze direction is deeply related to his or her intention and attention, detection of such information can be utilized to build natural and intuitive interfaces. We describe our real-time stereo face tracking and gaze detection system, which measures head pose and gaze direction simultaneously. The key aspect of our system is the use of real-time stereo vision together with a simple algorithm suitable for real-time processing. Since the 3D coordinates of the features on a face can be directly measured in our system, we can significantly simplify the algorithm for 3D model fitting to obtain the full 3D pose of the head, compared with conventional systems that use a monocular camera. Consequently, we achieve a non-contact, passive, real-time, robust, accurate and compact measurement system for head pose and gaze direction.
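Given 3D feature coordinates measured by stereo, fitting a rigid 3D face model reduces to a least-squares rigid alignment, which can be sketched with the standard SVD (Kabsch) solution; the model and measured point sets are assumed inputs, and this is not necessarily the exact fitting procedure used in the paper.

```python
import numpy as np

def head_pose_from_3d_points(model_pts, measured_pts):
    """Fit the rigid transform (R, t) that maps a 3D face model's feature
    points onto stereo-measured 3D feature points in the least-squares sense."""
    model = np.asarray(model_pts, dtype=float)
    meas = np.asarray(measured_pts, dtype=float)
    mu_m, mu_s = model.mean(axis=0), meas.mean(axis=0)
    H = (model - mu_m).T @ (meas - mu_s)          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflection
    R = Vt.T @ D @ U.T
    t = mu_s - R @ mu_m
    return R, t   # measured approx. R @ model + t
```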
{"title":"An algorithm for real-time stereo vision implementation of head pose and gaze direction measurement","authors":"Y. Matsumoto, A. Zelinsky","doi":"10.1109/AFGR.2000.840680","DOIUrl":"https://doi.org/10.1109/AFGR.2000.840680","url":null,"abstract":"To build smart human interfaces, it is necessary for a system to know a user's intention and point of attention. Since the motion of a person's head pose and gaze direction are deeply related with his/her intention and attention, detection of such information can be utilized to build natural and intuitive interfaces. We describe our real-time stereo face tracking and gaze detection system to measure head pose and gaze direction simultaneously. The key aspect of our system is the use of real-time stereo vision together with a simple algorithm which is suitable for real-time processing. Since the 3D coordinates of the features on a face can be directly measured in our system, we can significantly simplify the algorithm for 3D model fitting to obtain the full 3D pose of the head compared with conventional systems that use monocular camera. Consequently we achieved a non-contact, passive, real-time, robust, accurate and compact measurement system for head pose and gaze direction.","PeriodicalId":360065,"journal":{"name":"Proceedings Fourth IEEE International Conference on Automatic Face and Gesture Recognition (Cat. No. PR00580)","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133782724","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 327
Face matching through information theoretical attention points and its applications to face detection and classification
K. Hotta, T. Mishima, Takio Kurita, S. Umeyama
This paper presents a face matching method based on information-theoretical attention points. The attention points are selected as the points where the outputs of Gabor filters applied to the contrast-filtered image (Gabor features) carry rich information. The information value of the Gabor features at a given point is used as a weight, and the weighted sum of the correlations is used as the similarity measure for matching. To cope with scale changes of a face, several images at different scales are generated by interpolation from the input image and the best match is searched for. By using attention points chosen from an information-theoretical point of view, the matching becomes robust under various environments. This matching method is applied to face detection of a known person and to face classification. The effectiveness of the proposed method is confirmed by experiments using face images captured over several years under different environments.
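A sketch of the matching idea, assuming OpenCV: compute a small Gabor filter bank and score two faces by a weighted sum of feature correlations at attention points. The filter parameters, attention points and weights are assumed to be supplied; the information measure that selects and weights the points is not shown.

```python
import cv2
import numpy as np

def gabor_features(gray, n_orient=4, ksize=21, sigma=4.0, lambd=8.0):
    """Responses of a small Gabor filter bank on a grayscale image
    (filter parameters are illustrative, not the paper's)."""
    img = gray.astype(np.float32)
    feats = []
    for k in range(n_orient):
        kern = cv2.getGaborKernel((ksize, ksize), sigma,
                                  k * np.pi / n_orient, lambd, 0.5)
        feats.append(cv2.filter2D(img, cv2.CV_32F, kern))
    return np.stack(feats, axis=-1)   # H x W x n_orient

def weighted_similarity(feats_a, feats_b, attention_pts, weights):
    """Weighted sum of normalized feature correlations at attention points;
    the points and weights are assumed to come from an information measure
    computed elsewhere."""
    total = 0.0
    for (y, x), w in zip(attention_pts, weights):
        a, b = feats_a[y, x], feats_b[y, x]
        corr = float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)
        total += w * corr
    return total / (sum(weights) + 1e-9)
```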
{"title":"Face matching through information theoretical attention points and its applications to face detection and classification","authors":"K. Hotta, T. Mishima, Takio Kurita, S. Umeyama","doi":"10.1109/AFGR.2000.840609","DOIUrl":"https://doi.org/10.1109/AFGR.2000.840609","url":null,"abstract":"This paper presents a face matching method through information theoretical attention points. The attention points are selected as the points where the outputs of Gabor filters applied to the contrast-filtered image (Gabor features) have rich information. The information value of Gabor features of the certain point is used as the weight and the weighed sum of the correlations is used as the similarity measure for the matching. To cope with the scale changes of a face, several images with different scales are generated by interpolation from the input image and the best match is searched. By using the attention points given from the information theoretical point of view, the matching becomes robust under various environments. This matching method is applied to face detection of a known person and face classification. The effectiveness of the proposed method is confirmed by experiments using the face images captured over years under the different environments.","PeriodicalId":360065,"journal":{"name":"Proceedings Fourth IEEE International Conference on Automatic Face and Gesture Recognition (Cat. No. PR00580)","volume":"68 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2000-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123237646","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 27