We address the need for robust detection of obstructed human features in complex environments, with a focus on intelligent surgical UIs. In our setup, real-time detection is used to find features without the help of local (spatial or temporal) information. Such a detector is used to validate, correct, or reject the output of the visual feature tracker, which is locally more robust but drifts over time. In operating rooms (ORs), surgeons' faces are typically obstructed by sterile clothing and tools, making statistical and/or feature-based face detection approaches ineffective. We propose a new face detection method that relies on geometric information from disparity maps, locally refined by color processing. We have applied our method to a surgical mock-up scene, as well as to images gathered during real surgery. Running in a real-time, continuous detection loop, our detector successfully found 99% of target heads (0.1% false positives) in our simulated setup, and 98% of target heads (0.5% false positives) in the surgical theater.
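The abstract does not give implementation details, but the pipeline it describes (head-sized blobs found in a disparity band, then refined by a local color test) can be approximated with standard tools. The sketch below is an illustrative reconstruction, not the authors' code; the disparity band, blob-size limits, skin-color range, and skin fraction are invented parameters.

    import cv2
    import numpy as np

    def detect_head_candidates(disparity, bgr, disp_range=(40, 128),
                               min_area=2000, max_area=30000,
                               min_skin_fraction=0.05):
        """Find head-sized blobs in a disparity band and keep those whose
        image region contains enough skin-colored pixels (all thresholds
        are illustrative guesses, not the authors' values)."""
        # Keep only pixels whose disparity falls inside the expected working band.
        band = (disparity >= disp_range[0]) & (disparity <= disp_range[1])
        mask = band.astype(np.uint8) * 255
        num, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
        hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
        skin = cv2.inRange(hsv, (0, 40, 60), (25, 180, 255))  # rough skin hue band
        heads = []
        for i in range(1, num):  # label 0 is the background component
            x, y, w, h, area = stats[i]
            if not (min_area <= area <= max_area):
                continue
            # Local color refinement: require a minimum fraction of skin pixels.
            roi = skin[y:y + h, x:x + w]
            if roi.mean() / 255.0 >= min_skin_fraction:
                heads.append((int(x), int(y), int(w), int(h)))
        return heads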
{"title":"Robust method for real-time, continuous, 3D detection of obstructed faces in indoors environments","authors":"S. Grange, C. Baur","doi":"10.1109/FGR.2006.97","DOIUrl":"https://doi.org/10.1109/FGR.2006.97","url":null,"abstract":"We address the need for robust detection of obstructed human features in complex environments, with a focus on intelligent surgical UIs. In our setup, real-time detection is used to find features without the help of local (spatial or temporal) information. Such a detector is used to validate, correct or reject the output of the visual feature tracking, which is locally more robust, but drifts over time. In operating rooms (OR), surgeon faces are typically obstructed by sterile clothing and tools, making statistical and/or feature-based face detection approaches ineffective. We propose a new method for face detection that relies on geometric information from disparity maps, locally refined by color processing. We have applied our method to a surgical mock-up scene, as well as to images gathered during real surgery. Running in a real-time, continuous detection loop, our detector successfully found 99% of target heads (0.1% false positive) in our simulated setup, and 98% of target heads (0.5% false positive) in the surgical theater","PeriodicalId":109260,"journal":{"name":"7th International Conference on Automatic Face and Gesture Recognition (FGR06)","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114704966","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper we propose a novel approach to accurate face localisation for faces under near-infrared (near-IR) illumination. The circular shape of the bright pupils is a scale- and rotation-invariant feature which is exploited to quickly detect pupil candidates. As the first step of face localisation, a rule-based pupil detector is employed to find candidate pupil edges in the edge map. Candidate eye centres for each eye are selected from the neighbourhood of the corresponding pupil regions and sorted by their similarity to eye templates. Two support vector machine (SVM) classifiers based on eye appearance are employed to validate the candidates for each eye individually. Finally, candidates are further validated in pairs by an SVM classifier based on global face appearance. In an experiment on a near-IR face database with 40 subjects and 48 images per subject, 96.5% of the images are accurately localised using the proposed approach.
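As a rough illustration of the candidate-then-verify structure described above (bright circular pupil candidates followed by appearance-based SVM validation), the following sketch uses a Hough circle detector and a generic scikit-learn SVM. The radii, thresholds, and patch size are invented, and the paper's rule-based edge tests are replaced by off-the-shelf calls; this is not the authors' implementation.

    import cv2
    import numpy as np
    from sklearn.svm import SVC

    def pupil_candidates(gray_ir):
        """Circular bright-blob candidates in a near-IR image (illustrative parameters)."""
        blur = cv2.GaussianBlur(gray_ir, (5, 5), 0)
        circles = cv2.HoughCircles(blur, cv2.HOUGH_GRADIENT, dp=1, minDist=20,
                                   param1=80, param2=15, minRadius=2, maxRadius=10)
        return [] if circles is None else [tuple(map(int, c)) for c in circles[0]]

    def validate_eye_candidates(gray_ir, candidates, eye_svm: SVC, patch=24):
        """Keep candidates whose surrounding patch the eye-appearance SVM accepts."""
        kept = []
        for (x, y, r) in candidates:
            x0, y0 = max(x - patch // 2, 0), max(y - patch // 2, 0)
            roi = gray_ir[y0:y0 + patch, x0:x0 + patch]
            if roi.shape != (patch, patch):   # skip candidates too close to the border
                continue
            feat = roi.astype(np.float32).ravel() / 255.0
            if eye_svm.predict(feat[None, :])[0] == 1:
                kept.append((x, y, r))
        return kept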
{"title":"Accurate face localisation for faces under active near-IR illumination","authors":"X. Zou, J. Kittler, K. Messer","doi":"10.1109/FGR.2006.18","DOIUrl":"https://doi.org/10.1109/FGR.2006.18","url":null,"abstract":"In this paper we propose a novel approach to accurate face localisation for faces under near-infrared (near-IR) illumination. The circular shape of the bright pupils is a scale and rotation invariant feature which is exploited to quickly detect pupil candidates. As the first step of face localisation, a rule-based pupil detector is employed to find candidate pupil edges from the edge map. Candidate eye centres for each eye are selected from the neighborhood of corresponding pupil regions and sorted based on the similarity to eye templates. Two support vector machine (SVM) classifiers based on eye appearance are employed to validate those candidates for each eye individually. Finally candidates are further validated in pair by an SVM classifier based on global face appearance. In the experiment on a near-IR face database with 40 subjects and 48 images per subject, 96.5% images are accurately localised using the proposed approach","PeriodicalId":109260,"journal":{"name":"7th International Conference on Automatic Face and Gesture Recognition (FGR06)","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121722737","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Good registration (alignment to a reference) is essential for accurate face recognition. The effects of the number of landmarks on the mean localization error and on the recognition performance are studied. Two landmarking methods are explored and compared for this purpose: (1) the most likely-landmark locator (MLLL), based on maximizing the likelihood ratio, and (2) Viola-Jones detection. Both use the locations of facial features (eyes, nose, mouth, etc.) as landmarks. Furthermore, a landmark-correction method (BILBO) based on projection into a subspace is introduced. The MLLL has been trained to locate 17 landmarks and the Viola-Jones method 5. The mean localization errors and the effects on verification performance have been measured. It was found that on the eyes, the Viola-Jones detector is about 1% of the inter-ocular distance more accurate than the MLLL-BILBO combination. On the nose and mouth, the MLLL-BILBO combination is about 0.5% of the inter-ocular distance more accurate than the Viola-Jones detector. Using more landmarks results in lower equal-error rates, even when the landmarking is not very accurate. If the same landmarks are used, the most accurate landmarking method gives the best verification performance.
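The abstract describes BILBO only as a subspace-projection correction. A minimal sketch of that general idea (project a stacked landmark vector onto a low-dimensional shape subspace learned from training shapes, then reconstruct, so implausible landmark positions are pulled back toward the learned shape manifold) could look as follows; the PCA-based implementation and the number of retained components are assumptions, not the authors' exact method.

    import numpy as np
    from sklearn.decomposition import PCA

    class SubspaceLandmarkCorrector:
        """Correct noisy landmark sets by projecting them onto a learned shape subspace."""

        def __init__(self, n_components=10):
            self.pca = PCA(n_components=n_components)

        def fit(self, shapes):
            # shapes: (n_samples, 2 * n_landmarks) array of stacked (x, y) coordinates.
            self.pca.fit(shapes)
            return self

        def correct(self, shape):
            # Project into the subspace and back; outlying landmark positions are
            # replaced by their closest reconstruction within the subspace.
            coeffs = self.pca.transform(shape[None, :])
            return self.pca.inverse_transform(coeffs)[0]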
{"title":"A landmark paper in face recognition","authors":"G. M. Beumer, Q. Tao, A. Bazen, R. Veldhuis","doi":"10.1109/FGR.2006.10","DOIUrl":"https://doi.org/10.1109/FGR.2006.10","url":null,"abstract":"Good registration (alignment to a reference) is essential for accurate face recognition. The effects of the number of landmarks on the mean localization error and the recognition performance are studied. Two landmarking methods are explored and compared for that purpose: (1) the most likely-landmark locator (MLLL), based on maximizing the likelihood ratio, and (2) Viola-Jones detection. Both use the locations of facial features (eyes, nose, mouth, etc) as landmarks. Further, a landmark-correction method (BILBO) based on projection into a subspace is introduced. The MLLL has been trained for locating 17 landmarks and the Viola-Jones method for 5. The mean localization errors and effects on the verification performance have been measured. It was found that on the eyes, the Viola-Jones detector is about 1% of the interocular distance more accurate than the MLLL-BILBO combination. On the nose and mouth, the MLLL-BILBO combination is about 0.5% of the inter-ocular distance more accurate than the Viola-Jones detector. Using more landmarks will result in lower equal-error rates, even when the landmarking is not so accurate. If the same landmarks are used, the most accurate landmarking method gives the best verification performance","PeriodicalId":109260,"journal":{"name":"7th International Conference on Automatic Face and Gesture Recognition (FGR06)","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134354965","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We propose an isometric self-organizing map (ISOSOM) method for nonlinear dimensionality reduction, which integrates a self-organizing map model with the ISOMAP dimensionality reduction algorithm, organizing high-dimensional data in a low-dimensional lattice structure. We apply the proposed method to the problem of appearance-based 3D hand posture estimation. In a learning stage, we use a realistic 3D hand model to generate data encoding the mapping between the hand pose space and the image feature space. The intrinsic dimension of this nonlinear mapping is learned by the ISOSOM, which clusters the data into a lattice map. We perform 3D hand posture estimation on this map, showing that the ISOSOM algorithm performs better than traditional image retrieval algorithms for pose estimation. We also show that a 2.5D feature representation based on depth edges is clearly superior to the intensity edge features commonly used in previous methods.
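A full ISOSOM implementation is beyond a short snippet, but the retrieval idea (embed image features of synthetic hand renderings nonlinearly, then look up the pose of the nearest embedded training sample at run time) can be approximated with ISOMAP plus nearest-neighbour search. The sketch below omits the SOM lattice entirely and is only a simplified illustration of the learning/lookup split, under assumed component counts.

    import numpy as np
    from sklearn.manifold import Isomap
    from sklearn.neighbors import NearestNeighbors

    class PoseRetriever:
        """Embed training image features with ISOMAP and retrieve poses by
        nearest neighbour in the embedded space (SOM lattice omitted)."""

        def __init__(self, n_components=3, n_neighbors=8):
            self.embed = Isomap(n_neighbors=n_neighbors, n_components=n_components)
            self.index = NearestNeighbors(n_neighbors=1)

        def fit(self, features, poses):
            # features: (n, d) image descriptors of rendered hand views
            # poses:    (n, k) corresponding joint-angle / viewpoint parameters
            self.poses = np.asarray(poses)
            self.index.fit(self.embed.fit_transform(features))
            return self

        def estimate(self, feature):
            z = self.embed.transform(feature[None, :])
            _, idx = self.index.kneighbors(z)
            return self.poses[idx[0, 0]]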
{"title":"The isometric self-organizing map for 3D hand pose estimation","authors":"Haiying Guan, R. Feris, M. Turk","doi":"10.1109/FGR.2006.103","DOIUrl":"https://doi.org/10.1109/FGR.2006.103","url":null,"abstract":"We propose an isometric self-organizing map (ISO-SOM) method for nonlinear dimensionality reduction, which integrates a self-organizing map model and an ISOMAP dimension reduction algorithm, organizing the high dimension data in a low dimension lattice structure. We apply the proposed method to the problem of appearance-based 3D hand posture estimation. As a learning stage, we use a realistic 3D hand model to generate data encoding the mapping between the hand pose space and the image feature space. The intrinsic dimension of such nonlinear mapping is learned by ISOSOM, which clusters the data into a lattice map. We perform 3D hand posture estimation on this map, showing that the ISOSOM algorithm performs better than traditional image retrieval algorithms for pose estimation. We also show that a 2.5D feature representation based on depth edges is clearly superior to intensity edge features commonly used in previous methods","PeriodicalId":109260,"journal":{"name":"7th International Conference on Automatic Face and Gesture Recognition (FGR06)","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128894052","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper a system for face recognition from a tabula rasa (i.e. blank slate) perspective is described. A priori, the system is only able to automatically detect faces and represent them in a space of reduced dimension. It is then exposed to over 400 different identities, and the evolution of its recognition performance is observed. The preliminary results indicate, on the one hand, that the system is able to reject most unknown individuals after an initialization stage. On the other hand, the ability to recognize known individuals (or revisitors) is still far from reliable. However, the recognition results for frequently met individuals suggest that the more meetings are held, the lower the recognition error becomes.
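The abstract describes an open-set, incrementally built gallery: each detected face is projected into a reduced space, rejected as unknown if it is too far from every stored identity, and otherwise matched and used to refine that identity's template. The sketch below captures only that decision logic; the descriptor, distance metric, and threshold are placeholders rather than the authors' design.

    import numpy as np

    class IncrementalGallery:
        """Open-set matcher over reduced-dimension face descriptors (illustrative)."""

        def __init__(self, reject_threshold=0.8):
            self.templates = {}   # identity id -> mean descriptor
            self.counts = {}
            self.reject_threshold = reject_threshold
            self._next_id = 0

        def observe(self, descriptor):
            """Return the matched identity, enrolling a new one if nothing is close."""
            if self.templates:
                ids = list(self.templates)
                dists = [np.linalg.norm(descriptor - self.templates[i]) for i in ids]
                best = int(np.argmin(dists))
                if dists[best] < self.reject_threshold:
                    ident = ids[best]
                    # Running mean keeps the template up to date as meetings accumulate.
                    n = self.counts[ident]
                    self.templates[ident] = (self.templates[ident] * n + descriptor) / (n + 1)
                    self.counts[ident] = n + 1
                    return ident
            ident = self._next_id
            self._next_id += 1
            self.templates[ident] = np.asarray(descriptor, dtype=float)
            self.counts[ident] = 1
            return ident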
{"title":"Face Recognition from a Tabula Rasa Perspective","authors":"M. C. Santana, O. Déniz-Suárez, J. Lorenzo-Navarro, M. Hernández-Tejera","doi":"10.1109/FGR.2006.44","DOIUrl":"https://doi.org/10.1109/FGR.2006.44","url":null,"abstract":"In this paper a system for face recognition from a tabula rasa (i.e. blank slate) perspective is described. A priori, the system has the only ability to detect automatically faces and represent them in a space of reduced dimension. Later, the system is exposed to over 400 different identities, observing its recognition performance evolution. The preliminary results achieved indicate on the one side that the system is able to reject most of unknown individuals after an initialization stage. On the other side the ability to recognize known individuals (or revisitors) is still far from being reliable. However, the observation of the recognition evolution results for individuals frequently met suggests that the more meetings are held, the lower recognition error is achieved","PeriodicalId":109260,"journal":{"name":"7th International Conference on Automatic Face and Gesture Recognition (FGR06)","volume":"83 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124118013","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An intelligent robot requires natural interaction with humans. Visual interpretation of gestures can be useful in accomplishing natural human-robot interaction (HRI). Previous HRI research focused on issues such as hand gesture, sign language, and command gesture recognition. However, automatic recognition of whole-body gestures is required in order to operate HRI naturally. This is a challenging problem because describing and modeling meaningful gesture patterns from whole-body motion are complex tasks. This paper presents a new method for spotting and recognizing whole-body key gestures at the same time on a mobile robot. Our method runs alongside other HRI components such as speech recognition and face recognition, so both execution speed and recognition performance must be considered. For efficient and natural operation, we use several approaches at each step of gesture recognition: learning and extraction of articulated joint information, representation of a gesture as a sequence of clusters, and spotting and recognition of a gesture with HMMs. In addition, we constructed a large gesture database with which we verified our method. As a result, our method has been successfully integrated and operated on a mobile robot.
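The abstract outlines a standard pipeline: per-frame joint features, grouped into a sequence, then scored by per-gesture HMMs. As a hedged illustration of the final step only, the snippet below trains one Gaussian HMM per gesture with hmmlearn and classifies a new sequence by maximum log-likelihood; the feature definition, number of states, and the crude rejection threshold standing in for the paper's gesture-spotting stage are all assumptions.

    import numpy as np
    from hmmlearn.hmm import GaussianHMM

    def train_gesture_models(sequences_by_label, n_states=5):
        """sequences_by_label: {label: [list of (T_i, d) arrays of joint features]}."""
        models = {}
        for label, seqs in sequences_by_label.items():
            X = np.vstack(seqs)
            lengths = [len(s) for s in seqs]
            m = GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=30)
            m.fit(X, lengths)
            models[label] = m
        return models

    def classify(models, sequence, reject_logprob=-1e4):
        """Pick the gesture model with the highest log-likelihood; the threshold is a
        placeholder for a proper non-gesture (spotting) model."""
        scores = {label: m.score(sequence) for label, m in models.items()}
        best = max(scores, key=scores.get)
        return best if scores[best] > reject_logprob else None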
{"title":"Automatic gesture recognition for intelligent human-robot interaction","authors":"Seong-Whan Lee","doi":"10.1109/FGR.2006.25","DOIUrl":"https://doi.org/10.1109/FGR.2006.25","url":null,"abstract":"An intelligent robot requires natural interaction with humans. Visual interpretation of gestures can be useful in accomplishing natural human-robot interaction (HRl). Previous HRI researches were focused on issues such as hand gesture, sign language, and command gesture recognition. However, automatic recognition of whole body gestures is required in order to operate HRI naturally. This can be a challenging problem because describing and modeling meaningful gesture patterns from whole body gestures are complex tasks. This paper presents a new method for spotting and recognizing whole body key gestures at the same time on a mobile robot. Our method is simultaneously used with other HRI approaches such as speech recognition, face recognition, and so forth. In this regard, both of execution speed and recognition performance should be considered. For efficient and natural operation, we used several approaches at each step of gesture recognition; learning and extraction of articulated joint information, representing gesture as a sequence of clusters, spotting and recognizing a gesture with HMM. In addition, we constructed a large gesture database, with which we verified our method. As a result, our method is successfully included and operated in a mobile robot","PeriodicalId":109260,"journal":{"name":"7th International Conference on Automatic Face and Gesture Recognition (FGR06)","volume":"85 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126040802","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper, we present an improved active shape model (ASM) for facial feature extraction. The original ASM method developed by Cootes et al. relies heavily on the initialization and on the representation of the local structure of the facial features in the image. We use color information to improve the ASM approach for facial feature extraction. The color information is used to localize the centers of the mouth and the eyes to assist the initialization step. Moreover, we model the local structure of the feature points in the RGB color space. In addition, we use a 2D affine transformation to align facial features that are perturbed by head pose variations; the 2D affine transformation compensates for the effects of both head pose variations and the projection of 3D data to 2D. Experiments on a face database of 50 subjects show that our approach outperforms the standard ASM and is successful in facial feature extraction.
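The color-assisted initialization is only sketched in the abstract. One simple way to localize a mouth center from color, illustrated below under made-up thresholds, is to threshold a lip "redness" map in normalized RGB and take its centroid inside the lower half of a face crop; this is an illustration of the idea, not the authors' formulation.

    import numpy as np

    def mouth_center_from_color(rgb_face, redness_thresh=0.45):
        """Estimate the mouth center as the centroid of strongly 'red' pixels
        in the lower half of a face crop (normalized-RGB heuristic)."""
        rgb = rgb_face.astype(np.float32) + 1e-6
        r = rgb[..., 0] / rgb.sum(axis=-1)      # chromatic red component
        h = rgb_face.shape[0]
        lower = np.zeros_like(r, dtype=bool)
        lower[h // 2:, :] = True                # assume the mouth lies in the lower half
        mask = (r > redness_thresh) & lower
        if not mask.any():
            return None
        ys, xs = np.nonzero(mask)
        return float(xs.mean()), float(ys.mean())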
{"title":"Facial features extraction in color images using enhanced active shape model","authors":"M. Mahoor, M. Abdel-Mottaleb","doi":"10.1109/FGR.2006.51","DOIUrl":"https://doi.org/10.1109/FGR.2006.51","url":null,"abstract":"In this paper, we present an improved active shape model (ASM) for facial feature extraction. The original ASM method developed by Cootes et al. highly relies on the initialization and the representation of the local structure of the facial features in the image. We use color information to improve the ASM approach for facial feature extraction. The color information is used to localize the centers of the mouth and the eyes to assist the initialization step. Moreover, we model the local structure of the feature points in the RGB color space. Besides, we use 2D affine transformation to align facial features that are perturbed by head pose variations. In fact, the 2D affine transformation compensates for the effects of both head pose variations and the projection of 3D data to 2D. Experiments on a face database of 50 subjects show that our approach outperforms the standard ASM and is successful in facial feature extraction","PeriodicalId":109260,"journal":{"name":"7th International Conference on Automatic Face and Gesture Recognition (FGR06)","volume":"127 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116083109","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Recognizing human actions from image sequences is an active area of research in computer vision. In this paper, we present a novel method for human action recognition from image sequences taken at different viewing angles that uses the Cartesian components of the optical flow velocity together with human body shape feature vectors. We use principal component analysis to reduce the high-dimensional shape feature space to a low-dimensional one. We represent each action using a set of multidimensional discrete hidden Markov models and model each action for every viewing direction. We evaluated the proposed method on the KU gesture database. Experimental results on this database of different actions show that our method is robust.
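A hedged sketch of the per-frame feature construction the abstract hints at (optical-flow velocity components combined with a PCA-reduced silhouette shape descriptor) is given below; the flow parameters, silhouette resolution, and number of principal components are assumptions, the PCA is assumed to be fitted beforehand on training silhouettes, and the HMM modeling stage is omitted.

    import cv2
    import numpy as np
    from sklearn.decomposition import PCA

    def frame_features(prev_gray, gray, silhouette, shape_pca: PCA, size=(32, 32)):
        """Concatenate mean optical-flow components with a PCA-reduced shape vector."""
        flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        vx, vy = flow[..., 0].mean(), flow[..., 1].mean()   # Cartesian flow components
        shape = cv2.resize(silhouette, size).astype(np.float32).ravel() / 255.0
        shape_low = shape_pca.transform(shape[None, :])[0]   # low-dimensional shape
        return np.concatenate(([vx, vy], shape_low))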
{"title":"Human action recognition using multi-view image sequences","authors":"Mohiudding Ahmad, Seong-Whan Lee","doi":"10.1109/FGR.2006.65","DOIUrl":"https://doi.org/10.1109/FGR.2006.65","url":null,"abstract":"Recognizing human action from image sequences is an active area of research in computer vision. In this paper, we present a novel method for human action recognition from image sequences in different viewing angles that uses the Cartesian component of optical flow velocity and human body shape feature vector information. We use principal component analysis to reduce the higher dimensional shape feature space into low dimensional shape feature space. We represent each action using a set of multidimensional discrete hidden Markov model and model each action for any viewing direction. We performed experiments of the proposed method by using KU gesture database. Experimental results based on this database of different actions show that our method is robust","PeriodicalId":109260,"journal":{"name":"7th International Conference on Automatic Face and Gesture Recognition (FGR06)","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132574206","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Current 2D face recognition systems encounter difficulties in recognizing faces with large pose variations. Utilizing the pose-invariant features of 3D face data has the potential to handle multiview face matching. A feature extractor based on the directional maximum is proposed to estimate the nose tip location and the pose angle simultaneously. A nose profile model represented by subspaces is used to select the best candidates for the nose tip. Assisted by a statistical feature location model, a multimodal scheme is presented to extract eye and mouth corners. Using the automatic feature extractor, a fully automatic 3D face recognition system is developed. The system is evaluated on two databases: the MSU database (300 multiview test scans from 100 subjects) and the UND database (953 near-frontal scans from 277 subjects). The automatic system provides recognition accuracy that is comparable to the accuracy of a system with manually labeled feature points.
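The "directional maximum" idea can be illustrated in a few lines of numpy: for each candidate facing direction, the nose-tip candidate is the scan point with the largest projection onto that direction. The profile-subspace verification and the statistical feature-location model from the paper are not reproduced here, and the yaw grid is an arbitrary choice.

    import numpy as np

    def directional_maximum_candidates(points, yaw_angles_deg=range(-90, 91, 10)):
        """For each candidate yaw, return the 3D point that protrudes farthest
        along that viewing direction (a rough nose-tip candidate)."""
        candidates = []
        for yaw in yaw_angles_deg:
            a = np.deg2rad(yaw)
            # Unit vector of the hypothesized facing direction in the x-z plane.
            direction = np.array([np.sin(a), 0.0, np.cos(a)])
            idx = int(np.argmax(points @ direction))
            candidates.append((yaw, points[idx]))
        return candidates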
{"title":"Automatic feature extraction for multiview 3D face recognition","authors":"Xiaoguang Lu, Anil K. Jain","doi":"10.1109/FGR.2006.23","DOIUrl":"https://doi.org/10.1109/FGR.2006.23","url":null,"abstract":"Current 2D face recognition systems encounter difficulties in recognizing faces with large pose variations. Utilizing the pose-invariant features of 3D face data has the potential to handle multiview face matching. A feature extractor based on the directional maximum is proposed to estimate the nose tip location and the pose angle simultaneously. A nose profile model represented by subspaces is used to select the best candidates for the nose tip. Assisted by a statistical feature location model, a multimodal scheme is presented to extract eye and mouth corners. Using the automatic feature extractor, a fully automatic 3D face recognition system is developed. The system is evaluated on two databases, the MSU database (300 multiview test scans from 100 subjects) and the UND database (953 near frontal scans from 277 subjects). The automatic system provides recognition accuracy that is comparable to the accuracy of a system with manually labeled feature points","PeriodicalId":109260,"journal":{"name":"7th International Conference on Automatic Face and Gesture Recognition (FGR06)","volume":"253 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132759517","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper, a layered deformable model (LDM) is proposed for human body pose recovery in gait analysis. The model is inspired by the manually labeled silhouettes in (Z. Liu, et al., July 2004) and is designed to match them closely. For fronto-parallel gait, the LDM defines the body part widths and lengths, the position, and the joint angles of the human body using 22 parameters. The model consists of four layers and allows for limb deformation. With this model, our objective is to recover its parameters (and thus the human body pose) from automatically extracted silhouettes. The LDM recovery algorithm is first developed for manual silhouettes, in order to generate ground-truth sequences for comparison and useful statistics regarding the LDM parameters. It is then extended to automatically extracted silhouettes. The proposed methodologies have been tested on 10,005 frames from 285 gait sequences captured under various conditions, and an average error rate of 7% is achieved for the lower-limb joint angles across all frames, showing great potential for model-based gait recognition.
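The abstract describes recovering the 22 LDM parameters from a binary silhouette. A common way to frame such a recovery, shown below purely as a sketch, is to minimize the pixel-wise mismatch between the observed silhouette and one rendered from a candidate parameter vector; the render_ldm_silhouette function is a hypothetical stand-in for the paper's four-layer model, and the optimizer choice is an assumption.

    import numpy as np
    from scipy.optimize import minimize

    def fit_ldm(observed_silhouette, render_ldm_silhouette, initial_params):
        """Recover model parameters by minimizing silhouette mismatch.

        render_ldm_silhouette(params, shape) -> binary array is a hypothetical
        renderer for the layered deformable model (not given in the abstract).
        """
        target = observed_silhouette.astype(bool)

        def mismatch(params):
            rendered = render_ldm_silhouette(params, target.shape).astype(bool)
            return np.logical_xor(rendered, target).mean()   # fraction of disagreeing pixels

        result = minimize(mismatch, np.asarray(initial_params, dtype=float),
                          method="Nelder-Mead")              # derivative-free search
        return result.x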
{"title":"A layered deformable model for gait analysis","authors":"Haiping Lu, K. Plataniotis, A. Venetsanopoulos","doi":"10.1109/FGR.2006.11","DOIUrl":"https://doi.org/10.1109/FGR.2006.11","url":null,"abstract":"In this paper, a layered deformable model (LDM) is proposed for human body pose recovery in gait analysis. This model is inspired by the manually labeled silhouettes in (Z. Liu, et al., July 2004) and it is designed to closely match them. For fronto-parallel gait, the introduced LDM model defines the body part widths and lengths, the position and the joint angles of human body using 22 parameters. The model consists of four layers and allows for limb deformation. With this model, our objective is to recover its parameters (and thus the human body pose) from automatically extracted silhouettes. LDM recovery algorithm is first developed for manual silhouettes, in order to generate ground truth sequences for comparison and useful statistics regarding the LDM parameters. It is then extended for automatically extracted silhouettes. The proposed methodologies have been tested on 10005 frames from 285 gait sequences captured under various conditions and an average error rate of 7% is achieved for the lower limb joint angles of all the frames, showing great potential for model-based gait recognition","PeriodicalId":109260,"journal":{"name":"7th International Conference on Automatic Face and Gesture Recognition (FGR06)","volume":"335 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133215871","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}