A Predictive Model for Gait Recognition
S. Enokida, R. Shimomoto, T. Wada, T. Ejima
Pub Date: 2006-09-01 | DOI: 10.1109/BCC.2006.4341630
2006 Biometrics Symposium: Special Session on Research at the Biometric Consortium Conference
Gait recognition has attracted attention as a non-contact and unobtrusive biometric method. The magnitude and phase spectra of the horizontal and vertical movement of the ankles during a normal walk are effective and efficient signatures for gait recognition. However, the recognition rate degrades significantly because of variance caused by covariates such as clothing, surface, or time lapse. In this paper, a predictive model is proposed to improve the gait recognition rate across a variety of footwear. The predictive model estimates slipper gait from shoe gait. Using the predicted slipper gait, a much higher recognition rate is achieved for slipper gait over time lapse than without the predictive model. The predictive model designed in this paper succeeds in separating the variance due to the footwear covariate from the variance due to the time covariate.
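The ankle-spectrum signature described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, coefficient count, and synthetic trajectory are our own assumptions.

```python
import numpy as np

def gait_signature(ankle_xy, n_coeffs=8):
    """Concatenated magnitude and phase spectra of the horizontal and
    vertical ankle movement (ankle_xy has shape (T, 2)), truncated to
    the n_coeffs lowest non-DC frequencies per axis."""
    parts = []
    for axis in range(2):  # 0: horizontal, 1: vertical
        centered = ankle_xy[:, axis] - ankle_xy[:, axis].mean()
        spectrum = np.fft.rfft(centered)[1:n_coeffs + 1]  # skip DC term
        parts.append(np.abs(spectrum))    # magnitude spectrum
        parts.append(np.angle(spectrum))  # phase spectrum
    return np.concatenate(parts)

# Synthetic periodic trajectory standing in for tracked ankle positions
t = np.linspace(0, 4 * np.pi, 256)
trajectory = np.stack([np.cos(t), 0.3 * np.sin(2 * t)], axis=1)
signature = gait_signature(trajectory)
print(signature.shape)  # (32,)
```

Two signatures would then be compared with an ordinary distance measure; the paper's predictive footwear model would map a shoe-gait signature to an estimated slipper-gait signature before matching.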
Individual Tensorface Subspaces for Efficient and Robust Face Recognition that do not Require Factorization
Sung W. Park, M. Savvides
Pub Date: 2006-09-01 | DOI: 10.1109/BCC.2006.4341637
Facial images change in appearance due to multiple factors such as pose, lighting variation, and facial expression. The tensor approach, an extension of the conventional 2D matrix, is well suited to analyzing facial factors because tensors make it possible to construct multilinear models over multiple factor structures. However, tensor algebra presents some difficulties in practice. First, it is difficult to decompose the multiple factors (e.g. pose, illumination, expression) of a test image, especially when the factor parameters are unknown or are not in the training set. Second, for face recognition, as the number of factors grows it becomes more difficult to construct reliable multilinear models, and building a global model requires more memory and computation. In this paper, we propose novel Individual TensorFaces, which do not require tensor factorization, a step that was necessary in previous TensorFaces research for face recognition. Another advantage of this individual-subspace approach is that it makes face recognition tasks computationally and analytically simpler. Through various experiments, we demonstrate that the proposed Individual TensorFaces provide better discriminative power for classification.
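The abstract does not give the exact construction of Individual TensorFaces, but the general individual-subspace idea can be sketched as follows, under the assumption that each person's images (across pose, illumination, and expression) span a small linear subspace and that recognition and rejection are driven by reconstruction residual. All names here are ours.

```python
import numpy as np

def person_subspace(images, k=5):
    """Build an individual basis from one person's images: stack the
    flattened images as columns, center them, and keep the top-k left
    singular vectors."""
    X = np.stack([im.ravel().astype(float) for im in images], axis=1)
    mean = X.mean(axis=1)
    U, _, _ = np.linalg.svd(X - mean[:, None], full_matrices=False)
    return mean, U[:, :k]

def residual(image, mean, basis):
    """Reconstruction residual of a probe image against one individual
    subspace. The smallest residual over enrolled subjects gives the
    identity; a large minimum residual rejects an unknown face."""
    x = image.ravel().astype(float) - mean
    return float(np.linalg.norm(x - basis @ (basis.T @ x)))
```

One small model per person, rather than one global multilinear model over all factors, is what keeps the memory and computation cost low in this style of approach.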
Robust Fake Iris Detection Based on Variation of the Reflectance Ratio Between the IRIS and the Sclera
Sung Joo Lee, K. Park, Jaihie Kim
Pub Date: 2006-09-01 | DOI: 10.1109/BCC.2006.4341624
In this paper, we propose a new fake iris detection method based on changes in the reflectance ratio between the iris and the sclera. The proposed method has three advantages over previous work. First, it detects fake iris images with high accuracy. Second, it does not inconvenience users, since it detects fake iris images very quickly. Third, it provides a theoretical grounding for using the variation of the reflectance ratio between the iris and the sclera. To compare fake iris images with live ones, three types of fake iris were produced: a printed iris, an artificial eye, and a fake contact lens. In our experiments, we show that the proposed method achieves high performance in distinguishing between live and fake irises.
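The core measurement reduces to a ratio of mean intensities over segmented regions. The sketch below assumes segmentation masks are available and uses an illustrative two-condition decision rule with a made-up threshold; the abstract does not specify the paper's actual decision procedure.

```python
import numpy as np

def reflectance_ratio(gray, iris_mask, sclera_mask):
    """Mean-intensity ratio between iris and sclera pixels.
    gray: 2D grayscale eye image; the masks are boolean arrays of the
    same shape (iris/sclera segmentation is assumed done upstream)."""
    return float(gray[iris_mask].mean() / gray[sclera_mask].mean())

def looks_fake(ratio_a, ratio_b, min_change=0.1):
    """Flag a presentation as fake when the reflectance ratio varies
    too little between two capture conditions (threshold illustrative):
    printed irises and artificial eyes lack the live eye's variation."""
    return abs(ratio_a - ratio_b) < min_change
```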
Mouse Curve Biometrics
Douglas A. Schulz
Pub Date: 2006-09-01 | DOI: 10.1109/BCC.2006.4341626
A biometric system suitable for validating user identity using only mouse movements and no specialized equipment is presented. Mouse curves (mouse movements with little or no pause between them) are individually classified and used to develop classification histograms, which are representative of an individual's typical mouse use. These classification histograms can then be compared to validate identity. This classification approach is suitable for providing continuous identity validation during an entire user session.
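The classify-then-histogram pipeline can be sketched as below. The per-curve classifier (coarse direction of net displacement) and the L1 histogram comparison are simple stand-ins of our own; the abstract does not name the paper's actual classifier or distance.

```python
import numpy as np

def curve_class(curve, n_classes=8):
    """Classify one mouse curve (an (N, 2) array of cursor points) into
    a coarse direction class by its net displacement."""
    dx, dy = curve[-1] - curve[0]
    angle = np.arctan2(dy, dx)  # in (-pi, pi]
    return int((angle + np.pi) / (2 * np.pi) * n_classes) % n_classes

def session_histogram(curves, n_classes=8):
    """Normalized histogram of curve classes over a user session."""
    counts = np.bincount([curve_class(c, n_classes) for c in curves],
                         minlength=n_classes)
    return counts / counts.sum()

def same_user(hist_a, hist_b, max_l1=0.5):
    """Accept identity when session histograms are close in L1 distance
    (the threshold would be tuned on enrollment data)."""
    return float(np.abs(hist_a - hist_b).sum()) < max_l1
```

Because the histogram accumulates as the session runs, comparing a rolling histogram against the enrolled one supports the continuous validation mentioned above.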
Robust Feature-Level Multibiometric Classification
A. Rattani, D. Kisku, M. Bicego, M. Tistarelli
Pub Date: 2006-09-01 | DOI: 10.1109/BCC.2006.4341631
This paper proposes a robust feature-level fusion classifier for face and fingerprint biometrics. The proposed system fuses the two traits at the feature-extraction level by first making the feature sets compatible for concatenation and then reducing their dimensionality to address the curse of dimensionality; finally, the concatenated feature vectors are matched. The system is tested on a database of 50 chimeric users with five samples per trait per person. The results are compared with the monomodal ones and with fusion at the matching-score level using the popular sum rule. The system achieves an accuracy of 97.41%, with a FAR of 1.98% and an FRR of 3.18%, outperforming the single modalities and score-level fusion.
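The normalize-concatenate-reduce pipeline can be sketched as follows. We use z-score normalization and a seeded random projection as stand-ins for the compatibility and reduction steps, which the abstract does not name; all function names are ours.

```python
import numpy as np

def fuse_features(face_vec, finger_vec, n_components=20, seed=0):
    """Feature-level fusion sketch: normalize each modality's feature
    vector so the two are compatible, concatenate them, then reduce
    the dimensionality with a seeded random projection."""
    z = lambda v: (v - v.mean()) / (v.std() + 1e-12)
    fused = np.concatenate([z(np.asarray(face_vec, float)),
                            z(np.asarray(finger_vec, float))])
    rng = np.random.default_rng(seed)
    proj = rng.standard_normal((n_components, fused.size)) / np.sqrt(fused.size)
    return proj @ fused  # reduced vector, matched by ordinary distance

fused = fuse_features(np.arange(128.0), np.arange(64.0))
print(fused.shape)  # (20,)
```

Matching then compares two fused vectors directly, in contrast to score-level fusion, which matches each modality separately and combines the scores (e.g. with the sum rule).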
Toward A Human-Like Similarity Measure for Face Recognition
S. Krawczyk, E. Lawson, R. Stanchak, B. Kamgar-Parsi
Pub Date: 2006-09-01 | DOI: 10.1109/BCC.2006.4341614
We propose an approach for capturing a human similarity measure (within an artificial neural network, SVM, or other classifier) for face recognition. That is, the following important and long-desired goal appears achievable: "the similarity measure used in a face recognition system should be designed so that the machine imitates humans' ability to perform face recognition and recall as closely as possible." For each person of interest, a dedicated classifier is developed, within which we effectively capture a human classification functionality. This is done by automatically generating and labeling two arbitrarily large sets of morphed images (typically tens of thousands). One set is composed of images with reduced resemblance to the imaged person, yet still recognizable by humans as that person (positive exemplars); the second set consists of look-alikes, i.e. "others" who look almost like the imaged person (negative exemplars). Humans, unlike most face recognition systems, do not rank images as a precursor to recognition. Like humans, our system does not rank images: it rejects images of previously unseen faces (or faces that are not of interest) by simply examining them, and it recognizes the faces it has been trained to identify. We demonstrate this capability in our experiments, where a large set of impostor images not provided during training is consistently rejected by the system.
A Robust IRIS Segmentation Procedure for Unconstrained Subject Presentation
Jinyu Zuo, N. Kalka, N. Schmid
Pub Date: 2006-09-01 | DOI: 10.1109/BCC.2006.4341623
The iris is among the most reliable biometrics with respect to performance. However, this reliability depends on the ideality of the data, so a robust segmentation algorithm is required to handle non-ideal data. In this paper, a segmentation methodology is proposed that utilizes shape, intensity, and location information intrinsic to the pupil and iris. The virtue of this methodology lies in its capability to reliably segment non-ideal imagery simultaneously affected by factors such as specular reflection, blur, lighting variation, and off-angle presentation. We demonstrate the robustness of our segmentation methodology by evaluating ideal and non-ideal datasets, namely CASIA, Iris Challenge Evaluation (ICE), WVU, and WVU Off-angle. Furthermore, we compare our performance with the algorithms of Camus and Wildes and of Libor Masek, demonstrating increases in segmentation performance of 7.02%, 8.16%, 20.84%, and 26.61% over those algorithms when evaluating these datasets, respectively.
Changeable Biometrics for Appearance Based Face Recognition
MinYi Jeong, Chulhan Lee, Jongsun Kim, Jeung-Yoon Choi, K. Toh, Jaihie Kim
Pub Date: 2006-09-01 | DOI: 10.1109/BCC.2006.4341629
To enhance security and privacy in biometrics, changeable (or cancelable) biometrics have recently been introduced. The idea is to transform a biometric signal or feature into a new one for enrollment and matching. In this paper, we propose changeable biometrics for face recognition using an appearance-based approach. PCA and ICA coefficient vectors extracted from an input face image are normalized by their norms. The two normalized vectors are scrambled randomly, and a new transformed face coefficient vector (the transformed template) is generated by adding them. When a transformed template is compromised, it is replaced using a new scrambling rule. Because the transformed template is generated by the addition of two vectors, the original PCA and ICA coefficients cannot be recovered from the transformed coefficients. In our experiments, we compare verification performance when the raw PCA and ICA coefficient vectors are used against performance when the transformed coefficient vectors are used.
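The scramble-and-add construction reads almost directly as code. In this sketch we assume equal-length coefficient vectors and use a seed-dependent permutation as the scrambling rule; the paper's exact scrambling mechanism is not given in the abstract.

```python
import numpy as np

def changeable_template(pca_vec, ica_vec, seed):
    """Norm-normalize the PCA and ICA coefficient vectors, scramble
    each with a seed-dependent permutation, and add them. The addition
    hides the original coefficients; a compromised template is revoked
    by reissuing with a new seed (a new scrambling rule)."""
    p = np.asarray(pca_vec, float)
    q = np.asarray(ica_vec, float)
    p, q = p / np.linalg.norm(p), q / np.linalg.norm(q)
    rng = np.random.default_rng(seed)
    return p[rng.permutation(p.size)] + q[rng.permutation(q.size)]

# Revocation: same biometric data, different seed, different template
pca, ica = np.random.rand(50), np.random.rand(50)
t_old = changeable_template(pca, ica, seed=1)
t_new = changeable_template(pca, ica, seed=2)
```

Enrollment and matching both apply the same seeded transform, so matching happens entirely in the transformed domain.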
High Magnification and Long Distance Face Recognition: Database Acquisition, Evaluation, and Enhancement
Yi Yao, B. Abidi, N. Kalka, N. Schmid, M. Abidi
Pub Date: 2006-09-01 | DOI: 10.1109/BCC.2006.4341635
In this paper, we describe IRIS-LDHM, a face video database acquired at long distances and high magnifications. Both indoor and outdoor sequences were collected under uncontrolled surveillance conditions. The significance of this database lies in the fact that it is the first to provide face images from long distances (indoor: 10 m to 20 m; outdoor: 50 m to 300 m). The corresponding system magnification rises from less than 3x to 20x indoors and up to 375x outdoors. The database has applications in experiments on human identification and authentication in long-range surveillance and wide-area monitoring, and it will be made public to the research community for long-range face-related research. Deteriorations unique to high-magnification, long-range face images are investigated in terms of face recognition rates. Magnification blur proves to be an additional major source of degradation, which can be alleviated via blur assessment and deblurring algorithms. Experimental results show a relative improvement of up to 25% in recognition rates after assessment and enhancement of the degradations.
A Multimodal Approach for 3D Face Modeling and Recognition Using Deformable Mesh Model
A. Ansari, M. Abdel-Mottaleb, M. Mahoor
Pub Date: 2006-09-01 | DOI: 10.1109/BCC.2006.4341633
We present a multimodal approach for 3D face modeling and recognition from two frontal and one profile-view stereo images of the face. Once the images are captured, the algorithm extracts selected 2D facial features from one of the frontal views and computes a dense disparity map from the two frontal images. We then align a low-resolution mesh model to the selected features, adjust its vertices at those features and along the profile line using the profile view, increase the mesh to a higher resolution, and re-project the vertices back onto the frontal image. Using the coordinates of the re-projected vertices and their corresponding disparities, we compute the 3D facial shape variations via triangulation. The final result is a deformed 3D model specific to a given subject's face. Applying the model to 3D face recognition validates the algorithm with a high recognition rate.