Face and eye detection on hard datasets
Pub Date: 2011-10-11 | DOI: 10.1109/IJCB.2011.6117593
Jonathan Parris, Michael J. Wilber, B. Heflin, H. Rara, Ahmed El-Barkouky, A. Farag, J. Movellan, anonymous, M. C. Santana, J. Lorenzo-Navarro, Mohammad Nayeem Teli, S. Marcel, Cosmin Atanasoaei, T. Boult
Face and eye detection algorithms are deployed in a wide variety of applications. Unfortunately, there has been no quantitative comparison of how these detectors perform under difficult circumstances. We created a dataset of low-light and long-distance images that exhibits some of the problems face and eye detectors encounter in real-world applications. The dataset is composed of re-imaged images (photohead) and semi-synthetic heads imaged under varying conditions of low light, atmospheric blur, and distances of 3m, 50m, 80m, and 200m. This paper analyzes the detection and localization performance of the participating face and eye algorithms against the Viola-Jones detector and four leading commercial face detectors. Performance is characterized under the different conditions and parameterized by per-image brightness and contrast. In eye-localization accuracy, the groups and companies focusing on long-range face detection outperform the leading commercial detectors.
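The abstract does not define its brightness and contrast measures; the sketch below shows one plausible pair of per-image statistics (mean intensity and RMS contrast) by which detection results could be parameterized. Both definitions are assumptions for illustration, not taken from the paper.

```python
# Sketch: per-image brightness and contrast metrics for grouping detector
# results. Assumes 8-bit grayscale images; mean intensity and RMS contrast
# are assumed definitions, not necessarily the authors' exact ones.
import numpy as np

def brightness(img: np.ndarray) -> float:
    """Mean pixel intensity of a grayscale image in [0, 255]."""
    return float(img.mean())

def rms_contrast(img: np.ndarray) -> float:
    """Root-mean-square contrast: std of intensities normalized to [0, 1]."""
    norm = img.astype(np.float64) / 255.0
    return float(norm.std())

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.integers(0, 40, size=(480, 640), dtype=np.uint8)  # dark frame
    print(f"brightness={brightness(img):.1f}, contrast={rms_contrast(img):.3f}")
```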
{"title":"Face and eye detection on hard datasets","authors":"Jonathan Parris, Michael J. Wilber, B. Heflin, H. Rara, Ahmed El-Barkouky, A. Farag, J. Movellan, anonymous, M. C. Santana, J. Lorenzo-Navarro, Mohammad Nayeem Teli, S. Marcel, Cosmin Atanasoaei, T. Boult","doi":"10.1109/IJCB.2011.6117593","DOIUrl":"https://doi.org/10.1109/IJCB.2011.6117593","url":null,"abstract":"Face and eye detection algorithms are deployed in a wide variety of applications. Unfortunately, there has been no quantitative comparison of how these detectors perform under difficult circumstances. We created a dataset of low light and long distance images which possess some of the problems encountered by face and eye detectors solving real world problems. The dataset we created is composed of reimaged images (photohead) and semi-synthetic heads imaged under varying conditions of low light, atmospheric blur, and distances of 3m, 50m, 80m, and 200m. This paper analyzes the detection and localization performance of the participating face and eye algorithms compared with the Viola Jones detector and four leading commercial face detectors. Performance is characterized under the different conditions and parameterized by per-image brightness and contrast. In localization accuracy for eyes, the groups/companies focusing on long-range face detection outperform leading commercial applications.","PeriodicalId":103913,"journal":{"name":"2011 International Joint Conference on Biometrics (IJCB)","volume":"65 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115012020","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Long range iris acquisition system for stationary and mobile subjects
Pub Date: 2011-10-11 | DOI: 10.1109/IJCB.2011.6117484
Shreyas Venugopalan, U. Prasad, Khalid Harun, Kyle Neblett, D. Toomey, Joseph Heyman, M. Savvides
Most iris-based biometric systems require substantial cooperation from users so that iris images of acceptable quality can be acquired; features from these images are then used for recognition. Relatively few works in the literature address less cooperative iris acquisition systems that reduce the constraints placed on users. In this paper, we describe our ongoing work in designing and developing such a system. It is capable of capturing iris images at distances of up to 8 meters with a resolution of 200 pixels across the iris diameter. If the resolution requirement is relaxed to 150 pixels, the same system can capture images from up to 12 meters. We have incorporated velocity estimation and focus tracking modules so that images can also be acquired from subjects on the move. We describe the components that make up the system, including the lenses, the imaging sensor, our auto-focus function, and the velocity estimation module. All hardware components are commercial off-the-shelf (COTS) with little or no modification. We also present preliminary iris acquisition results using our system for both stationary and mobile subjects.
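The quoted figures (200 pixels across the iris at 8 m, 150 pixels at 12 m) follow from simple pinhole projection. A back-of-envelope sketch, assuming an iris diameter of about 12 mm and an example 5.5 µm pixel pitch; neither value is stated in the abstract:

```python
# Pinhole-projection estimate: pixels spanning the iris at a given standoff.
# The ~12 mm iris diameter and 5.5 um pixel pitch are illustrative
# assumptions; the paper does not state its sensor parameters.

def pixels_across_iris(focal_length_mm: float, distance_m: float,
                       iris_mm: float = 12.0, pitch_um: float = 5.5) -> float:
    image_size_mm = focal_length_mm * iris_mm / (distance_m * 1000.0)
    return image_size_mm * 1000.0 / pitch_um

# Scan a few focal lengths against the abstract's 8 m / 12 m targets:
for f in (600, 700, 800):
    print(f"f={f} mm -> {pixels_across_iris(f, 8.0):.0f} px at 8 m, "
          f"{pixels_across_iris(f, 12.0):.0f} px at 12 m")
```

Under these assumptions a focal length somewhere in the 700-800 mm range would meet both targets, which is consistent with a long telephoto COTS lens.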
{"title":"Long range iris acquisition system for stationary and mobile subjects","authors":"Shreyas Venugopalan, U. Prasad, Khalid Harun, Kyle Neblett, D. Toomey, Joseph Heyman, M. Savvides","doi":"10.1109/IJCB.2011.6117484","DOIUrl":"https://doi.org/10.1109/IJCB.2011.6117484","url":null,"abstract":"Most iris based biometric systems require a lot of cooperation from the users so that iris images of acceptable quality may be acquired. Features from these may then be used for recognition purposes. Relatively fewer works in literature address the question of less cooperative iris acquisition systems in order to reduce constraints on users. In this paper, we describe our ongoing work in designing and developing such a system. It is capable of capturing images of the iris up to distances of 8 meters with a resolution of 200 pixels across the diameter. If the resolution requirement is decreased to 150 pixels, then the same system may be used to capture images from up to 12 meters. We have incorporated velocity estimation and focus tracking modules so that images may be acquired from subjects on the move as well. We describe the various components that make up the system, including the lenses used, the imaging sensor, our auto-focus function and velocity estimation module. All the hardware components are Commercial Off The Shelf (COTS) with little or no modifications. We also present preliminary iris acquisition results using our system for both stationary and mobile subjects.","PeriodicalId":103913,"journal":{"name":"2011 International Joint Conference on Biometrics (IJCB)","volume":"13 6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115386719","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
3D face sketch modeling and assessment for component based face recognition
Pub Date: 2011-10-11 | DOI: 10.1109/IJCB.2011.6117501
Shaun J. Canavan, Xing Zhang, L. Yin, Yong Zhang
3D facial representations have been widely used for face recognition. There has been intensive research on geometric matching and similarity measurement for 3D range data and 3D geometric meshes of individual faces, but little investigation of geometric measurement for 3D sketch models. In this paper, we study 3D face recognition from 3D face sketches derived from both hand-drawn and machine-generated sketches. First, we developed a 3D sketch modeling approach that creates 3D facial sketch models from 2D facial sketch images. Second, we compared the 3D sketches to existing 3D scans. Third, 3D face similarity is measured between 3D sketches and 3D scans, and between pairs of 3D sketches, using spatial Hidden Markov Model (HMM) classification. Experiments conducted on both the BU-4DFE database and the YSU face sketch database yield an average recognition rate of around 92%.
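The abstract does not detail the spatial HMM, but the general pattern is to fit an HMM to one face's ordered surface-feature sequence and score another's by log-likelihood. A generic sketch using the third-party hmmlearn package; the feature sequences and state count are assumptions for illustration, not the authors' model.

```python
# Generic HMM similarity sketch (not the authors' exact spatial HMM):
# fit a Gaussian HMM on an ordered sequence of surface-feature vectors from
# one face model, then score another model's sequence by log-likelihood.
# Requires the third-party hmmlearn package (pip install hmmlearn).
import numpy as np
from hmmlearn import hmm

def hmm_similarity(seq_a: np.ndarray, seq_b: np.ndarray,
                   n_states: int = 5) -> float:
    """Log-likelihood of seq_b under an HMM fit to seq_a.

    Sequences are (n_points, n_features) arrays, e.g. depth/curvature
    features sampled in a fixed spatial order over the face.
    """
    model = hmm.GaussianHMM(n_components=n_states, covariance_type="diag",
                            n_iter=50, random_state=0)
    model.fit(seq_a)
    return model.score(seq_b)

rng = np.random.default_rng(1)
sketch = rng.normal(size=(200, 3))                     # stand-in 3D sketch features
scan = sketch + rng.normal(scale=0.1, size=sketch.shape)  # similar "scan"
print(f"log-likelihood: {hmm_similarity(sketch, scan):.1f}")
```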
{"title":"3D face sketch modeling and assessment for component based face recognition","authors":"Shaun J. Canavan, Xing Zhang, L. Yin, Yong Zhang","doi":"10.1109/IJCB.2011.6117501","DOIUrl":"https://doi.org/10.1109/IJCB.2011.6117501","url":null,"abstract":"3D facial representations have been widely used for face recognition. There has been intensive research on geometric matching and similarity measurement on 3D range data and 3D geometric meshes of individual faces. However, little investigation has been done on geometric measurement for 3D sketch models. In this paper, we study the 3D face recognition from 3D face sketches which are derived from hand-drawn sketches and machine generated sketches. First, we have developed a 3D sketch modeling approach to create 3D facial sketch models from 2D facial sketch images. Second, we compared the 3D sketches to the existing 3D scans. Third, the 3D face similarity is measured between 3D sketches versus 3D scans, and 3D sketches versus 3D sketches based on the spatial Hidden Markov Model (HMM) classification. Experiments are conducted on both the BU-4DFE database and YSU face sketch database, resulting in a recognition rate at around 92% on average.","PeriodicalId":103913,"journal":{"name":"2011 International Joint Conference on Biometrics (IJCB)","volume":"26 3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115447016","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Robust head pose estimation via semi-supervised manifold learning with ℓ1-graph regularization
Pub Date: 2011-10-11 | DOI: 10.1109/IJCB.2011.6117529
Hao Ji, Fei Su, Yujia Zhu
In this paper, a new ℓ1-graph regularized semi-supervised manifold learning (LRSML) method is proposed for robust human head pose estimation. The manifold is constructed under the Biased Manifold Embedding (BME) framework, which computes a biased neighborhood of each point in the feature space with ℓ1-graph regularization. The ℓ1-graph is constructed without any label information and uncovers the underlying ℓ1-norm-driven sparse reconstruction relationship of each sample. LRSML is more robust to noise and has the potential to convey more discriminative information than conventional manifold learning methods. Furthermore, utilizing both labeled and unlabeled information improves pose estimation accuracy and generalization capability. Numerous experiments show the superiority of our method over several current state-of-the-art methods on a publicly available dataset.
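As a rough illustration of the unsupervised ℓ1-graph step, each sample can be sparsely reconstructed from the remaining samples, with the coefficient magnitudes taken as edge weights. The sketch below uses scikit-learn's Lasso as the ℓ1 solver; the alpha value is an assumed hyperparameter, not one from the paper.

```python
# Sketch of l1-graph construction: each sample is sparsely reconstructed from
# all the others via l1-regularized regression, and coefficient magnitudes
# become (asymmetric) edge weights. A simplified reading of the paper's
# unsupervised graph-building step.
import numpy as np
from sklearn.linear_model import Lasso

def l1_graph(X: np.ndarray, alpha: float = 0.01) -> np.ndarray:
    """X: (n_samples, n_features). Returns an (n, n) weight matrix W."""
    n = X.shape[0]
    W = np.zeros((n, n))
    for i in range(n):
        others = np.delete(np.arange(n), i)
        # Dictionary columns are the other samples:
        # solve min ||x_i - D c||^2 + alpha * ||c||_1
        model = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000)
        model.fit(X[others].T, X[i])
        W[i, others] = np.abs(model.coef_)
    return W

rng = np.random.default_rng(2)
X = rng.normal(size=(30, 10))
print(l1_graph(X).round(3)[:3, :6])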
{"title":"Robust head pose estimation via semi-supervised manifold learning with ℓ1-graph regularization","authors":"Hao Ji, Fei Su, Yujia Zhu","doi":"10.1109/IJCB.2011.6117529","DOIUrl":"https://doi.org/10.1109/IJCB.2011.6117529","url":null,"abstract":"In this paper, a new ℓ1-graph regularized semi-supervised manifold learning (LRSML) method is proposed for robust human head pose estimation problem. The manifold is constructed under Biased Manifold Embedding (BME) framework which computes a biased neighborhood of each point in the feature space with ℓ1-graph regularization. The construction process of ℓ1-graph is assumed to be unsupervised without harnessing any data label information and uncovers the underlying ℓ1-norm driven sparse reconstruction relationship of each sample. The LRSML is more robust to noises and has the potential to convey more discriminative information compared to conventional manifold learning methods. Furthermore, utilizing both labeled and unlabeled information improve the pose estimation accuracy and generalization capability. Numerous experiments show the superiority of our method over several current state of the art methods on publicly available dataset.","PeriodicalId":103913,"journal":{"name":"2011 International Joint Conference on Biometrics (IJCB)","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125224586","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Fusing with context: A Bayesian approach to combining descriptive attributes
Pub Date: 2011-10-11 | DOI: 10.1109/IJCB.2011.6117490
W. Scheirer, Neeraj Kumar, K. Ricanek, P. Belhumeur, T. Boult
For identity-related problems, descriptive attributes can take the form of any information that helps represent an individual, including age data, describable visual attributes, and contextual data. With a rich set of descriptive attributes, it is possible to enhance the base matching accuracy of a traditional face identification system through intelligent score weighting. If we can factor attribute differences between people into our match score calculation, we can de-emphasize incorrect results and, ideally, lift the correct matching record to a higher rank. Naturally, the presence of all descriptive attributes at match time cannot be expected, especially when considering non-biometric context. Thus, in this paper, we examine the application of Bayesian Attribute Networks to combine descriptive attributes and produce accurate weighting factors to apply to match scores from face recognition systems, based on incomplete observations made at match time. We also examine the pragmatic concerns of attribute network creation and introduce a Noisy-OR formulation for streamlined truth value assignment and more accurate weighting. Experimental results show that incorporating descriptive attributes into the matching process significantly enhances face identification over the baseline, by up to 32.8%.
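The Noisy-OR model itself is standard: an effect fails to fire only if every active cause independently fails to produce it. A minimal sketch, with an assumed leak term since the paper's network parameters are not given here:

```python
# Minimal Noisy-OR sketch: probability an effect fires given its active
# parent causes, each contributing independently. This is the standard
# Noisy-OR CPT factorization; the leak value below is an assumption.
from typing import Sequence

def noisy_or(active_probs: Sequence[float], leak: float = 0.0) -> float:
    """P(effect=1) = 1 - (1 - leak) * prod(1 - p_i) over active parents."""
    prob_none = 1.0 - leak
    for p in active_probs:
        prob_none *= 1.0 - p
    return 1.0 - prob_none

# Two observed attribute mismatches that cause "reject" with prob 0.6 / 0.3:
print(f"{noisy_or([0.6, 0.3], leak=0.05):.3f}")  # -> 0.734
```

The appeal for truth value assignment is that each parent needs only one probability instead of a full joint conditional table.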
{"title":"Fusing with context: A Bayesian approach to combining descriptive attributes","authors":"W. Scheirer, Neeraj Kumar, K. Ricanek, P. Belhumeur, T. Boult","doi":"10.1109/IJCB.2011.6117490","DOIUrl":"https://doi.org/10.1109/IJCB.2011.6117490","url":null,"abstract":"For identity related problems, descriptive attributes can take the form of any information that helps represent an individual, including age data, describable visual attributes, and contextual data. With a rich set of descriptive attributes, it is possible to enhance the base matching accuracy of a traditional face identification system through intelligent score weighting. If we can factor any attribute differences between people into our match score calculation, we can deemphasize incorrect results, and ideally lift the correct matching record to a higher rank position. Naturally, the presence of all descriptive attributes during a match instance cannot be expected, especially when considering non-biometric context. Thus, in this paper, we examine the application of Bayesian Attribute Networks to combine descriptive attributes and produce accurate weighting factors to apply to match scores from face recognition systems based on incomplete observations made at match time. We also examine the pragmatic concerns of attribute network creation, and introduce a Noisy-OR formulation for streamlined truth value assignment and more accurate weighting. Experimental results show that incorporating descriptive attributes into the matching process significantly enhances face identification over the baseline by up to 32.8%.","PeriodicalId":103913,"journal":{"name":"2011 International Joint Conference on Biometrics (IJCB)","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124990420","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Fusion of directional transitional features for off-line signature verification
Pub Date: 2011-10-11 | DOI: 10.1109/IJCB.2011.6117515
Konstantinos Tselios, E. Zois, A. Nassiopoulos, G. Economou
In this work, a feature extraction method for off-line signature recognition and verification is proposed, described, and validated. The approach exploits the relative pixel distribution over predetermined two- and three-step paths along the signature trace. The proposed procedure can be regarded as a model for estimating the transitional probabilities of the signature's strokes, arcs, and angles. Partitioning the signature image with respect to its center of gravity is applied in the two-step part of the feature extraction algorithm, while an enhanced three-step algorithm utilizes the entire signature image. Fusion at the feature level generates a multidimensional vector that encodes the spatial details of each writer. The classifier combines a first-stage similarity score with a continuous SVM output. Results based on EER estimates on domestic signature datasets and well-known international corpora demonstrate the high efficiency of the proposed methodology.
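As a simplified illustration of transitional features, one can count, for each ink pixel of a binarized signature, how often fixed two-step paths remain on the stroke. The direction set below is an assumption; the paper's actual path definitions and center-of-gravity partitioning are not reproduced.

```python
# Sketch of directional transition counting on a binarized signature image:
# for each ink pixel, follow fixed two-step paths and record how often the
# path lands back on the stroke. A simplified stand-in for the paper's
# two/three-step transitional-probability features.
import numpy as np

DIRS = [(-1, 0), (-1, 1), (0, 1), (1, 1)]  # assumed direction set

def two_step_features(ink: np.ndarray) -> np.ndarray:
    """ink: boolean (H, W) array, True on signature pixels.
    Returns the on-stroke hit rate per (first-dir, second-dir) pair."""
    H, W = ink.shape
    ys, xs = np.nonzero(ink)
    feats = np.zeros((len(DIRS), len(DIRS)))
    for d1, (dy1, dx1) in enumerate(DIRS):
        for d2, (dy2, dx2) in enumerate(DIRS):
            y2, x2 = ys + dy1 + dy2, xs + dx1 + dx2
            valid = (y2 >= 0) & (y2 < H) & (x2 >= 0) & (x2 < W)
            hits = ink[y2[valid], x2[valid]]
            feats[d1, d2] = hits.mean() if hits.size else 0.0
    return feats.ravel()

ink = np.zeros((64, 64), dtype=bool)
ink[32, 10:50] = True  # a horizontal stroke
print(two_step_features(ink).round(2))
```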
{"title":"Fusion of directional transitional features for off-line signature verification","authors":"Konstantinos Tselios, E. Zois, A. Nassiopoulos, G. Economou","doi":"10.1109/IJCB.2011.6117515","DOIUrl":"https://doi.org/10.1109/IJCB.2011.6117515","url":null,"abstract":"In this work, a feature extraction method for off-line signature recognition and verification is proposed, described and validated. This approach is based on the exploitation of the relative pixel distribution over predetermined two and three-step paths along the signature trace. The proposed procedure can be regarded as a model for estimating the transitional probabilities of the signature stroke, arcs and angles. Partitioning the signature image with respect to its center of gravity is applied to the two-step part of the feature extraction algorithm, while an enhanced three-step algorithm utilizes the entire signature image. Fusion at feature level generates a multidimensional vector which encodes the spatial details of each writer. The classifier model is composed of the combination of a first stage similarity score along with a continuous SVM output. Results based on the estimation of the EER on domestic signature datasets and well known international corpuses demonstrate the high efficiency of the proposed methodology.","PeriodicalId":103913,"journal":{"name":"2011 International Joint Conference on Biometrics (IJCB)","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126130643","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cross-spectral face recognition in heterogeneous environments: A case study on matching visible to short-wave infrared imagery
Pub Date: 2011-10-11 | DOI: 10.1109/IJCB.2011.6117586
N. Kalka, T. Bourlai, B. Cukic, L. Hornak
In this paper we study the problem of cross-spectral face recognition in heterogeneous environments. Specifically, we investigate the advantages and limitations of matching short-wave infrared (SWIR) face images to visible images under controlled and uncontrolled conditions. The contributions of this work are threefold. First, three different databases are considered, representing three data collection conditions: images acquired in fully controlled (indoors), semi-controlled (indoors at standoff distances ≥ 50m), and uncontrolled (outdoor operational) environments. Second, we demonstrate the feasibility of SWIR cross-spectral matching under both controlled and challenging scenarios. Third, we illustrate how photometric normalization and our proposed cross-photometric score-level fusion rule can be used to improve cross-spectral matching performance across all scenarios. We used both commercial and academic (texture-based) face matchers and performed a set of experiments indicating that SWIR images can be matched to visible images with encouraging results. Our experiments also indicate that the level of improvement in recognition performance is scenario dependent.
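The cross-photometric fusion rule is specific to the paper, but it builds on the common pattern of normalizing heterogeneous matcher scores before combining them. A minimal sketch of that generic pattern, with assumed min-max normalization and example weights:

```python
# Generic score-level fusion sketch: normalize each matcher's scores to a
# common range, then take a weighted sum. This is only the standard pattern
# the paper's cross-photometric rule builds on; the weights are assumptions.
import numpy as np

def min_max(scores: np.ndarray) -> np.ndarray:
    lo, hi = scores.min(), scores.max()
    return (scores - lo) / (hi - lo) if hi > lo else np.zeros_like(scores)

def fuse(score_sets: list[np.ndarray], weights: list[float]) -> np.ndarray:
    return np.sum([w * min_max(s) for s, w in zip(score_sets, weights)], axis=0)

# Scores from a commercial matcher and a texture-based matcher for 5 probes,
# on incompatible scales:
commercial = np.array([0.91, 0.42, 0.77, 0.30, 0.65])
texture = np.array([120.0, 80.0, 95.0, 60.0, 110.0])
print(fuse([commercial, texture], weights=[0.6, 0.4]).round(3))
```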
{"title":"Cross-spectral face recognition in heterogeneous environments: A case study on matching visible to short-wave infrared imagery","authors":"N. Kalka, T. Bourlai, B. Cukic, L. Hornak","doi":"10.1109/IJCB.2011.6117586","DOIUrl":"https://doi.org/10.1109/IJCB.2011.6117586","url":null,"abstract":"In this paper we study the problem of cross spectral face recognition in heterogeneous environments. Specifically we investigate the advantages and limitations of matching short wave infrared (SWIR) face images to visible images under controlled or uncontrolled conditions. The contributions of this work are three-fold. First, three different databases are considered, which represent three different data collection conditions, i.e., images acquired in fully controlled (indoors), semi-controlled (indoors at standoff distances ≥ 50m), and uncontrolled (outdoor operational conditions) environments. Second, we demonstrate the possibility of SWIR cross-spectral matching under controlled and challenging scenarios. Third, we illustrate how photometric normalization and our proposed cross-photometric score level fusion rule can be utilized to improve cross-spectral matching performance across all scenarios. We utilized both commercial and academic (texture-based) face matchers and performed a set of experiments indicating that SWIR images can be matched to visible images with encouraging results. Our experiments also indicate that the level of improvement in recognition performance is scenario dependent.","PeriodicalId":103913,"journal":{"name":"2011 International Joint Conference on Biometrics (IJCB)","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122578136","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Face synthesis from near-infrared to visual light via sparse representation
Pub Date: 2011-10-11 | DOI: 10.1109/IJCB.2011.6117534
Zeda Zhang, Yunhong Wang, Zhaoxiang Zhang
This paper presents a novel method for synthesizing artificial visual light (VIS) face images from near-infrared (NIR) inputs. Active NIR imaging is now widely employed because it is unobtrusive, invariant to environmental illumination, and able to penetrate glasses and sweat. Unfortunately, NIR imaging exhibits photometric properties discrepant with those of VIS imaging. Building on recent results in compressive sensing, natural images can be compressed and recovered with an overcomplete dictionary via sparse representation coefficients. In our approach, a pairwise dictionary is trained from randomly sampled coupled face patches; it contains sparse-coded basis functions used to reconstruct representation coefficients via ℓ1-minimization. We demonstrate that this method is robust to moderate pose and expression variations and is computationally efficient. Comparative experiments are conducted against state-of-the-art algorithms.
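A minimal sketch of the coupled-dictionary idea: sparse-code a NIR patch over the NIR half of a paired dictionary, then reconstruct the VIS patch from the same coefficients over the VIS half. The random dictionaries below are stand-ins; the paper trains them from coupled face-patch pairs.

```python
# Coupled-dictionary synthesis sketch: shared sparse coefficients map a NIR
# patch to a VIS patch. Dictionary contents here are random stand-ins, not
# trained atoms; scikit-learn's SparseCoder solves the l1-minimization.
import numpy as np
from sklearn.decomposition import SparseCoder

rng = np.random.default_rng(3)
n_atoms, patch_dim = 64, 49                  # e.g. 7x7 patches
D_nir = rng.normal(size=(n_atoms, patch_dim))
D_nir /= np.linalg.norm(D_nir, axis=1, keepdims=True)
D_vis = rng.normal(size=(n_atoms, patch_dim))  # paired VIS atoms

def synthesize_vis(nir_patch: np.ndarray) -> np.ndarray:
    coder = SparseCoder(dictionary=D_nir, transform_algorithm="lasso_lars",
                        transform_alpha=0.1)
    coeffs = coder.transform(nir_patch.reshape(1, -1))  # l1-min sparse code
    return (coeffs @ D_vis).ravel()                     # shared coefficients

nir_patch = rng.normal(size=patch_dim)
print(synthesize_vis(nir_patch)[:5].round(3))
```

A full pipeline would tile the face into overlapping patches, synthesize each, and blend the overlaps; this sketch covers only the per-patch step.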
{"title":"Face synthesis from near-infrared to visual light via sparse representation","authors":"Zeda Zhang, Yunhong Wang, Zhaoxiang Zhang","doi":"10.1109/IJCB.2011.6117534","DOIUrl":"https://doi.org/10.1109/IJCB.2011.6117534","url":null,"abstract":"This paper presents a novel method for synthesizing artificial visual light (VIS) face images from near-infrared (NIR) inputs. Active NIR imaging is now widely employed because it is unobtrusive, invariant of environmental illuminations, and can penetrate glasses and sweats. Unfortunately, NIR imaging exhibits discrepant photic properties compared with VIS imaging. Based on recent results of research on compressive sensing, natural images can be compressed and recovered with an overcomplete dictionary by sparse representation coefficients. In our approach a pairwise dictionary is trained from randomly sampled coupled face patches, which contains sparse coded base functions to reconstruct representation coefficients via l1-minimization. We will demonstrate that this method is robust to moderate pose and expression variations, and is efficient in computing. Comparative experiments are conducted with state-of-the-art algorithms.","PeriodicalId":103913,"journal":{"name":"2011 International Joint Conference on Biometrics (IJCB)","volume":"125 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117333247","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Counter-measures to photo attacks in face recognition: A public database and a baseline
Pub Date: 2011-10-11 | DOI: 10.1109/IJCB.2011.6117503
André Anjos, S. Marcel
A common technique to bypass 2-D face recognition systems is to use photographs of spoofed identities. Unfortunately, research into counter-measures against this type of attack has not kept up: even though such threats have been known for nearly a decade, there seems to be no consensus on best practices, techniques, or protocols for developing and testing spoofing detectors for face recognition. We attribute this delay, in part, to the unavailability of public databases and protocols for studying solutions and comparing results. To this end, we introduce the publicly available PRINT-ATTACK database and exemplify how to use its companion protocol with a motion-based algorithm that detects correlations between a person's head movements and the scene context. The results are to serve as a basis for comparison with other counter-measure techniques. The PRINT-ATTACK database contains 200 videos of real accesses and 200 videos of spoof attempts using printed photographs of 50 different identities.
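One simplified reading of the motion-based baseline is to compare frame-difference energy inside the face region against the background and correlate the two signals over time: a hand-held printed photo tends to move with the scene, while a live head moves independently. The sketch below is an illustration under those assumptions, not the authors' exact algorithm.

```python
# Motion-correlation sketch: frame-difference energy in the face box vs. the
# background, correlated over the video. High correlation suggests the face
# and scene move together (a likely printed-photo attack). Simplified reading
# of the baseline, not the paper's exact method.
import numpy as np

def motion_signals(frames: np.ndarray, box: tuple[int, int, int, int]):
    """frames: (T, H, W) grayscale video; box: (y0, y1, x0, x1) face region."""
    y0, y1, x0, x1 = box
    diffs = np.abs(np.diff(frames.astype(np.float64), axis=0))
    mask = np.zeros(frames.shape[1:], dtype=bool)
    mask[y0:y1, x0:x1] = True
    face = diffs[:, mask].mean(axis=1)        # motion inside the face box
    background = diffs[:, ~mask].mean(axis=1)  # motion everywhere else
    return face, background

def spoof_score(frames: np.ndarray, box) -> float:
    face, bg = motion_signals(frames, box)
    return float(np.corrcoef(face, bg)[0, 1])  # near 1.0 => likely spoof

rng = np.random.default_rng(4)
video = rng.integers(0, 255, size=(30, 120, 160)).astype(np.uint8)
print(f"correlation: {spoof_score(video, (30, 90, 50, 110)):.2f}")
```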
{"title":"Counter-measures to photo attacks in face recognition: A public database and a baseline","authors":"André Anjos, S. Marcel","doi":"10.1109/IJCB.2011.6117503","DOIUrl":"https://doi.org/10.1109/IJCB.2011.6117503","url":null,"abstract":"A common technique to by-pass 2-D face recognition systems is to use photographs of spoofed identities. Unfortunately, research in counter-measures to this type of attack have not kept-up - even if such threats have been known for nearly a decade, there seems to exist no consensus on best practices, techniques or protocols for developing and testing spoofing-detectors for face recognition. We attribute the reason for this delay, partly, to the unavailability of public databases and protocols to study solutions and compare results. To this purpose we introduce the publicly available PRINT-ATTACK database and exemplify how to use its companion protocol with a motion-based algorithm that detects correlations between the person's head movements and the scene context. The results are to be used as basis for comparison to other counter-measure techniques. The PRINT-ATTACK database contains 200 videos of real-accesses and 200 videos of spoof attempts using printed photographs of 50 different identities.","PeriodicalId":103913,"journal":{"name":"2011 International Joint Conference on Biometrics (IJCB)","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127043952","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An adaptive resolution voxelization framework for 3D ear recognition
Pub Date: 2011-10-11 | DOI: 10.1109/IJCB.2011.6117598
S. Cadavid, Sherin Fathy, Jindan Zhou, M. Abdel-Mottaleb
We present a novel voxelization framework for holistic three-dimensional (3D) object representation that accounts for distinct surface features. An object is voxelized by encoding an attribute or set of attributes of the surface region contained within each voxel occupying the space in which the object resides. To our knowledge, the voxel structures employed in previous methods consist of uniformly sized voxels. The proposed framework, in contrast, generates structures of variable-sized voxels that are adaptively concentrated near distinct surface features. The primary advantage of the proposed method over its fixed-resolution counterparts is that it yields a significantly more concise feature representation, which is demonstrated to achieve superior recognition performance. The method is evaluated on a 3D ear recognition task; the ear provides a challenging case study because of its high degree of inter-subject similarity.
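An adaptive voxelization can be sketched as an octree-style subdivision that splits a voxel whenever the surface points inside it vary too much, so voxels end up smaller near distinct features. The variance-of-feature criterion and thresholds below are illustrative assumptions, not the paper's actual rule.

```python
# Octree-style adaptive voxelization sketch: recursively split a voxel into
# octants while the per-point feature variance inside it exceeds a tolerance.
# The variance criterion, tolerance, and depth cap are assumptions.
import numpy as np

def adaptive_voxels(points, feats, lo, hi, depth=0, max_depth=4, tol=0.01):
    """points: (N, 3); feats: (N,) e.g. curvature; lo/hi: voxel corners.
    Returns a list of (lo, hi, mean_feature) leaf voxels."""
    inside = np.all((points >= lo) & (points < hi), axis=1)
    if not inside.any():
        return []
    if depth == max_depth or feats[inside].var() < tol:
        return [(lo.copy(), hi.copy(), float(feats[inside].mean()))]
    mid = (lo + hi) / 2.0
    leaves = []
    for oct_idx in range(8):  # visit the 8 octants
        bits = np.array([(oct_idx >> b) & 1 for b in range(3)])
        sub_lo = np.where(bits, mid, lo)
        sub_hi = np.where(bits, hi, mid)
        leaves += adaptive_voxels(points, feats, sub_lo, sub_hi,
                                  depth + 1, max_depth, tol)
    return leaves

rng = np.random.default_rng(5)
pts = rng.uniform(0, 1, size=(500, 3))
curv = np.exp(-20 * np.sum((pts - 0.5) ** 2, axis=1))  # feature "bump"
vox = adaptive_voxels(pts, curv, np.zeros(3), np.ones(3))
print(f"{len(vox)} leaf voxels")
```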
{"title":"An adaptive resolution voxelization framework for 3D ear recognition","authors":"S. Cadavid, Sherin Fathy, Jindan Zhou, M. Abdel-Mottaleb","doi":"10.1109/IJCB.2011.6117598","DOIUrl":"https://doi.org/10.1109/IJCB.2011.6117598","url":null,"abstract":"We present a novel voxelization framework for holistic Three-Dimensional (3D) object representation that accounts for distinct surface features. A voxelization of an object is performed by encoding an attribute or set of attributes of the surface region contained within each voxel occupying the space that the object resides in. To our knowledge, the voxel structures employed in previous methods consist of uniformly-sized voxels. The proposed framework, in contrast, generates structures consisting of variable-sized voxels that are adaptively distributed in higher concentration near distinct surface features. The primary advantage of the proposed method over its fixed resolution counterparts is that it yields a significantly more concise feature representation that is demonstrated to achieve a superior recognition performance. An evaluation of the method is conducted on a 3D ear recognition task. The ear provides a challenging case study because of its high degree of inter-subject similarity.","PeriodicalId":103913,"journal":{"name":"2011 International Joint Conference on Biometrics (IJCB)","volume":"64 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130668136","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}