Viewpoint invariant subject retrieval via soft clothing biometrics
Pub Date: 2015-05-19 | DOI: 10.1109/ICB.2015.7139078
E. S. Jaha, M. Nixon
Due to the poor quality and low resolution of surveillance video, as much information as possible should be used when identifying subjects. So far, little attention has been paid to exploiting clothing, as it has been considered an unlikely cue to identity. Clothing analysis could not only improve recognition, but could also aid subject re-identification. Further, we show here how clothing can aid recognition when there is a large change in viewpoint. Our study offers some important insights into the capability of clothing information in more realistic scenarios. Unlike other approaches that address soft biometrics from single-viewpoint images, we show how recognition can benefit from clothing analysis when the viewpoint changes and there is partial occlusion. This research presents how soft clothing biometrics can be used to achieve viewpoint-invariant subject retrieval, given a verbal query description of the subject observed from a different viewpoint. We investigate the influence of the most correlated clothing traits when extracted from multiple viewpoints, and how they can lead to increased performance.
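The abstract describes retrieving a subject from a verbal clothing description recorded at a different viewpoint, but does not spell out the matching scheme; the Python sketch below is only a minimal illustration, assuming each gallery subject is a vector of categorical soft clothing traits and ranking subjects by a weighted trait-agreement score (trait names and weights are hypothetical, not the paper's).

# Minimal sketch of soft-trait retrieval from a verbal description (illustrative only;
# trait names, weights and the matching score are assumptions, not the paper's method).
GALLERY = {
    "subj01": {"torso_colour": "red",  "leg_type": "trousers", "head_coverage": "none"},
    "subj02": {"torso_colour": "blue", "leg_type": "shorts",   "head_coverage": "hat"},
}
# Higher weight for traits assumed to be more viewpoint-stable / discriminative.
WEIGHTS = {"torso_colour": 2.0, "leg_type": 1.5, "head_coverage": 1.0}

def retrieve(query, gallery, weights):
    """Rank gallery subjects by weighted agreement with the verbal query."""
    def score(traits):
        return sum(w for t, w in weights.items() if query.get(t) == traits.get(t))
    return sorted(gallery, key=lambda s: score(gallery[s]), reverse=True)

print(retrieve({"torso_colour": "red", "leg_type": "trousers"}, GALLERY, WEIGHTS))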
{"title":"Viewpoint invariant subject retrieval via soft clothing biometrics","authors":"E. S. Jaha, M. Nixon","doi":"10.1109/ICB.2015.7139078","DOIUrl":"https://doi.org/10.1109/ICB.2015.7139078","url":null,"abstract":"As much information as possible should be used when identifying subjects in surveillance video due to the poor quality and resolution. So far, little attention has been paid to exploiting clothing as it has been considered unlikely to be a potential cue to identity. Clothing analysis could not only potentially improve recognition, but could also aid in subject re-identification. Further, we show here how clothing can aid recognition when there is a large change in viewpoint. Our study offers some important insights into the capability of clothing information in more realistic scenarios. We show how recognition can benefit from clothing analysis when the viewpoint changes with partial occlusion, unlike other approaches addressing soft biometrics from single viewpoint data images. This research presents how soft clothing biometrics can be used to achieve viewpoint invariant subject retrieval, given a verbal query description of the subject observed from a different viewpoint. We investigate the influence of the most correlated clothing traits when extracted from multiple viewpoints, and how they can lead to increased performance.","PeriodicalId":237372,"journal":{"name":"2015 International Conference on Biometrics (ICB)","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-05-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115494825","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Authentication based on a changeable biometric using gesture recognition with the Kinect™
Pub Date: 2015-05-19 | DOI: 10.1109/ICB.2015.7139073
Benoit Ducray, Sheila Cobourne, K. Mayes, K. Markantonakis
Biometric systems use either physiological or behavioural characteristics to identify an individual. However, if a biometric is compromised, it can be difficult or impossible to change it. This paper proposes a biometric authentication system based on gesture recognition, where gestures can be easily changed by the user. The system uses a Kinect™ device to capture and extract features, as it provides 20 skeleton tracking points; we use just six of these in our system. The Dynamic Time Warping (DTW) algorithm is used to find an optimal alignment between gestures, which are time-bound sequences. We tested the system on a sample of 38 volunteers. Ten volunteers provided reference gestures of their own design and 28 volunteers attempted to attack these reference gestures by both guessing and copying. Guessing the gesture was unsuccessful in all cases, but when the attacker had previously seen a video of the reference gesture, the experiment gave an estimated True Positive Rate (TPR) of 0.93, False Positive Rate (FPR) of 0.017 and Equal Error Rate (EER) of 0.028.
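The abstract names Dynamic Time Warping over sequences built from six Kinect skeleton joints; as a rough illustration (not the authors' implementation), the sketch below computes plain DTW between two sequences of joint-coordinate frames and accepts the probe if the normalised alignment cost falls under a threshold, where the per-frame distance and the threshold value are assumptions.

import numpy as np

def dtw_distance(a, b):
    """Classic DTW between two sequences of frames (each frame a flat joint-coordinate vector)."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])        # per-frame Euclidean distance
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m] / (n + m)                                  # length-normalised alignment cost

# Toy usage: 40 frames of 6 joints * 3 coordinates for reference and probe.
reference = np.random.rand(40, 18)
probe = reference + 0.01 * np.random.randn(40, 18)
accept = dtw_distance(reference, probe) < 0.5                 # threshold is a placeholder
print(accept)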
{"title":"Authentication based on a changeable biometric using gesture recognition with the Kinect™","authors":"Benoit Ducray, Sheila Cobourne, K. Mayes, K. Markantonakis","doi":"10.1109/ICB.2015.7139073","DOIUrl":"https://doi.org/10.1109/ICB.2015.7139073","url":null,"abstract":"Biometric systems either use physiological or behavioural characteristics to identify an individual. However, if a biometric is compromised it could be difficult or impossible to change it. This paper proposes a biometric authentication system based on gesture recognition, where gestures can be easily changed by the user. The system uses a Kinect™ device to capture and extract features, as it provides 20 skeleton tracking points: we use just six of these in our system. The Dynamic Time Warping (DTW) algorithm is used to find an optimal alignment between gestures which are time-bound sequences. We tested the system on a sample of 38 volunteers. Ten volunteers provided reference gestures of their own design and 28 volunteers attempted to attack these reference gestures by both guessing and copying. Guessing the gesture was unsuccessful in all cases, but when the attacker had previously seen a video of the reference gesture the experiment gave us an estimation of the True Positive Rate (TPR) of 0.93, False Positive Rate (FPR) of 0.017 and Equal Error Rate (EER) of 0.028.","PeriodicalId":237372,"journal":{"name":"2015 International Conference on Biometrics (ICB)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-05-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127453650","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Algorithms for a novel touchless bimodal palm biometric system
Pub Date: 2015-05-19 | DOI: 10.1109/ICB.2015.7139107
O. Nikisins, Teodors Eglitis, Mihails Pudzs, M. Greitans
The paper introduces the combination of algorithms for possibly the first bimodal biometric system capable of touch-less capture of two biometric parameters, palm veins and palm creases, synchronously with a single image sensor. The architecture of the proposed system is based on a Detection, Alignment and Recognition pipeline. The ROI detection and alignment stages are simplified by an efficient combination of hardware (lighting sources) and software. A new feature descriptor, namely the Histogram of Vectors, is proposed for the recognition stage. Since image capture requires special conditions, a database containing images of 100 individuals and ground-truth data is introduced. The performance of the system is analysed on this database, leading to a detailed understanding of error propagation in the automatic recognition pipeline.
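The abstract states that palm veins and palm creases are captured synchronously and pushed through a Detection-Alignment-Recognition pipeline, but it does not describe how the two modalities are combined; the snippet below is only a generic score-level fusion sketch (min-max normalisation followed by a weighted sum rule), not the authors' Histogram of Vectors matcher or their actual fusion rule.

import numpy as np

def min_max_norm(scores):
    """Map a set of comparison scores to [0, 1]; assumes at least two distinct values."""
    s = np.asarray(scores, dtype=float)
    return (s - s.min()) / (s.max() - s.min())

def fuse(vein_scores, crease_scores, w_vein=0.5):
    """Weighted sum-rule fusion of normalised per-candidate scores from the two modalities."""
    return w_vein * min_max_norm(vein_scores) + (1 - w_vein) * min_max_norm(crease_scores)

# Toy usage: similarity scores of one probe palm against four gallery palms.
print(fuse([0.2, 0.8, 0.4, 0.3], [0.1, 0.7, 0.9, 0.2]).argmax())  # index of the best candidate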
{"title":"Algorithms for a novel touchless bimodal palm biometric system","authors":"O. Nikisins, Teodors Eglitis, Mihails Pudzs, M. Greitans","doi":"10.1109/ICB.2015.7139107","DOIUrl":"https://doi.org/10.1109/ICB.2015.7139107","url":null,"abstract":"The paper introduces the combination of algorithms for possibly the first bimodal biometric system capable of touch-less capturing of two biometric parameters, palm veins and palm creases, synchronously with a single image sensor. The architecture of the proposed system is based on the Detection, Alignment and Recognition pipeline. The ROI detection and alignment stages are simplified with efficient combination of hardware (lighting sources) and software. A new feature descriptor, namely Histogram of Vectors is proposed in the recognition stage. Since the capturing of images requires special conditions, the database including images of 100 individuals and ground-truth data is introduced. The analysis of performance of the system utilizes the database leading to detailed understanding of the error propagation in the automatic recognition pipeline.","PeriodicalId":237372,"journal":{"name":"2015 International Conference on Biometrics (ICB)","volume":"46 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-05-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124951135","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Recognition at a long distance: Very low resolution face recognition and hallucination
Pub Date: 2015-05-19 | DOI: 10.1109/ICB.2015.7139090
Min-Chun Yang, Chia-Po Wei, Yi-Ren Yeh, Y. Wang
In real-world video surveillance applications, one often needs to recognize face images captured from a very long distance. Such recognition tasks are very challenging, since the images typically have very low resolution (VLR). However, if one simply downsamples high-resolution (HR) training images to recognize the VLR test inputs, or directly upsamples the VLR inputs to match the HR training data, the resulting recognition performance is not satisfactory. In this paper, we propose a joint face hallucination and recognition approach based on sparse representation. Given a VLR input image, our method is able to synthesize its person-specific HR version with recognition guarantees. In our experiments, we consider two different face image datasets. Empirical results support the use of our approach for VLR face recognition on both datasets. In addition, compared to state-of-the-art super-resolution (SR) methods, we show that our method yields improved quality for the recovered HR face images.
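The paper's joint hallucination-and-recognition formulation is not reproduced in the abstract; the sketch below only illustrates the generic coupled-dictionary idea used in sparse-representation super-resolution: code a vectorised low-resolution patch over a low-resolution dictionary with an ISTA lasso solver, then reconstruct the high-resolution patch with the coupled high-resolution dictionary. The dictionaries and patch here are random placeholders, and the solver is a textbook ISTA, not the authors' optimisation.

import numpy as np

def ista(D, y, lam=0.05, n_iter=200):
    """Iterative soft-thresholding for min_x 0.5*||D x - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the smooth part
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        g = D.T @ (D @ x - y)              # gradient step
        x = x - g / L
        x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)   # soft threshold
    return x

rng = np.random.default_rng(0)
D_lr = rng.standard_normal((64, 256))      # placeholder low-resolution patch dictionary
D_hr = rng.standard_normal((256, 256))     # coupled high-resolution dictionary (shares the codes)
y_lr = rng.standard_normal(64)             # observed low-resolution patch (vectorised)

code = ista(D_lr, y_lr)                    # sparse code of the LR patch
x_hr = D_hr @ code                         # hallucinated HR patch under the coupling assumption
print(np.count_nonzero(code), x_hr.shape)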
{"title":"Recognition at a long distance: Very low resolution face recognition and hallucination","authors":"Min-Chun Yang, Chia-Po Wei, Yi-Ren Yeh, Y. Wang","doi":"10.1109/ICB.2015.7139090","DOIUrl":"https://doi.org/10.1109/ICB.2015.7139090","url":null,"abstract":"In real-world video surveillance applications, one often needs to recognize face images from a very long distance. Such recognition tasks are very challenging, since such images are typically with very low resolution (VLR). However, if one simply downsamples high-resolution (HR) training images for recognizing the VLR test inputs, or if one directly upsamples the VLR inputs for matching the HR training data, the resulting recognition performance would not be satisfactory. In this paper, we propose a joint face hallucination and recognition approach based on sparse representation. Given a VLR input image, our method is able to synthesize its person-specific HR version with recognition guarantees. In our experiments, we consider two different face image datasets. Empirical results will support the use of our approach for both VLR face recognition. In addition, compared to state-of-the-art super-resolution (SR) methods, we will also show that our method results in improved quality for the recovered HR face images.","PeriodicalId":237372,"journal":{"name":"2015 International Conference on Biometrics (ICB)","volume":"146 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-05-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116453416","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Correlation based fingerprint liveness detection
Pub Date: 2015-05-19 | DOI: 10.1109/ICB.2015.7139054
Z. Akhtar, C. Micheloni, G. Foresti
Fingerprint recognition systems are vulnerable to spoof attacks, which consist of presenting forged fingerprints to the sensor. The typical anti-spoofing mechanism is fingerprint liveness detection. Existing liveness detection methods are still not robust to variations in spoofing materials, datasets and sensors. In particular, the performance of a liveness detection algorithm drops remarkably upon encountering spoof fabrication materials that were not used during the training stage. Likewise, a quintessential liveness detection method needs to be adapted and retrained for new spoofing materials, new datasets and each sensor used for acquiring the fingerprints. In this paper, we propose a framework that first performs correlation mapping between live and spoof fingerprints and then uses a discriminative-generative classification scheme for spoof detection. Partial Least Squares (PLS) is utilized to learn the correlations, while a support vector machine (SVM) is combined with three generative classifiers, namely the Gaussian Mixture Model, the Gaussian Copula, and Quadratic Discriminant Analysis, for the final classification. Experiments on the publicly available LivDet2011 and LivDet2013 datasets show that the proposed method outperforms existing methods, including under cross-spoof-material and cross-sensor evaluation.
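As a loose sketch of a discriminative-generative combination (not the paper's pipeline, and omitting the Gaussian Copula branch), the code below projects placeholder fingerprint features with scikit-learn's PLSRegression and then averages a probabilistic SVM posterior with a GMM live/spoof log-likelihood-ratio score; the feature matrices, labels and the simple average fusion are all assumptions.

import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.svm import SVC
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 40))             # placeholder fingerprint features
y = np.repeat([0, 1], 100)                     # 0 = spoof, 1 = live (toy labels)

# Correlation-style projection: PLS latent scores of the features against the labels.
pls = PLSRegression(n_components=8).fit(X, y)
Z = pls.transform(X)

# Discriminative branch: probabilistic SVM on the PLS scores.
svm = SVC(kernel="rbf", probability=True).fit(Z, y)
p_svm = svm.predict_proba(Z)[:, 1]

# Generative branch: per-class GMMs and a log-likelihood ratio squashed to [0, 1].
gmm_live = GaussianMixture(n_components=2, random_state=0).fit(Z[y == 1])
gmm_spoof = GaussianMixture(n_components=2, random_state=0).fit(Z[y == 0])
llr = gmm_live.score_samples(Z) - gmm_spoof.score_samples(Z)
p_gen = 1.0 / (1.0 + np.exp(-llr))

decision = (0.5 * p_svm + 0.5 * p_gen) > 0.5   # simple average fusion (assumed, not from the paper)
print((decision == y.astype(bool)).mean())     # training-set agreement, for illustration only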
{"title":"Correlation based fingerprint liveness detection","authors":"Z. Akhtar, C. Micheloni, G. Foresti","doi":"10.1109/ICB.2015.7139054","DOIUrl":"https://doi.org/10.1109/ICB.2015.7139054","url":null,"abstract":"Fingerprint recognition systems are vulnerable to spoof attacks, which consist in presenting forged fingerprints to the sensor. Typical anti-spoofing mechanism is fingerprint liveness detection. Existing liveness detection methods are still not robust to spoofing materials, datasets and sensor variations. In particular, the performance of a liveness detection algorithm remarkably drops upon encountering spoof fabrication materials that were not used during the training stage. Likewise, a quintessential liveness detection method needs to be adapted and retrained to new spoofing materials, datasets and each sensor used for acquiring the fingerprints. In this paper, we propose a framework that first performs correlation mapping between live and spoof fingerprints and then uses a discriminative-generative classification scheme for spoof detection. Partial Least Squares (PLS) is utilized to learn the correlations. While, support vector machine (SVM) is combined with three generative classifiers, namely Gaussian Mixture Model, Gaussian Copula, and Quadratic Discriminant Analysis, for final classification. Experiments on the publicly available LivDet2011 and LivDet2013 datasets, show that the proposed method outperforms the existing methods alongside cross-spoof material and cross-sensor techniques.","PeriodicalId":237372,"journal":{"name":"2015 International Conference on Biometrics (ICB)","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-05-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123663180","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The relation between the secrecy rate of biometric template protection and biometric recognition performance
Pub Date: 2015-05-19 | DOI: 10.1109/ICB.2015.7139055
R. Veldhuis
A theoretical result is derived that relates the maximum achievable security of the family of biometric template protection systems known as key-binding systems to the recognition performance of a biometric recognition system that is optimal in the Neyman-Pearson sense. The relation allows the maximum achievable key length to be computed from the Receiver Operating Characteristic (ROC) of the optimal biometric recognition system. Illustrative examples that demonstrate how the shape of the ROC impacts the security of a template protection system are presented and discussed.
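The paper's exact ROC-to-key-length relation is not given in the abstract; purely as an illustration of reading a key-length figure off an ROC, the sketch below uses the common back-of-the-envelope bound of about -log2(FMR) bits at the chosen operating point (an impostor guess succeeds with probability roughly FMR per attempt). This is an indicative bound under that assumption, not the result derived in the paper.

import numpy as np

def max_key_bits(fmr_values, fnmr_values, target_fnmr=0.05):
    """Indicative key-length bound -log2(FMR) at the best operating point with FNMR <= target.

    A rough rule of thumb, not the exact secrecy-rate relation derived in the paper.
    """
    fmr = np.asarray(fmr_values, dtype=float)
    fnmr = np.asarray(fnmr_values, dtype=float)
    usable = fmr[fnmr <= target_fnmr]
    if usable.size == 0:
        return 0.0
    return float(-np.log2(usable.min()))

# Toy ROC sampled at a few operating points (placeholder numbers).
fmr = [1e-1, 1e-2, 1e-3, 1e-4, 1e-5]
fnmr = [0.001, 0.01, 0.03, 0.08, 0.20]
print(max_key_bits(fmr, fnmr))   # about -log2(1e-3), i.e. roughly 10 bits at FNMR <= 5%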
{"title":"The relation between the secrecy rate of biometric template protection and biometric recognition performance","authors":"R. Veldhuis","doi":"10.1109/ICB.2015.7139055","DOIUrl":"https://doi.org/10.1109/ICB.2015.7139055","url":null,"abstract":"A theoretical result relating the maximum achievable security of the family of biometric template protection systems known as key-binding systems to the recognition performance of a biometric recognition system that is optimal in Neyman-Pearson sense is derived. The relation allows for the computation of the maximum achievable key length from the Receiver Operating Characteristic (ROC) of the optimal biometric recognition system. Illustrative examples that demonstrate how the shape of the ROC impacts the security of a template protection system are presented and discussed.","PeriodicalId":237372,"journal":{"name":"2015 International Conference on Biometrics (ICB)","volume":"53 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-05-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126294702","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Predicting segmentation errors in an iris recognition system
Pub Date: 2015-05-19 | DOI: 10.1109/ICB.2015.7139071
Nitin K. Mahadeo, Gholamreza Haffari, A. Paplinski
Iris segmentation is defined as the isolation of the iris pattern in an eye image. As shown in previous research, highly accurate iris segmentation plays a key role in the overall performance of an iris recognition system. We present a fully automated method for classifying correctly and incorrectly segmented iris regions in eye images. In contrast with previous work, where only iris boundary detection is considered (using a limited number of features), we introduce the following novelties, which greatly enhance the performance of an iris recognition system. Firstly, we go beyond iris boundary detection and consider the more realistic and challenging task of complete segmentation, which includes iris boundary detection and occlusion detection (due to eyelids and eyelashes). Secondly, an extended and rich feature set is investigated for this task. Thirdly, several non-linear learning algorithms are used to measure prediction accuracy. Finally, we extend our model to iris videos, taking neighbouring frames into account for better prediction. Both intrinsic and extrinsic evaluations are carried out to assess the performance of the proposed method. With these innovations, our method outperforms current state-of-the-art techniques and presents a reliable approach to classifying segmented iris images in an iris recognition system.
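The task boils down to a binary prediction of segmentation correctness from per-image features; the sketch below is a minimal stand-in that trains one non-linear learner (a random forest) on synthetic placeholder features, since the paper's actual feature set and choice of learners are not listed in the abstract.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
# Placeholder per-image features (e.g. boundary-fit residuals, occlusion ratio, contrast);
# the feature set used in the paper is richer and not reproduced here.
X = rng.standard_normal((500, 12))
y = (X[:, 0] + 0.5 * X[:, 3] + 0.1 * rng.standard_normal(500) > 0).astype(int)  # 1 = correct segmentation

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))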
{"title":"Predicting segmentation errors in an iris recognition system","authors":"Nitin K. Mahadeo, Gholamreza Haffari, A. Paplinski","doi":"10.1109/ICB.2015.7139071","DOIUrl":"https://doi.org/10.1109/ICB.2015.7139071","url":null,"abstract":"Iris segmentation is defined as the isolation of the iris pattern in an eye image. A highly accurate segmented iris plays a key role in the overall performance of an iris recognition system, as shown in previous research. We present a fully automated method for classifying correctly and incorrectly segmented iris regions in eye images. In contrast with previous work where only iris boundary detection is considered (using a limited number of features), we introduce the following novelties which greatly enhance the performance of an iris recognition system. Firstly, we go beyond iris boundary detection and consider a more realistic and challenging task of complete segmentation which includes iris boundary detection and occlusion detection (due to eyelids and eyelashes). Secondly, an extended and rich feature set is investigated for this task. Thirdly, several non-linear learning algorithms are used to measure the prediction accuracy. Finally, we extend our model to iris videos, taking into account neighbouring frames for a better prediction. Both intrinsic and extrinsic evaluation are carried out to evaluate the performance of the proposed method. With these innovations, our method outperforms current state-of-the-art techniques and presents a reliable approach to the task of classifying segmented iris images in an iris recognition system.","PeriodicalId":237372,"journal":{"name":"2015 International Conference on Biometrics (ICB)","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-05-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123957159","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Appearance-based person re-identification by intra-camera discriminative models and rank aggregation
Pub Date: 2015-05-19 | DOI: 10.1109/ICB.2015.7139077
Raphael C. Prates, W. R. Schwartz
The main challenges in person re-identification are related to differing camera acquisition conditions and high inter-class similarities. These aspects motivated us to handle such problems by learning intra-camera discriminative models, based on training samples, to discover representative individuals for a given probe or gallery sample, referred to as prototypes. These prototypes are used to weight the features according to their discriminative power using the Partial Least Squares (PLS) method. We also exploit models built from the gallery and probe samples to generate re-identification results that are combined into a single ranking using rank aggregation techniques. According to the experiments, the proposed method achieves state-of-the-art results. They also demonstrate that aggregating the results achieved by our method with those of a distance metric learning method outperforms the state of the art; e.g., the rank-1 rate is increased by almost 10 percentage points on the VIPeR and PRID 450S datasets.
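The abstract does not specify the aggregation rule used to merge the gallery-side and probe-side rankings, so the snippet below is only a minimal Borda-style example that sums the positions each candidate receives in two ranked lists; the identity labels are placeholders.

def aggregate_rankings(ranking_a, ranking_b):
    """Borda-style aggregation: lower summed rank position = better (generic example)."""
    score = {}
    for ranking in (ranking_a, ranking_b):
        for position, candidate in enumerate(ranking):
            score[candidate] = score.get(candidate, 0) + position
    return sorted(score, key=score.get)

# Toy usage: two rankings of the same four gallery identities.
print(aggregate_rankings(["id3", "id1", "id4", "id2"],
                         ["id1", "id3", "id2", "id4"]))   # -> ['id3', 'id1', 'id4', 'id2']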
{"title":"Appearance-based person re-identification by intra-camera discriminative models and rank aggregation","authors":"Raphael C. Prates, W. R. Schwartz","doi":"10.1109/ICB.2015.7139077","DOIUrl":"https://doi.org/10.1109/ICB.2015.7139077","url":null,"abstract":"The main challenges in person re-identification are related to different camera acquisition conditions and high inter-class similarities. These aspects motivated us to handle such problems by learning intra-camera discriminative models, based on training samples, to discover representative individuals for a given sample (probe or gallery samples), referred to as prototypes. These prototypes are used to weight the features according to their discriminative power by using the Partial Least Square (PLS) method. We also exploit models built from the gallery and probe samples to generate re-identification results that will be combined in a single ranking using ranking aggregation techniques. According to the experiments, the proposed method achieves state-of-the-art results. They also demonstrate that aggregating the results achieved by our method with results achieved by a distance metric learning method, outperforms the state-of-the-art, e.g., the top-1 rank is increased in almost 10 percent points for VIPeR and PRID 450S data sets.","PeriodicalId":237372,"journal":{"name":"2015 International Conference on Biometrics (ICB)","volume":"243 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-05-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123993081","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Secure face template generation via local region hashing
Pub Date: 2015-05-19 | DOI: 10.1109/ICB.2015.7139099
Rohit Pandey, V. Govindaraju
Security is an important aspect of the practical deployment of biometric authentication systems. Biometric data in its original form is irreplaceable and thus must be protected. This protection often comes at the cost of reduced matching accuracy or loss of the true key-less convenience that biometric authentication can offer. In this paper, we address the shortcomings of current face template protection schemes and show the advantages of a localized approach. We propose a framework that utilizes features from local regions of the face to achieve exact matching, and thus enables the security offered by hash functions such as SHA-256. We study the matching accuracy of different feature extractors, and propose measures to quantify the security offered by the scheme under reasonable real-world assumptions. The efficacy of our approach is demonstrated on the Multi-PIE face database.
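Exact matching is what allows a cryptographic hash to be applied to each local region; the sketch below shows only the generic idea, coarsely quantising a local feature vector, hashing the quantised bytes with SHA-256, and comparing digests. The quantisation step and feature values are placeholders, not the paper's scheme for making local features exactly repeatable.

import hashlib
import numpy as np

def region_digest(feature_vector, step=0.25):
    """Quantise a local-region feature vector and hash it with SHA-256 (illustrative only)."""
    quantised = np.round(np.asarray(feature_vector, dtype=float) / step).astype(np.int16)
    return hashlib.sha256(quantised.tobytes()).hexdigest()

enrolled = region_digest([0.31, -0.52, 0.08, 0.77])
probe_ok = region_digest([0.33, -0.49, 0.06, 0.75])     # small noise absorbed by the quantisation
probe_bad = region_digest([0.90, 0.10, -0.60, 0.20])
print(enrolled == probe_ok, enrolled == probe_bad)       # True False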
{"title":"Secure face template generation via local region hashing","authors":"Rohit Pandey, V. Govindaraju","doi":"10.1109/ICB.2015.7139099","DOIUrl":"https://doi.org/10.1109/ICB.2015.7139099","url":null,"abstract":"Security is an important aspect in the practical deployment of biometric authentication systems. Biometric data in its original form is irreplaceable and thus, must be protected. This often comes at the cost of reduced matching accuracy or loss of the true key-less convenience biometric authentication can offer. In this paper, we address the shortcomings of current face template protection schemes and show the advantages of a localized approach. We propose a framework that utilizes features from local regions of the face to achieve exact matching, and thus, enables the security offered by hash functions like SHA-256. We study the matching accuracy of different feature extractors, and propose measures to quantify the security offered by the scheme under reasonable real-world assumptions. The efficacy of our approach is demonstrated on the Multi-PIE face database.","PeriodicalId":237372,"journal":{"name":"2015 International Conference on Biometrics (ICB)","volume":"74 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-05-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127149451","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Fingerprint classification using a simplified rule-set based on directional patterns and singularity features
Pub Date: 2015-05-19 | DOI: 10.1109/ICB.2015.7139102
Kribashnee Dorasamy, L. Webb, J. Tapamo, N. P. Khanyile
The use of directional patterns has recently received more attention in fingerprint classification. Directional patterns provide a global representation of a fingerprint by dividing it into homogeneous orientation partitions. With this technique, the challenge in previous works has been the complexity of the pattern templates used for classification. In addition, incomplete fingerprints are often not accounted for. A rule-based technique using simplified rules is proposed to overcome the challenges faced by previous pattern templates. Two features, namely directional patterns and singular points (SPs), are combined to categorise fingerprints into six classes: Whorl (W), Right Loop (RL), Left Loop (LL), Tented Arch (TA), Plain Arch (PA), and Unclassifiable (U). The proposed technique achieves an accuracy of 92.87% and 92.20% on FVC 2002 DB1 and FVC 2004 DB1, respectively. Analysing the global representation of the fingerprint has proved advantageous, as the rules are invariant to rotation and have the potential to address incomplete fingerprints.
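The paper's full rule set over directional-pattern partitions is not reproduced in the abstract; the snippet below only illustrates the flavour of a simplified singular-point rule set (core/delta counts plus a crude relative-position test for loop handedness). Every rule, threshold and the handedness convention here is an assumption for illustration, not the authors' rules.

def classify_fingerprint(cores, deltas):
    """Toy rule set over singular points only (the paper also uses directional patterns).

    cores, deltas: lists of (x, y) singular-point coordinates in image space.
    """
    if len(cores) >= 2 and len(deltas) >= 2:
        return "Whorl (W)"
    if len(cores) == 1 and len(deltas) == 1:
        cx, _ = cores[0]
        dx, _ = deltas[0]
        # Which side the delta sits on decides loop handedness
        # (delta left of core -> right loop under the convention assumed here).
        return "Right Loop (RL)" if dx < cx else "Left Loop (LL)"
    if len(cores) == 1 and len(deltas) == 0:
        return "Tented Arch (TA)"     # crude stand-in; a real TA test also inspects ridge flow
    if len(cores) == 0 and len(deltas) == 0:
        return "Plain Arch (PA)"
    return "Unclassifiable (U)"       # e.g. partial prints with missing singular points

print(classify_fingerprint(cores=[(120, 140)], deltas=[(80, 200)]))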
{"title":"Fingerprint classification using a simplified rule-set based on directional patterns and singularity features","authors":"Kribashnee Dorasamy, L. Webb, J. Tapamo, N. P. Khanyile","doi":"10.1109/ICB.2015.7139102","DOIUrl":"https://doi.org/10.1109/ICB.2015.7139102","url":null,"abstract":"The use of directional patterns has recently received more attention in fingerprint classification. It provides a global representation of a fingerprint, by dividing it into homogeneous orientation partitions. With this technique, the challenge in previous works has been the complexity of the pattern templates used for classification. In addition, incomplete fingerprints are often not accounted for. A rule-based technique using simplified rules is proposed to overcome the challenges faced by previous pattern templates. Two features, namely directional patterns and singular points (SPs), are combined to categorise six fingerprint classes: namely Whorl (W); Right Loop (RL); Left Loop (LL); Tented Arch (TA); Plain Arch (PA); and Unclassifiable (U). The proposed technique achieves an accuracy of 92.87% and 92.20% on the FVC 2002 and 2004 DB1, respectively. Analysing the global representation of the fingerprint has proved to be advantageous, as the rules are invariant to rotation and have the potential to address issues of incomplete fingerprints.","PeriodicalId":237372,"journal":{"name":"2015 International Conference on Biometrics (ICB)","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-05-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133538389","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}