Type-Independent Pixel-Level Alignment Point Detection for Fingerprints
Pub Date: 2011-12-05 | DOI: 10.1109/ICHB.2011.6094351
Changlong Jin, Shengzhe Li, Hakil Kim
Robust alignment point detection remains a challenging problem in fingerprint recognition, especially for arch-type fingerprints. This paper proposes a method for detecting a pixel-level alignment point from mated fingerprints, regardless of fingerprint type, based on the pixel-level orientation field. Given a fingerprint, the pixel-level orientation field is first computed using multi-scale Gaussian filtering. A vertical symmetry line is then extracted from the orientation field, based on which the fingerprint is classified as either arch or non-arch type. For non-arch mated pairs, the pixel-level singular points (core or delta) are adopted as candidate alignment points and verified by point-pattern matching and the average orientation difference between the orientation fields. For arch mated pairs, the alignment points are detected at the maxima of the angular difference and the orientation certainty level along the symmetry lines. The proposed method was tested on FVC2000 DB2a, where 95.93% of mated fingerprint pairs were aligned within one ridge-width of displacement.
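As an illustration of the first step, the sketch below computes a pixel-level orientation field by smoothing gradient products with Gaussian filters at several scales; the scale values and the simple summation across scales are assumptions, not the authors' exact implementation.

```python
# A minimal sketch (not the authors' implementation) of a pixel-level
# orientation field: gradient products are smoothed by Gaussian filters at
# several scales and combined via the doubled-angle representation.
import numpy as np
from scipy.ndimage import gaussian_filter

def pixel_orientation_field(img, sigmas=(2.0, 4.0, 8.0)):
    """Return per-pixel ridge orientation (radians in [0, pi))."""
    img = img.astype(np.float64)
    gy, gx = np.gradient(img)                  # image gradients (rows, cols)
    gxx, gyy, gxy = gx * gx, gy * gy, gx * gy
    num, den = np.zeros_like(img), np.zeros_like(img)
    for s in sigmas:                           # multi-scale Gaussian smoothing
        num += gaussian_filter(2.0 * gxy, s)
        den += gaussian_filter(gxx - gyy, s)
    theta = 0.5 * np.arctan2(num, den)         # doubled-angle averaging
    return np.mod(theta + np.pi / 2.0, np.pi)  # ridge direction, not gradient
```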
Palmprint Recognition Using Band-Limited Minimum Average Correlation Energy Filter
Pub Date: 2011-12-05 | DOI: 10.1109/ICHB.2011.6094302
Wei Jia, Rongxiang Hu, Yang Zhao, Jie Gui, Yihai Zhu
In this paper, we propose a band-limited minimum average correlation energy (MACE) filter and an unconstrained MACE filter for palmprint recognition, in which the high-frequency components are removed and only the inherent frequency band is used for filter design. Experiments conducted on the Hong Kong Polytechnic University Palmprint Database show that the proposed filters significantly improve recognition rates and reduce equal error rates. They also offer faster matching and require less feature storage.
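For context, the sketch below synthesizes a conventional MACE filter in the frequency domain and adds a crude band-limiting mask; the keep_radius cutoff is an illustrative assumption and does not reproduce the paper's band selection.

```python
# A minimal sketch of MACE filter synthesis in the frequency domain:
# h = D^-1 X (X^+ D^-1 X)^-1 c, with an optional low-pass mask standing in
# for band limiting (an assumption, not the paper's exact design).
import numpy as np

def mace_filter(train_imgs, keep_radius=None):
    """train_imgs: list of equally sized 2-D arrays of one palmprint class."""
    shape = train_imgs[0].shape
    X = np.stack([np.fft.fft2(im).ravel() for im in train_imgs], axis=1)  # d x N
    D = np.maximum(np.mean(np.abs(X) ** 2, axis=1), 1e-12)  # avg power spectrum
    Dinv_X = X / D[:, None]
    c = np.ones(X.shape[1], dtype=complex)       # unit correlation peaks
    A = X.conj().T @ Dinv_X                      # X^+ D^-1 X  (N x N)
    H = (Dinv_X @ np.linalg.solve(A, c)).reshape(shape)
    if keep_radius is not None:                  # crude band limiting (assumption)
        fy = np.fft.fftfreq(shape[0])[:, None]
        fx = np.fft.fftfreq(shape[1])[None, :]
        H = H * (np.sqrt(fx ** 2 + fy ** 2) <= keep_radius)
    return H  # correlation output: np.fft.ifft2(np.fft.fft2(img) * np.conj(H))
```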
Palmprint Identification Using Kronecker Product of DCT and Walsh Transforms for Multi-Spectral Images
Pub Date: 2011-12-05 | DOI: 10.1109/ICHB.2011.6094305
Dr. H. B. Kekre, D. Sarode, R. Vig, Arya Pranay, Irani Aashita, B. Saurabh
This paper presents a novel technique for identifying palmprints of individuals for various purposes, including security, access control, forensic applications, and identification. Palmprints, known to be a robust biometric, are increasingly used in these areas. In this paper, palmprint identification is performed using a transform-domain technique in which a new transform, formed from the Kronecker product of the existing DCT and Walsh transforms, is developed and applied to multi-spectral palmprint images. An energy-compaction technique in the transform domain is applied to reduce the size of the feature vector. The new transform incorporates the properties of both the DCT and Walsh transforms and gives better results than either transform used individually. GAR values were computed for different energy thresholds. The maximum GAR obtained is 98.53% at an energy threshold of 99.99% on palmprints under blue illumination, with a FAR of 4%.
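As a rough illustration, the sketch below forms a hybrid transform from the Kronecker product of an orthonormal DCT matrix and a Walsh (Hadamard) matrix and keeps the highest-energy coefficients; the 99.99% energy threshold follows the abstract, while the transform sizes and the rest of the pipeline are assumptions.

```python
# A minimal sketch: build T = kron(DCT_m, Walsh_n), transform an image of
# size (m*n) x (m*n), and keep the coefficients carrying a given fraction of
# the energy (energy compaction).
import numpy as np
from scipy.fft import dct
from scipy.linalg import hadamard

def kron_dct_walsh(m, n):
    C = dct(np.eye(m), axis=0, norm='ortho')   # m x m orthonormal DCT matrix
    W = hadamard(n) / np.sqrt(n)               # n x n Walsh matrix (n power of 2)
    return np.kron(C, W)                       # (m*n) x (m*n) hybrid transform

def feature_vector(img, T, energy=0.9999):
    F = T @ img @ T.T                          # 2-D transform of the image
    coeffs = F.ravel()
    order = np.argsort(np.abs(coeffs))[::-1]   # sort coefficients by energy
    cum = np.cumsum(coeffs[order] ** 2) / np.sum(coeffs ** 2)
    k = np.searchsorted(cum, energy) + 1       # number of coefficients to keep
    keep = np.zeros_like(coeffs)
    keep[order[:k]] = coeffs[order[:k]]
    return keep                                # compacted feature vector
```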
Finger-Vein Image Restoration Considering Skin Layer Structure
Pub Date: 2011-12-05 | DOI: 10.1109/ICHB.2011.6094320
Jinfeng Yang, Junjie Wang
Recently, finger-vein recognition has been studied extensively for personal identification. Since the veins lie inside the finger, finger-vein images are often of low quality due to light scattering and absorption in the skin tissue. In terms of the optical properties of biological tissue, multilayered human skin is an inhomogeneous medium whose layers have different optical properties. This paper therefore focuses on finger-vein image restoration that accounts for the layered skin structure. First, a Gaussian-PSF model is used to restore finger-vein images degraded by the camera lens. Then, two depth-PSF models are built to further restore the images according to the optical properties of the skin layers. Third, a fused finger-vein image is generated by combining the depth-dependent restored images. Experimental results show that the proposed method achieves promising performance in improving finger-vein image quality.
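For the Gaussian-PSF stage, the sketch below restores an image degraded by a Gaussian point spread function via Wiener deconvolution; the PSF width and noise-to-signal ratio are illustrative assumptions, and the depth-dependent PSF models are not reproduced here.

```python
# A minimal sketch of Wiener deconvolution with a Gaussian PSF; sigma and
# the noise-to-signal ratio (nsr) are placeholder values.
import numpy as np

def gaussian_psf(shape, sigma):
    y, x = np.indices(shape)
    cy, cx = (shape[0] - 1) / 2.0, (shape[1] - 1) / 2.0
    psf = np.exp(-((y - cy) ** 2 + (x - cx) ** 2) / (2.0 * sigma ** 2))
    return psf / psf.sum()

def wiener_restore(img, sigma=2.0, nsr=1e-2):
    psf = gaussian_psf(img.shape, sigma)
    H = np.fft.fft2(np.fft.ifftshift(psf))           # centred PSF -> transfer function
    G = np.fft.fft2(img.astype(np.float64))
    F_hat = np.conj(H) / (np.abs(H) ** 2 + nsr) * G  # Wiener filter in frequency domain
    return np.real(np.fft.ifft2(F_hat))
```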
Biohashing and Fusion of Palmprint and Palm Vein Biometric Data
Pub Date: 2011-12-05 | DOI: 10.1109/ICHB.2011.6094334
Rihards Fuksis, Arturs Kadikis, M. Greitans
This paper combines the results of our previous experiments on acquiring images of the human palm in infrared and visible light and extracting features from those images with our current results on biometric data hashing using an advanced biohashing algorithm. We first describe the properties of complex 2D matched filtering for feature extraction, followed by biometric vector construction techniques and raw biometric data comparison. We address the problem of securing biometric data in multimodal biometric systems by analyzing the biohashing algorithm and proposing our enhancements. Results of experiments covering raw biometric data comparison as well as biohashing and advanced biohashing biocode comparisons are presented at the end of the paper.
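For reference, the sketch below shows the basic biohashing step (a token-seeded, orthonormalized random projection followed by thresholding); the bit length, threshold choice, and function names are assumptions, and the paper's advanced enhancements are not reproduced.

```python
# A minimal sketch of basic biohashing: a user token seeds a random
# projection basis, the feature vector is projected, and the projections are
# thresholded into a binary biocode. Assumes len(feature) >= n_bits.
import numpy as np

def biohash(feature, token_seed, n_bits=128):
    rng = np.random.default_rng(token_seed)     # user-specific token
    R = rng.standard_normal((len(feature), n_bits))
    Q, _ = np.linalg.qr(R)                      # orthonormal projection basis
    proj = feature @ Q                          # project the biometric vector
    return (proj > np.median(proj)).astype(np.uint8)  # binary biocode

def normalized_hamming(code1, code2):
    """Compare two biocodes; smaller means more similar."""
    return np.count_nonzero(code1 != code2) / len(code1)
```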
Using the Number of Pores on Fingerprint Images to Detect Spoofing Attacks
Pub Date: 2011-12-05 | DOI: 10.1109/ICHB.2011.6094347
M. Espinoza, C. Champod
Due to the growing use of biometric technologies in modern society, spoofing attacks are becoming a serious concern. Many solutions have been proposed to detect the use of fake "fingerprints" on an acquisition device. In this paper, we propose to take advantage of an intrinsic feature of friction ridge skin: pores. The aim of this study is to investigate the potential of using pores to detect spoofing attacks. Results show that the use of pores is a promising approach. Four major observations were made. First, the results confirm that the reproduction of pores on fake "fingerprints" is possible. Second, the distributions of the total number of pores in fake and genuine fingerprints cannot be discriminated. Third, the difference in pore counts between a query image and a reference image (genuine or fake) can be used as a discriminating factor in a linear discriminant analysis; in our sample, the observed error rates were a false positive rate of 45.5% (the fake passed the test) and a false negative rate of 3.8% (a genuine print was rejected). Finally, performance is improved by using the difference in pore counts between a distorted query fingerprint and a non-distorted reference fingerprint; with this approach, the error rates improve to a false acceptance rate of 21.2% and a false rejection rate of 8.3%.
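As a toy illustration of the third observation, the sketch below feeds query-minus-reference pore-count differences into a linear discriminant analysis; the pore counts shown are placeholders, not data from the study, and the pore-detection step itself is out of scope here.

```python
# A minimal sketch: classify genuine vs. spoof presentations from the
# difference in detected pore counts using LDA. All numbers are placeholders.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# delta = pores(query) - pores(reference); label 1 = genuine, 0 = fake (spoof)
deltas = np.array([[-3], [1], [0], [2], [-25], [-40], [-18], [-33]])
labels = np.array([1, 1, 1, 1, 0, 0, 0, 0])

lda = LinearDiscriminantAnalysis()
lda.fit(deltas, labels)
print(lda.predict([[-5], [-30]]))   # classify new query/reference differences
```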
A Novel Contactless Multimodal Biometric System Based on Multiple Hand Features
Pub Date: 2011-12-05 | DOI: 10.1109/ICHB.2011.6094338
Wei Bu, Qiushi Zhao, Xiangqian Wu, Youbao Tang, Kuanquan Wang
This paper proposes a novel multimodal biometric system based on multiple hand features: palmprint, palm vein, palm dorsal vein, finger vein, and hand geometry. In this system, the palmprint, palm vein, and dorsal vein images are first captured using an integrated contactless acquisition device. These images are then preprocessed and split into six regions of interest (ROIs): one palmprint ROI, one palm vein ROI, three finger vein ROIs, and one dorsal vein ROI. Features are extracted from each ROI and matched separately. In addition, a hand-geometry feature is extracted from the original palm vein image and matched. Finally, the matching scores are fused into a final score for the decision. Experiments on a large data set show that the proposed system achieves very high accuracy (an EER of around 0.01%), outperforming any unimodal system based on a single hand feature.
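For the fusion step, the sketch below combines per-modality matching scores with min-max normalization and a weighted sum; the modality names, weights, and score ranges are placeholders, not the system's tuned values.

```python
# A minimal sketch of score-level fusion: normalize each modality's score to
# [0, 1] and take a weighted sum before thresholding.
import numpy as np

def minmax_normalize(score, lo, hi):
    return (float(score) - lo) / (hi - lo)

def fuse_scores(modality_scores, weights):
    """modality_scores: dict name -> (score, min, max); weights sum to 1."""
    return sum(weights[name] * minmax_normalize(s, lo, hi)
               for name, (s, lo, hi) in modality_scores.items())

scores = {"palmprint": (0.82, 0.0, 1.0), "palm_vein": (0.74, 0.0, 1.0),
          "dorsal_vein": (0.69, 0.0, 1.0), "finger_vein": (0.91, 0.0, 1.0),
          "hand_geometry": (0.55, 0.0, 1.0)}
weights = {"palmprint": 0.3, "palm_vein": 0.2, "dorsal_vein": 0.15,
           "finger_vein": 0.25, "hand_geometry": 0.1}
print(fuse_scores(scores, weights))   # compare against a decision threshold
```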
Biometric Identification Based on Hand-Shape Features Using a HMM Kernel
Pub Date: 2011-12-05 | DOI: 10.1109/ICHB.2011.6094315
J. Briceño, C. Travieso, J. B. Alonso, Miguel A. Ferrer
This work presents a biometric identification system based on hand shape. The contours are coded using angular descriptions that form a Markov chain descriptor, and Hidden Markov Models (HMMs), each representing a target identification class, are trained with these chains. Features are then calculated from a kernel based on the HMM parameter descriptors, and supervised Support Vector Machines (SVMs) are used to classify the parameters from the HMM kernel. The system was first modelled using 60 users to tune the HMM and HMM+SVM configuration parameters, and then evaluated on the full database of 144 users with 10 samples per class. The experiments obtained similar results in both cases, indicating a scalable, stable, and robust system. The best success rate achieved was 99.92%, using four hand samples per class for training and six for testing. This result was obtained using, as features, the transformation of a 100-point hand-shape contour with the HMM kernel and, as classifier, Support Vector Machines with linear separating functions.
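As an illustration of the contour-coding front end, the sketch below resamples a hand contour to 100 points and quantizes the segment directions into a discrete symbol chain of the kind an HMM could be trained on; the number of angle bins is an assumption.

```python
# A minimal sketch of angular chain coding of a hand contour: resample to a
# fixed number of points, compute segment directions, and quantize them into
# discrete symbols suitable as HMM observations.
import numpy as np

def angular_chain(contour, n_points=100, n_bins=16):
    """contour: (N, 2) array of ordered (x, y) boundary points."""
    idx = np.linspace(0, len(contour) - 1, n_points).astype(int)
    pts = contour[idx]                                # resample to n_points
    diffs = np.diff(pts, axis=0, append=pts[:1])      # wrap around the contour
    angles = np.arctan2(diffs[:, 1], diffs[:, 0])     # direction of each segment
    bins = np.floor((angles + np.pi) / (2 * np.pi) * n_bins).astype(int)
    return np.clip(bins, 0, n_bins - 1)               # discrete symbol sequence
```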
Assessing the Difficulty Level of Fingerprint Datasets Based on Relative Quality Measures
Pub Date: 2011-12-05 | DOI: 10.1109/ICHB.2011.6094295
Shengzhe Li, Changlong Jin, Hakil Kim, S. Elliott
Understanding the difficulty of a dataset is of primary importance when testing and evaluating fingerprint recognition systems or algorithms, because the evaluation result depends on the dataset. This paper proposes a general framework for assessing the difficulty level of fingerprint datasets based on quantitative measurements of not only the sample quality of individual fingerprints but also relative differences between genuine pairs, such as common area and deformation. Experimental results on multi-year FVC datasets demonstrate that the proposed method can predict the relative difficulty levels of the fingerprint datasets, which coincide with the equal error rates produced by two matching algorithms. The proposed framework is independent of matching algorithms and can be performed automatically.
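As a schematic of how such measures might be combined, the sketch below aggregates per-pair quality, common-area, and deformation measures into a dataset-level difficulty score; the weights and the linear combination are assumptions, not the framework's actual formulation.

```python
# A heavily simplified sketch: per genuine pair, low quality, small overlap,
# and large deformation raise the difficulty; the dataset score is the mean.
import numpy as np

def pair_difficulty(q1, q2, common_area, deformation, w=(0.4, 0.3, 0.3)):
    """All inputs normalized to [0, 1]; higher deformation means harder."""
    low_quality = 1.0 - min(q1, q2)
    small_overlap = 1.0 - common_area
    return w[0] * low_quality + w[1] * small_overlap + w[2] * deformation

def dataset_difficulty(genuine_pairs):
    """genuine_pairs: iterable of (q1, q2, common_area, deformation) tuples."""
    return float(np.mean([pair_difficulty(*p) for p in genuine_pairs]))
```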
Semi-Supervised Palmprint Recognition Based on Similarity Projection Analysis
Pub Date: 2011-12-05 | DOI: 10.1109/ICHB.2011.6094311
Qian Liu, Xiaoyuan Jing, Li Li, Mingxiao Huang, Sheng Li, Yong-Fang Yao
Similarity measures, such as the Euclidean and Mahalanobis distances, are among the most widely used measures in pattern recognition. Semi-supervised learning is an effective technique for feature extraction because it can make full use of unlabeled samples for training. In this paper, we incorporate similarity into semi-supervised learning and propose a novel feature extraction approach, named semi-supervised similarity projection analysis (SSP), for palmprint recognition. SSP projects the original samples from a high-dimensional space to a low-dimensional subspace in a semi-supervised manner. It preserves the similarity between intra-class samples and the dissimilarity between inter-class samples, while simultaneously maintaining the global dissimilarity among both labeled and unlabeled samples. Experimental results on the HK PolyU palmprint image database demonstrate that the proposed approach outperforms several representative unsupervised, supervised, and semi-supervised subspace learning methods.
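As a heavily hedged sketch of this family of methods, the code below builds a projection that balances between-class scatter plus the total scatter of labeled and unlabeled samples against within-class scatter via a generalized eigenproblem; the actual SSP objective in the paper may differ, and this only illustrates the shared machinery.

```python
# A generic semi-supervised projection sketch (not the paper's exact SSP):
# maximize between-class scatter plus a total-scatter term over all samples,
# relative to within-class scatter, by solving a generalized eigenproblem.
import numpy as np
from scipy.linalg import eigh

def semi_supervised_projection(X_lab, y, X_unlab, n_dims, beta=0.1):
    """X_lab: (n_l, d) labeled data, y: labels, X_unlab: (n_u, d) unlabeled data."""
    d = X_lab.shape[1]
    Sw = np.zeros((d, d))
    Sb = np.zeros((d, d))
    mu = X_lab.mean(axis=0)
    for c in np.unique(y):
        Xc = X_lab[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)                 # within-class scatter
        Sb += len(Xc) * np.outer(mc - mu, mc - mu)    # between-class scatter
    X_all = np.vstack([X_lab, X_unlab])
    Xc_all = X_all - X_all.mean(axis=0)
    St = Xc_all.T @ Xc_all                            # total scatter (all samples)
    A = Sb + beta * St
    B = Sw + 1e-6 * np.eye(d)                         # regularize within-class scatter
    vals, vecs = eigh(A, B)                           # generalized eigenproblem
    return vecs[:, np.argsort(vals)[::-1][:n_dims]]   # top-n_dims projection matrix
```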