An efficient technique for image contrast enhancement using artificial bee colony
Pub Date: 2015-03-23 | DOI: 10.1109/ISBA.2015.7126363
Piyush Joshi, S. Prakash
Contrast enhancement is a significant phase in image processing for improving the visual and informational quality of a degraded image. We propose an enhancement technique for poor-contrast images that utilizes a modified Artificial Bee Colony (ABC) technique. This paper makes the following two major contributions. First, Direction Constraints (DC) are incorporated into ABC so that the artificial bees move in the right direction, obtaining better solutions while reducing computational time; this mimics natural bees, which use their memory to locate food sources. Second, Contrast-based Quality Estimation (CQE) is used as the objective function of ABC. Experimental results show the effectiveness of the proposed technique.
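As a rough illustration of how a bee-colony search can drive contrast enhancement, the sketch below optimizes the two parameters of a sigmoid transfer function with a standard ABC loop, using image standard deviation as a stand-in objective. The paper's Direction Constraints and CQE objective are not reproduced here, the onlooker-bee phase is omitted, and all function names and bounds are illustrative.

```python
# Minimal Artificial Bee Colony (ABC) sketch for contrast enhancement.
# Assumptions: sigmoid transfer with parameters (alpha, beta), image standard
# deviation as a crude contrast objective; DC/CQE and onlooker bees omitted.
import numpy as np

def enhance(img, alpha, beta):
    """Sigmoid-like mapping of a grayscale image scaled to [0, 1]."""
    return 1.0 / (1.0 + np.exp(-alpha * (img - beta)))

def contrast_score(img):
    return img.std()                      # crude proxy for contrast quality

def abc_enhance(img, n_bees=10, n_iter=50, limit=5, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.array([1.0, 0.0]), np.array([15.0, 1.0])   # parameter bounds
    food = rng.uniform(lo, hi, size=(n_bees, 2))           # food sources
    fit = np.array([contrast_score(enhance(img, *f)) for f in food])
    trials = np.zeros(n_bees)
    for _ in range(n_iter):
        for i in range(n_bees):                            # employed bees
            k = rng.integers(n_bees)
            phi = rng.uniform(-1, 1, 2)
            cand = np.clip(food[i] + phi * (food[i] - food[k]), lo, hi)
            s = contrast_score(enhance(img, *cand))
            if s > fit[i]:
                food[i], fit[i], trials[i] = cand, s, 0
            else:
                trials[i] += 1
        for i in np.where(trials > limit)[0]:              # scout bees
            food[i] = rng.uniform(lo, hi)
            fit[i] = contrast_score(enhance(img, *food[i]))
            trials[i] = 0
    return enhance(img, *food[np.argmax(fit)])
```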
{"title":"An efficient technique for image contrast enhancement using artificial bee colony","authors":"Piyush Joshi, S. Prakash","doi":"10.1109/ISBA.2015.7126363","DOIUrl":"https://doi.org/10.1109/ISBA.2015.7126363","url":null,"abstract":"Contrast enhancement is a significant phase in image processing for improving visual and informational quality of a degraded image. We propose an enhancement technique for poor contrast images which utilizes modified Artificial Bee Colony (ABC) technique. This paper includes following two major contributions. First, Direction Constraints (DC) has been associated with ABC so that artificial bees can move in right direction to obtain better solution and reduce computational time. This is similar to natural bees which use their memory to find out food sources. Second, Contrast based Quality Estimation (CQE) is used as an objective function of ABC. Experimental results show the effectiveness of the proposed technique.","PeriodicalId":398910,"journal":{"name":"IEEE International Conference on Identity, Security and Behavior Analysis (ISBA 2015)","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114889330","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Effect of data size on performance of free-text keystroke authentication
Pub Date: 2015-03-23 | DOI: 10.1109/ISBA.2015.7126361
Jiaju Huang, Daqing Hou, S. Schuckers, Zhenhao Hou
Free-text keystroke authentication has been demonstrated to be a promising behavioral biometric. But unlike physiological traits such as fingerprints, in free-text keystroke authentication there is no natural definition of what constitutes a sample. It remains an open problem how much keystroke data is necessary to achieve acceptable authentication performance. Using public datasets and two existing algorithms, we conduct two experiments to investigate the effect of reference profile size and test sample size on the False Alarm Rate (FAR) and the Imposter Pass Rate (IPR). We find that (1) larger reference profiles drive down both IPR and FAR, provided that the test samples are large enough, and (2) larger test samples have no obvious effect on IPR, regardless of the reference profile size. We discuss the practical implications of our findings.
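As a small illustrative aid (not the authors' code), the sketch below shows how FAR and IPR, as defined in this paper (FAR: genuine samples wrongly rejected; IPR: imposter samples wrongly accepted), could be computed from similarity scores at a decision threshold; the score distributions in the usage example are synthetic.

```python
# Hedged sketch of FAR/IPR computation from similarity scores.
import numpy as np

def far_ipr(genuine_scores, imposter_scores, threshold):
    genuine = np.asarray(genuine_scores)
    imposter = np.asarray(imposter_scores)
    far = np.mean(genuine < threshold)    # genuine user raises a false alarm
    ipr = np.mean(imposter >= threshold)  # imposter passes
    return far, ipr

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    gen = rng.normal(0.7, 0.1, 1000)      # toy genuine similarity scores
    imp = rng.normal(0.4, 0.1, 1000)      # toy imposter similarity scores
    for t in np.linspace(0.3, 0.8, 6):    # sweep the decision threshold
        far, ipr = far_ipr(gen, imp, t)
        print(f"threshold={t:.2f}  FAR={far:.3f}  IPR={ipr:.3f}")
```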
{"title":"Effect of data size on performance of free-text keystroke authentication","authors":"Jiaju Huang, Daqing Hou, S. Schuckers, Zhenhao Hou","doi":"10.1109/ISBA.2015.7126361","DOIUrl":"https://doi.org/10.1109/ISBA.2015.7126361","url":null,"abstract":"Free-text keystroke authentication has been demonstrated to be a promising behavioral biometric. But unlike physiological traits such as fingerprints, in free-text keystroke authentication, there is no natural way to identify what makes a sample. It remains an open problem as to how much keystroke data are necessary for achieving acceptable authentication performance. Using public datasets and two existing algorithms, we conduct two experiments to investigate the effect of the reference profile size and test sample size on False Alarm Rate (FAR) and Imposter Pass Rate (IPR). We find that (1) larger reference profiles will drive down both IPR and FAR values, provided that the test samples are large enough, and (2) larger test samples have no obvious effect on IPR, regardless of the reference profile size. We discuss the practical implication of our findings.","PeriodicalId":398910,"journal":{"name":"IEEE International Conference on Identity, Security and Behavior Analysis (ISBA 2015)","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114440384","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A two-stage estimation method for depth estimation of facial landmarks
Pub Date: 2015-03-23 | DOI: 10.1109/ISBA.2015.7126355
Xun Gong, Zehua Fu, Xinxin Li, Lin Feng
To address the problem of 3D face modeling from a set of landmarks in images, the traditional feature-based morphable model uses face class-specific information and makes direct use of these 2D points to infer a dense 3D face surface. However, the unknown depth of the landmarks degrades accuracy considerably. A promising solution is to predict the depth of the landmarks first. Based on this idea, a two-stage estimation method is proposed to compute the depth values of landmarks from two images. The estimated 3D landmarks are then fed into a deformation algorithm to produce a precise dense 3D facial shape. Test results on synthesized images with known ground truth show that the proposed two-stage estimation method obtains landmark depth both effectively and efficiently, and that reconstruction accuracy is greatly enhanced by the estimated 3D landmarks. Reconstruction results on real-world photos are quite realistic.
{"title":"A two-stage estimation method for depth estimation of facial landmarks","authors":"Xun Gong, Zehua Fu, Xinxin Li, Lin Feng","doi":"10.1109/ISBA.2015.7126355","DOIUrl":"https://doi.org/10.1109/ISBA.2015.7126355","url":null,"abstract":"To address the problem of 3D face modeling based on a set of landmarks on images, the traditional feature-based morphable model, using face class-specific information, makes direct use of these 2D points to infer a dense 3D face surface. However, the unknown depth of landmarks degrades accuracy considerably. A promising solution is to predict the depth of landmarks at first. Bases on this idea, a two-stage estimation method is proposed to compute the depth value of landmarks from two images. And then, the estimated 3D landmarks are applied to a deformation algorithm to make a precise 3D dense facial shape. Test results on synthesized images with known ground-truth show that the proposed two-stage estimation method can obtain landmarks' depth both effectively and efficiently, and further that the reconstructed accuracy is greatly enhanced with the estimated 3D landmarks. Reconstruction results of real-world photos are rather realistic.","PeriodicalId":398910,"journal":{"name":"IEEE International Conference on Identity, Security and Behavior Analysis (ISBA 2015)","volume":"231 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126252624","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Towards EEG biometrics: pattern matching approaches for user identification
Pub Date: 2015-03-23 | DOI: 10.1109/ISBA.2015.7126357
Qiong Gui, Zhanpeng Jin, Maria V. Ruiz-Blondet, Sarah Laszlo, Wenyao Xu
EEG brainwaves have recently emerged as a promising biometric for individual identification, since these signals are confidential, sensitive, and hard to steal or replicate. In this study, we propose a new framework for individual identification based on stimuli-driven, non-volitional brain responses. The non-volitional mechanism provides an even more secure setting, in which subjects are not aware of, and thus cannot manipulate, their brain activity. We present our preliminary investigations based on two pattern matching approaches: Euclidean Distance (ED) and Dynamic Time Warping (DTW). We investigate the performance of the proposed methods using four different visual stimuli and the potential impact of four different EEG electrode channels. Experimental results show that the Oz channel provides the best identification accuracy for both the ED and DTW methods, and that the stimuli of illegal strings and words seem to trigger more distinguishable brain responses. For the ED method, the accuracy of identifying 30 subjects reaches over 80%, better than the best accuracy of about 68% achieved by the DTW method. Our study lays a foundation for future investigation of brainwave-based biometric approaches.
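For concreteness, a minimal sketch of the two matchers named above is given below, assuming each brain response is a one-dimensional numeric sequence; preprocessing, channel selection, and any warping-window constraints are omitted, and the identification rule is a simple nearest-template decision.

```python
# Euclidean Distance (ED) and classic dynamic-programming DTW matchers.
import numpy as np

def euclidean_distance(a, b):
    a, b = np.asarray(a, float), np.asarray(b, float)
    return np.sqrt(np.sum((a - b) ** 2))

def dtw_distance(a, b):
    """Unconstrained DTW over two 1-D sequences."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def identify(probe, gallery, metric=dtw_distance):
    """gallery: dict subject_id -> template sequence; returns the closest id."""
    return min(gallery, key=lambda sid: metric(probe, gallery[sid]))
```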
{"title":"Towards EEG biometrics: pattern matching approaches for user identification","authors":"Qiong Gui, Zhanpeng Jin, Maria V. Ruiz-Blondet, Sarah Laszlo, Wenyao Xu","doi":"10.1109/ISBA.2015.7126357","DOIUrl":"https://doi.org/10.1109/ISBA.2015.7126357","url":null,"abstract":"EEG brainwaves have recently emerged as a promising biometric that can be used for individual identification, since those signals are confidential, sensitive, and hard to steal and replicate. In this study, we propose a new stimuli-driven, non-volitional brain responses based framework towards individual identification. The non-volitional mechanism provides an even more secure way in which the subjects are not aware of and thus can not manipulate their brain activities. We present our preliminary investigations based on two pattern matching approaches: Euclidean Distance (ED) and Dynamic Time Warping (DTW). We investigate the performance of our proposed methods using four different visual stimuli and the potential impacts from four different EEG electrode channels. Experimental results show that, the Oz channel provides the best identification accuracy for both ED and DTW methods, and the stimuli of illegal strings and words seem to trigger more distinguishable brain responses. For ED method, the accuracy of identifying 30 subjects could reach over 80%, which is better than the best accuracy of about 68% that can be achieved by DTW method. Our study lays a foundation for future investigation of brainwave-based biometric approaches.","PeriodicalId":398910,"journal":{"name":"IEEE International Conference on Identity, Security and Behavior Analysis (ISBA 2015)","volume":"2007 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125575074","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cardiac radar for biometric identification using nearest neighbour of continuous wavelet transform peaks
Pub Date: 2015-03-23 | DOI: 10.1109/ISBA.2015.7126356
D. Rissacher, D. Galy
This work explores the use of cardiac data acquired by a 2.4 GHz radar system as a potential biometric identification tool. Monostatic and bistatic systems are used to record data from human subjects over two visits. Cardiac data are extracted from the radar recordings and an ensemble average is computed using ECG as a time reference. The Continuous Wavelet Transform is then computed to provide a time-frequency analysis of the average radar cardiac cycle, and a nearest-neighbor technique is applied for identification. The results suggest that a cardiac radar system has some promise as a biometric identification technology, currently producing a Rank-1 accuracy of 19% and a Rank-5 accuracy of 42% over 26 subjects.
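A hedged sketch of this pipeline is shown below, assuming PyWavelets is available and using a Morlet wavelet with a flattened magnitude map as the feature vector; the wavelet, scale range, and Euclidean distance are illustrative choices, not necessarily those of the paper.

```python
# CWT time-frequency features of an ensemble-averaged cardiac cycle,
# matched with a rank-ordered nearest-neighbour rule.
import numpy as np
import pywt

def cwt_features(avg_cycle, scales=np.arange(1, 33)):
    coefs, _ = pywt.cwt(avg_cycle, scales, 'morl')   # (scales, time) map
    return np.abs(coefs).ravel()                     # magnitude map as features

def rank_k_match(probe_cycle, gallery, k=5):
    """gallery: dict subject_id -> averaged cardiac cycle of the same length."""
    probe = cwt_features(probe_cycle)
    dists = {sid: np.linalg.norm(probe - cwt_features(cyc))
             for sid, cyc in gallery.items()}
    return sorted(dists, key=dists.get)[:k]          # rank-ordered candidates
```

Rank-1 and Rank-5 accuracy then follow from checking whether the true identity appears first, or anywhere, in the returned candidate list.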
{"title":"Cardiac radar for biometric identification using nearest neighbour of continuous wavelet transform peaks","authors":"D. Rissacher, D. Galy","doi":"10.1109/ISBA.2015.7126356","DOIUrl":"https://doi.org/10.1109/ISBA.2015.7126356","url":null,"abstract":"This work explores the use of cardiac data acquired by a 2.4 GHz radar system as a potential biometric identification tool. Monostatic and bistatic systems are used to record data from human subjects over two visits. Cardiac data is extracted from the radar recordings and an ensemble average is computed using ECG as a time reference. The Continuous Wavelet Transform is then computed to provide time-frequency analysis of the average radar cardiac cycle and a nearest neighbor technique is applied to demonstrate that a cardiac radar system has some promise as a biometric identification technology currently producing Rank-1 accuracy of 19% and Rank-5 accuracy of 42% over 26 subjects.","PeriodicalId":398910,"journal":{"name":"IEEE International Conference on Identity, Security and Behavior Analysis (ISBA 2015)","volume":"66 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128930990","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Efficient blind image deblurring method for palm print images
Pub Date: 2015-03-23 | DOI: 10.1109/ISBA.2015.7126364
M. S. Shakeel, Wenxiong Kang
This paper introduces an efficient approach to blind deblurring of palm print images suffering from severe motion blur. First, an improved Hough transform method is proposed to accurately detect the blur angle and blur length of a palm print image. The blurred image is analyzed in the Fourier domain, which contains important information about the blur orientation. After the blur parameters are detected, an improved augmented Lagrangian method is proposed that utilizes the point spread function constructed from these parameters to deblur the image. The deconvolution problem is divided into several sub-problems, which are then solved iteratively using the alternating direction method. The proposed method produces a deblurred image free of ringing artifacts. Its main application is in biometric systems in which the camera captures a blurred image because of the user's hand motion.
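The sketch below illustrates the blur-parameter stage under simplifying assumptions: the blur angle is taken from the dominant line that a Hough transform finds in the thresholded log-magnitude spectrum, a linear-motion PSF is built from an assumed length, and a plain Wiener filter stands in for the paper's improved augmented Lagrangian deconvolution. Threshold values and PSF size are placeholders.

```python
# Blur-angle estimation in the Fourier domain + motion PSF + Wiener stand-in.
import cv2
import numpy as np

def estimate_blur_angle(gray):
    """Angle (degrees) of the dominant line in the log-magnitude spectrum."""
    spec = np.fft.fftshift(np.abs(np.fft.fft2(gray)))
    logspec = np.log1p(spec)
    norm = cv2.normalize(logspec, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    _, binary = cv2.threshold(norm, 200, 255, cv2.THRESH_BINARY)
    lines = cv2.HoughLines(binary, 1, np.pi / 180, threshold=100)
    if lines is None:
        return 0.0
    return float(np.degrees(lines[0][0][1]))        # theta of the strongest line

def motion_psf(length, angle_deg, size=31):
    """Linear motion-blur point spread function."""
    psf = np.zeros((size, size), np.float32)
    c = size // 2
    psf[c, c - length // 2: c + length // 2 + 1] = 1.0
    rot = cv2.getRotationMatrix2D((c, c), angle_deg, 1.0)
    psf = cv2.warpAffine(psf, rot, (size, size))
    return psf / psf.sum()

def wiener_deblur(gray, psf, k=0.01):
    """Frequency-domain Wiener deconvolution (stand-in for the ALM step)."""
    G = np.fft.fft2(gray)
    H = np.fft.fft2(psf, s=gray.shape)
    W = np.conj(H) / (np.abs(H) ** 2 + k)
    return np.real(np.fft.ifft2(G * W))
```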
{"title":"Efficient blind image deblurring method for palm print images","authors":"M. S. Shakeel, Wenxiong Kang","doi":"10.1109/ISBA.2015.7126364","DOIUrl":"https://doi.org/10.1109/ISBA.2015.7126364","url":null,"abstract":"This paper introduces an efficient approach towards blind deblurring of palm print images suffered from severe motion blur. First an improved Hough transform method is proposed to detect the blur angle and length of palm print image accurately. Analysis of blurred image is performed in Fourier domain which contains important information about the blur orientation of an image. After detecting the blur parameters successfully an improved augmented lagrangian method is proposed that utilizes the point spread function constructed from blur parameters to de blur the image. Deconvolution algorithm is first divided into various sub problems which are then solved iteratively to find their corresponding solutions by using alternating direction method. This proposed method provides de blurred image which is free of ringing artifacts. Its main application is in biometric systems in which camera captured blurred image because of user's hand motion.","PeriodicalId":398910,"journal":{"name":"IEEE International Conference on Identity, Security and Behavior Analysis (ISBA 2015)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116485612","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Tattoo recognition technology - challenge (Tatt-C): an open tattoo database for developing tattoo recognition research
Pub Date: 2015-03-23 | DOI: 10.1109/ISBA.2015.7126369
M. Ngan, P. Grother
Tattoos have been used for many years to assist law enforcement in investigations leading to the identification of criminals and victims. A tattoo is an elective biometric trait that could contain more discriminative information to support person identification than traditional soft biometrics such as age, gender and race. While some research has been done in the area of image-based tattoo detection and retrieval, it is not a mature domain. There are no common datasets to evaluate and develop operationally-relevant tattoo recognition applications. To address this shortcoming, the NIST Tattoo Recognition Technology Challenge (Tatt-C) database was developed as an initial tattoo research corpus that addresses use cases representative of operational scenarios. The Tatt-C database represents an initial attempt to provide a set of ground-truthed tattoo images focused on, but not limited to, five primary use cases. This paper describes the details of the database along with the experimental protocols and test cases that should be followed, which will enable consistent performance comparison of tattoo recognition methods.
{"title":"Tattoo recognition technology - challenge (Tatt-C): an open tattoo database for developing tattoo recognition research","authors":"M. Ngan, P. Grother","doi":"10.1109/ISBA.2015.7126369","DOIUrl":"https://doi.org/10.1109/ISBA.2015.7126369","url":null,"abstract":"Tattoos have been used for many years to assist law enforcement in investigations leading to the identification of criminals and victims. A tattoo is an elective biometric trait that could contain more discriminative information to support person identification than traditional soft biometrics such as age, gender and race. While some research has been done in the area of image-based tattoo detection and retrieval, it is not a mature domain. There are no common datasets to evaluate and develop operationally-relevant tattoo recognition applications. To address this shortcoming, the NIST Tattoo Recognition Technology Challenge (Tatt-C) database was developed as an initial tattoo research corpus that addresses use cases representative of operational scenarios. The Tatt-C database represents an initial attempt to provide a set of ground-truthed tattoo images focused on, but not limited to, five primary use cases. This paper describes the details of the database along with the experimental protocols and test cases that should be followed, which will enable consistent performance comparison of tattoo recognition methods.","PeriodicalId":398910,"journal":{"name":"IEEE International Conference on Identity, Security and Behavior Analysis (ISBA 2015)","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133965761","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Locality Preserving Discriminant Projection
Pub Date: 2015-03-23 | DOI: 10.1109/ISBA.2015.7126365
G. Shikkenawis, S. Mitra
The face is the most powerful biometric as far as the human recognition system is concerned, which is not yet the case for machine vision. Face recognition by machines remains incomplete due to adverse, unconstrained environments. Among the many approaches explored in the past few decades, subspace-based methods have appeared to be more accurate and robust. In this work, a new subspace-based method is developed that preserves the local geometry of the data points, here face images. In particular, it keeps neighboring points of the same class close to each other and points of different classes far apart in the subspace. The first part can be seen as a variant of Locality Preserving Projection (LPP), and the combination of the two parts is termed Locality Preserving Discriminant Projection (LPDP). The performance of the proposed subspace-based approach is compared with several contemporary approaches on benchmark face recognition databases; the proposed method is found to perform significantly better.
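A minimal sketch of a projection in this spirit is given below: same-class neighbors are pulled together and different-class neighbors pushed apart via two graph Laplacians and a generalized eigenproblem. The exact affinity weights and objective of the paper's LPDP may differ; this is a generic locality-preserving discriminant formulation for illustration only.

```python
# Locality-preserving discriminant projection sketch (illustrative weights).
import numpy as np
from scipy.linalg import eigh
from scipy.spatial.distance import cdist

def lpdp(X, y, n_components=30, k=5):
    """X: (n_samples, n_features) face vectors, y: class labels."""
    n = X.shape[0]
    knn = np.argsort(cdist(X, X), axis=1)[:, 1:k + 1]   # k nearest neighbours
    Ww = np.zeros((n, n)); Wb = np.zeros((n, n))
    for i in range(n):
        for j in knn[i]:
            if y[i] == y[j]:
                Ww[i, j] = Ww[j, i] = 1.0                # same-class neighbour
            else:
                Wb[i, j] = Wb[j, i] = 1.0                # different-class neighbour
    Lw = np.diag(Ww.sum(1)) - Ww                         # graph Laplacians
    Lb = np.diag(Wb.sum(1)) - Wb
    A = X.T @ Lb @ X                                     # spread different classes
    B = X.T @ Lw @ X + 1e-6 * np.eye(X.shape[1])         # keep same class compact
    vals, vecs = eigh(A, B)                              # generalized eigenproblem
    return vecs[:, ::-1][:, :n_components]               # top eigenvectors = projection
```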
{"title":"Locality Preserving Discriminant Projection","authors":"G. Shikkenawis, S. Mitra","doi":"10.1109/ISBA.2015.7126365","DOIUrl":"https://doi.org/10.1109/ISBA.2015.7126365","url":null,"abstract":"Face is the most powerful biometric as far as human recognition system is concerned which is not the case for machine vision. Face recognition by machine is yet incomplete due to adverse, unconstrained environment. Out of several attempts made in past few decades, subspace based methods appeared to be more accurate and robust. In the present proposal, a new subspace based method is developed. It preserves the local geometry of data points, here face images. In particular, it keeps the neighboring points which are from the same class close to each other and those from different classes far apart in the subspace. The first part can be seen as a variant of locality preserving projection (LPP) and the combination of both the parts is mentioned as locality preserving discriminant projection (LPDP). The performance of the proposed subspace based approach is compared with a few other contemporary approaches on some benchmark databases for face recognition. The current method seems to perform significantly better.","PeriodicalId":398910,"journal":{"name":"IEEE International Conference on Identity, Security and Behavior Analysis (ISBA 2015)","volume":"116 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124329501","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Orientation invariant gait matching algorithm based on the Kabsch alignment
Pub Date: 2015-03-23 | DOI: 10.1109/ISBA.2015.7126347
R. Subramanian, Sudeep Sarkar, M. Labrador, K. Contino, Christopher Eggert, O. Javed, Jiejie Zhu, Hui Cheng
Accelerometer and gyroscope sensors in smartphones capture the dynamics of human gait, which can be matched to derive identity authentication measures for the person carrying the phone. Any such matching method has to account for the reality that the phone may be placed at uncontrolled orientations with respect to the human body. In this paper, we present a novel orientation-invariant gait matching algorithm based on the Kabsch alignment. The algorithm consists of simple, intuitive, yet robust methods for cycle splitting, orientation alignment, and gait signal comparison. We demonstrate the effectiveness of the method using a dataset from 101 subjects, with the phone placed in uncontrolled orientations in the holster and in the pocket, and collected on different days. We find that the orientation-invariant gait algorithm yields a significant reduction in error: the equal error rate drops by up to 9 percentage points, from 30.4% to 21.5%, when comparing data captured on different days. On the McGill dataset of 20 subjects, the other dataset with orientation variation, we find a more pronounced effect: the identification rate increases from 67.5% to 96.5%. On the OU-ISIR dataset, which contains data from 745 subjects, the equal error rate is as low as 6.3%, among the best reported in the literature.
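The core alignment step can be sketched as follows: a Kabsch rotation, computed via SVD, aligns a probe gait segment of 3-axis accelerometer samples to a reference segment before comparison, removing the effect of phone orientation. Cycle splitting and the paper's actual scoring are omitted, and the distance used here is only illustrative.

```python
# Kabsch alignment of 3-axis gait segments (N x 3 arrays, same length).
import numpy as np

def kabsch_rotation(P, Q):
    """Rotation R minimizing ||P @ R.T - Q||_F for paired N x 3 point sets."""
    Pc = P - P.mean(axis=0)
    Qc = Q - Q.mean(axis=0)
    H = Pc.T @ Qc                          # covariance of the paired samples
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])             # guard against reflections
    return Vt.T @ D @ U.T

def aligned_distance(probe, reference):
    """Rotate the probe onto the reference frame, then compare."""
    R = kabsch_rotation(probe, reference)
    return np.linalg.norm(probe @ R.T - reference) / len(reference)
```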
{"title":"Orientation invariant gait matching algorithm based on the Kabsch alignment","authors":"R. Subramanian, Sudeep Sarkar, M. Labrador, K. Contino, Christopher Eggert, O. Javed, Jiejie Zhu, Hui Cheng","doi":"10.1109/ISBA.2015.7126347","DOIUrl":"https://doi.org/10.1109/ISBA.2015.7126347","url":null,"abstract":"Accelerometer and gyroscope sensors in smart phones capture the dynamics of human gait that can be matched to arrive at identity authentication measures of the person carrying the phone. Any such matching method has to take into account the reality that the phone may be placed at uncontrolled orientations with respect to the human body. In this paper, we present a novel orientation invariant gaitmatching algorithm based on the Kabsch alignment. The algorithm consists of simple, intuitive, yet robust methods for cycle splitting, aligning orientation, and comparing gait signals. We demonstrate the effectiveness of the method using a dataset from 101 subjects, with the phone placed in uncontrolled orientations in the holster and in the pocket, and collected on different days. We find that the orientation invariant gait algorithm results in a significant reduction in error: up to a 9% reduction in equal error rate, from 30.4% to 21.5% when comparing data captured on different days. On the McGill dataset from 20 subjects, which is the other dataset with orientation variation, we find a more pronounced effect; the identification rate increased from 67.5% to 96.5%. On the OU-ISIR data, which has data from 745 subjects, the equal error rates are as low as 6.3%, which is among the best reported in the literature.","PeriodicalId":398910,"journal":{"name":"IEEE International Conference on Identity, Security and Behavior Analysis (ISBA 2015)","volume":"96 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116908883","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An efficient finger vein indexing scheme based on unsupervised clustering
Pub Date: 2015-03-23 | DOI: 10.1109/ISBA.2015.7126343
Ramachandra Raghavendra, Jayachander Surbiryala, C. Busch
Finger vein recognition has emerged as a robust biometric modality because of the unique vein patterns that can be captured in the near-infrared spectrum. Large-scale finger-vein-based biometric solutions require searching a probe finger vein sample against a large collection of gallery samples. To improve the reliability of finding the correct identity in a large-scale finger vein database, it is essential to introduce a finger vein indexing and retrieval scheme. In this work, we present a novel finger vein indexing and retrieval scheme based on unsupervised clustering. To this end, we investigate three different clustering schemes, namely K-means, K-medoids and Self-Organizing Map (SOM) neural networks. In addition, we present a new feature extraction scheme that extracts compact and discriminant features from finger vein images, which are better suited to building the indexing space. Extensive experiments are carried out on a large-scale heterogeneous finger vein database comprising 2850 unique identities, constructed from seven different publicly available finger vein databases. The results demonstrate the efficacy of the proposed scheme, with a pre-selection error rate of 7.58% (hit rate of 92.42%) at a penetration rate of 42.48%. Further, the multi-cluster search achieves a pre-selection error rate of 0.98% (hit rate of 99.02%) at a penetration rate of 52.88%.
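To make the indexing idea concrete, the sketch below enrolls gallery feature vectors into K-means clusters and searches a probe only within its nearest cluster(s), which is the multi-cluster search idea; the feature extraction, cluster count, and class name are placeholders rather than the paper's implementation.

```python
# Clustering-based indexing sketch: K-means enrolment + multi-cluster search.
import numpy as np
from sklearn.cluster import KMeans

class VeinIndex:
    def __init__(self, n_clusters=32):
        self.km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)

    def build(self, gallery_feats, gallery_ids):
        self.feats = np.asarray(gallery_feats)
        self.ids = np.asarray(gallery_ids)
        self.labels = self.km.fit_predict(self.feats)     # cluster assignment

    def search(self, probe_feat, n_probe_clusters=1):
        # distance of the probe to every cluster centre
        d = np.linalg.norm(self.km.cluster_centers_ - probe_feat, axis=1)
        keep = np.argsort(d)[:n_probe_clusters]            # multi-cluster search
        mask = np.isin(self.labels, keep)
        cand_feats, cand_ids = self.feats[mask], self.ids[mask]
        order = np.argsort(np.linalg.norm(cand_feats - probe_feat, axis=1))
        return cand_ids[order]   # candidate list; its size relative to the
                                 # gallery size gives the penetration rate
```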
{"title":"An efficient finger vein indexing scheme based on unsupervised clustering","authors":"Ramachandra Raghavendra, Jayachander Surbiryala, C. Busch","doi":"10.1109/ISBA.2015.7126343","DOIUrl":"https://doi.org/10.1109/ISBA.2015.7126343","url":null,"abstract":"Finger vein recognition has emerged as the robust biometric modality because of their unique vein pattern that can be captured using near infrared spectrum. The large scale finger vein based biometric solutions demand the need of searching the probe finger vein sample against the large collection of gallery samples. In order to improve the reliability in searching for the suitable identity in the large-scale finger vein database, it is essential to introduce the finger vein indexing and retrieval scheme. In this work, we present a novel finger vein indexing and retrieval scheme based on unsupervised clustering. To this extent we investigated three different clustering schemes namely K-means, K-medoids and Self Organizing Maps (SOM) neural networks. In addition, we also present a new feature extraction scheme to extract both compact and discriminant features from the finger vein images that are more suitable to build the indexing space. Extensive experiments are carried out on a large-scale heterogeneous finger vein database comprised of 2850 unique identities constructed using seven different publicly available finger vein databases. The obtained results demonstrated the efficacy of the proposed scheme with a pre-selection rate of 7.58% (hit rate of 92.42%) with a penetration rate of 42.48%. Further, the multi-cluster search demonstrated the performance with pre-selection error rate of 0.98% (hit rate of 99.02%) with a penetration rate of 52.88%.","PeriodicalId":398910,"journal":{"name":"IEEE International Conference on Identity, Security and Behavior Analysis (ISBA 2015)","volume":"119 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131702501","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}