Integrating Analytic and Appearance Attributes for Human Identification from ECG Signals
Yongjin Wang, K. Plataniotis, Dimitrios Hatzinakos
2006 Biometrics Symposium: Special Session on Research at the Biometric Consortium Conference
Pub Date: 2006-09-01 | DOI: 10.1109/BCC.2006.4341627

In this paper, we investigate the identification of human subjects from electrocardiogram (ECG) signals. We segment ECG records into individual heartbeats based on the localization of R-wave peaks. Two types of features, analytic and appearance features, are extracted to represent the characteristics of the heartbeat signals of different subjects. Feature selection is performed to identify the significant attributes, and the performance of different classification algorithms is compared. To better exploit the advantages of the two feature types, we propose two schemes for data fusion and classification. Our system achieves promising results, with a 100% correct human identification rate and 98.90% accuracy for heartbeat identification. The proposed framework reveals the potential of appearance-based analysis of ECG signals and demonstrates the advantage of a hierarchical architecture in pattern recognition problems.
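The abstract does not specify which R-peak detector is used, so as a minimal sketch of the segmentation step it describes, the following locates R peaks as thresholded local maxima and cuts a fixed window around each peak. The function names, window lengths, and threshold are illustrative assumptions run on a synthetic spike train, not the paper's parameters or data.

```python
import numpy as np

def locate_r_peaks(ecg, fs, threshold_ratio=0.6, refractory_s=0.25):
    """Find R-wave peaks as local maxima above a relative threshold,
    enforcing a refractory period between successive peaks."""
    threshold = threshold_ratio * np.max(ecg)
    min_gap = int(refractory_s * fs)
    peaks = []
    for i in range(1, len(ecg) - 1):
        if ecg[i] >= threshold and ecg[i] > ecg[i - 1] and ecg[i] >= ecg[i + 1]:
            if not peaks or i - peaks[-1] >= min_gap:
                peaks.append(i)
    return peaks

def segment_heartbeats(ecg, peaks, fs, before_s=0.25, after_s=0.45):
    """Cut a fixed window around each R peak; drop beats that run off the record."""
    b, a = int(before_s * fs), int(after_s * fs)
    return [ecg[p - b:p + a] for p in peaks if p - b >= 0 and p + a <= len(ecg)]

# Synthetic ECG-like signal: one sharp "R peak" per second at 360 Hz.
fs = 360
t = np.arange(0, 10, 1 / fs)
ecg = np.zeros_like(t)
ecg[np.arange(10) * fs + fs // 2] = 1.0
peaks = locate_r_peaks(ecg, fs)
beats = segment_heartbeats(ecg, peaks, fs)
print(len(peaks), len(beats))  # 10 10
```

Each segmented beat is then a fixed-length vector from which per-subject analytic or appearance features could be extracted.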
Minutiae-Based Structures for a Fuzzy Vault
J. Jeffers, A. Arakala
2006 Biometrics Symposium: Special Session on Research at the Biometric Consortium Conference
Pub Date: 2006-09-01 | DOI: 10.1109/BCC.2006.4341622

One vital application of biometrics is to supplement or replace passwords for secure authentication. Cryptographic schemes using passwords require exactly the same password at enrolment and verification to authenticate successfully. The inherent variation between samples of the same biometric makes it difficult to replace passwords directly with biometrics in a cryptographic scheme. The fuzzy vault is an innovative cryptographic construct that uses error correction techniques to compensate for biometric variation. Our research is directed toward methods of realizing the fuzzy vault for the fingerprint biometric using minutia points described in a translation- and rotation-invariant manner. We investigate three such minutia representation methods, study their robustness, and determine their suitability for incorporation in a fuzzy vault construct. We show that one of the three chosen structures shows promise for incorporation into a fuzzy vault scheme.
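The fuzzy vault construct mentioned above can be sketched in a few lines: the secret is the coefficient vector of a polynomial over a prime field, genuine (minutia-derived) points lie on the polynomial, chaff points do not, and unlocking interpolates the polynomial from enough matching points. This is a toy illustration, not the paper's construction: the tiny field, the `lock`/`unlock` names, and the hand-picked minutia values are our own, and the alignment and representation issues the paper actually studies are ignored.

```python
import random

P = 2**13 - 1  # small prime field (8191), for illustration only
random.seed(0)

def lock(secret_coeffs, minutiae, n_chaff=40):
    """Genuine points lie on the secret polynomial; chaff points do not."""
    poly = lambda x: sum(c * pow(x, i, P) for i, c in enumerate(secret_coeffs)) % P
    vault = [(m, poly(m)) for m in minutiae]
    used = set(minutiae)
    while len(vault) < len(minutiae) + n_chaff:
        x = random.randrange(1, P)
        if x in used:
            continue
        y = random.randrange(P)
        if y != poly(x):           # chaff must lie off the polynomial
            used.add(x)
            vault.append((x, y))
    random.shuffle(vault)
    return vault

def unlock(vault, query_minutiae, degree):
    """Keep vault points whose x-value matches a query minutia, then recover
    the polynomial by Lagrange interpolation (needs degree+1 matches)."""
    pts = [(x, y) for (x, y) in vault if x in set(query_minutiae)][:degree + 1]
    if len(pts) < degree + 1:
        return None
    coeffs = [0] * (degree + 1)
    for i, (xi, yi) in enumerate(pts):
        basis, denom = [1], 1       # expand Lagrange basis L_i to coefficients
        for j, (xj, _) in enumerate(pts):
            if i == j:
                continue
            denom = denom * (xi - xj) % P
            new = [0] * (len(basis) + 1)
            for k, b in enumerate(basis):   # multiply basis by (x - xj)
                new[k + 1] = (new[k + 1] + b) % P
                new[k] = (new[k] - b * xj) % P
            basis = new
        scale = yi * pow(denom, P - 2, P) % P   # yi / denom via Fermat inverse
        for k in range(len(basis)):
            coeffs[k] = (coeffs[k] + scale * basis[k]) % P
    return coeffs

secret = [123, 45, 6]                 # degree-2 polynomial hides the "key"
enrolled = [101, 202, 303, 404, 505]  # stand-in invariant minutia values
vault = lock(secret, enrolled)
print(unlock(vault, [101, 303, 505], degree=2))  # [123, 45, 6]
```

A query sharing at least degree+1 points with the enrolled set recovers the secret; a query matching mostly chaff does not, which is how biometric variation is tolerated without storing the template in the clear.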
Segmenting Non-Ideal Irises Using Geodesic Active Contours
A. Ross, Samir Shah
2006 Biometrics Symposium: Special Session on Research at the Biometric Consortium Conference
Pub Date: 2006-09-01 | DOI: 10.1109/BCC.2006.4341625

The richness and apparent stability of the iris texture make it a robust biometric trait for personal authentication. The performance of an automated iris recognition system is affected by the accuracy of the segmentation process used to isolate the iris from the other structures in its vicinity, viz., the sclera, pupil, eyelids and eyelashes. Most segmentation models in the literature assume that the pupillary, limbic and eyelid boundaries are circular or elliptical in shape, and hence focus on determining the model parameters that best fit these hypotheses. In this paper, we describe a novel iris segmentation scheme that employs geodesic active contours to extract the iris from the surrounding structures. The proposed scheme elicits the iris texture in an iterative fashion, depending on both local and global conditions in the image. The performance of an iris recognition system based on multiple Gabor filters is observed to improve upon application of the proposed segmentation algorithm. Experimental results on the WVU and CASIA v1.0 iris databases indicate the efficacy of the proposed technique.
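The circular-boundary assumption that the authors contrast against can be illustrated with a Daugman-style radial-gradient search: fix a candidate centre, average the intensity around circles of increasing radius, and pick the radius where the average jumps the most. This is a sketch of the circular baseline model only, not of geodesic active contours; the synthetic image and all parameters are our own assumptions.

```python
import numpy as np

def circular_boundary_radius(img, cx, cy, r_min, r_max, n_angles=360):
    """At a fixed centre, pick the radius where the angle-averaged
    intensity jumps the most (maximum radial derivative)."""
    thetas = np.linspace(0, 2 * np.pi, n_angles, endpoint=False)
    means = []
    for r in range(r_min, r_max):
        xs = np.rint(cx + r * np.cos(thetas)).astype(int)
        ys = np.rint(cy + r * np.sin(thetas)).astype(int)
        means.append(img[ys, xs].mean())
    grads = np.diff(means)
    return r_min + 1 + int(np.argmax(grads))

# Synthetic "pupil": dark disc of radius 30 on a bright background.
size, cx, cy, true_r = 200, 100, 100, 30
yy, xx = np.mgrid[0:size, 0:size]
img = np.where((xx - cx) ** 2 + (yy - cy) ** 2 <= true_r ** 2, 0.1, 0.9)
print(circular_boundary_radius(img, cx, cy, 10, 60))  # close to 30 (within ±1)
```

On non-ideal irises (off-axis gaze, occluding eyelids), this circular fit breaks down, which motivates the contour-evolution approach the paper proposes.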
Multi-Level Liveness Verification for Face-Voice Biometric Authentication
G. Chetty, M. Wagner
2006 Biometrics Symposium: Special Session on Research at the Biometric Consortium Conference
Pub Date: 2006-09-01 | DOI: 10.1109/BCC.2006.4341615

In this paper we present the details of the multi-level liveness verification (MLLV) framework proposed for realizing a secure face-voice biometric authentication system that can thwart different types of audio and video replay attacks. The proposed MLLV framework, based on novel feature extraction and multimodal fusion approaches, uncovers the static and dynamic relationships between voice and face information in speaking faces and allows multiple levels of security. Experiments with three different speaking-face corpora, VidTIMIT, UCBN and AVOZES, show a significant improvement in system performance, in terms of DET curves and equal error rates (EER), for different types of replay and synthesis attacks.
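The metrics reported above, DET curves and EER, are computed from genuine and impostor score samples by sweeping a decision threshold. A minimal sketch on synthetic normal score distributions (the function names and distributions are illustrative, not the paper's data):

```python
import numpy as np

def det_points(genuine, impostor, n_thresholds=1000):
    """Sweep a threshold and record (FAR, FRR) pairs — the DET curve coordinates."""
    lo = min(np.min(genuine), np.min(impostor))
    hi = max(np.max(genuine), np.max(impostor))
    pairs = []
    for t in np.linspace(lo, hi, n_thresholds):
        far = np.mean(impostor >= t)   # impostors wrongly accepted
        frr = np.mean(genuine < t)     # genuine users wrongly rejected
        pairs.append((far, frr))
    return pairs

def equal_error_rate(genuine, impostor):
    """EER: the operating point where FAR and FRR are (nearly) equal."""
    pairs = det_points(genuine, impostor)
    return min((abs(far - frr), (far + frr) / 2) for far, frr in pairs)[1]

rng = np.random.default_rng(0)
genuine = rng.normal(2.0, 1.0, 5000)   # matching pairs score higher
impostor = rng.normal(0.0, 1.0, 5000)
print(round(equal_error_rate(genuine, impostor), 3))  # ≈ 0.16 for these distributions
```

A lower EER, or a DET curve pulled toward the origin, indicates better separation of live genuine attempts from replay or synthesis attacks.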
How Low Can You Go? Low Resolution Face Recognition Study Using Kernel Correlation Feature Analysis on the FRGCv2 dataset
R. Abiantun, M. Savvides, B. Kumar
2006 Biometrics Symposium: Special Session on Research at the Biometric Consortium Conference
Pub Date: 2006-09-01 | DOI: 10.1109/BCC.2006.4341638

In this paper we investigate the effect of image resolution on the kernel class-dependence feature analysis (KCFA) method using the face recognition grand challenge (FRGC) dataset. Good performance on low-resolution data is important for any face recognition system that must work with low-resolution imagery, such as surveillance footage. We show that KCFA works reliably even at very low resolutions on FRGC Experiment 4 under the one-to-one matching protocol, achieving greater than 70% verification rate (VR) at 0.1% false accept rate (FAR). We observe reasonable performance at resolutions as low as 16x16. Below this resolution, the performance of KCFA degrades significantly, but it still outperforms the PCA baseline algorithm with 12% VR at 0.1% FAR.
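The low-resolution conditions studied above can be simulated by block-averaging a higher-resolution face image down to, say, 16x16. This is a generic downsampling sketch under our own assumptions, not the paper's preprocessing pipeline:

```python
import numpy as np

def downsample(img, out_h, out_w):
    """Reduce resolution by averaging non-overlapping pixel blocks
    (dimensions must divide evenly in this sketch)."""
    h, w = img.shape
    assert h % out_h == 0 and w % out_w == 0
    return img.reshape(out_h, h // out_h, out_w, w // out_w).mean(axis=(1, 3))

face = np.arange(128 * 128, dtype=float).reshape(128, 128)  # stand-in "face image"
tiny = downsample(face, 16, 16)
print(tiny.shape)  # (16, 16)
```

Block averaging preserves the overall intensity while discarding high-frequency detail, which is why verification rates degrade gracefully down to some resolution and then fall sharply.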
Multispectral Fusion for Indoor and Outdoor Face Authentication
H. Chang, M. Yi, H. Harishwaran, B. Abidi, A. Koschan, M. Abidi
2006 Biometrics Symposium: Special Session on Research at the Biometric Consortium Conference
Pub Date: 2006-09-01 | DOI: 10.1109/BCC.2006.4341616

Face analysis via multispectral imaging is a relatively unexplored territory in face recognition research. The multispectral, multimodal and multi-illuminant IRIS-M3 database was acquired, indoors and outdoors, to promote research in this direction. Each data record in the database contains images spanning all bands of the visible spectrum and one thermal image, acquired under different illumination conditions. The spectral power distributions of the lighting sources and daylight conditions are also encoded in the database. Multispectral fused images show improved face recognition performance compared to visible monochromatic images. Galleries and probes were selected from the indoor and outdoor sections of the database to study the effects of data and decision fusion in the presence of lighting changes. Our experiments were validated by comparing the cumulative match characteristics of monochromatic probes against multispectral probes obtained via fusion by averaging, principal component analysis, wavelet analysis, illumination adjustment and decision-level fusion. We demonstrate that spectral bands, either individually or fused by different techniques, provide better face recognition results, with up to 78% improvement over conventional visible images.
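Two of the fusion operators named above, averaging and principal component analysis, reduce to a few lines when the band images are co-registered: average pixel-wise, or project each pixel's band vector onto the leading principal component. A sketch on random stand-in bands; the IRIS-M3 data, registration, and any preprocessing are not reproduced here.

```python
import numpy as np

def fuse_average(bands):
    """Pixel-wise average of co-registered spectral band images."""
    return np.mean(bands, axis=0)

def fuse_pca(bands):
    """Project each pixel's band vector onto the first principal component
    of the band covariance — effectively a data-driven weighted band average."""
    stack = np.stack([b.ravel() for b in bands])       # (n_bands, n_pixels)
    stack = stack - stack.mean(axis=1, keepdims=True)  # centre each band
    cov = np.cov(stack)                                # n_bands x n_bands
    vals, vecs = np.linalg.eigh(cov)                   # ascending eigenvalues
    w = vecs[:, -1]                                    # leading eigenvector
    return (w @ stack).reshape(bands[0].shape)

rng = np.random.default_rng(1)
bands = [rng.random((64, 64)) for _ in range(5)]       # stand-in spectral bands
print(fuse_average(bands).shape, fuse_pca(bands).shape)  # (64, 64) (64, 64)
```

PCA fusion concentrates the variance shared across bands into one image, which is one plausible reason fused probes can outperform any single monochromatic band under varying illumination.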