On the vulnerability of ECG verification to online presentation attacks
Pub Date: 2017-10-01 | DOI: 10.1109/BTAS.2017.8272692
Nima Karimian, D. Woodard, Domenic Forte
Electrocardiogram (ECG) has long been regarded as a biometric modality that is impractical to copy, clone, or spoof. However, it was recently shown that an ECG signal can be replayed from arbitrary waveform generators, computer sound cards, or off-the-shelf audio players. In this paper, we develop a novel presentation attack in which a short template of the victim's ECG is captured by an attacker and used to map the attacker's ECG onto the victim's, which can then be provided to the sensor using one of the above sources. Our approach involves exploiting ECG models, characterizing the differences between ECG signals, and developing mapping functions that transform any ECG into one that closely matches an authentic user's ECG. Our proposed approach, which can operate online or on-the-fly, is compared with a more ideal offline scenario where the attacker has more time and resources. In our experiments, the offline approach achieves average success rates of 97.43% and 94.17% for non-fiducial- and fiducial-based ECG authentication, respectively. In the online scenario, performance is degraded by 5.65% for non-fiducial-based authentication, but is nearly unaffected for fiducial-based authentication.
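As an editor's illustration of the kind of mapping function the abstract describes, the sketch below fits a per-beat time/amplitude transform between an attacker's and a victim's ECG beat. The function names and the simple linear gain/offset model are assumptions for illustration, not the authors' implementation.

```python
import numpy as np
from scipy.signal import resample

def fit_beat_mapping(attacker_beat, victim_beat):
    """Fit a simple time/amplitude mapping from one ECG beat to another.

    Illustrative stand-in for the paper's mapping functions: the
    attacker's beat is resampled to the victim's beat length (time
    normalization), then a least-squares gain/offset aligns amplitudes.
    """
    x = resample(attacker_beat, len(victim_beat))   # time normalization
    A = np.vstack([x, np.ones_like(x)]).T           # [x, 1] design matrix
    (gain, offset), *_ = np.linalg.lstsq(A, victim_beat, rcond=None)
    return gain, offset

def apply_beat_mapping(attacker_beat, target_len, gain, offset):
    """Warp a fresh attacker beat toward the victim's template."""
    x = resample(attacker_beat, target_len)
    return gain * x + offset

# Toy usage with synthetic beats (stand-ins for real ECG templates).
t = np.linspace(0, 1, 250)
attacker = np.sin(2 * np.pi * t) * np.exp(-((t - 0.4) ** 2) / 0.01)
victim = 1.6 * np.sin(2 * np.pi * t) * np.exp(-((t - 0.5) ** 2) / 0.008)
gain, offset = fit_beat_mapping(attacker, victim)
spoof = apply_beat_mapping(attacker, len(victim), gain, offset)
```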
{"title":"On the vulnerability of ECG verification to online presentation attacks","authors":"Nima Karimian, D. Woodard, Domenic Forte","doi":"10.1109/BTAS.2017.8272692","DOIUrl":"https://doi.org/10.1109/BTAS.2017.8272692","url":null,"abstract":"Electrocardiogram (ECG) has long been regarded as a biometric modality which is impractical to copy, clone, or spoof. However, it was recently shown that an ECG signal can be replayed from arbitrary waveform generators, computer sound cards, or off-the-shelf audio players. In this paper, we develop a novel presentation attack where a short template of the victim's ECG is captured by an attacker and used to map the attacker's ECG into the victim's, which can then be provided to the sensor using one of the above sources. Our approach involves exploiting ECG models, characterizing the differences between ECG signals, and developing mapping functions that transform any ECG into one that closely matches an authentic user's ECG. Our proposed approach, which can operate online or on-the-fly, is compared with a more ideal offline scenario where the attacker has more time and resources. In our experiments, the offline approach achieves average success rates of 97.43% and 94.17% for non-fiducial and fiducial based ECG authentication. In the online scenario, the performance is de-graded by 5.65% for non-fiducial based authentication, but is nearly unaffected for fiducial authentication.","PeriodicalId":372008,"journal":{"name":"2017 IEEE International Joint Conference on Biometrics (IJCB)","volume":" 8","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132075158","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Handwriting watcher: A mechanism for smartwatch-driven handwriting authentication
Pub Date: 2017-10-01 | DOI: 10.1109/BTAS.2017.8272701
Isaac Griswold-Steiner, Richard Matovu, Abdul Serwadda
Despite decades of research on automated handwriting authentication, no such application has yet broken into the mainstream. In this paper, we argue that the burgeoning wearables market holds the key to a practical handwriting authentication app. With potential applications in online education, standardized testing, and mobile banking, we present Handwriting Watcher, a mechanism that leverages a wrist-worn, sensor-enabled device to authenticate a user's free handwriting. Through experiments capturing a wide range of writing scenarios, we show Handwriting Watcher attains mean error rates as low as 6.56% across the population. Our work represents a promising step towards a market-ready, generalized handwriting authentication system.
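A hedged sketch of how a wrist-sensor authentication pipeline of this kind is commonly structured: windowed motion statistics feeding a classifier. The feature set and classifier here are standard baselines, not necessarily what Handwriting Watcher uses.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def window_features(signal, win=128, step=64):
    """Per-window statistics over a 3-axis wrist motion stream.

    `signal` is an (n_samples, 3) accelerometer array; mean/std/min/max
    per axis is a common baseline descriptor for motion biometrics.
    """
    feats = []
    for start in range(0, len(signal) - win + 1, step):
        w = signal[start:start + win]
        feats.append(np.concatenate([w.mean(0), w.std(0), w.min(0), w.max(0)]))
    return np.array(feats)

# Toy enrollment: genuine-user windows vs. impostor windows.
rng = np.random.default_rng(0)
genuine = window_features(rng.normal(0.0, 1.0, (4096, 3)))
impostor = window_features(rng.normal(0.3, 1.2, (4096, 3)))
X = np.vstack([genuine, impostor])
y = np.array([1] * len(genuine) + [0] * len(impostor))
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
```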
{"title":"Handwriting watcher: A mechanism for smartwatch-driven handwriting authentication","authors":"Isaac Griswold-Steiner, Richard Matovu, Abdul Serwadda","doi":"10.1109/BTAS.2017.8272701","DOIUrl":"https://doi.org/10.1109/BTAS.2017.8272701","url":null,"abstract":"Despite decades of research on automated handwriting authentication, there is yet to emerge an automated handwriting authentication application that breaks into the mainstream. In this paper, we argue that the burgeoning wearables market holds the key to a practical handwriting authentication app. With potential applications in online education, standardized testing and mobile banking, we present Handwriting Watcher, a mechanism which leverages a wrist-worn sensor-enabled device to authenticate a user's free handwriting. Through experiments capturing a wide range of writing scenarios, we show Handwriting Watcher attains mean error rates as low as 6.56% across the population. Our work represents a promising step towards a market-ready, generalized handwriting authentication system.","PeriodicalId":372008,"journal":{"name":"2017 IEEE International Joint Conference on Biometrics (IJCB)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128450675","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Full 3D touchless fingerprint recognition: Sensor, database and baseline performance
Pub Date: 2017-10-01 | DOI: 10.1109/BTAS.2017.8272702
Javier Galbally, G. Boström, Laurent Beslay
One of the fields that still remains largely unexplored in biometrics is 3D fingerprint recognition. This gap is mainly explained by the lack of scanners capable of acquiring accurate 3D spatial models of the fingerprint in a touchless, fast, reliable, and repeatable way. As such, full 3D fingerprint data with which to conduct research and advance this field is almost nonexistent. If such an acquisition process were possible, it could represent the beginning of a real paradigm shift in the way fingerprint recognition is performed. The present paper is a first promising step in addressing the challenge of 3D fingerprint acquisition and recognition. It presents a new full 3D touchless fingerprint scanner, a new database with 1,000 3D fingerprint models, a new segmentation method based on the additional spatial information provided by the models, and initial baseline verification results.
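To illustrate why the models' extra spatial information helps segmentation (the abstract's third contribution), here is a minimal depth-threshold sketch. The point-cloud format, the margin parameter, and the nearest-surface assumption are illustrative, not the paper's method.

```python
import numpy as np

def segment_by_depth(points, margin=5.0):
    """Separate finger points from background using depth alone.

    `points` is an (n, 3) array from a touchless 3D scan; the finger is
    assumed to be the surface nearest the sensor. This only illustrates
    how the extra spatial dimension simplifies segmentation compared
    with 2D intensity images.
    """
    z = points[:, 2]
    near = z.min()                 # closest surface ~ fingertip
    mask = z < near + margin       # keep everything within `margin` units
    return points[mask]

cloud = np.random.default_rng(1).uniform([0, 0, 40], [20, 20, 200], (10000, 3))
finger = segment_by_depth(cloud, margin=15.0)
```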
{"title":"Full 3D touchless fingerprint recognition: Sensor, database and baseline performance","authors":"Javier Galbally, G. Boström, Laurent Beslay","doi":"10.1109/BTAS.2017.8272702","DOIUrl":"https://doi.org/10.1109/BTAS.2017.8272702","url":null,"abstract":"One of the fields that still today remains largely unexplored in biometrics is 3D fingerprint recognition. This gap is mainly explained by the lack of scanners capable of acquiring on a touchless, fast, reliable and repeatable way, accurate fingerprint 3D spatial models. As such, full 3D fingerprint data with which to produce research and advance this field is almost nonexistent. If such acquisition process was possible, it could represent the beginning of a real paradigm shift in the way fingerprint recognition is performed. The present paper is a first promising step to address the fascinating challenge of 3D fingerprint acquisition and recognition. It presents a new full 3D touchless fingerprint scanner, a new database with 1,000 3D finger-print models, a new segmentation method based on the additional spatial information provided by the models, and initial baseline verification results.","PeriodicalId":372008,"journal":{"name":"2017 IEEE International Joint Conference on Biometrics (IJCB)","volume":"51 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117148819","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Cross-eyed 2017: Cross-spectral iris/periocular recognition competition
Pub Date: 2017-10-01 | DOI: 10.1109/BTAS.2017.8272762
Ana F. Sequeira, Lulu Chen, J. Ferryman, Peter Wild, F. Alonso-Fernandez, J. Bigün, K. Raja, Ramachandra Raghavendra, C. Busch, Tiago de Freitas Pereira, S. Marcel, S. S. Behera, Mahesh Gour, Vivek Kanhangad
This work presents the 2nd Cross-Spectrum Iris/Periocular Recognition Competition (Cross-Eyed 2017). The main goal of the competition is to promote and evaluate advances in cross-spectrum iris and periocular recognition. This second edition registered an increase in participation, with entrants ranging from academia to industry: five teams submitted twelve methods for the periocular task and five for the iris task. The benchmark dataset is an enlarged version of the dual-spectrum database containing both iris and periocular images synchronously captured from a distance in a realistic indoor environment. The evaluation was performed on an undisclosed test set. The methodology, tested algorithms, and obtained results are reported in this paper, identifying the remaining challenges and the path forward.
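Competitions of this kind are typically ranked with verification metrics such as the equal error rate. The sketch below shows a standard EER computation over genuine/impostor score sets; the exact operating points reported by the Cross-Eyed protocol may differ.

```python
import numpy as np

def eer(genuine, impostor):
    """Equal error rate for similarity scores (higher = more similar).

    Sweeps every observed score as a threshold and returns the point
    where false accept rate and false reject rate are closest.
    """
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    far = np.array([(impostor >= t).mean() for t in thresholds])  # false accepts
    frr = np.array([(genuine < t).mean() for t in thresholds])    # false rejects
    i = np.abs(far - frr).argmin()
    return (far[i] + frr[i]) / 2

rng = np.random.default_rng(2)
print(eer(rng.normal(0.7, 0.1, 1000), rng.normal(0.4, 0.1, 1000)))
```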
{"title":"Cross-eyed 2017: Cross-spectral iris/periocular recognition competition","authors":"Ana F. Sequeira, Lulu Chen, J. Ferryman, Peter Wild, F. Alonso-Fernandez, J. Bigün, K. Raja, Ramachandra Raghavendra, C. Busch, Tiago de Freitas Pereira, S. Marcel, S. S. Behera, Mahesh Gour, Vivek Kanhangad","doi":"10.1109/BTAS.2017.8272762","DOIUrl":"https://doi.org/10.1109/BTAS.2017.8272762","url":null,"abstract":"This work presents the 2nd Cross-Spectrum Iris/Periocular Recognition Competition (Cross-Eyed2017). The main goal of the competition is to promote and evaluate advances in cross-spectrum iris and periocular recognition. This second edition registered an increase in the participation numbers ranging from academia to industry: five teams submitted twelve methods for the periocular task and five for the iris task. The benchmark dataset is an enlarged version of the dual-spectrum database containing both iris and periocular images synchronously captured from a distance and within a realistic indoor environment. The evaluation was performed on an undisclosed test-set. Methodology, tested algorithms, and obtained results are reported in this paper identifying the remaining challenges in path forward.","PeriodicalId":372008,"journal":{"name":"2017 IEEE International Joint Conference on Biometrics (IJCB)","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115468939","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Multimodal biometric recognition for toddlers and pre-school children
Pub Date: 2017-10-01 | DOI: 10.1109/BTAS.2017.8272750
P. Basak, Saurabh De, Mallika Agarwal, Aakarsh Malhotra, Mayank Vatsa, Richa Singh
In many applications, such as law enforcement, attendance systems, and medical services, biometrics is utilized for identifying individuals. However, current systems generally do not enroll all possible age groups, particularly toddlers and pre-school children. This research is a first-of-its-kind attempt to prepare a multimodal biometric database for such potential users of biometric systems. In the proposed database, face, fingerprint, and iris modalities of over 100 children (age range of 18 months to 4 years) are captured in two different sessions, months apart. We also perform a benchmarking evaluation of existing tools and algorithms to establish baseline results for different unimodal and multimodal scenarios. Our experience and results suggest that while iris is highly accurate, it requires constant adult supervision to attain cooperation from children. On the other hand, face is the easiest modality to capture but yields very low verification performance. We assert that the availability of this database can instigate research in this important problem.
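For the multimodal scenarios mentioned above, a common baseline is score-level fusion. The sketch below shows min-max normalization followed by a weighted sum rule; the modality weights are illustrative, not the paper's tuned values.

```python
import numpy as np

def fuse_scores(scores, weights):
    """Weighted sum-rule fusion of per-modality similarity scores.

    `scores` maps modality name -> array of match scores for the same
    comparisons; each score set is min-max normalized before fusion.
    """
    fused = 0.0
    for name, w in weights.items():
        s = np.asarray(scores[name], dtype=float)
        s = (s - s.min()) / (s.max() - s.min() + 1e-12)  # min-max normalize
        fused = fused + w * s
    return fused

scores = {"face": np.array([0.2, 0.8, 0.5]),
          "fingerprint": np.array([30.0, 80.0, 55.0]),
          "iris": np.array([0.35, 0.90, 0.60])}
fused = fuse_scores(scores, {"face": 0.2, "fingerprint": 0.4, "iris": 0.4})
```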
{"title":"Multimodal biometric recognition for toddlers and pre-school children","authors":"P. Basak, Saurabh De, Mallika Agarwal, Aakarsh Malhotra, Mayank Vatsa, Richa Singh","doi":"10.1109/BTAS.2017.8272750","DOIUrl":"https://doi.org/10.1109/BTAS.2017.8272750","url":null,"abstract":"In many applications such as law enforcement, attendance systems, and medical services, biometrics is utilized for identifying individuals. However, current systems, in general, do not enroll all possible age groups, particularly, toddlers and pre-school children. This research is the first of its kind attempt to prepare a multimodal biometric database for such potential users of biometric systems. In the proposed database, face, fingerprint, and iris modalities of over 100 children (age range of 18 months to 4 years) are captured in two different sessions, months apart. We also perform benchmarking evaluation of existing tools and algorithms to establish the baseline results for different unimodal and multimodal scenarios. Our experience and results suggest that while iris is highly accurate, it requires constant adult supervision to attain cooperation from children. On the other hand, face is the most easy-to-capture modality but yields very low verification performance. We assert that the availability of this database can instigate research in this important research problem.","PeriodicalId":372008,"journal":{"name":"2017 IEEE International Joint Conference on Biometrics (IJCB)","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115477195","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Fast multi-view face alignment via multi-task auto-encoders
Pub Date: 2017-10-01 | DOI: 10.1109/BTAS.2017.8272740
Qi Li, Zhenan Sun, R. He
Face alignment is an important problem in computer vision. It remains an open problem due to variations in facial attributes (e.g., head pose, facial expression, illumination). Many studies have shown that face alignment and facial attribute analysis are often correlated. This paper develops a two-stage multi-task auto-encoder framework for fast face alignment that incorporates head pose information to handle large view variations. In the first and second stages, multi-task auto-encoders are used to roughly locate and then refine facial landmark locations with the related pose information, respectively. In addition, a shape constraint is naturally encoded into our two-stage face alignment framework to preserve facial structures, and a coarse-to-fine strategy is adopted to refine the landmark results under this constraint. Furthermore, the computational cost of our method is much lower than that of its deep learning competitors. Experimental results on various challenging datasets show the effectiveness of the proposed method.
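A minimal sketch of a multi-task auto-encoder in the spirit of one of the paper's stages, assuming a shared encoder with reconstruction, landmark, and pose heads; the layer sizes, pose binning, and equal loss weighting are all illustrative assumptions.

```python
import torch
import torch.nn as nn

class MultiTaskAE(nn.Module):
    """Shared encoder with three heads: image reconstruction (the
    auto-encoder objective), landmark regression, and coarse head-pose
    classification as the auxiliary task."""
    def __init__(self, in_dim=96 * 96, hid=256, n_landmarks=68, n_poses=5):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hid), nn.ReLU())
        self.decoder = nn.Linear(hid, in_dim)             # reconstruction
        self.landmarks = nn.Linear(hid, n_landmarks * 2)  # (x, y) per landmark
        self.pose = nn.Linear(hid, n_poses)               # coarse pose bins

    def forward(self, x):
        h = self.encoder(x)
        return self.decoder(h), self.landmarks(h), self.pose(h)

# Toy forward/backward pass with random tensors as stand-ins for data.
model = MultiTaskAE()
x = torch.rand(8, 96 * 96)
recon, lmk, pose = model(x)
loss = (nn.functional.mse_loss(recon, x)
        + nn.functional.mse_loss(lmk, torch.rand(8, 136))
        + nn.functional.cross_entropy(pose, torch.randint(0, 5, (8,))))
loss.backward()
```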
{"title":"Fast multi-view face alignment via multi-task auto-encoders","authors":"Qi Li, Zhenan Sun, R. He","doi":"10.1109/BTAS.2017.8272740","DOIUrl":"https://doi.org/10.1109/BTAS.2017.8272740","url":null,"abstract":"Face alignment is an important problem in computer vision. It is still an open problem due to the variations of facial attributes (e.g., head pose, facial expression, illumination variation). Many studies have shown that face alignment and facial attribute analysis are often correlated. This paper develops a two-stage multi-task Auto-encoders framework for fast face alignment by incorporating head pose information to handle large view variations. In the first and second stages, multi-task Auto-encoders are used to roughly locate and further refine facial landmark locations with related pose information, respectively. Besides, the shape constraint is naturally encoded into our two-stage face alignment framework to preserve facial structures. A coarse-to-fine strategy is adopted to refine the facial landmark results with the shape constraint. Furthermore, the computational cost of our method is much lower than its deep learning competitors. Experimental results on various challenging datasets show the effectiveness of the proposed method.","PeriodicalId":372008,"journal":{"name":"2017 IEEE International Joint Conference on Biometrics (IJCB)","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130200832","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Liveness detection on touchless fingerprint devices using texture descriptors and artificial neural networks
Pub Date: 2017-10-01 | DOI: 10.1109/BTAS.2017.8272724
Caue Zaghetto, Mateus Mendelson, A. Zaghetto, F. Vidal
This paper presents a liveness detection method based on texture descriptors and artificial neural networks, whose objective is to identify potential spoofing attacks against touchless fingerprinting devices. First, a database was created. It comprises 400 images, of which 200 represent real fingers and 200 represent fake fingers made of beeswax, corn flour play dough, latex, silicone, and wood glue (40 samples each). The artificial neural network classifier is trained and tested in 7 different scenarios. In Scenario 1, there are only two classes, “real finger” and “fake finger”. In Scenarios 2 to 6, six classes are used, but classification considers the “real finger” class against each one of the five “fake finger” classes separately. Finally, in Scenario 7, six classes are used and the classifier must indicate to which of the six classes the acquired sample belongs. Results show that the proposed method achieves its goal, correctly detecting liveness in almost 100% of cases.
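As a concrete instance of "texture descriptors plus an artificial neural network", the sketch below pairs uniform LBP histograms with a small MLP; the paper's actual descriptor family and network architecture may differ.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.neural_network import MLPClassifier

def lbp_histogram(gray, P=8, R=1.0):
    """Uniform LBP histogram, one common texture descriptor.

    `gray` is a float image in [0, 1]; it is quantized to uint8 before
    computing the pattern. Uniform LBP yields values in [0, P + 1].
    """
    img = (gray * 255).astype(np.uint8)
    lbp = local_binary_pattern(img, P, R, method="uniform")
    hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
    return hist

# Toy data: random textures stand in for real vs. fake finger images.
rng = np.random.default_rng(3)
live = [lbp_histogram(rng.random((64, 64))) for _ in range(50)]
fake = [lbp_histogram(rng.random((64, 64)) ** 2) for _ in range(50)]
X = np.vstack([live, fake])
y = np.array([1] * 50 + [0] * 50)
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500).fit(X, y)
```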
{"title":"Liveness detection on touchless fingerprint devices using texture descriptors and artificial neural networks","authors":"Caue Zaghetto, Mateus Mendelson, A. Zaghetto, F. Vidal","doi":"10.1109/BTAS.2017.8272724","DOIUrl":"https://doi.org/10.1109/BTAS.2017.8272724","url":null,"abstract":"This paper presents a liveness detection method based on texture descriptors and artificial neural networks, whose objective is to identify potential attempts of spoofing attacks against touchless fingerprinting devices. First, a database was created. It comprises a set of 400 images, from which 200 represent real fingers and 200 represent fake fingers made of beeswax, corn flour play dough, latex, silicone and wood glue, 40 samples each. The artificial neural network classifier is trained and tested in 7 different scenarios. In Scenario 1, there are only two classes, “real finger” and “fake finger”. From Scenarios 2 to 6, six classes are used, but classification is done considering the “realfinger” class and each one of the five “fake finger” classes, separately. Finally, in Scenario 7, six classes are used and the classifier must indicate to which of the six classes the acquired sample belongs. Results show that the proposed method achieves its goal, since it correctly detects liveness in almost 100% of cases.","PeriodicalId":372008,"journal":{"name":"2017 IEEE International Joint Conference on Biometrics (IJCB)","volume":"06 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123734577","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

SSERBC 2017: Sclera segmentation and eye recognition benchmarking competition
Pub Date: 2017-10-01 | DOI: 10.1109/BTAS.2017.8272764
Abhijit Das, U. Pal, M. A. Ferrer-Ballester, M. Blumenstein, Dejan Štepec, Peter Rot, Ž. Emeršič, P. Peer, V. Štruc, S. V. A. Kumar, B. Harish
This paper summarises the results of the Sclera Segmentation and Eye Recognition Benchmarking Competition (SSERBC 2017), organised in the context of the International Joint Conference on Biometrics (IJCB 2017). The aim of this competition was to record recent developments in sclera segmentation and eye recognition in the visible spectrum (using the iris, sclera, and peri-ocular region, and their fusion), and to draw researchers' attention to this subject. To this end, we used the Multi-Angle Sclera Dataset (MASD version 1). It comprises 2,624 images taken from both eyes of 82 identities, i.e., images of 164 (82×2) eyes. Manual segmentation masks of these images were created to baseline both tasks. Precision- and recall-based statistical measures were employed to evaluate the effectiveness of the segmentation and to rank the segmentation task, while a recognition accuracy measure was employed for the recognition task. Manually segmented sclera, iris, and peri-ocular regions were used in the recognition task. Sixteen teams registered for the competition; among them, six teams submitted algorithms or systems for the segmentation task, and two submitted recognition algorithms or systems. The results produced by these algorithms or systems reflect current developments in the sclera segmentation and eye recognition literature, employing cutting-edge techniques. The MASD version 1 dataset, with some of the ground truth, will be freely available for research purposes. The success of the competition also demonstrates the recent interest of researchers from academia as well as industry in this subject.
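The precision- and recall-based measures used for ranking can be computed pixel-wise from a predicted and a ground-truth mask, as in this short sketch (the competition's exact ranking protocol may combine these differently).

```python
import numpy as np

def precision_recall_f1(pred_mask, gt_mask):
    """Pixel-wise precision, recall, and F1 for a binary sclera mask."""
    pred, gt = pred_mask.astype(bool), gt_mask.astype(bool)
    tp = np.logical_and(pred, gt).sum()        # correctly labelled pixels
    precision = tp / max(pred.sum(), 1)
    recall = tp / max(gt.sum(), 1)
    f1 = 2 * precision * recall / max(precision + recall, 1e-12)
    return precision, recall, f1

# Toy masks: overlapping rectangles stand in for real segmentations.
gt = np.zeros((100, 100), dtype=bool)
gt[30:60, 20:80] = True
pred = np.zeros_like(gt)
pred[35:65, 25:85] = True
print(precision_recall_f1(pred, gt))
```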
{"title":"SSERBC 2017: Sclera segmentation and eye recognition benchmarking competition","authors":"Abhijit Das, U. Pal, M. A. Ferrer-Ballester, M. Blumenstein, Dejan Štepec, Peter Rot, Ž. Emeršič, P. Peer, V. Štruc, S. V. A. Kumar, B. Harish","doi":"10.1109/BTAS.2017.8272764","DOIUrl":"https://doi.org/10.1109/BTAS.2017.8272764","url":null,"abstract":"This paper summarises the results of the Sclera Segmentation and Eye Recognition Benchmarking Competition (SSERBC 2017). It was organised in the context of the International Joint Conference on Biometrics (IJCB 2017). The aim of this competition was to record the recent developments in sclera segmentation and eye recognition in the visible spectrum (using iris, sclera and peri-ocular, and their fusion), and also to gain the attention of researchers on this subject. In this regard, we have used the Multi-Angle Sclera Dataset (MASD version 1). It is comprised of2624 images taken from both the eyes of 82 identities. Therefore, it consists of images of 164 (82×2) eyes. A manual segmentation mask of these images was created to baseline both tasks. Precision and recall based statistical measures were employed to evaluate the effectiveness of the segmentation and the ranks of the segmentation task. Recognition accuracy measure has been employed to measure the recognition task. Manually segmented sclera, iris and peri-ocular regions were used in the recognition task. Sixteen teams registered for the competition, and among them, six teams submitted their algorithms or systems for the segmentation task and two of them submitted their recognition algorithm or systems. The results produced by these algorithms or systems reflect current developments in the literature of sclera segmentation and eye recognition, employing cutting edge techniques. The MASD version 1 dataset with some of the ground truth will be freely available for research purposes. The success of the competition also demonstrates the recent interests of researchers from academia as well as industry on this subject.","PeriodicalId":372008,"journal":{"name":"2017 IEEE International Joint Conference on Biometrics (IJCB)","volume":"135 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114313022","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Gender classification from multispectral periocular images
Pub Date: 2017-10-01 | DOI: 10.1109/BTAS.2017.8272774
Juan E. Tapia, Ignacio A. Viedma
Gender classification from multispectral periocular and iris images is a new topic in soft-biometrics research. The features extracted from RGB images and near-infrared (NIR) images carry complementary information, independent of the spectrum of the images. This paper shows that we can fuse this information to improve the accuracy of gender classification. Most gender classification methods reported in the literature have used images from face databases and all available features for classification. Experimental results suggest that: (a) features extracted at different scales can perform better than a single feature at a single scale; (b) periocular images perform better than iris images in both VIS and NIR; (c) the fusion of features from different spectral images (NIR and VIS) improves accuracy; (d) feature selection applied to NIR and VIS features retains the relevant ones; and (e) our accuracy of 90% is competitive with the state of the art.
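A hedged sketch of the fusion-plus-selection pipeline the abstract outlines: NIR and VIS feature vectors are concatenated, a selector keeps the most informative dimensions, and a classifier is trained. The specific descriptor, selector, and classifier here are assumptions, not the paper's choices.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.svm import SVC

# Toy data: random vectors stand in for per-image NIR and VIS descriptors.
rng = np.random.default_rng(4)
n = 200
nir = rng.normal(size=(n, 64))            # NIR feature vectors
vis = rng.normal(size=(n, 64))            # VIS feature vectors
y = rng.integers(0, 2, n)                 # gender labels

fused = np.hstack([nir, vis])             # feature-level (concatenation) fusion
selector = SelectKBest(mutual_info_classif, k=32).fit(fused, y)  # keep top-32
clf = SVC().fit(selector.transform(fused), y)
```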
{"title":"Gender classification from multispectral periocular images","authors":"Juan E. Tapia, Ignacio A. Viedma","doi":"10.1109/BTAS.2017.8272774","DOIUrl":"https://doi.org/10.1109/BTAS.2017.8272774","url":null,"abstract":"Gender classification from multispectral periocular and iris images is a new topic on soft-biometric research. The feature extracted from RGB images and Near Infrared Images shows complementary information independent of the spectrum of the images. This paper shows that we canfusion these information improving the accuracy of gender classification. Most gender classification methods reported in the literature has used images from face databases and all the features for classification purposes. Experimental results suggest: (a) Features extracted in different scales can perform better than using only one feature in a single scale; (b) The periocular images performed better than iris images on VIS and NIR; c) The fusion of features on different spectral images NIR and VIS allows improve the accuracy; (c) The feature selection applied to NIR and VIS allows select relevant features and d) Our accuracy 90% is competitive with the state of the art.","PeriodicalId":372008,"journal":{"name":"2017 IEEE International Joint Conference on Biometrics (IJCB)","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126731730","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Linking face images captured from the optical phenomenon in the wild for forensic science
Pub Date: 2017-10-01 | DOI: 10.1109/BTAS.2017.8272770
Abhijit Das, Abira Sengupta, M. A. Ferrer-Ballester, U. Pal, M. Blumenstein
This paper discusses the possibility of using, for forensic identification, challenging face images captured via optical phenomena in the wild. Occluded or covered faces in surveillance scenarios can be collected from their reflections on surrounding glass or on a smooth wall within the coverage of the surveillance camera, and such face images can be linked to individuals for forensic purposes. Another similar scenario that can be used for forensics is the face image of an individual standing behind a transparent glass wall. This study was conducted to investigate the capability of such images for personal identification. We investigated different types of features employed in the literature to establish individual identification from such degraded face images; among them, local region-based features worked best. To achieve higher accuracy and better facial features, face images were cropped manually along a close bounding box, and noise removal (reflections, etc.) was performed. For the experiments, we developed a database covering the above-mentioned scenarios, which will be made publicly available for academic research. Our initial investigation substantiates the possibility of using such face images for forensic purposes.
{"title":"Linking face images captured from the optical phenomenon in the wild for forensic science","authors":"Abhijit Das, Abira Sengupta, M. A. Ferrer-Ballester, U. Pal, M. Blumenstein","doi":"10.1109/BTAS.2017.8272770","DOIUrl":"https://doi.org/10.1109/BTAS.2017.8272770","url":null,"abstract":"This paper discusses the possibility of use of some challenging face images scenario captured from optical phenomenon in the wild for forensic purpose towards individual identification. Occluded and under cover face images in surveillance scenario can be collected from its reflection on a surrounding glass or on a smooth wall that is under the coverage of the surveillance camera and such scenario of face images can be linked for forensic purposes. Another similar scenario that can also be used for forensic is the face images of an individual standing behind a transparent glass wall. To investigate the capability of these images for personal identification this study is conducted. This work investigated different types of features employed in the literature to establish individual identification by such degraded face images. Among them, local region based featured worked best. To achieve higher accuracy and better facial features face image were cropped manually along its close bounding box and noise removal was performed (reflection, etc.). In order to experiment we have developed a database considering the above mentioned scenario, which will be publicly available for academic research. Initial investigation substantiates the possibility of using such face images for forensic purpose.","PeriodicalId":372008,"journal":{"name":"2017 IEEE International Joint Conference on Biometrics (IJCB)","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123653686","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}