Gait Recognition from Markerless 3D Motion Capture
Pub Date: 2019-06-01 | DOI: 10.1109/ICB45273.2019.8987318
James Rainey, John D. Bustard, S. McLoone
State-of-the-art gait recognition methods often make use of the shape of the body as well as its movement, as in the use of Gait Energy Images (GEIs). However, it is desirable to have a method that works exclusively with the movement of the body, as clothing and other factors may interfere with the biometric signature derived from body shape. Recent advances in markerless motion capture enable full 3D body poses to be estimated from unconstrained video sources. This paper describes how one such technique can be used to provide improved performance for verification tests. The markerless motion capture algorithm fits the 3D SMPL body model to a 2D image. Joint rotations from a single gait cycle are extracted from the model and matched using a verification system trained with auto-sklearn, an automated machine learning system. Evaluations of the method were performed on the CASIA-B gait dataset, and the results show competitive verification performance with an Equal Error Rate of 18.40%.
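The verification stage can be pictured as binary classification over probe/gallery pairs of joint-rotation sequences. Below is a minimal sketch in that spirit using the real auto-sklearn API; the pairwise-difference feature, the array layout, and the synthetic data are illustrative assumptions, not the authors' exact pipeline.

```python
# Sketch only: assumes joint rotations were already extracted from SMPL fits.
import numpy as np
import autosklearn.classification

def pair_features(cycle_a, cycle_b):
    # Absolute difference of two flattened joint-rotation sequences, each of
    # hypothetical shape (frames, joints, 3) holding axis-angle rotations.
    return np.abs(cycle_a.ravel() - cycle_b.ravel())

rng = np.random.default_rng(0)
# Synthetic stand-in: 200 probe/gallery pairs, 30 frames x 24 joints x 3.
pairs = rng.normal(size=(200, 2, 30, 24, 3))
y = rng.integers(0, 2, size=200)          # 1 = same subject, 0 = different
X = np.stack([pair_features(a, b) for a, b in pairs])

clf = autosklearn.classification.AutoSklearnClassifier(
    time_left_for_this_task=300)          # 5-minute AutoML search budget
clf.fit(X, y)
scores = clf.predict_proba(X)[:, 1]       # similarity scores for EER analysis
```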
{"title":"Gait Recognition from Markerless 3D Motion Capture","authors":"James Rainey, John D. Bustard, S. McLoone","doi":"10.1109/ICB45273.2019.8987318","DOIUrl":"https://doi.org/10.1109/ICB45273.2019.8987318","url":null,"abstract":"State of the art gait recognition methods often make use of the shape of the body as well as its movement, as observed in the use of Gait Energy Images(GEIs), for recognition. However, it is desirable to have a method that works exclusively with the movement of the body, as clothing and other factors may interfere with the biometric signature from body shapes. Recent advances in markerless motion capture enable full 3D body poses to be estimated from unconstrained video sources. This paper describes how one such technique can be used to provide improved performance for verification tests.The markerless motion capture algorithm fits the 3D SMPL body model to a 2D image. Joint rotations from a single cycle are extracted from the model and matched using a verification system trained using an automated machine learning system, auto-sklearn. Evaluations of the method were performed on the CASIA-B gait dataset, and results show competitive verification performance with an Equal Error Rate of 18.40%.","PeriodicalId":430846,"journal":{"name":"2019 International Conference on Biometrics (ICB)","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125370288","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Making the most of what you have! Profiling biometric authentication on mobile devices
Pub Date: 2019-06-01 | DOI: 10.1109/ICB45273.2019.8987402
Sanka Rasnayaka, Sanjay Saha, T. Sim
To provide the additional security required by modern mobile devices, biometric methods and Continuous Authentication (CA) systems are becoming popular. Most existing work on CA is concerned with achieving higher accuracy or fusing multiple modalities. However, in a mobile environment there are tighter constraints on the resources available. This work is the first to compare different biometric modalities based on the resources they use. We do this by determining the Resource Profile Curve (RPC) for each modality. This curve reveals the trade-off between authentication accuracy and resource usage, and is helpful for the different usage scenarios in which a CA system needs to operate. In particular, we explain how a CA system can intelligently switch between RPCs to conserve battery power, reduce memory usage, or maximize authentication accuracy. We argue that RPCs ought to guide the development of practical CA systems.
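A small sketch of what RPC-guided switching could look like: each modality carries a list of (resource cost, accuracy) operating points, and the system picks the best point that fits the current budget. The modality names, costs, and accuracies below are made up for illustration.

```python
# Hypothetical Resource Profile Curves: (resource_cost, accuracy) points.
RPC = {
    "face":  [(0.2, 0.80), (0.5, 0.90), (0.9, 0.95)],
    "touch": [(0.1, 0.70), (0.3, 0.82), (0.6, 0.88)],
}

def best_modality(budget):
    # Highest-accuracy operating point whose cost fits the current budget.
    feasible = [(acc, cost, m)
                for m, curve in RPC.items()
                for cost, acc in curve if cost <= budget]
    if not feasible:
        return None
    acc, cost, m = max(feasible)
    return m, cost, acc

print(best_modality(0.4))   # moderate budget -> ('touch', 0.3, 0.82)
print(best_modality(0.15))  # tight budget -> cheapest point, ('touch', 0.1, 0.7)
```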
{"title":"Making the most of what you have! Profiling biometric authentication on mobile devices","authors":"Sanka Rasnayaka, Sanjay Saha, T. Sim","doi":"10.1109/ICB45273.2019.8987402","DOIUrl":"https://doi.org/10.1109/ICB45273.2019.8987402","url":null,"abstract":"In order to provide the additional security required by modern mobile devices, biometric methods and Continuous Authentication(CA) systems are getting popular. Most existing work on CA are concerned about achieving higher accuracy or fusing multiple modalities. However, in a mobile environment there are more constraints on the resources available. This work is the first to compare between different biometric modalities based on the resources they use. We do this by determining the Resource Profile Curve (RPC) for each modality. This Curve reveals the trade-off between authentication accuracy and resource usage, and is helpful for different usage scenarios in which a CA system needs to operate. In particular, we explain how a CA system can intelligently switch between RPCs to conserve battery power, memory usage, or to maximize authentication accuracy. We argue that RPCs ought to guide the development of practical CA systems.","PeriodicalId":430846,"journal":{"name":"2019 International Conference on Biometrics (ICB)","volume":"47 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122438609","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Dense Fingerprint Registration via Displacement Regression Network
Pub Date: 2019-06-01 | DOI: 10.1109/ICB45273.2019.8987300
Zhe Cui, Jianjiang Feng, Jie Zhou
Dense registration of fingerprints provides pixel-wise correspondences between two fingerprints, which is beneficial for fingerprint mosaicking and matching. However, this problem is very challenging due to large distortion, low fingerprint quality and a lack of distinctive features. The performance of existing dense registration approaches, such as image correlation and phase demodulation, is limited by manually designed features and similarity measures. To overcome these limitations, we propose a dense fingerprint registration algorithm based on a convolutional neural network. The key component is a displacement regression network (DRN) that regresses a pixel-wise displacement field directly from coarsely aligned fingerprint images. Ground-truth training data is generated automatically by an existing dense registration algorithm, without tedious manual labelling. We also propose a multi-scale matching score fusion method to show how the proposed registration algorithm improves fingerprint matching accuracy. Experimental results on FVC2004 DB1_A and DB2_A and the Tsinghua Distorted Fingerprint (TDF) database show that our method achieves state-of-the-art registration performance.
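Once a pixel-wise displacement field is available, applying it is a standard resampling step. The sketch below treats the DRN as a black box, standing in a smoothed random field for its output; only the warping step reflects how such a field would be used.

```python
# Sketch under assumptions: the displacement field here is fake (smoothed
# noise), not a DRN prediction; the warping step is the generic part.
import numpy as np
from scipy.ndimage import map_coordinates, gaussian_filter

h, w = 256, 256
image = np.random.rand(h, w)          # stand-in for a coarsely aligned print
dx = gaussian_filter(np.random.randn(h, w), sigma=20) * 5  # smooth fake field
dy = gaussian_filter(np.random.randn(h, w), sigma=20) * 5

ys, xs = np.mgrid[0:h, 0:w].astype(float)
# Sample the input at displaced coordinates -> dense, pixel-wise registration.
warped = map_coordinates(image, [ys + dy, xs + dx], order=1, mode="nearest")
```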
{"title":"Dense Fingerprint Registration via Displacement Regression Network","authors":"Zhe Cui, Jianjiang Feng, Jie Zhou","doi":"10.1109/ICB45273.2019.8987300","DOIUrl":"https://doi.org/10.1109/ICB45273.2019.8987300","url":null,"abstract":"Dense registration of fingerprints provides pixel-wise correspondences between two fingerprints, which is beneficial for fingerprint mosaicking and matching. However, this problem is very challenging due to large distortion, low fingerprint quality and lack of distinctive features. The performance of existing dense registration approaches, such as image correlation and phase demodulation, are limited by manually designed features and similarity measures. To overcome the limitations of these approaches, we propose a dense fingerprint registration algorithm through convolutional neural network. The key component is a displacement regression network (DRN) that can regress pixel-wise displacement field directly from coarsely aligned fingerprint images. Training ground-truth data is automatically generated by an existing dense registration algorithm without tedious manual labelling. We also propose a multi-scale matching score fusion method to show the application of the proposed registration algorithm in improving fingerprint matching accuracy. Experimental results on FVC2004 DB1_A and DB2_A, and Tsinghua Distorted Fingerprint (TDF) database show that our method reaches state-of-the-art registration performances.","PeriodicalId":430846,"journal":{"name":"2019 International Conference on Biometrics (ICB)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125574213","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Multibiometrics User Recognition using Adaptive Cohort Ranking
Pub Date: 2019-06-01 | DOI: 10.1109/ICB45273.2019.8987269
A. Anand, Amioy Kumar, Ajay Kumar
Personal identification using multibiometrics is desirable in a wide range of high-security and/or forensic applications, as it can address the performance limitations of unimodal biometric systems. This paper presents a new multibiometrics fusion scheme to improve performance for user identification/recognition. We model the biometric identification solution using an adaptive cohort ranking approach, which can more effectively utilize cohort information to maximize true positive identification rates. In contrast to traditional cohort-based methods, the proposed cohort ranking approach offers the merit of being matcher independent, as it makes no assumption about the nature of the score distributions from any of the biometric matchers. In addition, our scheme is adaptive and can be incorporated with any biometric matcher or technology. The proposed approach is evaluated on publicly available unimodal and multimodal biometric databases: BSSR1 multimodal matching scores from fingerprint and face matchers, and XM2VTS matching scores from synchronized face and voice databases. On both the unimodal and multimodal databases, our results indicate that the proposed approach can outperform conventional adaptive identification approaches. The experimental results from both public databases are quite promising and validate the contributions of this work.
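The matcher-independence claim rests on using ranks rather than raw scores. A generic illustration of that idea: replace a raw match score by the fraction of cohort scores it beats, which needs no distributional assumption. This is only the basic rank-normalization idea, not the paper's specific adaptive ranking rule.

```python
# Generic cohort rank normalization sketch, not the paper's exact method.
import numpy as np

def cohort_rank_score(match_score, cohort_scores):
    # Fraction of cohort comparisons the claimed match beats (higher = better);
    # depends only on ordering, so it is matcher independent.
    return np.mean(match_score > np.asarray(cohort_scores))

cohort = [0.10, 0.22, 0.31, 0.35, 0.40, 0.44, 0.51, 0.55, 0.59, 0.70]
print(cohort_rank_score(0.62, cohort))  # 0.9 -> beats 9 of 10 cohort scores
```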
{"title":"Multibiometrics User Recognition using Adaptive Cohort Ranking","authors":"A. Anand, Amioy Kumar, Ajay Kumar","doi":"10.1109/ICB45273.2019.8987269","DOIUrl":"https://doi.org/10.1109/ICB45273.2019.8987269","url":null,"abstract":"Personal identification using multibiometrics is desirable in a wide range of high-security and/or forensic application as it can address performance limitations from unimodal biometrics systems. This paper presents a new scheme the multibiometrics fusion to achieve performance improvement for the user identification/recognition. We model the biometric identification solution using an adaptive cohort ranking approach, which can more effectively utilize the cohort information for maximizing the true positive identification rates. In contrast to the tradition cohort-based methods, the proposed cohort ranking approach offers merit of being matcher independence as it does not make any assumption on the nature of score distributions from any of the biometric matcher(s). In addition, our scheme is adaptive and can be incorporated for any biometric matcher/technologies. The proposed approach is evaluated on publicly available unimodal and multimodal biometrics databases, i.e., BSSR1 multimodal matching scores for fingerprint and face matchers and XM2VTS matching scores from synchronize databases of face and voice. In both the unimodal and multimodal databases, our results indicate that the proposed approach can outperform the conventional adaptive identification approaches. The experimental results from both public databases are quite promising and validate the contributions from this work.","PeriodicalId":430846,"journal":{"name":"2019 International Conference on Biometrics (ICB)","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132850149","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Gesture-based User Identity Verification as an Open Set Problem for Smartphones
Pub Date: 2019-06-01 | DOI: 10.1109/ICB45273.2019.8987373
Kálmán Tornai, W. Scheirer
The most straightforward, yet insecure, methods of authenticating a person on smartphones derive from the solutions applied to personal computers or smart cards, namely authorization by passwords or numeric codes. Alarmingly, the widespread use of smartphone platforms implies that people are carrying around sensitive information in their pockets, making that information more physically available. As smartphone owners often use their devices in public areas, these short numeric codes or other forms of passwords can be obtained quickly through shoulder surfing, making the restricted data far more accessible to those not authorized to access the device. In this paper, we address the problem of biometric verification on smartphones. We propose a new approach for gesture-based verification that makes use of open set recognition algorithms. Further, we introduce a new database of inertial measurements to investigate the user identification capabilities of this approach. The results we have obtained indicate that this approach is a feasible solution, although the precision of the method depends highly on the chosen samples of the training set.
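The open-set framing means the verifier must reject users it has never seen, not just separate known classes. A toy sketch of that decision rule: accept a probe only if it lies close enough to the enrolled user's gesture samples. Feature extraction from raw inertial data is assumed to have happened already, and the data below is synthetic.

```python
# Toy open-set acceptance rule; features and threshold are illustrative.
import numpy as np

def verify(probe, enrolled, threshold):
    # Accept iff the probe's nearest enrolled sample is within the threshold;
    # everything farther away is treated as an unknown (open set) impostor.
    dists = np.linalg.norm(enrolled - probe, axis=1)
    return dists.min() <= threshold

rng = np.random.default_rng(1)
enrolled = rng.normal(0.0, 1.0, size=(20, 16))   # 20 gesture feature vectors
genuine  = rng.normal(0.0, 1.0, size=16)
impostor = rng.normal(4.0, 1.0, size=16)         # far-away unseen user
print(verify(genuine, enrolled, threshold=5.0))   # likely True
print(verify(impostor, enrolled, threshold=5.0))  # likely False
```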
{"title":"Gesture-based User Identity Verification as an Open Set Problem for Smartphones","authors":"Kálmán Tornai, W. Scheirer","doi":"10.1109/ICB45273.2019.8987373","DOIUrl":"https://doi.org/10.1109/ICB45273.2019.8987373","url":null,"abstract":"The most straightforward, yet insecure, methods of authenticating a person on smartphones derive from the solutions applied to personal computers or smart cards, namely the authorization by passwords or numeric codes. Alarmingly, the widespread use of smartphone platforms implies that people are carrying around sensitive information in their pocket, making the information more available physically. As smartphone owners are often using their devices in public areas, these short numeric codes or other forms of passwords can be obtained quickly through shoulder surfing, resulting in making that restricted data far more accessible for those who are not authorized to access the device. In this paper, we address the problem of biometric verifi-cation on smartphones. We propose a new approach for gesture-based verification that makes use of open set recognition algorithms. Further, we introduce a new database of inertial measurements to investigate the user identification capabilities of this approach. The results we have obtained indicate that this approach is a feasible solution, although the precision of the method depends highly on the chosen samples of the training set.","PeriodicalId":430846,"journal":{"name":"2019 International Conference on Biometrics (ICB)","volume":"224 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133165153","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Deep Pixel-wise Binary Supervision for Face Presentation Attack Detection
Pub Date: 2019-06-01 | DOI: 10.1109/ICB45273.2019.8987370
Anjith George, S. Marcel
Face recognition has evolved into a prominent biometric authentication modality. However, vulnerability to presentation attacks curtails its reliable deployment. Automatic detection of presentation attacks is essential for the secure use of face recognition technology in unattended scenarios. In this work, we introduce a Convolutional Neural Network (CNN) based framework for presentation attack detection with deep pixel-wise supervision. The framework uses only frame-level information, making it suitable for deployment in smart devices with minimal computational and time overhead. We demonstrate the effectiveness of the proposed approach on public datasets in both intra- and cross-dataset experiments. The proposed approach achieves an HTER of 0% on the Replay-Mobile dataset and an ACER of 0.42% on Protocol-1 of the OULU dataset, outperforming state-of-the-art methods.
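The core idea of pixel-wise binary supervision is that every location of a low-resolution score map carries the frame's bona-fide/attack label, giving a much denser training signal than a single scalar. A condensed PyTorch sketch of that training setup follows; the architecture, map size, and loss weighting are illustrative stand-ins, not the paper's exact network.

```python
# Illustrative pixel-wise supervision sketch, not the authors' architecture.
import torch
import torch.nn as nn

class PixelWisePAD(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.map_head = nn.Conv2d(32, 1, 1)    # pixel-wise score map
        self.fc = nn.Linear(1, 1)              # scalar decision head

    def forward(self, x):
        score_map = torch.sigmoid(self.map_head(self.features(x)))
        score = torch.sigmoid(self.fc(score_map.mean(dim=(2, 3))))
        return score_map, score

model = PixelWisePAD()
x = torch.randn(4, 3, 224, 224)
y = torch.tensor([1., 0., 1., 0.])             # bona fide / attack labels
score_map, score = model(x)
bce = nn.BCELoss()
# Every map pixel carries the frame label -> dense supervision signal.
loss = bce(score_map, y.view(-1, 1, 1, 1).expand_as(score_map)) \
     + bce(score.squeeze(1), y)
loss.backward()
```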
{"title":"Deep Pixel-wise Binary Supervision for Face Presentation Attack Detection","authors":"Anjith George, S. Marcel","doi":"10.1109/ICB45273.2019.8987370","DOIUrl":"https://doi.org/10.1109/ICB45273.2019.8987370","url":null,"abstract":"Face recognition has evolved as a prominent biometric authentication modality. However, vulnerability to presentation attacks curtails its reliable deployment. Automatic detection of presentation attacks is essential for secure use of face recognition technology in unattended scenarios. In this work, we introduce a Convolutional Neural Network (CNN) based framework for presentation attack detection, with deep pixel-wise supervision. The framework uses only frame level information making it suitable for deployment in smart devices with minimal computational and time overhead. We demonstrate the effectiveness of the proposed approach in public datasets for both intra as well as cross-dataset experiments. The proposed approach achieves an HTER of 0% in Replay Mobile dataset and an ACER of 0.42% in Protocol-1 of OULU dataset outperforming state of the art methods.","PeriodicalId":430846,"journal":{"name":"2019 International Conference on Biometrics (ICB)","volume":"59 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115630600","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Vulnerability assessment and detection of Deepfake videos
Pub Date: 2019-06-01 | DOI: 10.1109/ICB45273.2019.8987375
Pavel Korshunov, S. Marcel
It is becoming increasingly easy to automatically replace the face of one person in a video with the face of another by using a pre-trained generative adversarial network (GAN). Recent public scandals, e.g., the faces of celebrities being swapped onto pornographic videos, call for automated ways to detect these Deepfake videos. To help develop such methods, we present the first publicly available set of Deepfake videos, generated from videos in the VidTIMIT database. We used open source GAN-based software to create the Deepfakes, and we emphasize that training and blending parameters can significantly impact the quality of the resulting videos. To demonstrate this impact, we generated videos of low and high visual quality (320 videos each) using differently tuned parameter sets. We show that state-of-the-art face recognition systems based on the VGG and Facenet neural networks are vulnerable to Deepfake videos, with false acceptance rates of 85.62% and 95.00% respectively on the high-quality versions, which means methods for detecting Deepfake videos are necessary. Considering several baseline approaches, we found that the best performing method, based on visual quality metrics often used in the presentation attack detection domain, leads to an 8.97% equal error rate on high-quality Deepfakes. Our experiments demonstrate that GAN-generated Deepfake videos are challenging for both face recognition systems and existing detection methods, and further development of face swapping technology will make them even more so.
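For readers less familiar with the equal error rate quoted above: it is the operating point where the false accept and false reject rates coincide. A short, self-contained sketch of computing it from genuine and impostor score sets (synthetic scores here, not the paper's):

```python
# EER from genuine/impostor score distributions; scores are synthetic.
import numpy as np

def eer(genuine, impostor):
    # Sweep thresholds; return the point where FAR and FRR cross.
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    far = np.array([(impostor >= t).mean() for t in thresholds])  # false accepts
    frr = np.array([(genuine < t).mean() for t in thresholds])    # false rejects
    i = int(np.argmin(np.abs(far - frr)))
    return (far[i] + frr[i]) / 2, thresholds[i]

rng = np.random.default_rng(0)
genuine = rng.normal(1.0, 0.5, 1000)    # higher score = more similar
impostor = rng.normal(0.0, 0.5, 1000)
print(eer(genuine, impostor))           # roughly 16% EER for this overlap
```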
{"title":"Vulnerability assessment and detection of Deepfake videos","authors":"Pavel Korshunov, S. Marcel","doi":"10.1109/ICB45273.2019.8987375","DOIUrl":"https://doi.org/10.1109/ICB45273.2019.8987375","url":null,"abstract":"It is becoming increasingly easy to automatically replace a face of one person in a video with the face of another person by using a pre-trained generative adversarial network (GAN). Recent public scandals, e.g., the faces of celebrities being swapped onto pornographic videos, call for automated ways to detect these Deepfake videos. To help developing such methods, in this paper, we present the first publicly available set of Deepfake videos generated from videos of VidTIMIT database. We used open source software based on GANs to create the Deepfakes, and we emphasize that training and blending parameters can significantly impact the quality of the resulted videos. To demonstrate this impact, we generated videos with low and high visual quality (320 videos each) using differently tuned parameter sets. We showed that the state of the art face recognition systems based on VGG and Facenet neural networks are vulnerable to Deepfake videos, with 85.62% and 95.00% false acceptance rates (on high quality versions) respectively, which means methods for detecting Deepfake videos are necessary. By considering several baseline approaches, we found the best performing method based on visual quality metrics, which is often used in presentation attack detection domain, to lead to 8.97% equal error rate on high quality Deep-fakes. Our experiments demonstrate that GAN-generated Deepfake videos are challenging for both face recognition systems and existing detection methods, and the further development of face swapping technology will make it even more so.","PeriodicalId":430846,"journal":{"name":"2019 International Conference on Biometrics (ICB)","volume":"67 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120954324","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Attribute-Guided Deep Polarimetric Thermal-to-visible Face Recognition
Pub Date: 2019-06-01 | DOI: 10.1109/ICB45273.2019.8987416
S. M. Iranmanesh, N. Nasrabadi
In this paper, we present an attribute-guided deep coupled learning framework to address the problem of matching polarimetric thermal face photos against a gallery of visible faces. The coupled framework contains two sub-networks, one dedicated to the visible spectrum and the second dedicated to the polarimetric thermal spectrum. Each sub-network is built on a generative adversarial network (GAN) architecture. We propose a novel Attribute-Guided Coupled Generative Adversarial Network (AGC-GAN) architecture which utilizes facial attributes to improve thermal-to-visible face recognition performance. The proposed AGC-GAN exploits facial attributes and leverages multiple loss functions to learn rich discriminative features in a common embedding subspace. To achieve realistic photo reconstruction while preserving the discriminative information, we also add a perceptual loss term to the coupling loss function. An ablation study shows the effectiveness of the different loss functions in optimizing the proposed method. Moreover, the superiority of the model compared to state-of-the-art models is demonstrated on a polarimetric dataset.
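The multiple loss terms are typically combined as a weighted sum, which is also what the ablation study varies. A schematic sketch of such an objective; the weights and the term names are placeholders, not the values or exact formulation used in the paper.

```python
# Schematic combined objective; weights are illustrative, not the paper's.
def total_loss(adv, coupling, attribute, perceptual,
               w_couple=10.0, w_attr=1.0, w_perc=5.0):
    # Weighted sum steering both sub-networks toward a shared embedding
    # subspace while keeping reconstructions photo-realistic.
    return adv + w_couple * coupling + w_attr * attribute + w_perc * perceptual

print(total_loss(adv=0.7, coupling=0.2, attribute=0.4, perceptual=0.1))  # 3.6
```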
{"title":"Attribute-Guided Deep Polarimetric Thermal-to-visible Face Recognition","authors":"S. M. Iranmanesh, N. Nasrabadi","doi":"10.1109/ICB45273.2019.8987416","DOIUrl":"https://doi.org/10.1109/ICB45273.2019.8987416","url":null,"abstract":"In this paper, we present an attribute-guided deep coupled learning framework to address the problem of matching polarimetric thermal face photos against a gallery of visible faces. The coupled framework contains two sub-networks, one dedicated to the visible spectrum and the second sub-network dedicated to the polarimetric thermal spectrum. Each sub-network is made of a generative adversarial network (GAN) architecture. We propose a novel Attribute-Guided Coupled Generative Adversarial Network (AGC-GAN) architecture which utilizes facial attributes to improve the thermal-to-visible face recognition performance. The proposed AGC-GAN exploits the facial attributes and leverages multiple loss functions in order to learn rich discriminative features in a common embedding subspace. To achieve a realistic photo reconstruction while preserving the discriminative information, we also add a perceptual loss term to the coupling loss function. An ablation study is performed to show the effectiveness of different loss functions for optimizing the proposed method. Moreover, the superiority of the model compared to the state-ofthe-art models is demonstrated using polarimetric dataset.","PeriodicalId":430846,"journal":{"name":"2019 International Conference on Biometrics (ICB)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126056536","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Latent Fingerprint Enhancement Based on DenseUNet
Pub Date: 2019-06-01 | DOI: 10.1109/ICB45273.2019.8987279
Peng Qian, Aojie Li, Manhua Liu
The image quality of latent fingerprints is usually poor, with unclear ridge structure and various overlapping patterns. Enhancement is an important processing step to reduce noise, recover corrupted regions and improve the clarity of ridge structure for more accurate fingerprint recognition. Existing fingerprint enhancement methods do not achieve good performance on latent fingerprints. In this paper, we propose a latent fingerprint enhancement method based on DenseUNet. First, to generate the training data, high-quality fingerprints are overlapped with structured noise. Then, a deep DenseUNet is constructed to transform the low-quality fingerprint image into a high-quality fingerprint image through pixels-to-pixels, end-to-end training. Finally, the whole latent fingerprint is iteratively enhanced with the DenseUNet model until it meets the image quality requirement. Experimental results and comparisons on the NIST SD27 latent fingerprint database show the promising performance of the proposed algorithm.
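The iterative step of the pipeline reduces to a simple loop: re-apply the enhancement model until a quality target is met. The sketch below stands in a toy smoother for the trained DenseUNet and a toy statistic for the quality metric; both are hypothetical placeholders.

```python
# Iterative enhancement loop; model and quality metric are toy stand-ins.
import numpy as np

def enhance_iteratively(latent, model, quality, q_target=0.8, max_iters=5):
    # Re-apply the enhancement model until the quality target is reached.
    img = latent
    for _ in range(max_iters):
        if quality(img) >= q_target:
            break
        img = model(img)  # one pixels-to-pixels enhancement pass
    return img

def toy_model(im):
    # Stand-in "enhancement": smooth the image with shifted copies of itself.
    return (im + np.roll(im, 1, axis=0) + np.roll(im, 1, axis=1)) / 3

def toy_quality(im):
    # Stand-in quality metric: flatter (less noisy) images score higher.
    return 1.0 - im.std()

out = enhance_iteratively(np.random.rand(64, 64), toy_model, toy_quality)
print(toy_quality(out))
```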
{"title":"Latent Fingerprint Enhancement Based on DenseUNet","authors":"Peng Qian, Aojie Li, Manhua Liu","doi":"10.1109/ICB45273.2019.8987279","DOIUrl":"https://doi.org/10.1109/ICB45273.2019.8987279","url":null,"abstract":"The image quality of latent fingerprints is usually poor with unclear ridge structure and various overlapping patterns. Enhancement is an important processing step to reduce the noise, recover the corrupted regions and improve the clarity of ridge structure for more accurate fingerprint recognition. Existing fingerprint enhancement methods cannot achieve good performance for latent fingerprints. In this paper, we propose a latent fingerprint enhancement method based on DenseUNet. First, to generate the training data, the high-quality fingerprints are overlapped with the structured noises. Then, a deep DenseUNet is constructed to transform the low-quality fingerprint image into the high-quality fingerprint image by pixels-to-pixels and end- to-end training. Finally, the whole latent fingerprint is iteratively enhanced with the DenseUNet model to achieve the image quality requirement. Experiment results and comparison on NIST SD27 latent fingerprint database are presented to show the promising performance of the proposed algorithm.","PeriodicalId":430846,"journal":{"name":"2019 International Conference on Biometrics (ICB)","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125840415","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Iris Feature Extraction and Matching Method for Mobile Biometric Applications
Pub Date: 2019-06-01 | DOI: 10.1109/ICB45273.2019.8987379
G. Odinokikh, M. Korobkin, I. Solomatin, I. Efimov, A. Fartukov
Biometric methods are increasingly penetrating the field of mobile applications, confronting researchers with a large number of problems that have not been considered before. Many different interaction scenarios, in conjunction with the performance limitations of mobile devices, challenge the capabilities of on-board biometrics. The iris image, saturated with complex textural features, serves as a source for extracting the unique features of an individual that are used for recognition. The factors inherent in interaction with a mobile device affect not only the source image quality but also cause natural deformations of the iris, leading to high intra-class variation and hence reduced recognition performance. A novel method for iris feature extraction and matching is presented in this work. It is based on a lightweight CNN model that combines the advantages of a classic approach with advanced deep learning techniques. The model utilizes shallow and deep feature representations in combination with characteristics describing the environment, which helps to reduce intra-class variation and, as a consequence, recognition errors. It shows high efficiency on the mobile dataset and several others, outperforming state-of-the-art methods by a large margin.
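One way to picture the fusion described above: shallow and deep iris descriptors are combined with a vector of environment characteristics before a similarity is computed. The dimensions, the concatenation-based fusion rule, and the cosine score in the sketch below are illustrative assumptions, not the paper's method.

```python
# Illustrative fusion of shallow + deep features with an environment vector.
import numpy as np

def fused_similarity(shallow_a, deep_a, shallow_b, deep_b, env):
    # Concatenate representations, then score with cosine similarity.
    a = np.concatenate([shallow_a, deep_a, env])
    b = np.concatenate([shallow_b, deep_b, env])
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

rng = np.random.default_rng(2)
s1, d1 = rng.normal(size=64), rng.normal(size=128)
s2 = s1 + rng.normal(scale=0.1, size=64)    # same eye, slight capture noise
d2 = d1 + rng.normal(scale=0.1, size=128)
env = rng.normal(size=8)                    # e.g., illumination / distance cues
print(fused_similarity(s1, d1, s2, d2, env))  # near 1.0 for a genuine pair
```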
{"title":"Iris Feature Extraction and Matching Method for Mobile Biometric Applications","authors":"G. Odinokikh, M. Korobkin, I. Solomatin, I. Efimov, A. Fartukov","doi":"10.1109/ICB45273.2019.8987379","DOIUrl":"https://doi.org/10.1109/ICB45273.2019.8987379","url":null,"abstract":"Biometric methods are increasingly penetrating the field of mobile applications, confronting researchers with a huge number of problems that have not been considered before. Many different interaction scenarios in conjunction with the mobile device performance limitations challenge the capabilities of on-board biometrics. Saturated with complex textural features the iris image is used as a source for the extraction of unique features of the individual that are used for recognition. The mentioned features inherent to the interaction with the mobile device affect not only the source image quality but natural deformations of the iris leading to high intra-class variations and hence reducing the recognition performance. A novel method for iris feature extraction and matching is represented in this work. It is based on a lightweight CNN model combining the advantages of a classic approach and advanced deep learning techniques. The model utilizes shallow and deep feature representations in combination with characteristics describing the environment that helps to reduce intra-class variations and as a consequence the recognition errors. It showed high efficiency on the mobile and a few more datasets outperforming state-of-the-art methods by far.","PeriodicalId":430846,"journal":{"name":"2019 International Conference on Biometrics (ICB)","volume":"65 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125848914","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}