Cover page
Pub Date: 2018-08-01 | DOI: 10.1109/iisr.2018.8535699
E. S. Nugraha
This is the cover page of the Proceedings of the 5th International Conference on Family Business and Entrepreneurship.
{"title":"Cover page","authors":"E. S. Nugraha","doi":"10.1109/iisr.2018.8535699","DOIUrl":"https://doi.org/10.1109/iisr.2018.8535699","url":null,"abstract":"This is the cover page of the Proceeding of the 5th International Conference on Family Business and Entrepreneurship.","PeriodicalId":259849,"journal":{"name":"2018 International Workshop on Biometrics and Forensics (IWBF)","volume":"50 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131397531","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Unconstrained Biometric Recognition based on Thermal Hand Images
Pub Date: 2018-06-01 | DOI: 10.1109/IWBF.2018.8401567
Ewelina Bartuzi, Katarzyna Roszczewska, A. Czajka, A. Pacut
This paper proposes a biometric recognition method based on thermal images of the inner part of the hand, together with a database of 21,000 thermal images of both hands acquired with a specialized thermal camera from 70 subjects. The data for each subject was acquired in three sessions, with the first two sessions held on the same day and the third session approximately two weeks later. This allowed us to analyze the stability of hand temperature over both short-term and long-term horizons. No hand stabilization or positioning devices were used during acquisition, making this setup closer to real-world, unconstrained applications and requiring our method to be translation-, rotation- and scale-invariant. Two approaches to feature selection and classification are proposed and compared: feature engineering deploying texture descriptors such as Binarized Statistical Image Features (BSIF) and Gabor wavelets, and feature learning based on convolutional neural networks (CNN) trained in different environmental conditions. For the within-session scenario we achieved equal error rates (EER) of 0.36% and 0.00% for the first and second approach, respectively. The between-session EER is 27.98% for the first approach and 17.17% for the second. These results allow for an estimation of the short-term stability of hand thermal information. To our knowledge, this paper presents the first database of hand thermal images and the first biometric system based solely on hand thermal maps acquired by a thermal sensor in an unconstrained scenario.
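As a rough illustration of the feature-engineering branch (Gabor wavelets over the thermal hand image), the sketch below builds a small Gabor filter bank, pools mean/std responses into a texture descriptor, and compares two images with cosine similarity. The filter parameters, the matcher, and the stand-in images are assumptions for illustration, not the authors' configuration.

```python
# Minimal sketch: Gabor-based texture features for thermal hand matching.
# Filter-bank parameters and the cosine matcher are illustrative assumptions.
import cv2
import numpy as np

def gabor_bank(ksize=31, sigmas=(4.0,), n_thetas=8, lambd=10.0, gamma=0.5):
    """Build a small bank of Gabor kernels at several orientations."""
    kernels = []
    for sigma in sigmas:
        for i in range(n_thetas):
            theta = np.pi * i / n_thetas
            k = cv2.getGaborKernel((ksize, ksize), sigma, theta, lambd, gamma, psi=0)
            kernels.append(k / (np.abs(k).sum() + 1e-8))  # normalise filter energy
    return kernels

def gabor_features(img, kernels):
    """Mean/std of each filter response form the texture descriptor."""
    img = cv2.resize(img, (128, 128)).astype(np.float32)
    feats = []
    for k in kernels:
        resp = cv2.filter2D(img, cv2.CV_32F, k)
        feats.extend([resp.mean(), resp.std()])
    return np.asarray(feats, dtype=np.float32)

def cosine_score(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

if __name__ == "__main__":
    bank = gabor_bank()
    probe = np.random.randint(0, 255, (240, 320), dtype=np.uint8)    # stand-in thermal image
    gallery = np.random.randint(0, 255, (240, 320), dtype=np.uint8)
    print(cosine_score(gabor_features(probe, bank), gabor_features(gallery, bank)))
```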
{"title":"Unconstrained Biometric Recognition based on Thermal Hand Images","authors":"Ewelina Bartuzi, Katarzyna Roszczewska, A. Czajka, A. Pacut","doi":"10.1109/IWBF.2018.8401567","DOIUrl":"https://doi.org/10.1109/IWBF.2018.8401567","url":null,"abstract":"This paper proposes a biometric recognition method based on thermal images of inner part of the hand, and a database of 21,000 thermal images of both hands acquired by a specialized thermal camera from 70 subjects. The data for each subject was acquired in three different sessions, with two first sessions organized on the same day, and the third session organized approximately two weeks apart. This allowed to analyze the stability of hand temperature in both short-term and long-term horizons. No hand stabilization or positioning devices were used during acquisition, making this setup closer to real-world, unconstrained applications. This required making our method translation-, rotationand scale-invariant. Two approaches for feature selection and classification are proposed and compared: feature engineering deploying texture descriptors such as Binarized Statistical Image Features (BSIF) and Gabor wavelets, and feature learning based on convolutional neural networks (CNN) trained in different environmental conditions. For within-session scenario we achieved 0.36% and 0.00% of equal error rate (EER) in the first and the second approach, respectively. Between-session EER stands at 27.98% for the first approach and 17.17% for the second one. These results allow for estimation of a short-term stability of hand thermal information. This paper presents the first known to us database of hand thermal images and the first biometric system based solely on hand thermal maps acquired by thermal sensor in unconstrained scenario.","PeriodicalId":259849,"journal":{"name":"2018 International Workshop on Biometrics and Forensics (IWBF)","volume":"216 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115278709","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Contactless 3D fingerprint identification without 3D reconstruction
Pub Date: 2018-06-01 | DOI: 10.1109/IWBF.2018.8401566
Qian Zheng, Ajay Kumar, Gang Pan
Recovery of 3D fingerprint data using photometric stereo generates 3D surface normals and albedo, which form rich 3D fingerprint surface information. These surface normals are typically subjected to a reconstruction process that integrates them to generate depth data. Since the source of the depth information is essentially the surface normals, it is prudent to examine whether this source information can itself be used for 3D fingerprint identification. In addition to avoiding the errors introduced by the well-known integrability problem, such an approach can also enable significantly faster identification, as the 3D reconstruction is the most computationally complex operation before template matching. This paper investigates such an approach to 3D fingerprint identification using the recovered surface normal and albedo information. We use a publicly available 3D fingerprint database from 240 clients for the performance evaluation. The experimental results presented in this paper are highly promising, validate our approach, and indicate the promise of matching contactless 3D fingerprints without 3D surface reconstruction.
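The core idea of skipping reconstruction can be illustrated as follows: recover per-pixel surface normals and albedo from photometric-stereo captures by least squares, then match the normal maps directly instead of integrating them into a depth map. The light directions, image sizes, and the mean-angular-error matcher below are illustrative assumptions, not the paper's exact pipeline.

```python
# Minimal sketch: recover surface normals and albedo with photometric stereo,
# then compare two fingerprints directly on the normal maps (no depth integration).
import numpy as np

def photometric_stereo(images, light_dirs):
    """images: (k, h, w) intensities; light_dirs: (k, 3) unit vectors.
    Returns per-pixel unit normals (h, w, 3) and albedo (h, w)."""
    k, h, w = images.shape
    I = images.reshape(k, -1)                  # (k, h*w)
    G = np.linalg.pinv(light_dirs) @ I         # (3, h*w), G = albedo * normal
    albedo = np.linalg.norm(G, axis=0)
    normals = (G / (albedo + 1e-8)).T.reshape(h, w, 3)
    return normals, albedo.reshape(h, w)

def normal_map_distance(n1, n2):
    """Mean angular difference (radians) between two aligned normal maps."""
    dot = np.clip(np.sum(n1 * n2, axis=-1), -1.0, 1.0)
    return float(np.mean(np.arccos(dot)))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    L = np.array([[0, 0, 1], [0.5, 0, 0.87], [0, 0.5, 0.87], [-0.5, 0, 0.87]], float)
    imgs = rng.random((4, 64, 64))             # stand-in captures under 4 illuminations
    n, rho = photometric_stereo(imgs, L)
    print(normal_map_distance(n, n))           # 0.0 for identical maps
```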
{"title":"Contactless 3D fingerprint identification without 3D reconstruction","authors":"Qian Zheng, Ajay Kumar, Gang Pan","doi":"10.1109/IWBF.2018.8401566","DOIUrl":"https://doi.org/10.1109/IWBF.2018.8401566","url":null,"abstract":"Recovery of 3D fingerprint data using photometric stereo generates 3D surface normal and albedo, which forms rich 3D fingerprint surface information. These surface normal's are further subjected to the reconstruction process, which integrates the surface normal to generate depth data. Since the source of depth information is essentially the surface normal, it is prudent to examine if this source information can itself be used for 3D fingerprint identification. In addition to avoiding the errors introduced by well-known integrability problem, such an approach can also enable significantly faster identification as the 3D reconstruction is the most computationally complex operation before the template matching. This paper investigates such an approach for 3D fingerprint identification using recovered surface normal and albedo information. We use publicly available 3D fingerprint database from 240 clients for the performance evaluation. The experimental results presented in this paper are highly promising, validates our approach, and indicate promises from matching contactless 3D fingerprints without the 3D surface reconstruction.","PeriodicalId":259849,"journal":{"name":"2018 International Workshop on Biometrics and Forensics (IWBF)","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129301579","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Detection of adaptive histogram equalization robust against JPEG compression
Pub Date: 2018-06-01 | DOI: 10.1109/IWBF.2018.8401564
M. Barni, Ehsan Nowroozi, B. Tondi
Contrast Enhancement (CE) detection in the presence of laundering attacks, i.e., common processing operators applied with the goal of erasing the traces the CE detector looks for, is a challenging task. JPEG compression is one of the most harmful laundering attacks and has been proven to deceive most CE detectors proposed so far. In this paper, we present a system that is able to detect contrast enhancement by means of adaptive histogram equalization in the presence of JPEG compression, by training a JPEG-aware SVM detector based on color SPAM features, i.e., an SVM detector trained on contrast-enhanced-then-JPEG-compressed images. Experimental results show that the detector works well only if the Quality Factor (QF) used during training matches the QF used to compress the images under test. To cope with this problem in cases where the QF cannot be extracted from the image header, we use a QF estimation step based on the idempotency property of JPEG compression. Experimental results show good performance over a wide range of QFs.
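The QF-estimation step lends itself to a short sketch: re-compress the test image at candidate quality factors and keep the one whose re-compression changes the image the least, exploiting the idempotency of JPEG compression. The candidate grid and the L1 distortion measure below are assumptions, not the authors' exact procedure.

```python
# Minimal sketch of QF estimation via JPEG idempotency: the QF that was actually
# used produces (almost) no change when the image is re-compressed with it.
import cv2
import numpy as np

def recompress(img, qf):
    ok, buf = cv2.imencode(".jpg", img, [cv2.IMWRITE_JPEG_QUALITY, int(qf)])
    assert ok
    return cv2.imdecode(buf, cv2.IMREAD_COLOR)

def estimate_qf(img, candidates=range(50, 101, 5)):
    """Return the candidate QF minimising the re-compression distortion."""
    errors = {qf: float(np.mean(np.abs(img.astype(np.int16) -
                                       recompress(img, qf).astype(np.int16))))
              for qf in candidates}
    return min(errors, key=errors.get)

if __name__ == "__main__":
    original = np.random.randint(0, 255, (128, 128, 3), dtype=np.uint8)
    jpeg_img = recompress(original, 75)        # simulate a JPEG-laundered image
    print("estimated QF:", estimate_qf(jpeg_img))
```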
{"title":"Detection of adaptive histogram equalization robust against JPEG compression","authors":"M. Barni, Ehsan Nowroozi, B. Tondi","doi":"10.1109/IWBF.2018.8401564","DOIUrl":"https://doi.org/10.1109/IWBF.2018.8401564","url":null,"abstract":"Contrast Enhancement (CE) detection in the presence of laundering attacks, i.e. common processing operators applied with the goal to erase the traces the CE detector looks for, is a challenging task. JPEG compression is one of the most harmful laundering attacks, which has been proven to deceive most CE detectors proposed so far. In this paper, we present a system that is able to detect contrast enhancement by means of adaptive histogram equalization in the presence of JPEG compression, by training a JPEG-aware SVM detector based on color SPAM features, i.e., an SVM detector trained on contrast-enhanced-then-JPEG-compressed images. Experimental results show that the detector works well only if the Quality Factor (QF) used during training matches the QF used to compress the images under test. To cope with this problem in cases where the QF cannot be extracted from the image header, we use a QF estimation step based on the idempotency properties of JPEG compression. Experimental results show good performance under a wide range of QFs.","PeriodicalId":259849,"journal":{"name":"2018 International Workshop on Biometrics and Forensics (IWBF)","volume":"221 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124370968","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Learning structured sparse representation for single sample face recognition
Pub Date: 2018-06-01 | DOI: 10.1109/IWBF.2018.8401561
Fan Liu, Feng Xu, Yuhua Ding, Sai Yang
In this paper, we propose a robust sparse representation method to address the single sample per person problem by simultaneously exploiting the local and global structure of the data. Considering that most sparse representation methods use each testing sample separately and ignore the prior information in the testing data, we seek the sparse representation of all testing samples together to capture the global structure of the data. Moreover, we adopt an intra-class variance dictionary to describe facial changes that cannot be captured by the single training sample. To make use of local structure, we divide each face image into blocks consisting of overlapped patches and assume that the overlapped patches in a local block are different samples from the same class, which gives their coefficients a row-wise sparse structure. Finally, by imposing a group sparsity constraint and a sparsity constraint on the coefficients corresponding to the training patch dictionary and the variance dictionary, respectively, we obtain a more discriminative sparse representation whose coefficients can be used directly for classification. Experimental results on three public databases not only demonstrate the effectiveness of the proposed approach but also show robustness to various facial variations.
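A much-simplified sketch of sparse-representation classification with an added intra-class variance dictionary is shown below. It uses a plain L1 (Lasso) penalty in place of the paper's structured/group-sparsity constraints, and the dictionary shapes, alpha, and classification-by-residual rule are illustrative assumptions.

```python
# Simplified sparse-representation classifier: code the test sample over the
# training dictionary plus a variance dictionary, then pick the class whose
# atoms (with the shared variance part) best reconstruct it.
import numpy as np
from sklearn.linear_model import Lasso

def src_classify(y, train_dict, labels, var_dict, alpha=0.01):
    """y: test vector (d,); train_dict: (d, n) one atom per training sample;
    labels: (n,) class of each atom; var_dict: (d, m) generic variation atoms."""
    D = np.hstack([train_dict, var_dict])
    coder = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000)
    coder.fit(D, y)                                      # sparse coding of y over D
    n = train_dict.shape[1]
    w_train, w_var = coder.coef_[:n], coder.coef_[n:]
    best, best_res = None, np.inf
    for c in np.unique(labels):
        wc = np.where(labels == c, w_train, 0.0)         # keep only class-c coefficients
        res = np.linalg.norm(y - train_dict @ wc - var_dict @ w_var)
        if res < best_res:
            best, best_res = c, res
    return best

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    d, n_classes = 64, 5
    train = rng.standard_normal((d, n_classes))          # one sample per person
    labels = np.arange(n_classes)
    var = rng.standard_normal((d, 10))                   # intra-class variation atoms
    y = train[:, 2] + 0.05 * rng.standard_normal(d)
    print(src_classify(y, train, labels, var))           # expected: 2
```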
{"title":"Learning structured sparse representation for single sample face recognition","authors":"Fan Liu, Feng Xu, Yuhua Ding, Sai Yang","doi":"10.1109/IWBF.2018.8401561","DOIUrl":"https://doi.org/10.1109/IWBF.2018.8401561","url":null,"abstract":"In this paper, we propose a robust sparse representation method to address single sample per person problem by simultaneously exploiting the local and global structure of data. Considering the fact that most sparse representation methods use each testing sample separately and ignore the prior information from testing data, we seek the sparse representation of all testing samples together to capture the global structure of data. Moreover, we adopt an intra-class variance dictionary to describe various facial changes that can not be captured by the single training sample. To make use of local structure, we divide each face image into some blocks consisting of overlapped patches and assume the overlapped patches in a local block are different samples from the same class, which makes their coefficients have row-wise sparse structure. Finally, by imposing group sparsity constraint and sparsity constraint respectively on the coefficients corresponding to the training patches dictionary and variance dictionary, we obtain more discriminative sparse representation, whose coefficients can be directly utilized for classification. Experimental results on three public databases not only demonstrate effectiveness of the proposed approach but also show robustness to various facial variation.","PeriodicalId":259849,"journal":{"name":"2018 International Workshop on Biometrics and Forensics (IWBF)","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125690955","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Transgender face recognition with off-the-shelf pre-trained CNNs: A comprehensive study
Pub Date: 2018-06-01 | DOI: 10.1109/IWBF.2018.8401557
Ramachandra Raghavendra, S. Venkatesh, K. Raja, C. Busch
Face recognition has become a ubiquitous way of establishing identity in many applications. Gender transformation therapy induces changes to the face in both structural and textural features. A challenge for a face recognition system is, therefore, to reliably identify subjects after they undergo gender change when the enrolment images were captured before the change. In this work, we propose a new framework based on augmenting and fine-tuning the deep Residual Network-50 (ResNet-50). We employ a YouTube database with 37 subjects whose images are self-captured to evaluate the performance of state-of-the-art schemes. The obtained results demonstrate the superiority of the proposed scheme over twelve different state-of-the-art schemes with an improved Rank-1 recognition rate.
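A minimal sketch of the fine-tuning idea follows, assuming an ImageNet-pretrained ResNet-50 from torchvision with a new 37-way head and only the last residual block left trainable. The layer-freezing choices, hyper-parameters, and dummy data are assumptions, not the authors' setup.

```python
# Sketch: take an off-the-shelf pre-trained ResNet-50, replace its classifier
# head with one output per enrolled subject, and train only the last block
# plus the new head.
import torch
import torch.nn as nn
from torchvision.models import resnet50, ResNet50_Weights

def build_model(num_subjects):
    model = resnet50(weights=ResNet50_Weights.IMAGENET1K_V1)
    for p in model.parameters():            # freeze the backbone
        p.requires_grad = False
    for p in model.layer4.parameters():     # keep the last residual block trainable
        p.requires_grad = True
    model.fc = nn.Linear(model.fc.in_features, num_subjects)   # new trainable head
    return model

if __name__ == "__main__":
    model = build_model(num_subjects=37)
    opt = torch.optim.Adam((p for p in model.parameters() if p.requires_grad), lr=1e-4)
    loss_fn = nn.CrossEntropyLoss()
    x = torch.randn(4, 3, 224, 224)         # stand-in face crops
    y = torch.randint(0, 37, (4,))
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
    print(float(loss))
```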
{"title":"Transgender face recognition with off-the-shelf pre-trained CNNs: A comprehensive study","authors":"Ramachandra Raghavendra, S. Venkatesh, K. Raja, C. Busch","doi":"10.1109/IWBF.2018.8401557","DOIUrl":"https://doi.org/10.1109/IWBF.2018.8401557","url":null,"abstract":"Face recognition has become a ubiquitous way of establishing identity in many applications. Gender transformation therapy induces changes to face on both for structural and textural features. A challenge for face recognition system is, therefore, to reliably identify the subjects after they undergo gender change while the enrolment images correspond to pre-change. In this work, we propose a new framework based on augmenting and fine-tuning deep Residual Network-50 (ResNet-50). We employ YouTube database with 37 subjects whose images are self-captured to evaluate the performance of state-of-the-schemes. Obtained results demonstrate the superiority of the proposed scheme over twelve different state-of-the-art schemes with an improved Rank — 1 recognition rate.","PeriodicalId":259849,"journal":{"name":"2018 International Workshop on Biometrics and Forensics (IWBF)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116609827","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Efficient iris sample data protection using selective JPEG2000 encryption of normalised texture
Pub Date: 2018-06-01 | DOI: 10.1109/IWBF.2018.8401552
Martin Rieger, Jutta Hämmerle-Uhl, A. Uhl
Biometric system security requires cryptographic protection of sample data under certain circumstances. We assess low-complexity selective encryption schemes applied to JPEG2000-compressed iris data by conducting iris recognition on the selectively encrypted data. This paper specifically investigates the effect of applying the approach to normalised texture data instead of original sample data in order to further reduce the amount of data to be processed (i.e., compressed and encrypted). Generalisability of the results is facilitated by employing four different iris feature extraction schemes and systematically considering three encryption variants. Depending on the applied iris recognition scheme, protection equivalent to full encryption can be achieved when encrypting 1/60 to 1/12 of the data of a full iris sample encoded in a JPEG2000 file.
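The selective-encryption idea can be sketched as follows: keep the JPEG2000 headers readable and AES-CTR-encrypt only a leading fraction (e.g. 1/12) of the packet data. Locating the packet data is simplified here to a caller-supplied offset rather than real codestream parsing, so this is only a hedged illustration of the general mechanism, not the paper's scheme.

```python
# Minimal sketch of selective encryption of a JPEG2000 codestream using the
# `cryptography` package: encrypt a leading fraction of the body, keep the rest.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def selectively_encrypt(codestream: bytes, data_offset: int, fraction: float,
                        key: bytes, nonce: bytes) -> bytes:
    """Encrypt `fraction` of the bytes following `data_offset`; keep the rest intact."""
    n_enc = int((len(codestream) - data_offset) * fraction)
    enc = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
    protected = enc.update(codestream[data_offset:data_offset + n_enc]) + enc.finalize()
    return codestream[:data_offset] + protected + codestream[data_offset + n_enc:]

if __name__ == "__main__":
    key, nonce = os.urandom(32), os.urandom(16)
    fake_codestream = os.urandom(4096)                # stand-in for a J2K file's bytes
    out = selectively_encrypt(fake_codestream, data_offset=256, fraction=1 / 12,
                              key=key, nonce=nonce)
    print(len(out) == len(fake_codestream))           # size is preserved
```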
{"title":"Efficient iris sample data protection using selective JPEG2000 encryption of normalised texture","authors":"Martin Rieger, Jutta Hämmerle-Uhl, A. Uhl","doi":"10.1109/IWBF.2018.8401552","DOIUrl":"https://doi.org/10.1109/IWBF.2018.8401552","url":null,"abstract":"Biometrie system security requires cryptographic protection of sample data under certain circumstances. We assess low complexity selective encryption schemes applied to JPEG2000 compressed iris data by conducting iris recognition on the selectively encrypted data. This paper specifically investigates the effect of applying the approach to normalised texture data instead of original sample data in order to further reduce the amount of data to be processed (i.e. compressed and encrypted). Result generalisability is facilitated by the employment of four different iris feature extraction schemes and the systematic consideration of three encryption variants. Depending on the applied iris recognition scheme, protection equivalent to full encryption can be achieved when encrypting 1/60–1/12 of the data amount of a full iris sample encoded in a JPEG2000 file.","PeriodicalId":259849,"journal":{"name":"2018 International Workshop on Biometrics and Forensics (IWBF)","volume":"95 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133687846","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Using a generic model for codebook-based gait recognition algorithms
Pub Date: 2018-06-01 | DOI: 10.1109/IWBF.2018.8401551
M. H. Khan, M. S. Farid, M. Grzegorzek
Gait has emerged as a distinguishable human biological trait. It refers to the walking style of an individual and is considered an important biometric feature for person identification. Codebook-based gait recognition algorithms have demonstrated excellent performance by achieving high recognition rates. However, such methods construct a codebook for each database or scenario. In this paper, we investigate the idea of using a generic codebook for gait recognition. The proposed codebook is built from the spatiotemporal characteristics of gait sequences in a large, diverse synthetic gait database. We also propose a gait recognition algorithm based on this generic codebook. The main advantage of the proposed algorithm over existing methods is that it does not require generating a codebook for each database; rather, the generic codebook can be used to encode any gait scenario. Moreover, the proposed algorithm is model-free and does not require human body segmentation or modeling. The performance of the proposed generic codebook-based gait recognition algorithm is evaluated on two large gait databases, TUM GAID and CMU MoBo, and the recognition rates reveal the effectiveness of the proposed algorithm.
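A minimal sketch of the generic-codebook pipeline: cluster spatiotemporal descriptors pooled from many sequences into one codebook, then encode any new sequence as a bag-of-words histogram over that codebook. The random stand-in descriptors, the codebook size, and the cosine matcher are assumptions for illustration.

```python
# Sketch: build one generic codebook with k-means, reuse it to encode any
# gait sequence as a normalised histogram of visual words.
import numpy as np
from sklearn.cluster import KMeans

def build_codebook(descriptor_pool, k=64, seed=0):
    return KMeans(n_clusters=k, n_init=10, random_state=seed).fit(descriptor_pool)

def encode(sequence_descriptors, codebook):
    words = codebook.predict(sequence_descriptors)
    hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
    return hist / (hist.sum() + 1e-8)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pool = rng.standard_normal((5000, 32))        # stand-in descriptors from a large database
    cb = build_codebook(pool, k=64)
    probe = encode(rng.standard_normal((300, 32)), cb)
    gallery = encode(rng.standard_normal((300, 32)), cb)
    score = probe @ gallery / (np.linalg.norm(probe) * np.linalg.norm(gallery) + 1e-8)
    print(score)
```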
{"title":"Using a generic model for codebook-based gait recognition algorithms","authors":"M. H. Khan, M. S. Farid, M. Grzegorzek","doi":"10.1109/IWBF.2018.8401551","DOIUrl":"https://doi.org/10.1109/IWBF.2018.8401551","url":null,"abstract":"Gait has emerged as a distinguishable human biological trait. It refers to the walking style of an individual and is considered an important biometric feature for person identification. Codebook based gait recognition algorithms have demonstrated excellent performance by achieving high recognition rates. However, such methods construct a codebook for each database or scenario. In this paper, we investigate the idea of using a generic codebook for gait recognition. The proposed codebook is built by using spatiotemporal characteristics of gait sequences from a large diverse synthetic gait database. We also propose a gait recognition algorithm based on this generic codebook. The advantages of the proposed algorithm over the existing methods include its independency from generating a codebook for each database, rather the proposed generic codebook can be used to encode any gait scenario. Moreover, the proposed algorithm is model free and does not require human body segmentation or modeling. The performance of the proposed generic codebook-based gait recognition algorithm is evaluated on two large gait databases TUM GAID and CMU MoBo, and recognition rate reveals the effectiveness of the proposed algorithm.","PeriodicalId":259849,"journal":{"name":"2018 International Workshop on Biometrics and Forensics (IWBF)","volume":"66 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126996624","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Have you permission to answer this phone?
Pub Date: 2018-06-01 | DOI: 10.1109/IWBF.2018.8401563
Silvio Barra, G. Fenu, M. De Marsico, Aniello Castiglione, M. Nappi
The new frontier of biometrie authentication exploits wearable sensors. At present, there is no need of special equipment. Both cameras of increasing resolution, and MEMS-based sensors (Micro Electro-Mechanical Systems) are ubiquitously embedded in everyday mobile communication devices, especially smartphones. This makes their use economically attractive, and the investigation of the new provided possibilities increasingly widespread. The aim of the present paper is to demonstrate the possibility to control the access to a smartphone by recording and processing the dynamic signals produced by the simple gesture of lifting the phone, possibly in connection with further biometric information provided by ear recognition. In this respect, it continues and extends a previous work, by performing new experiments on a more challenging dataset.
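A small sketch of how such a lift-gesture verifier might work: compare the probe accelerometer trace to an enrolled template with dynamic time warping (DTW) and accept if the normalised distance is low. The synthetic signals and the acceptance threshold are illustrative assumptions, not the authors' method.

```python
# Sketch: verify a phone-lift gesture by DTW matching of 3-axis accelerometer traces.
import numpy as np

def dtw_distance(a, b):
    """DTW between two (T, 3) accelerometer traces, Euclidean local cost,
    normalised by the total trace length."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m] / (n + m)

def accept(probe, template, threshold=0.5):
    return dtw_distance(probe, template) < threshold

if __name__ == "__main__":
    t = np.linspace(0, 1, 100)[:, None]
    template = np.hstack([np.sin(2 * np.pi * t), np.cos(2 * np.pi * t), t])  # enrolled lift
    probe = template + 0.02 * np.random.default_rng(0).standard_normal(template.shape)
    print(accept(probe, template))    # genuine attempt -> True
```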
{"title":"Have you permission to answer this phone?","authors":"Silvio Barra, G. Fenu, M. De Marsico, Aniello Castiglione, M. Nappi","doi":"10.1109/IWBF.2018.8401563","DOIUrl":"https://doi.org/10.1109/IWBF.2018.8401563","url":null,"abstract":"The new frontier of biometrie authentication exploits wearable sensors. At present, there is no need of special equipment. Both cameras of increasing resolution, and MEMS-based sensors (Micro Electro-Mechanical Systems) are ubiquitously embedded in everyday mobile communication devices, especially smartphones. This makes their use economically attractive, and the investigation of the new provided possibilities increasingly widespread. The aim of the present paper is to demonstrate the possibility to control the access to a smartphone by recording and processing the dynamic signals produced by the simple gesture of lifting the phone, possibly in connection with further biometric information provided by ear recognition. In this respect, it continues and extends a previous work, by performing new experiments on a more challenging dataset.","PeriodicalId":259849,"journal":{"name":"2018 International Workshop on Biometrics and Forensics (IWBF)","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123587217","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Solving the face growth problem in biometric face recognition using Photo-Anthropometric ratios by iris normalization
Pub Date: 2018-06-01 | DOI: 10.1109/IWBF.2018.8401553
Gustavo Carneiro Bicalho, M. C. Alves, L. Porto, C. Machado, F. Vidal
Over the last years, facial landmark techniques have been the first and main approach to biometric facial recognition, and they are still capable of achieving great results in controlled environments. However, there are still open problems to be solved, such as how to deal with twins, scale variation, and face growth. In this work, we propose a new method based on measured values (ratios) from facial cephalometric landmarks, which uses the iris size as a normalization factor to compensate for the effect of face scale (face growth) and improves the Equal Error Rate (EER) of a facial recognition system to below 5% in specific scenarios.
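The normalization idea admits a short worked example: dividing inter-landmark distances by the iris diameter, which stays roughly constant after early childhood, yields ratios that are insensitive to face scale and growth. The landmark names, coordinates, and pairings below are hypothetical, chosen only to illustrate the scale invariance.

```python
# Sketch: photo-anthropometric ratios normalised by iris diameter are scale-free.
import numpy as np

def iris_normalised_ratios(landmarks, iris_diameter_px):
    """landmarks: dict name -> (x, y) in pixels. Returns scale-free ratios."""
    pairs = [("left_eye_outer", "right_eye_outer"),
             ("nose_tip", "chin"),
             ("left_mouth_corner", "right_mouth_corner")]
    d = lambda a, b: np.linalg.norm(np.subtract(landmarks[a], landmarks[b]))
    return np.array([d(a, b) / iris_diameter_px for a, b in pairs])

if __name__ == "__main__":
    lm = {"left_eye_outer": (100, 120), "right_eye_outer": (220, 118),
          "nose_tip": (160, 180), "chin": (162, 260),
          "left_mouth_corner": (130, 215), "right_mouth_corner": (195, 214)}
    full_size = iris_normalised_ratios(lm, iris_diameter_px=24.0)
    # The same face captured at half the resolution (simulating a scale change):
    lm_small = {k: (v[0] / 2, v[1] / 2) for k, v in lm.items()}
    half_size = iris_normalised_ratios(lm_small, iris_diameter_px=12.0)
    print(np.allclose(full_size, half_size))   # ratios are unchanged -> True
```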
{"title":"Solving the face growth problem in the biometrie face recognition using Photo-Anthropometric ratios by iris normalization","authors":"Gustavo Carneiro Bicalho, M. C. Alves, L. Porto, C. Machado, F. Vidal","doi":"10.1109/IWBF.2018.8401553","DOIUrl":"https://doi.org/10.1109/IWBF.2018.8401553","url":null,"abstract":"Over the last years, facial landmarks techniques were the first and main approach to solve biometric facial recognition and they are still capable of achieving great results in controlled environments. However, there are still open problems to be solved, such as how to deal with twins, scale variation and the face growth. In this work, we propose a new method based on measured values (ratios) from facial cephalometric landmarks, which uses an iris size as a normalization factor to solve the influence of face scale (face growth) effect and improving Equal Error Rates (EER) scores for a facial recognition system in specifics scenarios under 5%.","PeriodicalId":259849,"journal":{"name":"2018 International Workshop on Biometrics and Forensics (IWBF)","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128391667","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}