
2019 IEEE 5th International Conference on Identity, Security, and Behavior Analysis (ISBA): Latest Publications

FDFNet: A Secure Cancelable Deep Finger Dorsal Template Generation Network Secured via. Bio-Hashing
Avantika Singh, Ashish Arora, Shreyal Patel, Gaurav Jaswal, A. Nigam
The modern online and digital world continually poses challenging security problems and scenarios. As in the physical world, personal identity management is crucial for providing any secure online system. The last decade has seen a great deal of work in this area using biometrics such as face, fingerprint, and iris. Still, several vulnerabilities remain, and the problem of compromised biometrics must be taken much more seriously, since biometric traits cannot be easily modified once compromised. In this work, we propose a secure cancelable finger dorsal template generation network (learning domain-specific features) secured via Bio-Hashing. The proposed system effectively protects the original finger dorsal images by revoking a compromised template and reassigning a new one. A novel Finger-Dorsal Feature Extraction Net (FDFNet) is proposed for extracting discriminative features. This network is trained exclusively on trait-specific features without using any pre-trained architecture. Bio-Hashing, a technique based on assigning a tokenized random number to each user, is then used to hash the features extracted by FDFNet. To assess the performance of the proposed architecture, we test it on two public benchmark finger knuckle datasets: PolyU FKP and PolyU Contactless FKI. The experimental results show the effectiveness of the proposed system in terms of security and accuracy.
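The Bio-Hashing step described above admits a compact illustration: a user-specific token seeds a random projection, the feature vector is projected onto orthonormalized random bases, and the projections are thresholded into a binary code. The sketch below is a minimal, generic BioHashing implementation; the feature dimensionality, code length, and zero threshold are illustrative assumptions, not the paper's exact parameters.

```python
import numpy as np

def biohash(features: np.ndarray, user_token: int, code_len: int = 64) -> np.ndarray:
    """Generic BioHashing: project the feature vector onto token-seeded
    orthonormal random bases and binarize the projections."""
    rng = np.random.default_rng(user_token)            # tokenized random number
    R = rng.standard_normal((features.size, code_len))
    Q, _ = np.linalg.qr(R)                             # orthonormalize the bases
    return (features @ Q > 0).astype(np.uint8)         # threshold to a bit code

# Example with a hypothetical 512-D FDFNet feature vector: reseeding after a
# compromise produces a new, unlinkable template from the same biometric.
feat = np.random.rand(512).astype(np.float32)
old_code = biohash(feat, user_token=12345)
new_code = biohash(feat, user_token=67890)             # reissued template
print(old_code.shape, int(np.sum(old_code != new_code)))  # Hamming distance
```

Matching would then compare codes by Hamming distance, so cancelling a stolen template amounts to issuing a new token.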
Citations: 4
Spoofing PRNU Patterns of Iris Sensors while Preserving Iris Recognition
Sudipta Banerjee, Vahid Mirjalili, A. Ross
The principle of Photo Response Non-Uniformity (PRNU) is used to link an image with its source, i.e., the sensor that produced it. In this work, we investigate if it is possible to modify an iris image acquired using one sensor in order to spoof the PRNU noise pattern of a different sensor. In this regard, we develop an image perturbation routine that iteratively modifies blocks of pixels in the original iris image such that its PRNU pattern approaches that of a target sensor. Experiments indicate the efficacy of the proposed perturbation method in spoofing PRNU patterns present in an iris image whilst still retaining its biometric content.
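For background, a sensor's PRNU fingerprint is commonly estimated by averaging noise residuals (image minus a denoised copy) over many images, and source attribution correlates a probe's residual against that fingerprint. The sketch below, assuming a Gaussian filter as a stand-in for the usual wavelet denoiser, shows the quantities such an attack must manipulate; the paper's iterative block-wise perturbation routine itself is not reproduced here.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def noise_residual(img, sigma=1.0):
    """Residual W = I - F(I); a Gaussian blur stands in for the wavelet denoiser."""
    return img - gaussian_filter(img, sigma)

def estimate_prnu(images):
    """Average residuals over many images from one sensor to suppress scene content."""
    return np.mean([noise_residual(im) for im in images], axis=0)

def ncc(a, b):
    """Normalized correlation used to attribute a probe residual to a sensor."""
    a, b = a - a.mean(), b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# A spoof succeeds when ncc(noise_residual(perturbed_iris), target_fingerprint)
# exceeds the attribution threshold while the iris texture stays matchable.
sensor_imgs = [np.random.rand(64, 64) for _ in range(20)]
fingerprint = estimate_prnu(sensor_imgs)
print(ncc(noise_residual(np.random.rand(64, 64)), fingerprint))
```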
Citations: 8
Removing Personally Identifiable Information from Shared Dataset for Keystroke Authentication Research
Jiaju Huang, Bryan Klee, Daniel Schuckers, Daqing Hou, S. Schuckers
Research on keystroke dynamics has good potential to offer continuous authentication that complements conventional authentication methods in combating insider threats and identity theft before more harm is done to genuine users. Unfortunately, the large amount of data required for free-text keystroke authentication often contains personally identifiable information (PII) and personally sensitive information, such as a user's first and last name, the username and password for an account, bank card numbers, and social security numbers. As a result, there are privacy risks associated with keystroke data that must be mitigated before the data is shared with other researchers. We conduct a systematic study to remove PII from a recent large keystroke dataset. We find substantial amounts of PII in the dataset, including names, usernames and passwords, social security numbers, and bank card numbers, which, if leaked, could lead to various harms to users, including personal embarrassment, blackmail, financial loss, and identity theft. We thoroughly evaluate the effectiveness of our detection program for each kind of PII. We demonstrate that our PII detection program can achieve near-perfect recall at the expense of losing some useful information (lower precision). Finally, we demonstrate that removing PII from the original dataset has only a negligible impact on the detection error tradeoff of the free-text authentication algorithm by Gunetti and Picardi. We hope that this experience report will inform the design of privacy removal in future keystroke-dynamics-based user authentication systems.
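The authors' detection program is not public, but a minimal sketch of a regex-plus-checksum detector of the kind the abstract describes might look as follows; the SSN and card-number patterns and the redaction tokens are illustrative assumptions, and a Luhn checksum filters digit runs that merely look like card numbers.

```python
import re

SSN_RE  = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")        # e.g. 123-45-6789
CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")       # 13-16 digit card numbers

def luhn_ok(digits: str) -> bool:
    """Luhn checksum: rejects most random digit runs that merely look like cards."""
    nums = [int(c) for c in digits if c.isdigit()][::-1]
    total = sum(nums[0::2]) + sum(sum(divmod(2 * d, 10)) for d in nums[1::2])
    return total % 10 == 0

def redact(text: str) -> str:
    """Replace detected PII spans so keystroke timing data can still be shared."""
    text = SSN_RE.sub("[SSN]", text)
    return CARD_RE.sub(lambda m: "[CARD]" if luhn_ok(m.group()) else m.group(), text)

print(redact("ssn 123-45-6789 card 4539 1488 0343 6467"))
# -> ssn [SSN] card [CARD]
```

Tuning such patterns toward over-matching trades precision for the near-perfect recall the abstract reports.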
Citations: 2
Utilizing Template Diversity for Fusion Of Face Recognizers
S. Tulyakov, Nishant Sankaran, S. Setlur, V. Govindaraju
If multiple face images are available for creating a person's biometric template, an averaging method can be used to combine the feature vectors extracted from each image into a single template feature vector. The resulting average feature vector, however, does not retain information about the distribution of the image feature vectors. In this paper, we consider augmenting such templates with information about the diversity of the constituent face images, e.g., the sample standard deviation of the image feature vectors. We present a theoretical model describing the conditions under which a template diversity measure is useful, and examine whether such conditions hold for real-life templates. We perform our experiments using IARPA face image datasets and deep CNN face recognizers.
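A minimal sketch of the idea, assuming cosine scoring and a simple weighted fusion (the paper's actual fusion model is not reproduced): the template stores the per-dimension sample standard deviation alongside the usual mean, so the diversity information the plain average discards remains available at match time.

```python
import numpy as np

def build_template(feature_vectors: np.ndarray) -> dict:
    """feature_vectors: (n_images, d) deep-CNN features for one person.
    Keep the average plus a diversity measure the plain average discards."""
    return {
        "mean": feature_vectors.mean(axis=0),
        "std":  feature_vectors.std(axis=0, ddof=1),   # sample standard deviation
    }

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def match_score(probe: np.ndarray, template: dict, w: float = 0.2) -> float:
    """Fuse the usual mean-template score with a diversity-aware term;
    the weight w is an illustrative assumption."""
    return cosine(probe, template["mean"]) + w * cosine(probe, template["std"])

gallery = np.random.rand(5, 256)      # 5 enrollment images, 256-D features
probe = np.random.rand(256)
print(match_score(probe, build_template(gallery)))
```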
Citations: 1
Thermal to Visual Face Recognition using Transfer Learning
Yaswanth Gavini, B. Mehtre, A. Agarwal
Inter-modality face recognition refers to matching face images across different modalities, usually by taking visual images as the source and one of the other modalities as the target. Performing face recognition between thermal and visual images is a difficult task because of the nonlinear spectral characteristics of the two modalities. However, it is a desirable capability for night-time security applications and military surveillance. In this paper, we propose a method to improve the accuracy of the thermal classifier by using transfer learning, which in turn increases the accuracy of thermal-to-visual face recognition. The proposed method is tested on the RGB-D-T dataset (45,900 images) and the UND-X1 collection (4,584 images). Experimental results show that, by transferring knowledge, the overall accuracy of thermal-to-visual face recognition increases from 89.3% to 94.32% on the RGB-D-T dataset and from 81.54% to 90.33% on the UND-X1 dataset.
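A minimal PyTorch sketch of the transfer-learning recipe, under the assumption of an ImageNet-pretrained backbone fine-tuned on thermal images; the backbone choice, frozen layers, class count, and hyperparameters are illustrative, not the paper's configuration (the weights API assumes torchvision 0.13+).

```python
import torch
import torch.nn as nn
from torchvision import models

num_identities = 50                      # illustrative class count

# Start from a visual-domain backbone and transfer it to the thermal domain.
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)

# Freeze early layers (generic edges/textures transfer across modalities);
# leave the last block and the new classifier head trainable.
for name, param in model.named_parameters():
    if not name.startswith(("layer4", "fc")):
        param.requires_grad = False

model.fc = nn.Linear(model.fc.in_features, num_identities)

optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy thermal batch (3-channel input).
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, num_identities, (8,))
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```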
Citations: 7
Deep Convolutional Neural Network for Dot and Incipient Ridge Detection in High-resolution Fingerprints
V. Anand, Vivek Kanhangad
Automated fingerprint recognition using partial and latent fingerprints employs level 3 features, which provide additional information in the absence of a sufficient number of level 1 and level 2 features. In this paper, we present a methodology for detecting two level 3 features, namely dots and incipient ridges. Specifically, we design a deep convolutional neural network that generates a dot map from the input fingerprint image. Subsequently, post-processing operations are performed on the obtained dot map to identify the coordinates of dots and incipient ridges. The results of our experiments on the publicly available PolyU HRF database demonstrate the effectiveness of the proposed algorithm.
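The post-processing stage can be sketched as thresholding the network's dot map and taking connected-component centroids as the reported coordinates; the threshold and minimum blob size below are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

def dots_from_map(dot_map, thresh=0.5, min_px=2):
    """dot_map: (H, W) CNN output in [0, 1]. Returns (row, col) centroids of
    connected blobs that survive a size filter, i.e. candidate dots/ridges."""
    labels, n = ndimage.label(dot_map > thresh)        # connected components
    coords = []
    for i in range(1, n + 1):
        ys, xs = np.nonzero(labels == i)
        if ys.size >= min_px:                          # drop single-pixel noise
            coords.append((float(ys.mean()), float(xs.mean())))
    return coords

demo = np.zeros((32, 32))
demo[5, 5] = demo[5, 6] = demo[20, 12] = demo[21, 12] = 0.9   # two synthetic dots
print(dots_from_map(demo))   # -> [(5.0, 5.5), (20.5, 12.0)]
```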
Citations: 2
Enhanced Segmentation-CNN based Finger-Vein Recognition by Joint Training with Automatically Generated and Manual Labels
Ehsaneddin Jalilian, A. Uhl
Deep learning techniques are now the leading approaches for solving complex machine learning and pattern recognition problems. For the first time, we utilize state-of-the-art semantic segmentation CNNs to extract vein patterns from near-infrared finger imagery and use them as the actual vein features in biometric finger-vein recognition. In this context, besides investigating the impact of training data volume, we propose a training model based on automatically generated labels to improve the recognition performance of the resulting vein structures compared to (i) network training using manual labels only and (ii) well-established classical recognition techniques relying on publicly available software. With this model, we also take a crucial step towards reducing the number of manually annotated labels required to train such networks, whose generation is extremely time-consuming and error-prone. As a further contribution, we release human-annotated ground-truth vein pixel labels (required for training the networks) for a subset of a well-known finger-vein database used in this work, along with a corresponding tool for further annotation.
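A minimal sketch of joint training with the two label sources, assuming a per-pixel binary segmentation loss and a fixed weighting between automatically generated and manual labels; the weight, tensor shapes, and loss choice are illustrative, not the paper's configuration.

```python
import torch
import torch.nn as nn

bce = nn.BCEWithLogitsLoss()

def joint_loss(logits_auto, labels_auto, logits_manual, labels_manual, alpha=0.5):
    """Weighted sum of segmentation losses on batches with automatically
    generated labels and batches with manual labels; alpha is illustrative."""
    return alpha * bce(logits_auto, labels_auto) + \
           (1 - alpha) * bce(logits_manual, labels_manual)

# Dummy batches: (N, 1, H, W) vein-probability logits vs. binary pixel labels.
la, ya = torch.randn(2, 1, 64, 64), torch.randint(0, 2, (2, 1, 64, 64)).float()
lm, ym = torch.randn(2, 1, 64, 64), torch.randint(0, 2, (2, 1, 64, 64)).float()
print(joint_loss(la, ya, lm, ym).item())
```

Shifting alpha toward the automatic labels is what lets the cheap labels carry most of the training signal while a small manual set anchors quality.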
Citations: 10
Towards making Morphing Attack Detection robust using hybrid Scale-Space Colour Texture Features
Raghavendra Ramachandra, S. Venkatesh, K. Raja, C. Busch
The widespread use of face recognition algorithms, especially in Automatic Border Control (ABC) systems, has raised concerns due to potential attacks. Face morphing combines two or more face images to generate a single image that can be used in the passport enrolment procedure. Such morphed passports have proven to be a significant threat to national security, as each of the individuals who contributed to the morphed reference image can use that single travel document. In this work, we present a novel method based on hybrid colour features to automatically detect morphed face images. The proposed method explores multiple colour spaces and scale-spaces, using a Laplacian pyramid to extract robust features. The texture features corresponding to each scale-space in the different colour spaces are extracted with Local Binary Patterns (LBP) and classified using a Spectral Regression Kernel Discriminant Analysis (SRKDA) classifier. The scores are then fused using the sum rule to detect morphed face images. Experiments are carried out on a large-scale morphed face image database consisting of printed and scanned images to reflect a real-life passport issuance scenario. The evaluation database comprises 1270 bona fide face images and 2515 morphed face images. The performance of the proposed method is compared with seven different deep learning and seven different non-deep-learning methods, with the proposed scheme achieving the best performance: a Bona fide Presentation Classification Error Rate (BPCER) of 0.86% at an Attack Presentation Classification Error Rate (APCER) of 10%, and a BPCER of 7.59% at an APCER of 5%. The obtained results indicate improved robustness in detecting morphing attacks compared to earlier works.
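A minimal sketch of the scale-space colour texture extraction, assuming OpenCV for the Laplacian pyramid and scikit-image for uniform LBP histograms; the two colour spaces and three pyramid levels are illustrative choices, and the SRKDA classifier and score fusion are omitted.

```python
import cv2
import numpy as np
from skimage.feature import local_binary_pattern

def laplacian_pyramid(gray, levels=3):
    """Each level is the band-pass detail: current minus upsampled next scale."""
    pyr, cur = [], gray
    for _ in range(levels):
        down = cv2.pyrDown(cur)
        up = cv2.pyrUp(down, dstsize=(cur.shape[1], cur.shape[0]))
        pyr.append(cv2.subtract(cur, up))
        cur = down
    return pyr

def lbp_hist(img, P=8, R=1.0):
    """Uniform LBP codes summarized as a normalized histogram (P + 2 bins)."""
    codes = local_binary_pattern(img, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
    return hist

def morph_features(bgr):
    """Concatenate LBP histograms over Laplacian levels of each colour channel."""
    feats = []
    for space in (cv2.COLOR_BGR2HSV, cv2.COLOR_BGR2YCrCb):   # illustrative spaces
        for channel in cv2.split(cv2.cvtColor(bgr, space)):
            feats.extend(lbp_hist(level) for level in laplacian_pyramid(channel))
    return np.concatenate(feats)

print(morph_features(np.random.randint(0, 256, (128, 128, 3), np.uint8)).shape)
```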
Citations: 41
Subband Analysis for Performance Improvement of Replay Attack Detection in Speaker Verification Systems
S. Garg, Shruti Bhilare, Vivek Kanhangad
Automatic speaker verification systems have been widely employed in a variety of commercial applications. However, advances in speech technology have equipped attackers with sophisticated techniques for circumventing speaker verification systems. State-of-the-art countermeasures are fairly successful in detecting speech synthesis and voice conversion attacks. However, the problem of replay attack detection has received far less attention from researchers. In this study, we perform subband analysis on constant-Q cepstral coefficient (CQCC) and mel-frequency cepstral coefficient (MFCC) features to improve the performance of replay attack detection. We perform experiments on the ASVspoof 2017 database, which consists of 3566 genuine and 15380 replayed utterances. Our experimental results suggest that features extracted from the high-frequency band carry significant discriminatory information for replay attack detection. In particular, our approach achieves a 36.33% improvement over the baseline replay attack detection method in terms of equal error rate.
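A minimal sketch of the subband analysis for the MFCC branch, assuming librosa with the mel filterbank restricted to a chosen band; the 4 kHz cutoff is illustrative, and CQCC extraction is omitted since it has no standard librosa implementation.

```python
import numpy as np
import librosa

def subband_mfcc(y, sr, fmin, fmax):
    """MFCCs whose mel filterbank covers only [fmin, fmax]; restricting it to
    the high band isolates the channel artefacts that replays introduce."""
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20, fmin=fmin, fmax=fmax)
    return mfcc.mean(axis=1)                         # utterance-level statistic

y = np.random.randn(16000).astype(np.float32)        # 1 s of dummy audio, 16 kHz
full_band = subband_mfcc(y, 16000, 0.0, 8000.0)
high_band = subband_mfcc(y, 16000, 4000.0, 8000.0)   # high-frequency subband
print(full_band.shape, high_band.shape)              # (20,) (20,)
```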
Citations: 9