
Latest publications from the 2017 IEEE International Joint Conference on Biometrics (IJCB)

Robust face presentation attack detection on smartphones: An approach based on variable focus
Pub Date : 2017-10-01 DOI: 10.1109/BTAS.2017.8272753
K. Raja, P. Wasnik, Ramachandra Raghavendra, C. Busch
Smartphone-based facial biometric systems are widely used in security applications ranging from simple phone unlocking to secure banking. This work presents a new approach that exploits an intrinsic capability of the smartphone camera to capture a stack of images across the depth-of-field. With the set of stack images obtained, we present a new feature-free and classifier-free approach to building a presentation-attack-resistant face biometric system. With the entire system implemented on the smartphone, we demonstrate the applicability of the proposed scheme, which obtains a stack of images at varying focus to effectively detect presentation attacks. We create a new database of 13250 images at different focal lengths to present a detailed vulnerability analysis together with an evaluation of the proposed scheme. An extensive evaluation on the newly created database, comprising 5 different Presentation Attack Instruments (PAI), demonstrates outstanding performance of the proposed approach on all 5 PAIs. Given the complementary benefits of the proposed approach illustrated in this work, we deduce its robustness towards unseen 2D attacks.
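The variable-focus intuition lends itself to a small sketch: a flat 2D artefact (print or screen) tends to sharpen and blur uniformly across the focal stack, whereas a real 3D face produces a pronounced sharpness peak at its focal plane. The following Python sketch (the function names, the variance-of-Laplacian focus measure, and the peak-ratio threshold are our own illustrative choices, not the paper's method) scores each frame of a stack:

```python
import numpy as np

def laplacian_focus_measure(img):
    """Variance of a 4-neighbour Laplacian: a standard sharpness score."""
    lap = (-4 * img[1:-1, 1:-1]
           + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return float(lap.var())

def focus_profile(stack):
    """Sharpness of every frame in a focal stack (list of 2-D arrays)."""
    return [laplacian_focus_measure(f) for f in stack]

def looks_flat(stack, ratio_threshold=1.5):
    """Heuristic: a flat 2-D artefact yields a nearly uniform focus
    profile, while a real face yields a clear peak above the mean."""
    profile = focus_profile(stack)
    return max(profile) < ratio_threshold * (sum(profile) / len(profile))
```

A genuine face would show a profile that peaks at one focal setting; a replayed photo would not, regardless of which frame happens to be sharpest.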
Citations: 2
Using associative classification to authenticate mobile device users
Pub Date : 2017-10-01 DOI: 10.1109/BTAS.2017.8272684
T. Neal, D. Woodard
Because passwords and personal identification numbers are easily forgotten, stolen, or reused on multiple accounts, the current norm for mobile device security is quickly becoming inefficient and inconvenient. Thus, manufacturers have worked to make physiological biometrics accessible to mobile device owners as improved security measures. While behavioral biometrics has yet to receive commercial attention, researchers have continued to consider these approaches as well. However, studies of interactive data are limited, and efforts which are aimed at improving the performance of such techniques remain relevant. Thus, this paper provides a performance analysis of application, Bluetooth, and Wi-Fi data collected from 189 subjects on a mobile device for user verification. Results indicate that user authentication can be achieved with up to 91% accuracy, demonstrating the effectiveness of associative classification as a feature extraction technique.
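Associative classification mines class-association rules (here, "event implies user") from labelled interaction logs and classifies new sessions by the rules they match. A toy single-item version can be sketched as follows (the paper's actual rules may use multi-item antecedents mined Apriori-style over app/Bluetooth/Wi-Fi events; the thresholds and names below are our illustrative choices):

```python
from collections import Counter, defaultdict

def mine_rules(sessions, min_support=2, min_conf=0.6):
    """Mine one-item class-association rules `event -> user` from labelled
    event sessions, keeping rules with enough support and confidence."""
    event_user = Counter()
    event_total = Counter()
    for user, events in sessions:
        for e in set(events):
            event_user[(e, user)] += 1
            event_total[e] += 1
    rules = defaultdict(dict)
    for (e, user), n in event_user.items():
        conf = n / event_total[e]
        if n >= min_support and conf >= min_conf:
            rules[e][user] = conf
    return dict(rules)

def classify(rules, events):
    """Sum the confidences of all matching rules per user; return the top user."""
    scores = Counter()
    for e in set(events):
        for user, conf in rules.get(e, {}).items():
            scores[user] += conf
    return scores.most_common(1)[0][0] if scores else None
```

A session is then attributed to whichever enrolled user accumulates the highest total rule confidence.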
Citations: 7
Shared dataset on natural human-computer interaction to support continuous authentication research
Pub Date : 2017-10-01 DOI: 10.1109/BTAS.2017.8272738
Chris Murphy, Jiaju Huang, Daqing Hou, S. Schuckers
Conventional one-stop authentication of a computer terminal takes place at a user's initial sign-on. In contrast, continuous authentication protects against the case where an intruder takes over an authenticated terminal or simply has access to sign-on credentials. Behavioral biometrics has had some success in providing continuous authentication without requiring additional hardware. However, further advancement requires benchmarking existing algorithms against large, shared datasets. To this end, we provide a novel large dataset that captures not only keystrokes, but also mouse events and active programs. Our dataset is collected using passive logging software that monitors user interactions with the mouse, keyboard, and software programs. Data was collected from 103 users in a completely uncontrolled, natural setting over a time span of 2.5 years. We apply Gunetti & Picardi's algorithm, a state-of-the-art algorithm for free-text keystroke dynamics, as an initial benchmark for the new dataset.
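The core of Gunetti & Picardi's free-text approach is the relative (R) measure: rank the n-graphs two typing samples share by their mean latency in each sample, and measure how scrambled one ranking is relative to the other, normalised by the maximum possible disorder. A minimal sketch of the digraph case (sample representation and function name are ours):

```python
def r_distance(sample_a, sample_b):
    """Gunetti & Picardi's relative (R) measure over shared digraphs.
    Each sample maps digraph -> mean latency (ms). Returns 0.0 for
    identical orderings and 1.0 for fully reversed orderings."""
    shared = sorted(set(sample_a) & set(sample_b))
    if len(shared) < 2:
        return 1.0  # nothing comparable: treat as maximally distant
    order_a = sorted(shared, key=lambda g: sample_a[g])
    order_b = sorted(shared, key=lambda g: sample_b[g])
    pos_b = {g: i for i, g in enumerate(order_b)}
    disorder = sum(abs(i - pos_b[g]) for i, g in enumerate(order_a))
    n = len(shared)
    # maximum disorder of a permutation of n elements
    max_disorder = n * n // 2 if n % 2 == 0 else (n * n - 1) // 2
    return disorder / max_disorder
```

Because only the relative ordering of latencies matters, the measure tolerates day-to-day changes in a user's absolute typing speed.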
Citations: 29
Towards open-set face recognition using hashing functions
Pub Date : 2017-10-01 DOI: 10.1109/BTAS.2017.8272751
R. H. Vareto, Samira Silva, F. Costa, W. R. Schwartz
Face recognition is one of the most relevant problems in computer vision, given its importance to areas such as surveillance, forensics and psychology. Open-set face recognition in particular leaves large room for improvement, since few researchers have focused on it. In fact, a real-world recognition system has to cope with numerous unseen individuals and determine whether a given face image is associated with a subject registered in a gallery of known individuals. In this work, we combine hashing functions and classification methods to estimate whether probe samples are known (i.e., belong to the gallery set). We carry out experiments with partial least squares and neural networks and show how response-value histograms tend to behave for known and unknown individuals whenever we test a probe sample. In addition, we conduct experiments on FRGCv1, PubFig83 and VGGFace to show that our method remains effective regardless of dataset difficulty.
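The hashing-ensemble idea can be sketched as follows: each "hashing function" randomly bipartitions the gallery subjects, a classifier predicts which side a probe falls on, and every subject on the winning side receives a vote. Known probes concentrate votes on one subject; unknown probes spread them out. In this sketch (our own simplification) a nearest-template rule stands in for the PLS/neural-network classifiers the paper trains:

```python
import numpy as np

def train_hash_models(gallery, n_models=20, seed=0):
    """Each model is a random balanced bipartition of the gallery
    subjects (dict subject -> side bit)."""
    rng = np.random.default_rng(seed)
    subjects = sorted(gallery)
    half = len(subjects) // 2
    models = []
    for _ in range(n_models):
        perm = [subjects[i] for i in rng.permutation(len(subjects))]
        models.append({s: (0 if i < half else 1) for i, s in enumerate(perm)})
    return models

def vote_histogram(gallery, models, probe):
    """One vote per model for every subject on the winning side; known
    probes yield a peaked histogram, unknown probes a flatter one."""
    votes = {s: 0 for s in gallery}
    nearest = min(gallery, key=lambda s: float(np.linalg.norm(probe - gallery[s])))
    for side in models:
        winner = side[nearest]
        for s, b in side.items():
            if b == winner:
                votes[s] += 1
    return votes
```

Thresholding the peak of this histogram against its mean is one way to decide known versus unknown, mirroring the response-value histograms discussed in the abstract.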
Citations: 29
Conditional random fields incorporate convolutional neural networks for human eye sclera semantic segmentation
Pub Date : 2017-10-01 DOI: 10.1109/BTAS.2017.8272768
Russel Mesbah, B. McCane, S. Mills
Sclera segmentation as an ocular biometric has been of interest in a variety of security and medical applications. Current approaches mostly rely on handcrafted features, which makes generalisation of the learnt hypothesis challenging when encountering images taken from various angles and in different parts of the visible-light spectrum. Convolutional Neural Networks (CNNs) are capable of extracting the corresponding features automatically. Although CNNs have shown remarkable performance in a variety of semantic segmentation tasks, their output can be noisy and less accurate, particularly at object boundaries. To address this issue, we use Conditional Random Fields (CRFs) to regularise the CNN outputs. The results of applying this technique to the sclera segmentation dataset (SSERBC 2017) are comparable with state-of-the-art solutions.
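The CRF step can be illustrated with a lightweight stand-in: combine the CNN's per-pixel class scores (unary term) with a Potts pairwise term that rewards agreeing with neighbours, and optimise by iterated conditional modes. This is our own simplified surrogate for the CRF inference the paper uses, not its exact formulation:

```python
import numpy as np

def icm_smooth(unary, beta=1.0, iters=5):
    """Refine per-pixel class scores with a Potts pairwise term via
    iterated conditional modes. `unary` is (H, W, C) class scores
    (e.g. CNN log-probabilities); returns an (H, W) label map."""
    labels = unary.argmax(axis=2)
    h, w, c = unary.shape
    for _ in range(iters):
        for i in range(h):
            for j in range(w):
                best, best_e = labels[i, j], -np.inf
                for k in range(c):
                    e = unary[i, j, k]
                    for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                        ni, nj = i + di, j + dj
                        if 0 <= ni < h and 0 <= nj < w and labels[ni, nj] == k:
                            e += beta  # reward agreeing with neighbours
                    if e > best_e:
                        best, best_e = k, e
                labels[i, j] = best
    return labels
```

Isolated pixels whose unary evidence only weakly contradicts their surroundings get flipped to the neighbourhood label, which is exactly the boundary/noise cleanup the CRF provides.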
Citations: 8
Accuracy evaluation of handwritten signature verification: Rethinking the random-skilled forgeries dichotomy
Pub Date : 2017-10-01 DOI: 10.1109/BTAS.2017.8272711
Javier Galbally, M. Gomez-Barrero, A. Ross
Traditionally, the accuracy of signature verification systems has been evaluated following a protocol that considers two independent impostor scenarios: random forgeries and skilled forgeries. Although such an approach is not necessarily incorrect, it can lead to a misinterpretation of the results of the assessment process. Furthermore, such a full separation between both types of impostors may be unrealistic in many operational real-world applications. The current article discusses the soundness of the random-skilled impostor dichotomy and proposes complementary approaches to report the accuracy of signature verification systems, discussing their advantages and limitations.
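The dichotomy the abstract questions can be made concrete: instead of reporting two fully separate false match rates, one can also report the FMR under a mixture in which only some fraction of impostor attempts are skilled forgeries. The mixture weighting below is our illustrative choice, not a protocol from the paper:

```python
def fmr(impostor_scores, threshold):
    """False match rate: fraction of impostor scores at or above the
    decision threshold (higher score = better match)."""
    return sum(s >= threshold for s in impostor_scores) / len(impostor_scores)

def report(random_scores, skilled_scores, threshold, skilled_prior=0.1):
    """FMR under the traditional dichotomy plus a mixture in which a
    fraction `skilled_prior` of impostor attempts are skilled forgeries."""
    fmr_r = fmr(random_scores, threshold)
    fmr_s = fmr(skilled_scores, threshold)
    fmr_mix = (1 - skilled_prior) * fmr_r + skilled_prior * fmr_s
    return {"random": fmr_r, "skilled": fmr_s, "mixture": fmr_mix}
```

Varying `skilled_prior` makes explicit how sensitive the reported accuracy is to the assumed impostor population, which the pure dichotomy hides.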
Citations: 16
DNA2FACE: An approach to correlating 3D facial structure and DNA
Pub Date : 2017-10-01 DOI: 10.1109/BTAS.2017.8272746
Nisha Srinivas, Ryan Tokola, A. Mikkilineni, I. Nookaew, M. Leuze, Chris Boehnen
In this paper we introduce the concept of correlating genetic variations in an individual's specific genetic code (DNA) with facial morphology. This is the first step in the research effort to estimate facial appearance from DNA samples, which is gaining momentum within the intelligence, law enforcement and national security communities. The dataset for the study, consisting of genetic data and 3D facial scan (phenotype) data, was obtained through the FaceBase Consortium. The proposed approach has three main steps: phenotype feature extraction from 3D face images, genotype feature extraction from a DNA sample, and genome-wide association analysis to determine genetic variations that contribute to facial structure and appearance. Results indicate that significant correlations exist between genetic information and facial structure. We have identified 30 single nucleotide polymorphisms (SNPs), i.e. genetic variations, that significantly contribute to facial structure and appearance. We conclude with a preliminary attempt at facial reconstruction from the genetic data and emphasize the complexity of the problem and the challenges encountered.
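The association step can be sketched in its simplest form: score each SNP by how strongly its minor-allele count (0/1/2 per subject) correlates with a facial measurement, then keep the top-scoring SNPs. Real GWAS pipelines add covariates, population-structure correction, and multiple-testing control; the bare-bones version below (function names ours) only shows the core scan:

```python
import numpy as np

def snp_associations(genotypes, trait):
    """Squared Pearson correlation between each SNP column (minor-allele
    counts per subject) and one facial measurement. `genotypes` is
    (subjects, snps); `trait` is (subjects,). Returns r^2 per SNP."""
    g = genotypes - genotypes.mean(axis=0)
    t = trait - trait.mean()
    denom = np.sqrt((g ** 2).sum(axis=0) * (t ** 2).sum())
    r = g.T @ t / denom
    return r ** 2

def top_snps(genotypes, trait, k=30):
    """Indices of the k most associated SNPs (the paper reports 30)."""
    return np.argsort(snp_associations(genotypes, trait))[::-1][:k]
```

A SNP whose allele count tracks the measurement gets r^2 near 1; unrelated SNPs score near 0.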
Citations: 1
Finger vein image retrieval via affinity-preserving K-means hashing
Pub Date : 2017-10-01 DOI: 10.1109/BTAS.2017.8272720
Kun Su, Gongping Yang, Lu Yang, Yilong Yin
Efficient identification of finger veins remains a challenging problem due to the increasing size of finger vein databases. Most leading finger vein identification methods rely on high-dimensional real-valued features, which result in extremely high computational complexity. Hashing algorithms are extraordinarily effective ways to facilitate finger vein image retrieval. Therefore, in this paper we propose a finger vein image retrieval scheme based on the Affinity-Preserving K-means Hashing (APKMH) algorithm and a bag-of-subspaces image feature. First, we represent finger vein images with the Nonlinearly Sub-space Coding (NSC) method, which yields discriminative finger vein image features. The feature space is then partitioned into multiple subsegments. In each subsegment, we employ the APKMH algorithm, which simultaneously constructs the visual codebook by direct k-means clustering and encodes the feature vector as the binary index of its codeword. Experimental results on a large fused finger vein dataset demonstrate that our hashing method outperforms state-of-the-art finger vein retrieval methods.
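The split-and-encode step has a product-quantisation flavour: split each feature vector into subsegments, learn a k-means codebook per subsegment, and encode each subsegment as the binary index of its nearest codeword, so that a whole vector becomes a short bit string. The sketch below shows only that core; APKMH's affinity-preserving refinement of the codebooks (so Hamming distances track original distances) is omitted, and all names are our own:

```python
import numpy as np

def kmeans(data, k, iters=20, seed=0):
    """Tiny Lloyd's k-means; returns the (k, d) codebook."""
    rng = np.random.default_rng(seed)
    centers = data[rng.choice(len(data), k, replace=False)]
    for _ in range(iters):
        assign = np.argmin(((data[:, None] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(assign == j):
                centers[j] = data[assign == j].mean(axis=0)
    return centers

def encode(features, k=4, segments=2):
    """Per-subsegment codebook learning + binary codeword indices:
    each row of `features` becomes a segments*log2(k)-bit string."""
    parts = np.split(features, segments, axis=1)
    bits = int(np.log2(k))
    codes = []
    for p in parts:
        centers = kmeans(p, k)
        idx = np.argmin(((p[:, None] - centers) ** 2).sum(-1), axis=1)
        codes.append([format(int(i), f"0{bits}b") for i in idx])
    return ["".join(row) for row in zip(*codes)]
```

Retrieval then compares these compact codes instead of the high-dimensional real-valued features, which is where the speedup comes from.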
Citations: 5
Deep convolutional dynamic texture learning with adaptive channel-discriminability for 3D mask face anti-spoofing
Pub Date : 2017-10-01 DOI: 10.1109/BTAS.2017.8272765
Rui Shao, X. Lan, P. Yuen
3D mask spoofing attacks have been one of the main challenges in face recognition. A real face displays different motion behaviour compared to a 3D mask spoof attempt, which is reflected in different facial dynamic textures. However, this dynamic information usually exists at a subtle texture level that traditional hand-crafted texture-based methods cannot fully differentiate. In this paper, we propose a novel method for 3D mask face anti-spoofing, namely deep convolutional dynamic texture learning, which learns robust dynamic texture information from fine-grained deep convolutional features. Moreover, a channel-discriminability constraint is adaptively incorporated to weight the discriminability of feature channels during learning. Experiments on both public datasets validate that the proposed method achieves promising results under intra- and cross-dataset scenarios.
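The intent of channel-discriminability weighting can be illustrated with a Fisher-style score: channels whose activations separate real faces from mask attacks (high between-class, low within-class variance) get large weights, uninformative channels get small ones. This is our own illustration of the idea, not the paper's learned constraint:

```python
import numpy as np

def channel_weights(feats, labels, eps=1e-8):
    """Weight each feature channel by between-class variance over
    within-class variance, normalised to sum to 1. `feats` is
    (samples, channels); `labels` holds the class of each sample."""
    labels = np.asarray(labels)
    classes = np.unique(labels)
    overall = feats.mean(axis=0)
    between = sum((labels == c).mean()
                  * (feats[labels == c].mean(axis=0) - overall) ** 2
                  for c in classes)
    within = sum((labels == c).mean() * feats[labels == c].var(axis=0)
                 for c in classes)
    score = between / (within + eps)
    return score / score.sum()
```

Multiplying feature channels by such weights before pooling suppresses channels that carry no real-versus-mask signal.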
Citations: 67
Deep learning with time-frequency representation for pulse estimation from facial videos
Pub Date : 2017-10-01 DOI: 10.1109/BTAS.2017.8272721
G. Hsu, Arulmurugan Ambikapathi, Ming Chen
Accurate pulse estimation is of pivotal importance in acquiring the critical physical condition of human subjects under test, and facial-video-based pulse estimation approaches have recently gained attention owing to their simplicity. In this work, we develop a novel deep learning approach as the core of a pulse (heart rate) estimator using a common RGB camera. Our approach consists of four steps. We first detect the face and its landmarks, and thereby locate the required facial ROI. In Step 2, we extract the sample mean sequences of the R, G, and B channels from the facial ROI, and explore three processing schemes for noise removal and signal enhancement. In Step 3, the Short-Time Fourier Transform (STFT) is employed to build 2D Time-Frequency Representations (TFRs) of the sequences. The 2D TFR enables formulating pulse estimation as an image-based classification problem, which is solved in Step 4 by a deep Convolutional Neural Network (CNN). Our approach is one of the pioneering works attempting real-time pulse estimation within a deep learning framework. We have developed a pulse database, called the Pulse from Face (PFF) database, and used it to train the CNN. The PFF database will be made publicly available to advance related research. When compared with state-of-the-art pulse estimation approaches on the standard MAHNOB-HCI database, the proposed approach exhibits superior performance.
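Step 3 can be sketched with an off-the-shelf STFT: turn a mean channel trace into a magnitude time-frequency image. In place of the paper's CNN classifier (Step 4), the sketch simply reads off the dominant frequency; the window length, helper names, and synthetic trace in the example are our own choices:

```python
import numpy as np
from scipy.signal import stft

def pulse_tfr(channel_trace, fs):
    """2-D time-frequency representation (frequencies, times, |STFT|)
    of a detrended mean-channel trace sampled at `fs` Hz."""
    x = channel_trace - np.mean(channel_trace)
    f, t, z = stft(x, fs=fs, nperseg=128)
    return f, t, np.abs(z)

def dominant_bpm(channel_trace, fs):
    """Peak frequency of the time-averaged TFR, converted to beats/min -
    a stand-in for the CNN that classifies the TFR image in the paper."""
    f, _, mag = pulse_tfr(channel_trace, fs)
    return 60.0 * f[int(np.argmax(mag.mean(axis=1)))]
```

Because the TFR is just a 2D array, it can be rendered as an image and fed to an image classifier, which is what makes the CNN formulation in Step 4 possible.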
Citations: 75
Journal: 2017 IEEE International Joint Conference on Biometrics (IJCB)