
Latest publications from the 2015 IEEE 7th International Conference on Biometrics Theory, Applications and Systems (BTAS)

Joint prototype and metric learning for set-to-set matching: Application to biometrics
Mengjun Leng, Panagiotis Moutafis, I. Kakadiaris
In this paper, we focus on the problem of image set classification. Since existing methods utilize all available samples to model each image set, the corresponding time and storage requirements are high. Such methods are also susceptible to outliers. To address these challenges, we propose a method that jointly learns prototypes and a Mahalanobis distance. The prototypes learned represent the gallery image sets using fewer samples, while the classification accuracy is maintained or improved. The distance learned ensures that the notion of similarity between sets of images is reflected more accurately. Specifically, each gallery set is modeled as a hull spanned by the learned prototypes. The prototypes and distance metric are alternately updated using an iterative scheme. Experimental results using the YouTube Face, ETH-80, and Cambridge Hand Gesture datasets illustrate the improvements obtained.
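The set-to-hull matching described above can be made concrete with a small sketch. Below, `hull_distance` and `set_to_set_score` are hypothetical helper names: given prototypes already learned for a gallery set and a learned Mahalanobis matrix M, the probe-to-hull distance is a regularized least-squares projection onto the span of the prototypes, measured under M. This is only an illustration of the matching step, not the authors' code, and it leaves out the alternating prototype/metric updates.

```python
# A minimal, hypothetical sketch of the matching step: distance from a probe
# sample to the regularized hull spanned by learned gallery prototypes,
# measured under a learned Mahalanobis matrix M.
import numpy as np

def hull_distance(x, P, M, lam=0.1):
    """x: probe feature (d,); P: prototype columns (d, k); M: PSD metric (d, d)."""
    # Solve min_a (x - P a)^T M (x - P a) + lam * ||a||^2 in closed form.
    A = P.T @ M @ P + lam * np.eye(P.shape[1])
    a = np.linalg.solve(A, P.T @ M @ x)
    r = x - P @ a
    return float(r @ M @ r)

def set_to_set_score(X, P, M):
    """Average probe-to-hull distance over probe samples X (d, n)."""
    return float(np.mean([hull_distance(X[:, i], P, M) for i in range(X.shape[1])]))

# Toy usage: 64-D features, 5 prototypes, 8 probe samples near the hull.
rng = np.random.default_rng(0)
P = rng.normal(size=(64, 5))
X = P @ rng.dirichlet(np.ones(5), size=8).T + 0.05 * rng.normal(size=(64, 8))
M = np.eye(64)  # a learned metric would replace the identity
print(set_to_set_score(X, P, M))
```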
{"title":"Joint prototype and metric learning for set-to-set matching: Application to biometrics","authors":"Mengjun Leng, Panagiotis Moutafis, I. Kakadiaris","doi":"10.1109/BTAS.2015.7358771","DOIUrl":"https://doi.org/10.1109/BTAS.2015.7358771","url":null,"abstract":"In this paper, we focus on the problem of image set classification. Since existing methods utilize all available samples to model each image set, the corresponding time and storage requirements are high. Such methods are also susceptible to outliers. To address these challenges, we propose a method that jointly learns prototypes and a Mahalanobis distance. The prototypes learned represent the gallery image sets using fewer samples, while the classification accuracy is maintained or improved. The distance learned ensures that the notion of similarity between sets of images is reflected more accurately. Specifically, each gallery set is modeled as a hull spanned by the learned prototypes. The prototypes and distance metric are alternately updated using an iterative scheme. Experimental results using the YouTube Face, ETH-80, and Cambridge Hand Gesture datasets illustrate the improvements obtained.","PeriodicalId":404972,"journal":{"name":"2015 IEEE 7th International Conference on Biometrics Theory, Applications and Systems (BTAS)","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129759070","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 11
Smartwatch-based biometric gait recognition
Andrew H. Johnston, Gary M. Weiss
The advent of commercial smartwatches provides an intriguing new platform for mobile biometrics. Like their smartphone counterparts, these mobile devices can perform gait-based biometric identification because they too contain an accelerometer and a gyroscope. However, smartwatches have several advantages over smartphones for biometric identification because users almost always wear their watch in the same location and orientation. This location (i.e. the wrist) tends to provide more information about a user's movements than the most common location for smartphones (pockets or handbags). In this paper we show the feasibility of using smartwatches for gait-based biometrics by demonstrating the high levels of accuracy that can result from smartwatch-based identification and authentication models. Applications of smartwatch-based biometrics range from a new authentication challenge for use in a multifactor authentication system to automatic personalization by identifying the user of a shared device.
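As a rough illustration of how such wrist-worn inertial data can drive identification, the sketch below extracts simple per-window statistics from a 3-axis accelerometer stream and trains an off-the-shelf classifier. The sampling rate, window length, feature set, and classifier are assumptions for illustration; the paper's exact configuration is not given in the abstract.

```python
# An illustrative sketch (not the paper's configuration): simple per-window
# statistics from a 3-axis accelerometer stream feed an off-the-shelf
# classifier for per-window identification.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def window_features(acc, fs=20, seconds=10):
    """acc: (n_samples, 3) accelerometer stream -> (n_windows, n_features)."""
    win = fs * seconds
    feats = []
    for start in range(0, len(acc) - win + 1, win):
        w = acc[start:start + win]
        feats.append(np.concatenate([
            w.mean(axis=0),                            # per-axis mean
            w.std(axis=0),                             # per-axis deviation
            np.abs(np.diff(w, axis=0)).mean(axis=0),   # mean absolute jerk
        ]))
    return np.array(feats)

# Toy usage: two synthetic "users" with slightly different movement statistics.
rng = np.random.default_rng(1)
X = np.vstack([window_features(rng.normal(u, 1.0, size=(4000, 3))) for u in (0.0, 0.5)])
y = np.repeat([0, 1], len(X) // 2)
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print(clf.score(X, y))
```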
{"title":"Smartwatch-based biometric gait recognition","authors":"Andrew H. Johnston, Gary M. Weiss","doi":"10.1109/BTAS.2015.7358794","DOIUrl":"https://doi.org/10.1109/BTAS.2015.7358794","url":null,"abstract":"The advent of commercial smartwatches provides an intriguing new platform for mobile biometrics. Like their smartphone counterparts, these mobile devices can perform gait-based biometric identification because they too contain an accelerometer and a gyroscope. However, smartwatches have several advantages over smartphones for biometric identification because users almost always wear their watch in the same location and orientation. This location (i.e. the wrist) tends to provide more information about a user's movements than the most common location for smartphones (pockets or handbags). In this paper we show the feasibility of using smartwatches for gait-based biometrics by demonstrating the high levels of accuracy that can result from smartwatch-based identification and authentication models. Applications of smartwatch-based biometrics range from a new authentication challenge for use in a multifactor authentication system to automatic personalization by identifying the user of a shared device.","PeriodicalId":404972,"journal":{"name":"2015 IEEE 7th International Conference on Biometrics Theory, Applications and Systems (BTAS)","volume":"46 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128225053","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 131
Latent fingerprint from multiple surfaces: Database and quality analysis
A. Sankaran, Akshay Agarwal, Rohit Keshari, Soumyadeep Ghosh, Anjali Sharma, Mayank Vatsa, Richa Singh
Latent fingerprints are lifted from multiple types of surfaces, which vary in material type, texture, color, and shape. These differences in the surfaces introduce significant intra-class variations in the lifted prints such as availability of partial print, background noise, and poor ridge structure quality. Due to these observed variations, the overall quality and the matching performance of latent fingerprints vary with respect to surface properties. Thus, characterizing the performance of latent fingerprints according to the surfaces they are lifted from is an important research problem that needs attention. In this research, we create a novel multi-surface latent fingerprint database and make it publicly available for the research community. The database consists of 551 latent fingerprints from 51 subjects lifted from eight different surfaces. Using existing algorithms, we characterize the quality of latent fingerprints and compute the matching performance to analyze the effect of different surfaces.
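The per-surface characterization the paper describes amounts to grouping quality scores and match outcomes by lifting surface. The sketch below is a hypothetical illustration of that bookkeeping only; the surface names and all values are placeholder toy data, not results from the database.

```python
# A hypothetical illustration of per-surface bookkeeping: group quality
# scores and match outcomes by the surface a latent print was lifted from.
from collections import defaultdict

records = [  # placeholder toy data
    {"surface": "surface_A", "quality": 3, "match": True},
    {"surface": "surface_B", "quality": 4, "match": False},
    {"surface": "surface_A", "quality": 2, "match": True},
]

by_surface = defaultdict(list)
for r in records:
    by_surface[r["surface"]].append(r)

for surface, group in by_surface.items():
    mean_quality = sum(r["quality"] for r in group) / len(group)
    match_rate = sum(r["match"] for r in group) / len(group)
    print(f"{surface}: mean quality {mean_quality:.1f}, match rate {match_rate:.2f}")
```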
{"title":"Latent fingerprint from multiple surfaces: Database and quality analysis","authors":"A. Sankaran, Akshay Agarwal, Rohit Keshari, Soumyadeep Ghosh, Anjali Sharma, Mayank Vatsa, Richa Singh","doi":"10.1109/BTAS.2015.7358773","DOIUrl":"https://doi.org/10.1109/BTAS.2015.7358773","url":null,"abstract":"Latent fingerprints are lifted from multiple types of surfaces, which vary in material type, texture, color, and shape. These differences in the surfaces introduce significant intra-class variations in the lifted prints such as availability of partial print, background noise, and poor ridge structure quality. Due to these observed variations, the overall quality and the matching performance of latent fingerprints vary with respect to surface properties. Thus, characterizing the performance of latent fingerprints according to the surfaces they are lifted from is an important research problem that needs attention. In this research, we create a novel multi-surface latent fingerprint database and make it publicly available for the research community. The database consists of 551 latent fingerprints from 51 subjects lifted from eight different surfaces. Using existing algorithms, we characterize the quality of latent fingerprints and compute the matching performance to analyze the effect of different surfaces.","PeriodicalId":404972,"journal":{"name":"2015 IEEE 7th International Conference on Biometrics Theory, Applications and Systems (BTAS)","volume":"180 2","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132905524","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 9
Improvements to keystroke-based authentication by adding linguistic context
Adam Goodkind, David Guy Brizan, Andrew Rosenberg
Traditional keystroke-based authentication methods rely on timing of and between individual keystrokes, oblivious to the context in which the typing is taking place. By incorporating linguistic context into a keystroke-based user authentication system, we are able to improve performance, as measured by EER. Taking advantage of patterns in keystroke dynamics, we show that typists employ unique behavior relative to syntactic and lexical constructs, which can be used to help identify the typist.
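A minimal sketch of the idea, assuming keystroke events carry timestamps and the word being typed: hold times are bucketed by the part of speech of that word, and the per-POS statistics become part of the user's template. The event format and the pos_of lookup are illustrative stand-ins; the paper's actual features and tagger are not specified in the abstract.

```python
# A minimal sketch of combining keystroke timing with linguistic context:
# hold times are bucketed by the part of speech of the word being typed.
from collections import defaultdict

# Each event: (character, key_down_ms, key_up_ms, word_being_typed)  -- illustrative format
events = [
    ("t", 0, 85, "the"), ("h", 130, 210, "the"), ("e", 250, 330, "the"),
    ("c", 480, 590, "cat"), ("a", 650, 760, "cat"), ("t", 820, 930, "cat"),
]
pos_of = {"the": "DET", "cat": "NOUN"}  # toy stand-in for a real POS tagger

hold_by_pos = defaultdict(list)
for ch, down, up, word in events:
    hold_by_pos[pos_of[word]].append(up - down)

# Per-POS mean hold time becomes one slice of the user's template.
template = {pos: sum(v) / len(v) for pos, v in hold_by_pos.items()}
print(template)
```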
{"title":"Improvements to keystroke-based authentication by adding linguistic context","authors":"Adam Goodkind, David Guy Brizan, Andrew Rosenberg","doi":"10.1109/BTAS.2015.7358766","DOIUrl":"https://doi.org/10.1109/BTAS.2015.7358766","url":null,"abstract":"Traditional keystroke-based authentication methods rely on timing of and between individual keystrokes, oblivious to the context in which the typing is taking place. By incorporating linguistic context into a keystroke-based user authentication system, we are able to improve performance, as measured by EER. Taking advantage of patterns in keystroke dynamics, we show that typists employ unique behavior relative to syntactic and lexical constructs, which can be used to help identify the typist.","PeriodicalId":404972,"journal":{"name":"2015 IEEE 7th International Conference on Biometrics Theory, Applications and Systems (BTAS)","volume":"30 5","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120845810","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
On smartphone camera based fingerphoto authentication
A. Sankaran, Aakarsh Malhotra, Apoorva Mittal, Mayank Vatsa, Richa Singh
Authenticating fingerphoto images captured using a smartphone camera provides a good alternative to traditional PIN- or pattern-based approaches. There are multiple challenges associated with fingerphoto authentication such as background variations, environmental illumination, estimating finger position, and camera resolution. In this research, we propose a novel ScatNet feature based fingerphoto matching approach. Effective fingerphoto segmentation and enhancement are performed to aid the matching process and to attenuate the effect of capture variations. Further, we propose and create a publicly available smartphone fingerphoto database having three different subsets addressing the challenges of environmental illumination and background, along with their corresponding live scan fingerprints. Experimental results show improved performance across multiple challenges present in the database.
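A rough sketch of such a capture pipeline is shown below: coarse skin-colour segmentation in YCrCb, CLAHE contrast enhancement, and local feature matching. ORB descriptors stand in for the ScatNet features used by the authors, and the colour thresholds are illustrative values only; this is not the paper's implementation.

```python
# A rough sketch of a fingerphoto pipeline in the spirit of the paper:
# coarse skin-colour segmentation, CLAHE enhancement, local-feature matching.
# ORB stands in for the ScatNet descriptor; thresholds are illustrative only.
import cv2
import numpy as np

def preprocess_fingerphoto(bgr):
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
    mask = cv2.inRange(ycrcb, (0, 135, 85), (255, 180, 135))     # coarse skin mask
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))  # ridge contrast enhancement
    enhanced = clahe.apply(gray)
    return cv2.bitwise_and(enhanced, enhanced, mask=mask)

def describe(bgr):
    orb = cv2.ORB_create(nfeatures=500)
    _, desc = orb.detectAndCompute(preprocess_fingerphoto(bgr), None)
    return desc

def match_score(img_a, img_b):
    """Lower is better: mean Hamming distance over cross-checked ORB matches."""
    da, db = describe(img_a), describe(img_b)
    if da is None or db is None:
        return float("inf")
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(da, db)
    return float(np.mean([m.distance for m in matches])) if matches else float("inf")
```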
{"title":"On smartphone camera based fingerphoto authentication","authors":"A. Sankaran, Aakarsh Malhotra, Apoorva Mittal, Mayank Vatsa, Richa Singh","doi":"10.1109/BTAS.2015.7358782","DOIUrl":"https://doi.org/10.1109/BTAS.2015.7358782","url":null,"abstract":"Authenticating fingerphoto images captured using a smartphone camera, provide a good alternate solution in place of traditional pin or pattern based approaches. There are multiple challenges associated with fingerphoto authentication such as background variations, environmental illumination, estimating finger position, and camera resolution. In this research, we propose a novel ScatNet feature based fingerphoto matching approach. Effective fingerphoto segmentation and enhancement are performed to aid the matching process and to attenuate the effect of capture variations. Further, we propose and create a publicly available smartphone fingerphoto database having three different subsets addressing the challenges of environmental illumination and background, along with their corresponding live scan fingerprints. Experimental results show improved performance across multiple challenges present in the database.","PeriodicalId":404972,"journal":{"name":"2015 IEEE 7th International Conference on Biometrics Theory, Applications and Systems (BTAS)","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115356000","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 59
Unconstrained face verification using fisher vectors computed from frontalized faces
Jun-Cheng Chen, S. Sankaranarayanan, Vishal M. Patel, R. Chellappa
We present an algorithm for unconstrained face verification using Fisher vectors computed from frontalized off-frontal gallery and probe faces. In the training phase, we use the Labeled Faces in the Wild (LFW) dataset to learn the Fisher vector encoding and the joint Bayesian metric. Given an image containing the query face, we perform face detection and landmark localization followed by frontalization to normalize the effect of pose. We further extract dense SIFT features which are then encoded using the Fisher vector learnt during the training phase. The similarity scores are then computed using the learnt joint Bayesian metric. CMC curves and FAR/TAR numbers calculated for a subset of the IARPA JANUS challenge dataset are presented.
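Fisher vector encoding itself is standard and can be sketched compactly: given a diagonal-covariance GMM fitted to local descriptors, a set of descriptors is encoded by the normalized gradient of the log-likelihood with respect to the component means, followed by power and L2 normalization. The GMM below is fitted to random toy descriptors purely to make the sketch runnable; it stands in for the codebook the authors would learn on LFW, and the joint Bayesian metric is not shown.

```python
# A minimal sketch of Fisher vector encoding over a set of local descriptors
# (e.g. dense SIFT), using only the gradient with respect to the GMM means.
import numpy as np
from sklearn.mixture import GaussianMixture

def fisher_vector(X, gmm):
    """X: (N, D) local descriptors; gmm: fitted sklearn GaussianMixture (diag)."""
    N = X.shape[0]
    gamma = gmm.predict_proba(X)                                  # (N, K) posteriors
    mu, sigma, w = gmm.means_, np.sqrt(gmm.covariances_), gmm.weights_
    diff = (X[:, None, :] - mu[None, :, :]) / sigma[None, :, :]   # (N, K, D)
    fv = (gamma[:, :, None] * diff).sum(axis=0) / (N * np.sqrt(w)[:, None])
    fv = fv.ravel()
    fv = np.sign(fv) * np.sqrt(np.abs(fv))                        # power normalization
    return fv / (np.linalg.norm(fv) + 1e-12)                      # L2 normalization

# Toy usage: 128-D "SIFT-like" descriptors, 16-component diagonal GMM codebook.
rng = np.random.default_rng(0)
train, probe = rng.normal(size=(2000, 128)), rng.normal(size=(300, 128))
gmm = GaussianMixture(n_components=16, covariance_type="diag", random_state=0).fit(train)
print(fisher_vector(probe, gmm).shape)   # (16 * 128,)
```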
{"title":"Unconstrained face verification using fisher vectors computed from frontalized faces","authors":"Jun-Cheng Chen, S. Sankaranarayanan, Vishal M. Patel, R. Chellappa","doi":"10.1109/BTAS.2015.7358802","DOIUrl":"https://doi.org/10.1109/BTAS.2015.7358802","url":null,"abstract":"We present an algorithm for unconstrained face verification using Fisher vectors computed from frontalized off-frontal gallery and probe faces. In the training phase, we use the Labeled Faces in the Wild (LFW) dataset to learn the Fisher vector encoding and the joint Bayesian metric. Given an image containing the query face, we perform face detection and landmark localization followed by frontalization to normalize the effect of pose. We further extract dense SIFT features which are then encoded using the Fisher vector learnt during the training phase. The similarity scores are then computed using the learnt joint Bayesian metric. CMC curves and FAR/TAR numbers calculated for a subset of the IARPA JANUS challenge dataset are presented.","PeriodicalId":404972,"journal":{"name":"2015 IEEE 7th International Conference on Biometrics Theory, Applications and Systems (BTAS)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130775163","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 16
Towards repeatable, reproducible, and efficient biometric technology evaluations
Gregory Fiumara, W. Salamon, C. Watson
With the proliferation of biometric-based identity management solutions, biometric algorithms need to be tested now more than ever. Independent biometric technology evaluations are needed to perform this testing, but are not trivial to run, as demonstrated by only a handful of organizations attempting to perform such a feat. Worse, many software development packages designed for running biometric technology evaluations available today shy away from techniques that enable automation, a concept that supports reproducible research. The evaluation software used for testing biometric recognition algorithms needs to efficiently scale as the sample datasets employed by researchers grow increasingly large. With better software, additional entities with their own biometric data collection repositories could easily administer a reproducible biometric technology evaluation. Existing evaluation software is available, but these packages do not always follow best practices and they are lacking several important features. This paper identifies the necessary requirements and ideal characteristics of a robust biometric evaluation toolkit and introduces our implementation thereof, which has been used in several large-scale biometric technology evaluations by multiple organizations.
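As a generic illustration of two properties argued for above, repeatability and verifiability, the sketch below records a run manifest (hashed input samples, algorithm version, parameters) and seals it together with the results so a re-run can be checked byte-for-byte. This is a hypothetical example, not the toolkit introduced in the paper or its API.

```python
# A generic, hypothetical illustration of repeatable evaluation bookkeeping:
# a manifest fully describes a run, and a sealed digest lets a re-run be verified.
import hashlib, json, pathlib

def file_digest(path):
    return hashlib.sha256(pathlib.Path(path).read_bytes()).hexdigest()

def run_manifest(sample_paths, algorithm_version, params):
    """A self-contained record that makes a run repeatable and auditable."""
    return {
        "algorithm_version": algorithm_version,
        "parameters": params,
        "samples": {p: file_digest(p) for p in sorted(sample_paths)},
    }

def seal(manifest, results):
    """Hash manifest + results so a re-run can be checked for reproducibility."""
    blob = json.dumps({"manifest": manifest, "results": results}, sort_keys=True)
    return hashlib.sha256(blob.encode()).hexdigest()
```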
{"title":"Towards repeatable, reproducible, and efficient biometric technology evaluations","authors":"Gregory Fiumara, W. Salamon, C. Watson","doi":"10.1109/BTAS.2015.7358800","DOIUrl":"https://doi.org/10.1109/BTAS.2015.7358800","url":null,"abstract":"With the proliferation of biometric-based identity management solutions, biometric algorithms need to be tested now more than ever. Independent biometric technology evaluations are needed to perform this testing, but are not trivial to run, as demonstrated by only a handful of organizations attempting to perform such a feat. Worse, many software development packages designed for running biometric technology evaluations available today shy away from techniques that enable automation, a concept that supports reproducible research. The evaluation software used for testing biometric recognition algorithms needs to efficiently scale as the sample datasets employed by researchers grow increasingly large. With better software, additional entities with their own biometric data collection repositories could easily administer a reproducible biometric technology evaluation. Existing evaluation software is available, but these packages do not always follow best practices and they are lacking several important features. This paper identifies the necessary requirements and ideal characteristics of a robust biometric evaluation toolkit and introduces our implementation thereof, which has been used in several large-scale biometric technology evaluations by multiple organizations.","PeriodicalId":404972,"journal":{"name":"2015 IEEE 7th International Conference on Biometrics Theory, Applications and Systems (BTAS)","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123676907","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
Limbus impact removal for off-angle iris recognition using eye models
Osman M. Kurtuncu, M. Karakaya
Traditional iris recognition algorithms segment the iris image at the cornea-sclera border as the outer boundary because they consider the visible portion of the iris to be the entire iris texture. However, the limbus, a semitransparent eye structure at the junction of the cornea and sclera, occludes iris texture at the sides so that it cannot be seen in off-angle iris images. In the biometrics community, limbus occlusion has gone largely unnoticed because of its limited effect on frontal iris images. However, ignoring the effect of limbus occlusion in off-angle iris images causes significant performance degradation in iris biometrics. In this paper, we first investigate the impact of the limbus on off-angle iris recognition. We then propose a new approach to remove the effect of limbus occlusion. In our approach, we segment the iris image at its actual outer boundary instead of the visible outer boundary used by traditional methods, and normalize it based on that actual boundary. The invisible iris region in the unwrapped image that is occluded by the limbus is eliminated by adding it to the mask. Based on the relation between the segmentation parameters of the actual and visible iris boundaries, we generate a transfer function and estimate the actual iris boundary from the segmented visible boundary using the known limbus height and gaze angle. Moreover, based on experiments with a synthetic iris dataset from a biometric eye model, we show that not only the acquisition angle but also the limbus height negatively affects the performance of off-angle iris recognition, and that applying the proposed method eliminates this negative effect.
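The masking step can be sketched as follows: once the iris is unwrapped with respect to the estimated actual outer boundary, angular positions where the visible (limbus-occluded) boundary falls inside the actual boundary are excluded from the mask. The occlusion_mask helper and the toy boundary profiles are illustrative assumptions; the paper's transfer function from limbus height and gaze angle is not reproduced here.

```python
# A minimal sketch of masking limbus-occluded texture in the unwrapped iris,
# given per-angle visible and (estimated) actual outer-boundary radii.
import numpy as np

def occlusion_mask(visible_radius, actual_radius, radial_bins=64):
    """Returns a boolean mask (radial_bins, n_angles): True = usable iris texture."""
    r = np.linspace(0.0, 1.0, radial_bins)[:, None]               # normalized radial coordinate
    visible_fraction = (visible_radius / actual_radius)[None, :]  # per-angle visible extent
    return r <= visible_fraction                                  # occluded outer band masked out

# Toy usage: the limbus hides up to ~10% of the iris at the left/right sides.
angles = np.linspace(0, 2 * np.pi, 360, endpoint=False)
actual = np.full(360, 100.0)
visible = actual * (1.0 - 0.1 * np.abs(np.cos(angles)))
mask = occlusion_mask(visible, actual)
print(mask.mean())  # fraction of the unwrapped strip kept
```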
{"title":"Limbus impact removal for off-angle iris recognition using eye models","authors":"Osman M. Kurtuncu, M. Karakaya","doi":"10.1109/BTAS.2015.7358775","DOIUrl":"https://doi.org/10.1109/BTAS.2015.7358775","url":null,"abstract":"The traditional iris recognition algorithms segment the iris image at the cornea-sclera border as the outer boundary because they consider the visible portion of iris as the entire iris texture. However, limbus, an additional semitransparent eye structure at junction of the cornea and sclera, occludes iris textures at the sides that cannot be seen at the off-angle iris images. In the biometrics community, limbus occlusion is unnoticed due to its limited effect at frontal iris images. However, to ignore the effect of the limbus occlusion in off-angle iris images causes significant performance degradation in iris biometrics. In this paper, we first investigate the limbus impact on off-angle iris recognition. Then, we propose a new approach to remove the effect of limbus occlusion. In our approach, we segmented iris image at its actual outer iris boundary instead of the visible outer iris boundary as in traditional methods and normalize them based on the actual outer iris boundary. The invisible iris region in unwrapped image that is occluded by limbus is eliminated by including it into the mask. Based on the relation between the segmentation parameters of actual and visible iris boundaries, we generate a transfer function and estimate the actual iris boundary from the segmented visible iris boundary depending on the known limbus height and gaze angle. Moreover, based on experiments with the synthetic iris dataset from the biometric eye model, we first show that not only the acquisition angle but also the limbus height negatively affects the performance of the off-angle iris recognition and then we eliminate this negative effect with applying our proposed method.","PeriodicalId":404972,"journal":{"name":"2015 IEEE 7th International Conference on Biometrics Theory, Applications and Systems (BTAS)","volume":"62 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124452239","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
Music and images as contexts in a context-aware touch-based authentication system
Abena Primo, V. Phoha
Touch-based authentication is now a promising method for ensuring secure access on mobile devices, with some researchers reporting very low EERs for authentication. However, these works have not shown experimentally the impact of contexts beyond phone orientation on touch-based authentication. In this work, we present experimental results on how touch-based authentication is affected by whether users are listening to music while swiping and whether they are swiping over images. We experiment with a dataset we collected for this purpose from 34 subjects. Moreover, we provide design considerations for a touch-based context-aware system and show how a module that accounts for the presence of music (which we found to be statistically significant) can be incorporated.
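Since results here are reported as EERs, a minimal sketch of that computation is included below: sweep a threshold over genuine and impostor similarity scores and take the operating point where the false accept and false reject rates meet. The score distributions are toy values, not data from the study.

```python
# A minimal sketch of equal error rate (EER) computation from score lists.
import numpy as np

def eer(genuine, impostor):
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    far = np.array([(impostor >= t).mean() for t in thresholds])  # false accept rate
    frr = np.array([(genuine < t).mean() for t in thresholds])    # false reject rate
    i = np.argmin(np.abs(far - frr))
    return (far[i] + frr[i]) / 2.0

rng = np.random.default_rng(0)
genuine = rng.normal(0.8, 0.10, 500)   # toy same-user comparison scores
impostor = rng.normal(0.5, 0.15, 500)  # toy different-user comparison scores
print(f"EER ~ {eer(genuine, impostor):.3f}")
```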
{"title":"Music and images as contexts in a context-aware touch-based authentication system","authors":"Abena Primo, V. Phoha","doi":"10.1109/BTAS.2015.7358779","DOIUrl":"https://doi.org/10.1109/BTAS.2015.7358779","url":null,"abstract":"Touch-based authentication is now a promising method for ensuring secure access on mobile devices with some researchers reporting very low EERs for authentication. However, these works have not shown experimentally the impact of contexts beyond phone orientation on touch-based authentication. In this work, we present experimental results on how touch-based authentication is impacted by users who are listening to music while swiping, users who are not listening to music while swiping, users who are swiping over images and users who are not swiping over images. We experiment with a data-set which we collected for this purpose from 34 subjects. Moreover, we provide design considerations towards a touch-based context-aware system and show how a module which considers the presence of music (which we found to have statistical significance) can be incorporated.","PeriodicalId":404972,"journal":{"name":"2015 IEEE 7th International Conference on Biometrics Theory, Applications and Systems (BTAS)","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127238266","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5
A robust sclera segmentation algorithm
P. Radu, J. Ferryman, Peter Wild
Sclera segmentation is of significant importance for eye and iris biometrics. However, it has not been extensively researched as a separate topic; it is mainly treated as a component of a broader task. This paper proposes a novel sclera segmentation algorithm for colour images which operates at the pixel level. Exploring various colour spaces, the proposed approach is robust to image noise and different gaze directions. The algorithm's robustness is enhanced by a two-stage classifier: at the first stage, a set of simple classifiers is employed, while at the second stage, a neural network classifier operates on the probability space generated by the stage-1 classifiers. The proposed method was ranked 1st in the Sclera Segmentation Benchmarking Competition 2015, part of BTAS 2015, with a precision of 95.05% at a corresponding recall of 94.56%.
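A minimal sketch of the two-stage design, under toy assumptions: stage 1 is a bank of simple per-pixel classifiers (here, logistic regressions over different colour-space features), and stage 2 is a small neural network trained on the stage-1 probability vector. The features and labels below are synthetic stand-ins, not sclera data.

```python
# A minimal sketch of a two-stage pixel classifier: simple per-colour-space
# classifiers feed their probabilities into a small neural network.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n = 2000
# Toy per-pixel features: pretend columns 0-2 are RGB, 3-5 HSV, 6-8 YCrCb.
X = rng.normal(size=(n, 9))
y = (X[:, 1] + 0.5 * X[:, 4] + rng.normal(scale=0.5, size=n) > 0).astype(int)

# Stage 1: one simple classifier per colour space, each emitting P(sclera | pixel).
stage1 = [LogisticRegression().fit(X[:, i:i + 3], y) for i in (0, 3, 6)]
probs = np.column_stack([c.predict_proba(X[:, i:i + 3])[:, 1]
                         for c, i in zip(stage1, (0, 3, 6))])

# Stage 2: a neural network operating on the stage-1 probability space.
stage2 = MLPClassifier(hidden_layer_sizes=(8,), max_iter=500, random_state=0).fit(probs, y)
print(f"pixel-level training accuracy: {stage2.score(probs, y):.3f}")
```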
{"title":"A robust sclera segmentation algorithm","authors":"P. Radu, J. Ferryman, Peter Wild","doi":"10.1109/BTAS.2015.7358746","DOIUrl":"https://doi.org/10.1109/BTAS.2015.7358746","url":null,"abstract":"Sclera segmentation is shown to be of significant importance for eye and iris biometrics. However, sclera segmentation has not been extensively researched as a separate topic, but mainly summarized as a component of a broader task. This paper proposes a novel sclera segmentation algorithm for colour images which operates at pixel-level. Exploring various colour spaces, the proposed approach is robust to image noise and different gaze directions. The algorithm's robustness is enhanced by a two-stage classifier. At the first stage, a set of simple classifiers is employed, while at the second stage, a neural network classifier operates on the probabilities' space generated by the classifiers at the stage 1. The proposed method was ranked 1st in Sclera Segmentation Benchmarking Competition 2015, part of BTAS 2015, with a precision of 95.05% corresponding to a recall of 94.56%.","PeriodicalId":404972,"journal":{"name":"2015 IEEE 7th International Conference on Biometrics Theory, Applications and Systems (BTAS)","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-12-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134523660","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 27