Accurate iris segmentation in non-cooperative environments using fully convolutional networks

Nianfeng Liu, Haiqing Li, Man Zhang, Jing Liu, Zhenan Sun, T. Tan
DOI: 10.1109/ICB.2016.7550055
Venue: 2016 International Conference on Biometrics (ICB)
Published: 2016-06-13
Citations: 158

Abstract

Conventional iris recognition requires controlled conditions (e.g., close acquisition distance and a stop-and-stare scheme) and high user cooperation for image acquisition. Non-cooperative acquisition environments introduce many adverse factors such as blur, off-axis gaze, occlusions, and specular reflections, which challenge existing iris segmentation approaches. In this paper, we present two iris segmentation models, namely hierarchical convolutional neural networks (HCNNs) and multi-scale fully convolutional networks (MFCNs), for noisy iris images acquired at a distance and on the move. Both models automatically locate iris pixels without handcrafted features or rules. Moreover, the features and classifiers are jointly optimized. They are end-to-end models that require no further pre- or post-processing and outperform other state-of-the-art methods. Compared with HCNNs, MFCNs take inputs of arbitrary size and produce correspondingly sized outputs without sliding-window prediction, which makes MFCNs more efficient. The shallow, fine layers and deep, global layers are combined in MFCNs to capture both the texture details and the global structure of iris patterns. Experimental results show that MFCNs are more robust than HCNNs to noise, and improve on the current state of the art by 25.62% and 13.24% on the UBIRIS.v2 and CASIA.v4-distance databases, respectively.
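The multi-scale fusion idea described above can be sketched in a few lines: a coarse score map from deep layers is upsampled to the resolution of a fine score map from shallow layers, the two are summed, and a per-pixel threshold yields the iris mask. This is a minimal NumPy illustration only; the function names are invented, and nearest-neighbor upsampling stands in for the learned, fully convolutional upsampling the paper's MFCN would use.

```python
import numpy as np

def upsample_nearest(x, factor):
    # Nearest-neighbor upsampling of a 2D score map
    # (a stand-in for a learned deconvolution layer).
    return np.repeat(np.repeat(x, factor, axis=0), factor, axis=1)

def fuse_multiscale(shallow, deep, factor=4):
    """Fuse a fine, shallow score map with a coarse, deep score map.

    MFCN-style fusion: the deep map is upsampled to the shallow map's
    resolution and the two are summed, so the fused map carries both
    texture detail (shallow) and global structure (deep).
    """
    up = upsample_nearest(deep, factor)
    assert up.shape == shallow.shape, "scales must align after upsampling"
    return shallow + up

def segment(scores, threshold=0.0):
    # Per-pixel iris / non-iris decision from the fused scores.
    return (scores > threshold).astype(np.uint8)

# Toy example: an 8x8 shallow map fused with a 2x2 deep map.
shallow = np.zeros((8, 8))
deep = np.ones((2, 2))
mask = segment(fuse_multiscale(shallow, deep, factor=4))
```

In this toy run every fused score is 1.0 > 0, so the mask marks all 64 pixels as iris; in the actual model both maps are produced by trained convolutional layers and the decision boundary is learned end to end.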