Hierarchical Iris Image Super Resolution based on Wavelet Transform

Yufeng Xia, Peipei Li, Jia Wang, Zhili Zhang, Duanling Li, Zhaofeng He
{"title":"Hierarchical Iris Image Super Resolution based on Wavelet Transform","authors":"Yufeng Xia, Peipei Li, Jia Wang, Zhili Zhang, Duanling Li, Zhaofeng He","doi":"10.1145/3529446.3529453","DOIUrl":null,"url":null,"abstract":"Iris images under the surveillance scenario are often low-quality, which makes the iris recognition challenging. Recently, deep learning-based methods are adopted to enhance the quality of iris images and achieve remarkable performance. However, these methods ignore the characteristics of the iris texture, which is important for iris recognition. In order to restore richer texture details, we propose a super-resolution network based on Wavelet with Transformer and Residual Attention Network (WTRAN). Specifically, we treat the low-resolution images as the low-frequency wavelet coefficients after wavelet decomposition and predict the corresponding high-frequency wavelet coefficients sequence. In order to extract detailed local features, we adopt both channel and spatial attention, and propose a Residual Dense Attention Block (RDAB). Furthermore, we propose a Convolutional Transformer Attention Module (CTAM) to integrate transformer and CNN to extract both the global topology and local texture details. In addition to constraining the quality of image generation, effective identity preserving constraints are also used to ensure the consistency of the super-resolution images in the high-level semantic space. Extensive experiments show that the proposed method has achieved competitive iris image super resolution results compared with the most advanced super-resolution method.","PeriodicalId":151062,"journal":{"name":"Proceedings of the 4th International Conference on Image Processing and Machine Vision","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 4th International Conference on Image Processing and Machine Vision","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3529446.3529453","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Iris images captured in surveillance scenarios are often of low quality, which makes iris recognition challenging. Recently, deep learning-based methods have been adopted to enhance the quality of iris images and have achieved remarkable performance. However, these methods ignore the characteristics of the iris texture, which are important for iris recognition. In order to restore richer texture details, we propose a super-resolution network based on Wavelet with Transformer and Residual Attention Network (WTRAN). Specifically, we treat the low-resolution image as the low-frequency wavelet coefficients after wavelet decomposition and predict the corresponding sequence of high-frequency wavelet coefficients. In order to extract detailed local features, we adopt both channel and spatial attention and propose a Residual Dense Attention Block (RDAB). Furthermore, we propose a Convolutional Transformer Attention Module (CTAM) that integrates a Transformer with a CNN to extract both the global topology and local texture details. In addition to constraining the quality of image generation, effective identity-preserving constraints are used to ensure the consistency of the super-resolved images in the high-level semantic space. Extensive experiments show that the proposed method achieves competitive iris image super-resolution results compared with state-of-the-art super-resolution methods.
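The abstract does not give implementation details, but the wavelet-domain formulation it describes can be illustrated with a minimal sketch: the low-resolution image is treated as the low-frequency (approximation) sub-band of the unknown high-resolution image, a model predicts the high-frequency sub-bands, and the inverse 2D DWT reconstructs the super-resolved image. The sketch below assumes a Haar wavelet, a 2x scale factor, and a placeholder `predict_high_freq` function standing in for the WTRAN network; none of these specifics are stated in the paper.

```python
# Minimal sketch of the wavelet-domain super-resolution idea from the abstract.
# Assumptions (not from the paper): Haar wavelet, 2x scale factor, and a
# placeholder predictor instead of the actual WTRAN network.
import numpy as np
import pywt


def predict_high_freq(lr_image: np.ndarray):
    """Placeholder for the WTRAN network: given the low-frequency band,
    return predicted (horizontal, vertical, diagonal) detail coefficients."""
    zeros = np.zeros_like(lr_image)
    return zeros, zeros, zeros  # a real model would regress texture details here


def wavelet_super_resolve(lr_image: np.ndarray) -> np.ndarray:
    """Treat lr_image as the approximation band cA and invert the 2D DWT."""
    cH, cV, cD = predict_high_freq(lr_image)
    # idwt2 doubles each spatial dimension, yielding the 2x super-resolved image.
    return pywt.idwt2((lr_image, (cH, cV, cD)), wavelet="haar")


if __name__ == "__main__":
    lr = np.random.rand(64, 64).astype(np.float32)  # stand-in for a low-resolution iris crop
    sr = wavelet_super_resolve(lr)
    print(sr.shape)  # (128, 128)
```

With all-zero detail bands the inverse transform reduces to plain upsampling; the point of the paper's network, per the abstract, is to predict those high-frequency coefficients so that iris texture detail is restored rather than smoothed away.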