See as You Desire: Scale-Adaptive Face Super-Resolution for Varying Low Resolutions

IEEE Internet of Things Journal · Impact Factor 8.9 · CAS Tier 1 (Computer Science) · JCR Q1, COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2024-11-06 · DOI: 10.1109/JIOT.2024.3492716
Ling Li;Yan Zhang;Lin Yuan;Shuang Li;Xinbo Gao
IEEE Internet of Things Journal, vol. 12, no. 6, pp. 6979-6996. Full text: https://ieeexplore.ieee.org/document/10745550/
Citations: 0

Abstract

Face super-resolution (FSR) is critical for bolstering intelligent security in Internet of Things (IoT) systems. Recent deep-learning-driven FSR algorithms have made remarkable progress. However, they always require separate model training and optimization for each scaling factor or input resolution, which is inefficient and impractical. To overcome these limitations, we propose SAFNet, an innovative framework tailored for scale-adaptive FSR with arbitrary input resolution. SAFNet integrates scale information into representation learning to enable adaptive feature extraction, and introduces dual-embedding attention to boost adaptive feature reconstruction. It leverages facial self-similarity and spatial-frequency collaboration to achieve precise scale-aware SR representations. This is attained through three key modules: 1) the scale adaption guidance unit (SAGU); 2) the scale-aware nonlocal self-similarity (SNLS) module; and 3) the spatial-frequency interactive modulation (SFIM) module. SAGU imports scaling factors using frequency encoding, SNLS exploits self-similarity to enrich feature representations, and SFIM incorporates spatial and frequency information to predict target pixel values adaptively. Comprehensive evaluations across four benchmark datasets show that SAFNet outperforms the second-best state-of-the-art (SOTA) method by about 0.2 dB/0.007 in PSNR/SSIM ($\times 4$ on CelebA) while reducing computational complexity and time cost by 18.68% and 42.64%, respectively. This demonstrates SAFNet's effectiveness and superiority, showcasing its potential as a promising solution to scale and input-resolution adaptation challenges in FSR. The code will be available at https://github.com/ICVIPLab/SAFNet.
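The abstract states that SAGU "imports scaling factors using frequency encoding." As an illustration only (the paper does not detail its encoding here), a sinusoidal embedding of a continuous scale factor, in the spirit of Transformer positional encodings, might look like the following sketch; the function name, embedding dimensionality, and frequency ladder are all assumptions, not the paper's actual design:

```python
import numpy as np

def scale_frequency_encoding(scale: float, dim: int = 8) -> np.ndarray:
    """Encode a continuous scaling factor as a fixed-length sinusoidal vector.

    Hypothetical sketch of scale conditioning: each entry of a geometric
    frequency ladder is modulated by the scale, then sin/cos are taken so
    that nearby scales map to nearby embeddings. SAFNet's real SAGU module
    may encode scale differently.
    """
    freqs = 2.0 ** np.arange(dim // 2)   # geometric frequency ladder: 1, 2, 4, 8, ...
    angles = scale * freqs               # the scale modulates every frequency
    return np.concatenate([np.sin(angles), np.cos(angles)])

# Distinct scaling factors (e.g. x2 vs. x4) yield distinct conditioning vectors
# that a network could consume alongside image features.
emb2 = scale_frequency_encoding(2.0)
emb4 = scale_frequency_encoding(4.0)
print(emb2.shape)  # (8,)
```

Such a vector could then be broadcast and concatenated with (or used to modulate) feature maps, letting one network serve arbitrary scaling factors instead of training a separate model per factor.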
Source journal: IEEE Internet of Things Journal (Computer Science: Information Systems)
CiteScore: 17.60
Self-citation rate: 13.20%
Articles published: 1982
Journal scope: The IEEE Internet of Things (IoT) Journal publishes articles and review articles covering various aspects of IoT, including IoT system architecture, IoT enabling technologies, IoT communication and networking protocols such as network coding, and IoT services and applications. Topics encompass IoT's impacts on sensor technologies, big data management, and future internet design for applications like smart cities and smart homes. Fields of interest include IoT architecture such as things-centric, data-centric, and service-oriented IoT architecture; IoT enabling technologies and systematic integration such as sensor technologies, big sensor data management, and future Internet design for IoT; IoT services, applications, and test-beds such as IoT service middleware, IoT application programming interfaces (APIs), IoT application design, and IoT trials/experiments; and IoT standardization activities and technology development in different standard development organizations (SDOs) such as IEEE, IETF, ITU, 3GPP, ETSI, etc.
Latest articles in this journal:
A Robust Position Approach Based on Masked KalmanNet for GNSS/LEO/INS Integrated Navigation System
Blockchain-Based Distributed Trust Model for Secure IoT Communication
Research on Intelligent Internet of Underwater Things Orientation Technology Based on Polarization Pattern Restoration
Maintaining Line-of-sight Communications: A Vision-aided Approach
Offset Pointing for Energy-efficient Reception in Underwater Optical Wireless Communication: Modeling and Performance Analysis