Implicit Mutual Learning With Dual-Branch Networks for Face Super-Resolution

Kangli Zeng;Zhongyuan Wang;Tao Lu;Jianyu Chen;Zheng He;Zhen Han
{"title":"Implicit Mutual Learning With Dual-Branch Networks for Face Super-Resolution","authors":"Kangli Zeng;Zhongyuan Wang;Tao Lu;Jianyu Chen;Zheng He;Zhen Han","doi":"10.1109/TBIOM.2024.3354333","DOIUrl":null,"url":null,"abstract":"Face super-resolution (SR) algorithms have recently made significant progress. However, most existing methods prefer to employ texture and structure information together to promote the generation of high-resolution features, neglecting the mutual encouragement between them, as well as the effective unification of their own low-level and high-level information, thus yielding unsatisfactory results. To address these problems, we propose an implicit mutual learning of dual-branch networks for face super-resolution, which adequately considers both extraction and aggregation of structure and texture information. The proposed approach consists of four essential blocks. First, the deep feature extractor is equipped with a deep feature reinforcement module (DFRM) based on two-stage cross-dimensional attention (TCA), which behaves in the texture enhancement and structure reconstruction branches, respectively. Then, we elaborate two information exchange blocks for two branches, one for the first information exchange block (FIEB) from the texture branch to the structure branch and one for the second information exchange block (SIEB) from the structure branch to the texture branch. These two interaction blocks perform further fusion enhancement of potential features. Finally, a hybrid fusion network (HFNet) based on supervised attention executes adaptive aggregation of the enhanced texture and structure maps. Additionally, we use a joint loss function that modifies the recovery of structure information, diminishes the use of potentially erroneous information, and encourages the generation of realistic face images. Experiments on public datasets show that our method consistently achieves better quantitative and qualitative results than SOTA methods.","PeriodicalId":73307,"journal":{"name":"IEEE transactions on biometrics, behavior, and identity science","volume":"6 2","pages":"182-194"},"PeriodicalIF":0.0000,"publicationDate":"2024-01-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE transactions on biometrics, behavior, and identity science","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10409565/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Face super-resolution (SR) algorithms have recently made significant progress. However, most existing methods simply employ texture and structure information together to generate high-resolution features, neglecting the mutual encouragement between the two, as well as the effective unification of their own low-level and high-level information, and thus yield unsatisfactory results. To address these problems, we propose implicit mutual learning with dual-branch networks for face super-resolution, which adequately considers both the extraction and the aggregation of structure and texture information. The proposed approach consists of four essential blocks. First, the deep feature extractor is equipped with a deep feature reinforcement module (DFRM) based on two-stage cross-dimensional attention (TCA), which operates in the texture enhancement and structure reconstruction branches, respectively. Then, we design two information exchange blocks: a first information exchange block (FIEB) that passes information from the texture branch to the structure branch, and a second information exchange block (SIEB) that passes information from the structure branch to the texture branch. These two interaction blocks perform further fusion and enhancement of latent features. Finally, a hybrid fusion network (HFNet) based on supervised attention performs adaptive aggregation of the enhanced texture and structure maps. Additionally, we use a joint loss function that refines the recovery of structure information, suppresses the use of potentially erroneous information, and encourages the generation of realistic face images. Experiments on public datasets show that our method consistently achieves better quantitative and qualitative results than state-of-the-art (SOTA) methods.
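The paper's code is not reproduced on this page, but the dual-branch layout described in the abstract can be illustrated with a minimal PyTorch sketch. Everything below is an assumption made for illustration: the module names (`ExchangeBlock`, `DualBranchSR`), the channel widths, and the concatenate-then-project exchange are simple stand-ins for the DFRM, FIEB/SIEB, and HFNet components, not the authors' implementation.

```python
# Minimal sketch of the dual-branch exchange idea, assuming PyTorch.
# Module granularity, channel widths, and the 1x1-conv fusion in the
# exchange blocks are illustrative guesses, not the paper's design.
import torch
import torch.nn as nn


class ExchangeBlock(nn.Module):
    """Passes features from a source branch into a target branch (the FIEB/SIEB role)."""

    def __init__(self, channels):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, target_feat, source_feat):
        # Concatenate and project back: the target branch absorbs source information.
        return target_feat + self.fuse(torch.cat([target_feat, source_feat], dim=1))


class DualBranchSR(nn.Module):
    def __init__(self, channels=64):
        super().__init__()
        self.head = nn.Conv2d(3, channels, 3, padding=1)
        # Stand-ins for the DFRM-equipped texture/structure extractors.
        self.texture = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        self.structure = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        self.fieb = ExchangeBlock(channels)               # texture -> structure
        self.sieb = ExchangeBlock(channels)               # structure -> texture
        self.fuse = nn.Conv2d(2 * channels, channels, 1)  # stand-in for HFNet
        self.up = nn.Sequential(
            nn.Conv2d(channels, channels * 4, 3, padding=1),
            nn.PixelShuffle(2),                           # x2 upsampling
            nn.Conv2d(channels, 3, 3, padding=1),
        )

    def forward(self, lr):
        base = self.head(lr)
        tex, struct = self.texture(base), self.structure(base)
        struct = self.fieb(struct, tex)   # first exchange: texture informs structure
        tex = self.sieb(tex, struct)      # second exchange: structure informs texture
        return self.up(self.fuse(torch.cat([tex, struct], dim=1)))
```

A quick shape check under these assumptions: `DualBranchSR()(torch.randn(1, 3, 32, 32))` returns a `(1, 3, 64, 64)` tensor, i.e., a 2x upscaled face image. The residual form of `ExchangeBlock` keeps each branch's own features intact while mixing in the other branch's information.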
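The abstract does not spell out the individual terms of the joint loss. One plausible reading, sketched below, combines a pixel-fidelity term, an image-gradient term that steers structure recovery, and an optional adversarial term that encourages realistic faces. The gradient-based structure term and the weights `w_struct` and `w_adv` are assumptions for illustration, not the paper's definition.

```python
# Hypothetical joint loss: pixel + structure (image gradients) + adversarial.
import torch
import torch.nn.functional as F


def gradient_map(x):
    # Simple finite-difference edge maps as a stand-in structure prior.
    dx = x[..., :, 1:] - x[..., :, :-1]
    dy = x[..., 1:, :] - x[..., :-1, :]
    return dx, dy


def joint_loss(sr, hr, disc_logits=None, w_struct=0.1, w_adv=0.01):
    # Pixel-fidelity term.
    loss = F.l1_loss(sr, hr)
    # Structure term: match image gradients to refine edge/contour recovery.
    sdx, sdy = gradient_map(sr)
    hdx, hdy = gradient_map(hr)
    loss = loss + w_struct * (F.l1_loss(sdx, hdx) + F.l1_loss(sdy, hdy))
    # Optional adversarial term: push the generator toward realistic faces.
    if disc_logits is not None:
        loss = loss + w_adv * F.binary_cross_entropy_with_logits(
            disc_logits, torch.ones_like(disc_logits))
    return loss
```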