A Multi-Attention Feature Distillation Neural Network for Lightweight Single Image Super-Resolution

Yongfei Zhang, Xinying Lin, Hong Yang, Jie He, Linbo Qing, Xiaohai He, Yi Li, Honggang Chen

DOI: 10.1155/2024/3255233 (https://doi.org/10.1155/2024/3255233)
Published: 2024-02-15
Abstract
In recent years, deep convolutional neural networks (CNNs) have produced remarkable performance improvements for single image super-resolution (SISR). Nevertheless, many CNN-based SISR models rely on deep or wide architectures with large numbers of network parameters and high computational complexity. How to fully exploit deep features while striking a balance between model complexity and reconstruction performance remains one of the main challenges in this field. To address this problem, building on the well-known information multi-distillation model, a multi-attention feature distillation network, termed MAFDN, is developed for lightweight and accurate SISR. Specifically, an effective multi-attention feature distillation block (MAFDB) is designed and used as the basic feature extraction unit in MAFDN. With the help of multi-attention layers, including pixel attention, spatial attention, and channel attention, MAFDB uses multiple information distillation branches to learn more discriminative and representative features. Furthermore, MAFDB introduces a residual block (OPCRB) built on the depthwise over-parameterized convolutional layer (DO-Conv) to enhance its capacity without any increase in parameters or computation at the inference stage. Results on commonly used datasets demonstrate that MAFDN outperforms existing representative lightweight SISR models when both reconstruction performance and model complexity are taken into consideration. For example, for ×4 SR on Set5, MAFDN (597K/33.79G) obtains PSNR/SSIM gains of 0.21 dB/0.0037 and 0.10 dB/0.0015 over the attention-based SR model AFAN (692K/50.90G) and the feature distillation-based SR model DDistill-SR (675K/32.83G), respectively.
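The claim that DO-Conv adds no parameters or computation at inference rests on structural re-parameterization: the extra over-parameterizing factor used during training is a linear operator, so it can be folded into the base kernel once before deployment. The following sketch is not the paper's DO-Conv implementation; it illustrates the folding principle with plain matrix multiplications standing in for the composed convolution kernels (all shapes and names here are illustrative assumptions).

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative stand-in for DO-Conv's two training-time factors:
# D plays the role of the extra over-parameterizing (depthwise) kernel,
# W the base convolution kernel. Both are linear, so they compose.
C_in, C_mid, C_out = 8, 16, 4
W = rng.normal(size=(C_out, C_mid))
D = rng.normal(size=(C_mid, C_in))

x = rng.normal(size=(C_in,))

# Training-time forward: apply both factors in sequence (extra capacity).
y_train = W @ (D @ x)

# Inference-time folding: collapse the two factors into one kernel, once.
# The folded kernel has the same size as a plain single-layer kernel,
# so inference cost is unchanged.
W_folded = W @ D                      # shape (C_out, C_in)
y_infer = W_folded @ x

assert np.allclose(y_train, y_infer)  # identical outputs after folding
```

The same algebra applies to convolutions because convolution is linear in its kernel; DO-Conv performs this composition over the depthwise dimension of the kernel tensor rather than over flat matrices.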
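The two other ingredients named in the abstract, attention-weighted features and information-distillation channel splitting, can be sketched in a minimal NumPy form. This is not MAFDB itself: the weights are random, only SE-style channel attention is shown (MAFDB also uses pixel and spatial attention), and the split ratio and shapes are assumptions for illustration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def channel_attention(feat, w1, w2):
    """SE-style channel attention: squeeze each channel to a scalar by
    global average pooling, excite with two linear maps and a sigmoid,
    then rescale the channels of the input feature map."""
    c = feat.shape[0]
    squeezed = feat.reshape(c, -1).mean(axis=1)          # (C,)
    scale = sigmoid(w2 @ np.maximum(w1 @ squeezed, 0))   # (C,) in (0, 1)
    return feat * scale[:, None, None]

def distillation_split(feat, distill_ratio=0.25):
    """Information-distillation step: retain a 'distilled' slice of the
    channels and pass the remainder on for further refinement, as in
    information multi-distillation networks."""
    k = int(feat.shape[0] * distill_ratio)
    return feat[:k], feat[k:]          # (distilled, remaining)

rng = np.random.default_rng(0)
feat = rng.normal(size=(16, 8, 8))     # (C, H, W) toy feature map
w1 = rng.normal(size=(4, 16))          # reduction 16 -> 4 (assumed)
w2 = rng.normal(size=(16, 4))          # expansion 4 -> 16

refined = channel_attention(feat, w1, w2)
distilled, remaining = distillation_split(refined)
assert refined.shape == feat.shape
assert distilled.shape[0] + remaining.shape[0] == feat.shape[0]
```

In MAFDB this split-refine pattern is repeated over multiple branches, with the distilled slices concatenated at the end, and the attention layers supply the channel, spatial, and pixel weighting between steps.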