RSTC: Residual Swin Transformer Cascade to approximate Taylor expansion for image denoising
Jin Liu, Yang Yang, Biyun Xu, Hao Yu, Yaozong Zhang, Qian Li, Zhenghua Huang
Computer Vision and Image Understanding, Volume 248, Article 104132 (2024). DOI: 10.1016/j.cviu.2024.104132
Abstract
Traditional denoising methods build mathematical models from hand-crafted priors; they can achieve good results, but they are usually time-consuming and their outputs depend on regularization parameters that are not set adaptively. End-to-end deep learning denoising strategies, in turn, rely on large amounts of training data and lack theoretical interpretability. To address these problems, this paper proposes a novel image denoising method based on Taylor expansion, the Residual Swin Transformer Cascade (RSTC). The key steps of RSTC are as follows. First, we discuss the relationship between the image denoising model and Taylor expansion, together with its derivative terms. Second, we use a lightweight deformable convolutional neural network to estimate the basic layer of the Taylor expansion, and a residual network with Swin Transformer blocks as its backbone to solve for the derivative layer. Finally, the outputs of the two networks are combined into an approximate solution of the Taylor expansion. In the experiments, we first test and discuss the choice of network parameters to verify the method's effectiveness. We then compare it with existing advanced methods both visually and quantitatively; the results show that our method generalizes well and outperforms state-of-the-art denoising methods in both quantitative performance and structure preservation.
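To make the cascade structure concrete, the sketch below is a rough PyTorch-style illustration, not the authors' implementation: it shows how a basic-layer network and a residual "derivative" cascade can be summed in the spirit of a truncated Taylor expansion, x ≈ F(y) ≈ B(y) + Σ_k D_k(y). The module names, channel widths, block count, and the plain convolutions standing in for the paper's deformable convolutions and Swin Transformer blocks are all assumptions.

# Illustrative sketch only: plain convolutions stand in for the deformable CNN
# (basic layer) and the Swin Transformer blocks (derivative layer) of RSTC.
import torch
import torch.nn as nn


class BasicLayerNet(nn.Module):
    """Stand-in for the lightweight CNN that predicts the basic (zeroth-order) term."""
    def __init__(self, channels: int = 1, width: int = 32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, channels, 3, padding=1),
        )

    def forward(self, y: torch.Tensor) -> torch.Tensor:
        return self.body(y)


class DerivativeBlock(nn.Module):
    """Stand-in for one residual block of the derivative branch."""
    def __init__(self, channels: int = 1, width: int = 32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, channels, 3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.body(x)  # residual connection


class RSTCSketch(nn.Module):
    """Basic term plus a cascade of residual 'derivative' corrections."""
    def __init__(self, channels: int = 1, num_derivative_blocks: int = 4):
        super().__init__()
        self.basic = BasicLayerNet(channels)
        self.derivative = nn.Sequential(
            *[DerivativeBlock(channels) for _ in range(num_derivative_blocks)]
        )

    def forward(self, y: torch.Tensor) -> torch.Tensor:
        base = self.basic(y)                     # zeroth-order (basic) term
        correction = self.derivative(y - base)   # higher-order corrections of the residual
        return base + correction                 # approximate Taylor-style reconstruction


if __name__ == "__main__":
    noisy = torch.randn(1, 1, 64, 64)
    print(RSTCSketch()(noisy).shape)  # torch.Size([1, 1, 64, 64])

The key design choice this mirrors is that the final estimate is an explicit sum of a coarse base prediction and learned residual corrections, which is what ties the architecture back to a truncated Taylor expansion rather than a generic end-to-end network.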
Journal Introduction:
The central focus of this journal is the computer analysis of pictorial information. Computer Vision and Image Understanding publishes papers covering all aspects of image analysis from the low-level, iconic processes of early vision to the high-level, symbolic processes of recognition and interpretation. A wide range of topics in the image understanding area is covered, including papers offering insights that differ from predominant views.
Research Areas Include:
• Theory
• Early vision
• Data structures and representations
• Shape
• Range
• Motion
• Matching and recognition
• Architecture and languages
• Vision systems