{"title":"Towards theoretically-founded learning-based denoising","authors":"Wenda Zhou, S. Jalali","doi":"10.1109/ISIT.2019.8849348","DOIUrl":null,"url":null,"abstract":"Denoising a stationary process (Xi)i∈ℤ corrupted by additive white Gaussian noise (Zi)i∈ℤ, i.e., recovering Xn from Yn = Xn + Zn, is a classic and fundamental problem in information theory and statistical signal processing. Theoretically-founded and computationally-efficient denoising algorithms which are applicable to general sources are yet to be found. In a Bayesian setup, given the distribution of Xn, a minimum mean square error (MMSE) denoiser computes E[Xn|Yn]. However, for general sources, computing E[Xn|Yn] is computationally very challenging, if not infeasible. In this paper, starting from a Bayesian setup, a novel denoiser, namely, quantized maximum a posteriori (Q-MAP) denoiser, is proposed and its asymptotic performance is analyzed. Both for memoryless sources, and for structured first-order Markov sources, it is shown that, asymptotically, as σ2 (noise variance) converges to zero, $\\frac{1}{{{\\sigma ^2}}}{\\text{E}}\\left[ {{{\\left( {{X_i} - \\hat X_i^{{\\text{Q - MAP}}}} \\right)}^2}} \\right]$ converges to the information dimension of the source. For the studied memoryless sources, this limit is known to be optimal. A key advantage of the Q-MAP denoiser is that, unlike a MMSE denoiser, it highlights the key properties of the source distribution that are to be used in its denoising. This naturally leads to a learning-based denoising algorithm. Using ImageNet database for training, initial simulation results exploring the performance of such a learning-based denoiser in image denoising are presented.","PeriodicalId":6708,"journal":{"name":"2019 IEEE International Symposium on Information Theory (ISIT)","volume":"43 1","pages":"2714-2718"},"PeriodicalIF":0.0000,"publicationDate":"2019-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 IEEE International Symposium on Information Theory (ISIT)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ISIT.2019.8849348","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Cited by: 2
Abstract
Denoising a stationary process $(X_i)_{i\in\mathbb{Z}}$ corrupted by additive white Gaussian noise $(Z_i)_{i\in\mathbb{Z}}$, i.e., recovering $X^n$ from $Y^n = X^n + Z^n$, is a classic and fundamental problem in information theory and statistical signal processing. Theoretically-founded and computationally-efficient denoising algorithms that are applicable to general sources are yet to be found. In a Bayesian setup, given the distribution of $X^n$, a minimum mean square error (MMSE) denoiser computes $\mathrm{E}[X^n \mid Y^n]$. However, for general sources, computing $\mathrm{E}[X^n \mid Y^n]$ is challenging, if not infeasible. In this paper, starting from a Bayesian setup, a novel denoiser, namely the quantized maximum a posteriori (Q-MAP) denoiser, is proposed and its asymptotic performance is analyzed. Both for memoryless sources and for structured first-order Markov sources, it is shown that, asymptotically, as the noise variance $\sigma^2$ converges to zero, $\frac{1}{\sigma^2}\mathrm{E}\big[(X_i - \hat{X}_i^{\text{Q-MAP}})^2\big]$ converges to the information dimension of the source. For the studied memoryless sources, this limit is known to be optimal. A key advantage of the Q-MAP denoiser is that, unlike an MMSE denoiser, it highlights the key properties of the source distribution that are to be used in its denoising. This naturally leads to a learning-based denoising algorithm. Using the ImageNet database for training, initial simulation results exploring the performance of such a learning-based denoiser in image denoising are presented.
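As a numerical illustration of the information-dimension limit stated above (a minimal sketch, not code from the paper), the following Python snippet checks the MMSE benchmark for an assumed memoryless Bernoulli-Gaussian source $X_i = B_i G_i$ with $B_i \sim \mathrm{Bern}(p)$ and $G_i \sim \mathcal{N}(0,1)$, whose information dimension is $p$. For such memoryless sources, the optimal limit that the Q-MAP denoiser is shown to attain is this same value, so the normalized MSE of the conditional-mean denoiser should approach $p$ as $\sigma^2 \to 0$. The source model, parameter values, and function names are illustrative assumptions.

```python
# Minimal numerical check (illustrative, not from the paper): for a memoryless
# Bernoulli-Gaussian source X = B*G, B ~ Bern(p), G ~ N(0,1), the information
# dimension is p, and E[(X - E[X|Y])^2] / sigma^2 should approach p as sigma^2 -> 0.
import numpy as np

def mmse_denoise(y, p, sigma2):
    """Conditional mean E[X | Y = y] under the Bernoulli-Gaussian prior."""
    # Likelihood of y under the "spike" (X = 0) and "slab" (X ~ N(0,1)) branches.
    like_spike = np.exp(-y**2 / (2 * sigma2)) / np.sqrt(2 * np.pi * sigma2)
    like_slab = np.exp(-y**2 / (2 * (1 + sigma2))) / np.sqrt(2 * np.pi * (1 + sigma2))
    # Posterior probability that the observation came from the slab branch.
    post_slab = p * like_slab / (p * like_slab + (1 - p) * like_spike)
    # Given the slab, E[X | Y = y] = y / (1 + sigma2); given the spike it is 0.
    return post_slab * y / (1 + sigma2)

rng = np.random.default_rng(0)
p, n = 0.1, 1_000_000
x = rng.binomial(1, p, n) * rng.standard_normal(n)

for sigma2 in [1e-1, 1e-2, 1e-3, 1e-4]:
    y = x + np.sqrt(sigma2) * rng.standard_normal(n)
    x_hat = mmse_denoise(y, p, sigma2)
    print(f"sigma^2 = {sigma2:.0e}: MSE / sigma^2 = {np.mean((x - x_hat)**2) / sigma2:.3f}")
```

Running the loop over decreasing $\sigma^2$ should show the ratio $\mathrm{E}[(X_i - \hat{X}_i)^2]/\sigma^2$ drifting toward the information dimension $p = 0.1$, consistent with the memoryless-source limit quoted in the abstract.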