Local MALA-within-Gibbs for Bayesian image deblurring with total variation prior
Rafael Flock, Shuigen Liu, Yiqiu Dong, Xin T. Tong
arXiv:2409.09810 [math.NA], published 2024-09-15. https://arxiv.org/abs/2409.09810
Abstract
We consider Bayesian inference for image deblurring with a total variation (TV) prior. Since the posterior is analytically intractable, we resort to Markov chain Monte Carlo (MCMC) methods. However, since most MCMC methods deteriorate significantly in high dimensions, they are not suitable for high-resolution imaging problems. In this paper, we show how low-dimensional sampling can still be facilitated by exploiting the sparse conditional structure of the posterior. To this end, we make use of the local structure of the blurring operator and the TV prior by partitioning the image into rectangular blocks and employing a blocked Gibbs sampler with proposals stemming from the Metropolis-adjusted Langevin algorithm (MALA).
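To fix ideas, here is a minimal sketch of a single MALA block update as it would appear inside such a Gibbs sweep. The interface is hypothetical: `log_cond` and `grad_log_cond` stand for the log-density and gradient of one block's conditional given its neighboring blocks; they are not functions from the paper.

```python
import numpy as np

def mala_block_update(x_block, log_cond, grad_log_cond, tau, rng):
    """One MALA accept/reject step for a single Gibbs block.

    x_block       -- current state of the block (flattened pixel values)
    log_cond      -- log conditional density of the block given its neighbors
    grad_log_cond -- gradient of log_cond
    tau           -- Langevin step size
    """
    g = grad_log_cond(x_block)
    # Langevin proposal: gradient step plus Gaussian noise.
    prop = x_block + tau * g + np.sqrt(2.0 * tau) * rng.standard_normal(x_block.shape)
    g_prop = grad_log_cond(prop)

    # log q(y | x) for the Gaussian proposal N(x + tau * grad, 2 * tau * I),
    # up to an additive constant that cancels in the acceptance ratio.
    def log_q(y, x, grad_x):
        diff = y - x - tau * grad_x
        return -np.dot(diff, diff) / (4.0 * tau)

    log_alpha = (log_cond(prop) - log_cond(x_block)
                 + log_q(x_block, prop, g_prop) - log_q(prop, x_block, g))
    if np.log(rng.uniform()) < log_alpha:
        return prop, True   # proposal accepted
    return x_block, False   # proposal rejected
```

Within MLwG, each block of the partitioned image would receive such an update in turn, with the conditional rebuilt from the current values of its neighboring blocks.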
We prove that this MALA-within-Gibbs (MLwG) sampling algorithm has dimension-independent block acceptance rates and a dimension-independent convergence rate. To apply the MALA proposals, we approximate the TV by a smoothed version and show that the introduced approximation error is evenly distributed and dimension-independent.
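The abstract does not specify the smoothing; a standard choice, shown here only as an illustration, replaces the non-differentiable isotropic TV with a perturbed square root:

```latex
% Discrete isotropic TV and one common smooth surrogate (illustrative;
% the paper's exact smoothing may differ):
\[
  \mathrm{TV}(u) \;=\; \sum_{i,j} \sqrt{(D_x u)_{i,j}^2 + (D_y u)_{i,j}^2},
  \qquad
  \mathrm{TV}_\varepsilon(u) \;=\; \sum_{i,j} \sqrt{(D_x u)_{i,j}^2 + (D_y u)_{i,j}^2 + \varepsilon^2}.
\]
% Since 0 <= sqrt(a + eps^2) - sqrt(a) <= eps for every a >= 0, the error
% contributed by each pixel is at most eps, i.e. it is spread evenly over
% the image rather than concentrated anywhere.
```

Here $D_x$ and $D_y$ denote discrete difference operators, and $\varepsilon > 0$ trades smoothness against fidelity to the exact TV.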
Since the posterior is a Gibbs density, we can use the Hammersley-Clifford theorem to identify the posterior conditionals, which depend only on the neighboring blocks. We outline computational strategies to evaluate these conditionals, which are the target densities in the Gibbs updates, locally and in parallel. In two numerical experiments, we validate the dimension-independent properties of the MLwG algorithm and demonstrate its superior performance over MALA.
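As a rough illustration of how local conditionals enable parallel Gibbs updates, the following sketch (reusing `mala_block_update` from above) groups blocks into colors so that blocks of the same color have non-overlapping neighborhoods and can be updated simultaneously. The interface and the coloring are assumptions for illustration, not the paper's implementation.

```python
def mlwg_sweep(image, colors, make_log_cond, tau, rng):
    """One MLwG sweep over all blocks of a partitioned image.

    colors        -- list of groups of block index tuples; blocks in the same
                     group are mutually non-neighboring, so their conditionals
                     do not depend on each other
    make_log_cond -- builds (log_cond, grad_log_cond) for a block from the
                     current values of its neighboring blocks (hypothetical)
    """
    for group in colors:          # groups are processed sequentially
        for blk in group:         # this loop is safe to parallelize
            log_c, grad_log_c = make_log_cond(image, blk)
            x = image[blk].ravel()
            x_new, _ = mala_block_update(x, log_c, grad_log_c, tau, rng)
            image[blk] = x_new.reshape(image[blk].shape)
    return image
```

Because a blur kernel of fixed width and the TV term couple each pixel only to pixels a bounded distance away, each block's conditional depends only on a thin halo around the block, which is what keeps these evaluations local.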