Tin Barisin, Jesus Angulo, Katja Schladitz, Claudia Redenbach
SIAM Journal on Imaging Sciences, Volume 17, Issue 2, Page 1284-1313, June 2024. Abstract. Scattering networks yield powerful and robust hierarchical image descriptors which do not require lengthy training and which work well even with very little training data. However, they rely on sampling the scale dimension. Hence, they become sensitive to scale variations and are unable to generalize to unseen scales. In this work, we define an alternative feature representation based on the Riesz transform. We detail and analyze the mathematical foundations behind this representation. In particular, it inherits scale equivariance from the Riesz transform and completely avoids sampling of the scale dimension. Additionally, the number of features in the representation is reduced by a factor of four compared to scattering networks. Nevertheless, our representation performs comparably well for texture classification, with an interesting addition: scale equivariance. Our method yields very good performance when dealing with scales outside of those covered by the training dataset. The usefulness of the equivariance property is demonstrated on the digit classification task, where accuracy remains stable even for scales four times larger than the one chosen for training. As a second example, we consider classification of textures. Finally, we show how this representation can be used to build hybrid deep learning methods that are more stable to scale variations than standard deep networks.
{"title":"Riesz Feature Representation: Scale Equivariant Scattering Network for Classification Tasks","authors":"Tin Barisin, Jesus Angulo, Katja Schladitz, Claudia Redenbach","doi":"10.1137/23m1584836","DOIUrl":"https://doi.org/10.1137/23m1584836","url":null,"abstract":"SIAM Journal on Imaging Sciences, Volume 17, Issue 2, Page 1284-1313, June 2024. <br/> Abstract. Scattering networks yield powerful and robust hierarchical image descriptors which do not require lengthy training and which work well even with very little training data. However, they rely on sampling the scale dimension. Hence, they become sensitive to scale variations and are unable to generalize to unseen scales. In this work, we define an alternative feature representation based on the Riesz transform. We detail and analyze the mathematical foundations behind this representation. In particular, it inherits scale equivariance from the Riesz transform and completely avoids sampling of the scale dimension. Additionally, the number of features in the representation is reduced by a factor of four compared to scattering networks. Nevertheless, our representation performs comparably well for texture classification, with an interesting addition: scale equivariance. Our method yields very good performance when dealing with scales outside of those covered by the training dataset. The usefulness of the equivariance property is demonstrated on the digit classification task, where accuracy remains stable even for scales four times larger than the one chosen for training. As a second example, we consider classification of textures. 
Finally, we show how this representation can be used to build hybrid deep learning methods that are more stable to scale variations than standard deep networks.","PeriodicalId":49528,"journal":{"name":"SIAM Journal on Imaging Sciences","volume":null,"pages":null},"PeriodicalIF":2.1,"publicationDate":"2024-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141529027","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
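The representation above is built on the Riesz transform, which is defined in the Fourier domain and is exactly scale equivariant. As an illustrative sketch (a generic FFT-based discretization, not the authors' code; the function name and grid conventions are assumptions), the two first-order Riesz components of a 2D image can be computed as:

```python
import numpy as np

def riesz_transform(img):
    """First-order Riesz transform of a 2D image via the FFT.

    In the frequency domain the components are
        R_j f = F^{-1}( -i * xi_j / |xi| * F f ),  j = 1, 2,
    a multiplier of modulus <= 1 that commutes with rescaling,
    which is the source of scale equivariance.
    """
    h, w = img.shape
    fy = np.fft.fftfreq(h)[:, None]   # vertical frequencies xi_1
    fx = np.fft.fftfreq(w)[None, :]   # horizontal frequencies xi_2
    norm = np.sqrt(fx**2 + fy**2)
    norm[0, 0] = 1.0                  # avoid division by zero at the DC bin
    F = np.fft.fft2(img)
    r1 = np.real(np.fft.ifft2(-1j * fy / norm * F))
    r2 = np.real(np.fft.ifft2(-1j * fx / norm * F))
    return r1, r2
```

Note that a constant image maps to zero, since the multiplier vanishes at the DC frequency.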
SIAM Journal on Imaging Sciences, Volume 17, Issue 2, Page 1206-1254, June 2024. Abstract. This paper presents a novel stochastic optimization methodology to perform empirical Bayesian inference in semi-blind image deconvolution problems. Given a blurred image and a parametric class of possible operators, the proposed optimization approach automatically calibrates the parameters of the blur model by maximum marginal likelihood estimation, followed by (non-blind) image deconvolution by maximum a posteriori estimation conditioned on the estimated model parameters. In addition to the blur model, the proposed approach also automatically calibrates the noise level as well as any regularization parameters. The marginal likelihood of the blur, noise, and regularization parameters is generally computationally intractable, as it requires calculating several integrals over the entire solution space. Our approach addresses this difficulty by using a stochastic approximation proximal gradient optimization scheme, which iteratively approximates these integrals by using a Moreau–Yosida regularized unadjusted Langevin Markov chain Monte Carlo algorithm. This optimization strategy can be easily and efficiently applied to any model that is log-concave, using the same gradient and proximal operators that are required to compute the maximum a posteriori solution by convex optimization. We provide convergence guarantees for the proposed optimization scheme under realistic and easily verifiable conditions and subsequently demonstrate the effectiveness of the approach with a series of deconvolution experiments and comparisons with alternative strategies from the state of the art.
{"title":"Marginal Likelihood Estimation in Semiblind Image Deconvolution: A Stochastic Approximation Approach","authors":"Charlesquin Kemajou Mbakam, Marcelo Pereyra, Jean-François Giovannelli","doi":"10.1137/23m1584496","DOIUrl":"https://doi.org/10.1137/23m1584496","url":null,"abstract":"SIAM Journal on Imaging Sciences, Volume 17, Issue 2, Page 1206-1254, June 2024. <br/> Abstract. This paper presents a novel stochastic optimization methodology to perform empirical Bayesian inference in semi-blind image deconvolution problems. Given a blurred image and a parametric class of possible operators, the proposed optimization approach automatically calibrates the parameters of the blur model by maximum marginal likelihood estimation, followed by (non-blind) image deconvolution by maximum a posteriori estimation conditioned on the estimated model parameters. In addition to the blur model, the proposed approach also automatically calibrates the noise level as well as any regularization parameters. The marginal likelihood of the blur, noise, and regularization parameters is generally computationally intractable, as it requires calculating several integrals over the entire solution space. Our approach addresses this difficulty by using a stochastic approximation proximal gradient optimization scheme, which iteratively approximates these integrals by using a Moreau–Yosida regularized unadjusted Langevin Markov chain Monte Carlo algorithm. This optimization strategy can be easily and efficiently applied to any model that is log-concave, using the same gradient and proximal operators that are required to compute the maximum a posteriori solution by convex optimization. 
We provide convergence guarantees for the proposed optimization scheme under realistic and easily verifiable conditions and subsequently demonstrate the effectiveness of the approach with a series of deconvolution experiments and comparisons with alternative strategies from the state of the art.","PeriodicalId":49528,"journal":{"name":"SIAM Journal on Imaging Sciences","volume":null,"pages":null},"PeriodicalIF":2.1,"publicationDate":"2024-06-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141503739","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
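The Moreau–Yosida regularized unadjusted Langevin algorithm (MYULA) mentioned above can be sketched in a few lines. The following is a generic MYULA iteration on a toy target, not the paper's semi-blind deconvolution setup; the function name and all parameter values are illustrative:

```python
import numpy as np

def myula_sample(grad_f, prox_g, x0, gamma, lam, n_iter, rng):
    """Moreau-Yosida regularized unadjusted Langevin (MYULA) chain.

    Approximately targets pi(x) prop. to exp(-f(x) - g(x)), with f smooth
    and g handled only through its proximal operator:
        x_{k+1} = x_k - gamma*grad_f(x_k)
                  - (gamma/lam)*(x_k - prox_g(x_k, lam))
                  + sqrt(2*gamma)*noise
    """
    x = np.array(x0, dtype=float)
    samples = np.empty((n_iter,) + x.shape)
    for k in range(n_iter):
        noise = rng.normal(size=x.shape)
        x = (x - gamma * grad_f(x)
             - (gamma / lam) * (x - prox_g(x, lam))
             + np.sqrt(2 * gamma) * noise)
        samples[k] = x
    return samples

# Toy target: standard 2D Gaussian (f(x) = ||x||^2 / 2) with g = 0,
# so prox_g is the identity and the chain reduces to plain ULA.
rng = np.random.default_rng(0)
chain = myula_sample(lambda x: x, lambda x, lam: x, np.zeros(2),
                     gamma=0.1, lam=1.0, n_iter=20000, rng=rng)
```

For this Gaussian toy the chain's stationary variance is close to (but, being an unadjusted scheme, slightly above) the target's unit variance.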
SIAM Journal on Imaging Sciences, Volume 17, Issue 2, Page 1182-1205, June 2024. Abstract. Recently, the CTV-RPCA model provided the first exact recovery theory for separating a low-rank, locally smooth component from a sparse component based on the correlated total variation (CTV) regularizer. However, the CTV-RPCA model ignores the influence of noise, which prevents it from effectively extracting low-rank, locally smooth principal components in noisy settings. To alleviate this issue, this article extends the CTV-RPCA model by accounting for noise and proposes two robust models with adaptive parameter adjustment, i.e., Stable Principal Component Pursuit based on CTV (CTV-SPCP) and Square Root Principal Component Pursuit based on CTV (CTV-[math]). Furthermore, we present a statistical recovery error bound for the proposed models, which characterizes the relationship between the solutions of the proposed models and the ground truth. In the absence of noise, our theory reduces to the exact recovery theory of the CTV-RPCA model. Finally, we develop effective algorithms with strict convergence guarantees. Extensive experiments validate the theoretical assertions and demonstrate the superiority of the proposed models over many state-of-the-art methods on various typical applications, including video foreground extraction, multispectral image denoising, and hyperspectral image denoising. The source code is released at https://github.com/andrew-pengjj/CTV-SPCP.
{"title":"Stable Local-Smooth Principal Component Pursuit","authors":"Jiangjun Peng, Hailin Wang, Xiangyong Cao, Xixi Jia, Hongying Zhang, Deyu Meng","doi":"10.1137/23m1580164","DOIUrl":"https://doi.org/10.1137/23m1580164","url":null,"abstract":"SIAM Journal on Imaging Sciences, Volume 17, Issue 2, Page 1182-1205, June 2024. <br/> Abstract. Recently, the CTV-RPCA model provided the first exact recovery theory for separating a low-rank, locally smooth component from a sparse component based on the correlated total variation (CTV) regularizer. However, the CTV-RPCA model ignores the influence of noise, which prevents it from effectively extracting low-rank, locally smooth principal components in noisy settings. To alleviate this issue, this article extends the CTV-RPCA model by accounting for noise and proposes two robust models with adaptive parameter adjustment, i.e., Stable Principal Component Pursuit based on CTV (CTV-SPCP) and Square Root Principal Component Pursuit based on CTV (CTV-[math]). Furthermore, we present a statistical recovery error bound for the proposed models, which characterizes the relationship between the solutions of the proposed models and the ground truth. In the absence of noise, our theory reduces to the exact recovery theory of the CTV-RPCA model. Finally, we develop effective algorithms with strict convergence guarantees. Extensive experiments validate the theoretical assertions and demonstrate the superiority of the proposed models over many state-of-the-art methods on various typical applications, including video foreground extraction, multispectral image denoising, and hyperspectral image denoising. 
The source code is released at https://github.com/andrew-pengjj/CTV-SPCP.","PeriodicalId":49528,"journal":{"name":"SIAM Journal on Imaging Sciences","volume":null,"pages":null},"PeriodicalIF":2.1,"publicationDate":"2024-06-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141503588","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
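For context, principal component pursuit models of this kind are typically solved by composing two standard proximal operators: elementwise soft-thresholding for the sparse term and singular value thresholding for the low-rank term. A minimal sketch of these building blocks (not the authors' full CTV algorithm, which additionally applies them to gradient maps):

```python
import numpy as np

def soft_threshold(x, tau):
    """Elementwise soft-thresholding: the proximal operator of tau*||.||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def svt(m, tau):
    """Singular value thresholding: the proximal operator of the
    nuclear norm tau*||.||_*, i.e. soft-thresholding of the spectrum."""
    u, s, vt = np.linalg.svd(m, full_matrices=False)
    return u @ np.diag(soft_threshold(s, tau)) @ vt
```

An alternating scheme that repeatedly applies `svt` to the low-rank estimate and `soft_threshold` to the residual recovers the classical RPCA iteration.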
SIAM Journal on Imaging Sciences, Volume 17, Issue 2, Page 1145-1181, June 2024. Abstract. This paper investigates the convergence properties and applications of the three-operator splitting method, also known as the Davis–Yin splitting (DYS) method, integrated with extrapolation and a plug-and-play (PnP) denoiser within a nonconvex framework. We first propose an extrapolated DYS method to effectively solve a class of structured nonconvex optimization problems that involve minimizing the sum of three possibly nonconvex functions. Our approach provides an algorithmic framework that encompasses both extrapolated forward–backward splitting and extrapolated Douglas–Rachford splitting methods. To establish the convergence of the proposed method, we rigorously analyze its behavior based on the Kurdyka–Łojasiewicz property, subject to some tight parameter conditions. Moreover, we introduce two extrapolated PnP-DYS methods with convergence guarantees, where the traditional regularization step is replaced by a gradient step–based denoiser. This denoiser is designed using a differentiable neural network and can be reformulated as the proximal operator of a specific nonconvex functional. We conduct extensive experiments on image deblurring and image superresolution problems, where our numerical results showcase the advantage of the extrapolation strategy and the superior performance of the learning-based model that incorporates the PnP denoiser in recovering high-quality images.
{"title":"Extrapolated Plug-and-Play Three-Operator Splitting Methods for Nonconvex Optimization with Applications to Image Restoration","authors":"Zhongming Wu, Chaoyan Huang, Tieyong Zeng","doi":"10.1137/23m1611166","DOIUrl":"https://doi.org/10.1137/23m1611166","url":null,"abstract":"SIAM Journal on Imaging Sciences, Volume 17, Issue 2, Page 1145-1181, June 2024. <br/> Abstract. This paper investigates the convergence properties and applications of the three-operator splitting method, also known as the Davis–Yin splitting (DYS) method, integrated with extrapolation and a plug-and-play (PnP) denoiser within a nonconvex framework. We first propose an extrapolated DYS method to effectively solve a class of structured nonconvex optimization problems that involve minimizing the sum of three possibly nonconvex functions. Our approach provides an algorithmic framework that encompasses both extrapolated forward–backward splitting and extrapolated Douglas–Rachford splitting methods. To establish the convergence of the proposed method, we rigorously analyze its behavior based on the Kurdyka–Łojasiewicz property, subject to some tight parameter conditions. Moreover, we introduce two extrapolated PnP-DYS methods with convergence guarantees, where the traditional regularization step is replaced by a gradient step–based denoiser. This denoiser is designed using a differentiable neural network and can be reformulated as the proximal operator of a specific nonconvex functional. 
We conduct extensive experiments on image deblurring and image superresolution problems, where our numerical results showcase the advantage of the extrapolation strategy and the superior performance of the learning-based model that incorporates the PnP denoiser in recovering high-quality images.","PeriodicalId":49528,"journal":{"name":"SIAM Journal on Imaging Sciences","volume":null,"pages":null},"PeriodicalIF":2.1,"publicationDate":"2024-06-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141531359","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
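The basic Davis–Yin iteration underlying the methods above evaluates one gradient and two proximal operators per step. A minimal one-dimensional sketch on a convex toy problem (the toy objective and all names are illustrative, without extrapolation or the PnP denoiser):

```python
import numpy as np

def davis_yin(grad_f, prox_g, prox_h, z0, gamma, n_iter):
    """Three-operator (Davis-Yin) splitting for min_x f(x) + g(x) + h(x),
    with f smooth and g, h accessed through proximal operators:
        x_g = prox_{gamma*g}(z)
        x_h = prox_{gamma*h}(2*x_g - z - gamma*grad_f(x_g))
        z  <- z + x_h - x_g
    """
    z = float(z0)
    for _ in range(n_iter):
        x_g = prox_g(z, gamma)
        x_h = prox_h(2 * x_g - z - gamma * grad_f(x_g), gamma)
        z = z + x_h - x_g
    return prox_g(z, gamma)

# Toy problem: min 0.5*(x-2)^2 + indicator_[0,1](x) + 0.5*|x|.
# The smooth pull toward 2 dominates the l1 penalty, so the minimizer is x = 1.
x_star = davis_yin(
    grad_f=lambda x: x - 2.0,
    prox_g=lambda v, g: min(max(v, 0.0), 1.0),                     # projection onto [0,1]
    prox_h=lambda v, g: np.sign(v) * max(abs(v) - 0.5 * g, 0.0),   # soft threshold
    z0=0.0, gamma=0.5, n_iter=100)
```

Setting h = 0 recovers projected gradient descent, and dropping f recovers Douglas–Rachford, matching the framework described above.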
SIAM Journal on Imaging Sciences, Volume 17, Issue 2, Page 1118-1144, June 2024. Abstract. In this paper, we develop an efficient stochastic variance reduced gradient descent algorithm to solve the affine rank minimization problem, which consists of finding a matrix of minimum rank from linear measurements. As a stochastic gradient descent strategy, the proposed algorithm enjoys a more favorable complexity than full-gradient methods. It also reduces the variance of the stochastic gradient at each iteration and accelerates the rate of convergence. We prove that the proposed algorithm converges linearly in expectation to the solution under a restricted isometry condition. Numerical experiments demonstrate that the proposed algorithm achieves a clearly advantageous balance of efficiency, adaptivity, and accuracy compared with other state-of-the-art algorithms.
{"title":"Stochastic Variance Reduced Gradient for Affine Rank Minimization Problem","authors":"Ningning Han, Juan Nie, Jian Lu, Michael K. Ng","doi":"10.1137/23m1555387","DOIUrl":"https://doi.org/10.1137/23m1555387","url":null,"abstract":"SIAM Journal on Imaging Sciences, Volume 17, Issue 2, Page 1118-1144, June 2024. <br/> Abstract. In this paper, we develop an efficient stochastic variance reduced gradient descent algorithm to solve the affine rank minimization problem, which consists of finding a matrix of minimum rank from linear measurements. As a stochastic gradient descent strategy, the proposed algorithm enjoys a more favorable complexity than full-gradient methods. It also reduces the variance of the stochastic gradient at each iteration and accelerates the rate of convergence. We prove that the proposed algorithm converges linearly in expectation to the solution under a restricted isometry condition. Numerical experiments demonstrate that the proposed algorithm achieves a clearly advantageous balance of efficiency, adaptivity, and accuracy compared with other state-of-the-art algorithms.","PeriodicalId":49528,"journal":{"name":"SIAM Journal on Imaging Sciences","volume":null,"pages":null},"PeriodicalIF":2.1,"publicationDate":"2024-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141518265","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
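The variance reduction idea can be illustrated on the simplest instance, least squares from linear measurements; the rank minimization setting additionally projects iterates onto low-rank matrices, which is omitted here. The function name and all parameter values are illustrative:

```python
import numpy as np

def svrg_least_squares(A, b, step, n_outer, n_inner, rng):
    """SVRG for min_x (1/2n)*||Ax - b||^2.

    Each outer epoch stores a full gradient at a snapshot point; inner
    steps use a stochastic gradient corrected by the snapshot, so the
    variance of the gradient estimator shrinks as the iterates converge,
    allowing a constant step size and linear convergence.
    """
    n, d = A.shape
    x = np.zeros(d)
    for _ in range(n_outer):
        x_snap = x.copy()
        full_grad = A.T @ (A @ x_snap - b) / n
        for _ in range(n_inner):
            i = rng.integers(n)
            gi = A[i] * (A[i] @ x - b[i])             # gradient of the i-th term
            gi_snap = A[i] * (A[i] @ x_snap - b[i])   # same term at the snapshot
            x = x - step * (gi - gi_snap + full_grad)
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(200, 5))
x_true = rng.normal(size=5)
b = A @ x_true                      # noiseless measurements
x_hat = svrg_least_squares(A, b, step=0.02, n_outer=50, n_inner=200, rng=rng)
```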
Teresa Klatzer, Paul Dobson, Yoann Altmann, Marcelo Pereyra, Jesus Maria Sanz-Serna, Konstantinos C. Zygalakis
SIAM Journal on Imaging Sciences, Volume 17, Issue 2, Page 1078-1117, June 2024. Abstract. This paper presents a new accelerated proximal Markov chain Monte Carlo methodology to perform Bayesian inference in imaging inverse problems with an underlying convex geometry. The proposed strategy takes the form of a stochastic relaxed proximal-point iteration that admits two complementary interpretations. For models that are smooth or regularized by Moreau–Yosida smoothing, the algorithm is equivalent to an implicit midpoint discretization of an overdamped Langevin diffusion targeting the posterior distribution of interest. This discretization is asymptotically unbiased for Gaussian targets and is shown to converge in an accelerated manner for any target that is [math]-strongly log-concave (i.e., requiring on the order of [math] iterations to converge, similar to accelerated optimization schemes), comparing favorably to Pereyra, Vargas Mieles, and Zygalakis [SIAM J. Imaging Sci., 13 (2020), pp. 905–935], which is only provably accelerated for Gaussian targets and has bias. For models that are not smooth, the algorithm is equivalent to a Leimkuhler–Matthews discretization of a Langevin diffusion targeting a Moreau–Yosida approximation of the posterior distribution of interest and hence achieves a significantly lower bias than conventional unadjusted Langevin strategies based on the Euler–Maruyama discretization. For targets that are [math]-strongly log-concave, the provided nonasymptotic convergence analysis also identifies the optimal time step, which maximizes the convergence speed. The proposed methodology is demonstrated through a range of experiments related to image deconvolution with Gaussian and Poisson noise with assumption-driven and data-driven convex priors. Source codes for the numerical experiments of this paper are available from https://github.com/MI2G/accelerated-langevin-imla.
{"title":"Accelerated Bayesian Imaging by Relaxed Proximal-Point Langevin Sampling","authors":"Teresa Klatzer, Paul Dobson, Yoann Altmann, Marcelo Pereyra, Jesus Maria Sanz-Serna, Konstantinos C. Zygalakis","doi":"10.1137/23m1594832","DOIUrl":"https://doi.org/10.1137/23m1594832","url":null,"abstract":"SIAM Journal on Imaging Sciences, Volume 17, Issue 2, Page 1078-1117, June 2024. <br/> Abstract. This paper presents a new accelerated proximal Markov chain Monte Carlo methodology to perform Bayesian inference in imaging inverse problems with an underlying convex geometry. The proposed strategy takes the form of a stochastic relaxed proximal-point iteration that admits two complementary interpretations. For models that are smooth or regularized by Moreau–Yosida smoothing, the algorithm is equivalent to an implicit midpoint discretization of an overdamped Langevin diffusion targeting the posterior distribution of interest. This discretization is asymptotically unbiased for Gaussian targets and is shown to converge in an accelerated manner for any target that is [math]-strongly log-concave (i.e., requiring on the order of [math] iterations to converge, similar to accelerated optimization schemes), comparing favorably to Pereyra, Vargas Mieles, and Zygalakis [SIAM J. Imaging Sci., 13 (2020), pp. 905–935], which is only provably accelerated for Gaussian targets and has bias. For models that are not smooth, the algorithm is equivalent to a Leimkuhler–Matthews discretization of a Langevin diffusion targeting a Moreau–Yosida approximation of the posterior distribution of interest and hence achieves a significantly lower bias than conventional unadjusted Langevin strategies based on the Euler–Maruyama discretization. For targets that are [math]-strongly log-concave, the provided nonasymptotic convergence analysis also identifies the optimal time step, which maximizes the convergence speed. 
The proposed methodology is demonstrated through a range of experiments related to image deconvolution with Gaussian and Poisson noise with assumption-driven and data-driven convex priors. Source codes for the numerical experiments of this paper are available from https://github.com/MI2G/accelerated-langevin-imla.","PeriodicalId":49528,"journal":{"name":"SIAM Journal on Imaging Sciences","volume":null,"pages":null},"PeriodicalIF":2.1,"publicationDate":"2024-06-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141551084","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
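The claim that the implicit midpoint discretization is asymptotically unbiased for Gaussian targets can be checked directly in one dimension, where the implicit step has a closed form. This is an illustrative verification on a toy target, not the paper's imaging sampler; the function name is an assumption:

```python
import numpy as np

def imla_gaussian_chain(gamma, n_iter, rng):
    """Implicit midpoint Langevin step for the standard Gaussian target
    U(x) = x^2/2, solved in closed form:
        x+ = x - gamma*(x + x+)/2 + sqrt(2*gamma)*xi
    =>  x+ = ((1 - gamma/2)/(1 + gamma/2))*x + sqrt(2*gamma)/(1 + gamma/2)*xi.
    The stationary variance b^2/(1 - a^2) equals exactly 1 for every step
    size, unlike the Euler-Maruyama scheme, whose variance is biased.
    """
    a = (1 - gamma / 2) / (1 + gamma / 2)
    b = np.sqrt(2 * gamma) / (1 + gamma / 2)
    x, out = 0.0, np.empty(n_iter)
    for k in range(n_iter):
        x = a * x + b * rng.normal()
        out[k] = x
    return out

rng = np.random.default_rng(1)
chain = imla_gaussian_chain(gamma=1.0, n_iter=50000, rng=rng)
```

Even with the large step size gamma = 1, the empirical variance of the chain stays near the target's unit variance.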
SIAM Journal on Imaging Sciences, Volume 17, Issue 2, Page 1040-1077, June 2024. Abstract. We consider a class of optimization problems defined over trees with unary cost terms and shifted pairwise cost terms. These problems arise when considering block coordinate descent (BCD) approaches for solving inverse problems with total generalized variation (TGV) regularizers or their nonconvex generalizations. We introduce a linear-time reduction that transforms the shifted problems into their nonshifted counterparts. However, combining existing continuous dynamic programming (DP) algorithms with the reduction does not lead to BCD iterations that compute TGV-like solutions. This problem can be overcome by considering a box-constrained modification of the subproblems or smoothing the cost terms of the TGV regularized problem. The former leads to shifted and box-constrained subproblems, for which we propose a linear-time reduction to their unconstrained counterparts. The latter naturally leads to problems with smooth unary and pairwise cost terms. With this in mind, we propose two novel continuous DP algorithms that can solve (convex and nonconvex) problems with piecewise quadratic unary and pairwise cost terms. We prove that the algorithm for the convex case has quadratic worst-case time and memory complexity, while the algorithm for the nonconvex case has exponential time and memory complexity, but works well in practice for smooth truncated total variation pairwise costs. Finally, we demonstrate the applicability of the proposed algorithms for solving inverse problems with first-order and higher-order regularizers.
{"title":"Total Generalized Variation on a Tree","authors":"Muhamed Kuric, Jan Ahmetspahic, Thomas Pock","doi":"10.1137/23m1556915","DOIUrl":"https://doi.org/10.1137/23m1556915","url":null,"abstract":"SIAM Journal on Imaging Sciences, Volume 17, Issue 2, Page 1040-1077, June 2024. <br/> Abstract. We consider a class of optimization problems defined over trees with unary cost terms and shifted pairwise cost terms. These problems arise when considering block coordinate descent (BCD) approaches for solving inverse problems with total generalized variation (TGV) regularizers or their nonconvex generalizations. We introduce a linear-time reduction that transforms the shifted problems into their nonshifted counterparts. However, combining existing continuous dynamic programming (DP) algorithms with the reduction does not lead to BCD iterations that compute TGV-like solutions. This problem can be overcome by considering a box-constrained modification of the subproblems or smoothing the cost terms of the TGV regularized problem. The former leads to shifted and box-constrained subproblems, for which we propose a linear-time reduction to their unconstrained counterparts. The latter naturally leads to problems with smooth unary and pairwise cost terms. With this in mind, we propose two novel continuous DP algorithms that can solve (convex and nonconvex) problems with piecewise quadratic unary and pairwise cost terms. We prove that the algorithm for the convex case has quadratic worst-case time and memory complexity, while the algorithm for the nonconvex case has exponential time and memory complexity, but works well in practice for smooth truncated total variation pairwise costs. 
Finally, we demonstrate the applicability of the proposed algorithms for solving inverse problems with first-order and higher-order regularizers.","PeriodicalId":49528,"journal":{"name":"SIAM Journal on Imaging Sciences","volume":null,"pages":null},"PeriodicalIF":2.1,"publicationDate":"2024-05-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141171789","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
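The continuous DP algorithms above generalize the classical discrete min-sum recursion on a chain. For intuition, a discrete-label sketch with unary costs and Potts (truncated-TV) pairwise costs, which is the standard Viterbi-style recursion rather than the paper's continuous algorithm:

```python
import numpy as np

def chain_dp(unary, pairwise):
    """Min-sum dynamic programming on a chain.

    unary:    (T, K) array, unary[t, k] = cost of label k at node t.
    pairwise: (K, K) array, pairwise[j, k] = cost of labels (j, k) on an edge.
    Returns the minimizing label sequence and its total cost.
    """
    T, K = unary.shape
    cost = unary[0].astype(float).copy()
    back = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        # total[j, k]: best cost with label j at node t-1 and k at node t
        total = cost[:, None] + pairwise + unary[t][None, :]
        back[t] = np.argmin(total, axis=0)
        cost = np.min(total, axis=0)
    labels = np.zeros(T, dtype=int)
    labels[-1] = int(np.argmin(cost))
    for t in range(T - 1, 0, -1):
        labels[t - 1] = back[t, labels[t]]
    return labels, float(cost.min())

# Denoise a step signal with squared unary costs and a small Potts penalty.
s = np.array([0, 0, 0, 3, 3, 3])
K = 4
unary = (np.arange(K)[None, :] - s[:, None]) ** 2
pairwise = 0.5 * (np.arange(K)[:, None] != np.arange(K)[None, :])
labels, cost = chain_dp(unary.astype(float), pairwise)
```

Here the penalty is small enough that the step is preserved, so the optimal labeling matches the clean signal and the total cost is the single jump penalty 0.5.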
{"title":"Assembling a Learnable Mumford–Shah Type Model with Multigrid Technique for Image Segmentation","authors":"Junying Meng, Weihong Guo, Jun Liu, Mingrui Yang","doi":"10.1137/23m1577663","DOIUrl":"https://doi.org/10.1137/23m1577663","url":null,"abstract":"","PeriodicalId":49528,"journal":{"name":"SIAM Journal on Imaging Sciences","volume":null,"pages":null},"PeriodicalIF":2.1,"publicationDate":"2024-05-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141111706","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Trent DeGiovanni, Fernando Guevara Vasquez, China Mauck
SIAM Journal on Imaging Sciences, Volume 17, Issue 2, Page 984-1006, June 2024. Abstract. We use thermal noise induced currents to image the real and imaginary parts of the conductivity of a body. Covariances of the thermal noise currents measured at a few electrodes are shown to be related to a deterministic problem. We use the covariances obtained while selectively heating the body to recover the real power density in the body under known boundary conditions and at a known frequency. The resulting inverse problem is related to acousto-electric tomography, but where the conductivity is complex and only the real power is measured. We study the local solvability of this problem by determining where its linearization is elliptic. Numerical experiments illustrating this inverse problem are included.
{"title":"Imaging with Thermal Noise Induced Currents","authors":"Trent DeGiovanni, Fernando Guevara Vasquez, China Mauck","doi":"10.1137/23m1571630","DOIUrl":"https://doi.org/10.1137/23m1571630","url":null,"abstract":"SIAM Journal on Imaging Sciences, Volume 17, Issue 2, Page 984-1006, June 2024. <br/> Abstract. We use thermal noise induced currents to image the real and imaginary parts of the conductivity of a body. Covariances of the thermal noise currents measured at a few electrodes are shown to be related to a deterministic problem. We use the covariances obtained while selectively heating the body to recover the real power density in the body under known boundary conditions and at a known frequency. The resulting inverse problem is related to acousto-electric tomography, but where the conductivity is complex and only the real power is measured. We study the local solvability of this problem by determining where its linearization is elliptic. Numerical experiments illustrating this inverse problem are included.","PeriodicalId":49528,"journal":{"name":"SIAM Journal on Imaging Sciences","volume":null,"pages":null},"PeriodicalIF":2.1,"publicationDate":"2024-05-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141152808","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
SIAM Journal on Imaging Sciences, Volume 17, Issue 2, Page 951-983, June 2024. Abstract. Spherical radial-basis-based kernel interpolation abounds in the imaging sciences, including geophysical image reconstruction, climate trend description, and image rendering, due to its excellent spatial localization and strong approximation performance. However, when dealing with noisy data, kernel interpolation often performs poorly due to the large condition number of the kernel matrix and the instability of the interpolation process. In this paper, we introduce a weighted spectral filter approach that reduces the condition number of the kernel matrix and thereby stabilizes kernel interpolation. The main building blocks of the proposed method are well-developed spherical positive quadrature rules and high-pass spectral filters. Using a recently developed integral operator approach for spherical data analysis, we theoretically demonstrate that the proposed weighted spectral filter approach overcomes this bottleneck of kernel interpolation, especially in fitting noisy data. We provide optimal approximation rates for the new method to show that our approach does not compromise prediction accuracy. Furthermore, we conduct both toy simulations and two real-world data experiments with synthetically added noise in geophysical image reconstruction and climate image processing to verify our theoretical assertions and show the feasibility of the weighted spectral filter approach.
{"title":"Weighted Spectral Filters for Kernel Interpolation on Spheres: Estimates of Prediction Accuracy for Noisy Data","authors":"Xiaotong Liu, Jinxin Wang, Di Wang, Shao-Bo Lin","doi":"10.1137/23m1585350","DOIUrl":"https://doi.org/10.1137/23m1585350","url":null,"abstract":"SIAM Journal on Imaging Sciences, Volume 17, Issue 2, Page 951-983, June 2024. <br/> Abstract. Spherical radial-basis-based kernel interpolation abounds in the imaging sciences, including geophysical image reconstruction, climate trend description, and image rendering, due to its excellent spatial localization and strong approximation performance. However, when dealing with noisy data, kernel interpolation often performs poorly due to the large condition number of the kernel matrix and the instability of the interpolation process. In this paper, we introduce a weighted spectral filter approach that reduces the condition number of the kernel matrix and thereby stabilizes kernel interpolation. The main building blocks of the proposed method are well-developed spherical positive quadrature rules and high-pass spectral filters. Using a recently developed integral operator approach for spherical data analysis, we theoretically demonstrate that the proposed weighted spectral filter approach overcomes this bottleneck of kernel interpolation, especially in fitting noisy data. We provide optimal approximation rates for the new method to show that our approach does not compromise prediction accuracy. 
Furthermore, we conduct both toy simulations and two real-world data experiments with synthetically added noise in geophysical image reconstruction and climate image processing to verify our theoretical assertions and show the feasibility of the weighted spectral filter approach.","PeriodicalId":49528,"journal":{"name":"SIAM Journal on Imaging Sciences","volume":null,"pages":null},"PeriodicalIF":2.1,"publicationDate":"2024-05-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141153960","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
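The spectral filtering idea can be illustrated with a Tikhonov-type filter applied to the kernel matrix spectrum; the one-dimensional Gaussian-kernel toy below stands in for the paper's spherical radial basis functions, and the function name and parameter values are illustrative:

```python
import numpy as np

def filtered_kernel_fit(K, y, mu):
    """Spectrally filtered kernel coefficients.

    Plain interpolation solves K c = y, which is unstable when K is
    ill-conditioned. Filtering the spectrum with phi(lam) = 1/(lam + mu)
    (a Tikhonov-type damping of small eigenvalues) gives c = phi(K) y,
    trading exact interpolation for stability under noise.
    """
    lam, Q = np.linalg.eigh(K)                  # K is symmetric PSD
    filt = 1.0 / (np.maximum(lam, 0.0) + mu)    # damp small eigenvalues
    return Q @ (filt * (Q.T @ y))

# Gaussian kernel on scattered 1D nodes, noisy samples of a smooth signal.
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 1, 40))
K = np.exp(-(x[:, None] - x[None, :]) ** 2 / 0.02)
y = np.sin(2 * np.pi * x) + 0.1 * rng.normal(size=40)
c = filtered_kernel_fit(K, y, mu=1e-2)
fit = K @ c
```

The filtered coefficients stay bounded (||c|| <= ||y|| / mu by construction) even though the unfiltered system K c = y is severely ill-conditioned, while the fit remains close to the data.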