L2SR: learning to sample and reconstruct for accelerated MRI via reinforcement learning
Pub Date: 2024-04-17, DOI: 10.1088/1361-6420/ad3b34
Pu Yang and Bin Dong
Magnetic resonance imaging (MRI) is a widely used medical imaging technique, but its long acquisition time can be a limiting factor in clinical settings. To address this issue, researchers have been exploring ways to reduce the acquisition time while maintaining reconstruction quality. Previous works have focused on either finding sparse samplers with a fixed reconstructor or finding reconstructors with a fixed sampler. However, these approaches do not fully utilize the potential of jointly learning samplers and reconstructors. In this paper, we propose an alternating training framework for jointly learning a good pair of samplers and reconstructors via deep reinforcement learning. In particular, we consider the process of MRI sampling as a sampling trajectory controlled by a sampler, and introduce a novel sparse-reward partially observed Markov decision process (POMDP) to formulate the MRI sampling trajectory. Compared to the dense-reward POMDP used in existing works, the proposed sparse-reward POMDP is more computationally efficient and has a provable advantage. Moreover, the proposed framework, called learning to sample and reconstruct (L2SR), overcomes the training-mismatch problem that arises in previous methods based on the dense-reward POMDP. By alternately updating samplers and reconstructors, L2SR learns a pair of samplers and reconstructors that achieves state-of-the-art reconstruction performance on the fastMRI dataset. Code is available at https://github.com/yangpuPKU/L2SR-Learning-to-Sample-and-Reconstruct.
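For intuition, the following is a minimal sketch of a sparse-reward sampling episode for Cartesian MRI in the spirit of the POMDP formulation described above. It assumes column-wise (Cartesian line) sampling, uses a random policy in place of the learned sampler and a zero-filled inverse FFT in place of the learned reconstructor, and is not taken from the authors' repository.

```python
# Minimal sketch of a sparse-reward sampling episode for Cartesian MRI
# (illustrative only; the policy, reconstructor, and reward are toy stand-ins,
# not the L2SR implementation).
import numpy as np

def zero_filled_recon(kspace, mask):
    """Toy reconstructor: magnitude of the inverse FFT of the masked k-space."""
    return np.abs(np.fft.ifft2(kspace * mask[None, :]))

def run_episode(image, budget, rng):
    """Sample `budget` k-space columns sequentially; reward only at the end."""
    kspace = np.fft.fft2(image)
    n_cols = image.shape[1]
    mask = np.zeros(n_cols)
    for _ in range(budget):
        # A learned sampler would condition on the current zero-filled
        # reconstruction; here we simply pick a random unsampled column.
        candidates = np.flatnonzero(mask == 0)
        mask[rng.choice(candidates)] = 1.0
        # Sparse-reward POMDP: no per-step reconstruction or reward here.
    recon = zero_filled_recon(kspace, mask)
    terminal_reward = -np.mean((recon - image) ** 2)  # single reward at episode end
    return mask, terminal_reward

rng = np.random.default_rng(0)
image = rng.random((64, 64))
mask, reward = run_episode(image, budget=16, rng=rng)
print(f"sampled {int(mask.sum())} columns, terminal reward {reward:.4f}")
```

The sparse-reward formulation is visible in the loop: no reconstruction or reward is computed per step, only once at the end of the trajectory.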
{"title":"L2SR: learning to sample and reconstruct for accelerated MRI via reinforcement learning","authors":"Pu Yang and Bin Dong","doi":"10.1088/1361-6420/ad3b34","DOIUrl":"https://doi.org/10.1088/1361-6420/ad3b34","url":null,"abstract":"Magnetic resonance imaging (MRI) is a widely used medical imaging technique, but its long acquisition time can be a limiting factor in clinical settings. To address this issue, researchers have been exploring ways to reduce the acquisition time while maintaining the reconstruction quality. Previous works have focused on finding either sparse samplers with a fixed reconstructor or finding reconstructors with a fixed sampler. However, these approaches do not fully utilize the potential of joint learning of samplers and reconstructors. In this paper, we propose an alternating training framework for jointly learning a good pair of samplers and reconstructors via deep reinforcement learning. In particular, we consider the process of MRI sampling as a sampling trajectory controlled by a sampler, and introduce a novel sparse-reward partially observed Markov decision process (POMDP) to formulate the MRI sampling trajectory. Compared to the dense-reward POMDP used in existing works, the proposed sparse-reward POMDP is more computationally efficient and has a provable advantage. Moreover, the proposed framework, called learning to sample and reconstruct (L2SR), overcomes the training mismatch problem that arises in previous methods that use dense-reward POMDP. By alternately updating samplers and reconstructors, L2SR learns a pair of samplers and reconstructors that achieve state-of-the-art reconstruction performances on the fastMRI dataset. Codes are available at https://github.com/yangpuPKU/L2SR-Learning-to-Sample-and-Reconstruct.","PeriodicalId":50275,"journal":{"name":"Inverse Problems","volume":"14 1","pages":""},"PeriodicalIF":2.1,"publicationDate":"2024-04-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140806620","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Deep unrolling networks with recurrent momentum acceleration for nonlinear inverse problems
Pub Date: 2024-04-02, DOI: 10.1088/1361-6420/ad35e3
Qingping Zhou, Jiayu Qian, Junqi Tang, Jinglai Li
Combining the strengths of model-based iterative algorithms and data-driven deep learning solutions, deep unrolling networks (DuNets) have become a popular tool for solving inverse imaging problems. Although DuNets have been successfully applied to many linear inverse problems, their performance tends to degrade on nonlinear problems. Inspired by momentum acceleration techniques that are often used in optimization algorithms, we propose a recurrent momentum acceleration (RMA) framework that uses a long short-term memory recurrent neural network (LSTM-RNN) to simulate the momentum acceleration process. The RMA module leverages the ability of the LSTM-RNN to learn and retain knowledge from the previous gradients. We apply RMA to two popular DuNets, the learned proximal gradient descent (LPGD) and the learned primal-dual (LPD) methods, resulting in LPGD-RMA and LPD-RMA, respectively. We provide experimental results on two nonlinear inverse problems: a nonlinear deconvolution problem, and an electrical impedance tomography problem with limited boundary measurements. In the first experiment, we observe that the improvement due to RMA grows substantially with the degree of nonlinearity of the problem. The results of the second example further demonstrate that the RMA schemes can significantly improve the performance of DuNets on strongly ill-posed problems.
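As a rough illustration of the idea, the sketch below inserts an LSTM cell that maps the current gradient to a learned momentum term inside a generic unrolled gradient scheme. The layer sizes, class names (e.g. `RMACell`), and the quadratic test problem are assumptions for illustration, not the LPGD-RMA or LPD-RMA architectures.

```python
# Minimal sketch of recurrent momentum acceleration inside an unrolled
# gradient scheme (illustrative; names and sizes are assumptions).
import torch
import torch.nn as nn

class RMACell(nn.Module):
    """LSTM cell that maps the current gradient to a momentum-like update."""
    def __init__(self, dim):
        super().__init__()
        self.lstm = nn.LSTMCell(dim, dim)
        self.out = nn.Linear(dim, dim)

    def forward(self, grad, state):
        h, c = self.lstm(grad, state)
        return self.out(h), (h, c)

def unrolled_solve(x0, grad_fn, cell, n_iter=10, step=0.1):
    """Unrolled iterations: x_{k+1} = x_k - step * (gradient + learned momentum)."""
    x = x0
    state = (torch.zeros_like(x0), torch.zeros_like(x0))
    for _ in range(n_iter):
        g = grad_fn(x)
        momentum, state = cell(g, state)
        x = x - step * (g + momentum)
    return x

dim = 32
cell = RMACell(dim)
A = torch.randn(dim, dim) * 0.1
y = torch.randn(1, dim)
# Gradient of 0.5 * ||x @ A.T - y||^2 for a row-vector x.
grad_fn = lambda x: x @ A.T @ A - y @ A
x = unrolled_solve(torch.zeros(1, dim), grad_fn, cell, n_iter=5)
print(x.shape)
```

In practice the cell parameters would be trained end to end through the unrolled iterations; the sketch only shows the forward pass.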
{"title":"Deep unrolling networks with recurrent momentum acceleration for nonlinear inverse problems","authors":"Qingping Zhou, Jiayu Qian, Junqi Tang, Jinglai Li","doi":"10.1088/1361-6420/ad35e3","DOIUrl":"https://doi.org/10.1088/1361-6420/ad35e3","url":null,"abstract":"Combining the strengths of model-based iterative algorithms and data-driven deep learning solutions, deep unrolling networks (DuNets) have become a popular tool to solve inverse imaging problems. Although DuNets have been successfully applied to many linear inverse problems, their performance tends to be impaired by nonlinear problems. Inspired by momentum acceleration techniques that are often used in optimization algorithms, we propose a recurrent momentum acceleration (RMA) framework that uses a long short-term memory recurrent neural network (LSTM-RNN) to simulate the momentum acceleration process. The RMA module leverages the ability of the LSTM-RNN to learn and retain knowledge from the previous gradients. We apply RMA to two popular DuNets—the learned proximal gradient descent (LPGD) and the learned primal-dual (LPD) methods, resulting in LPGD-RMA and LPD-RMA, respectively. We provide experimental results on two nonlinear inverse problems: a nonlinear deconvolution problem, and an electrical impedance tomography problem with limited boundary measurements. In the first experiment we have observed that the improvement due to RMA largely increases with respect to the nonlinearity of the problem. The results of the second example further demonstrate that the RMA schemes can significantly improve the performance of DuNets in strongly ill-posed problems.","PeriodicalId":50275,"journal":{"name":"Inverse Problems","volume":"240 1","pages":""},"PeriodicalIF":2.1,"publicationDate":"2024-04-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140561607","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Convergence of non-linear diagonal frame filtering for regularizing inverse problems
Pub Date: 2024-03-26, DOI: 10.1088/1361-6420/ad3333
Andrea Ebner, Markus Haltmeier
Inverse problems are key issues in several scientific areas, including signal processing and medical imaging. Since inverse problems typically suffer from instability with respect to data perturbations, a variety of regularization techniques have been proposed. In particular, the use of filtered diagonal frame decompositions (DFDs) has proven to be effective and computationally efficient. However, existing convergence analysis applies only to linear filters and a few non-linear filters such as soft thresholding. In this paper, we analyze filtered DFDs with general non-linear filters. In particular, our results generalize singular value decomposition-based spectral filtering from linear to non-linear filters as a special case. As a first approach, we establish a connection between non-linear diagonal frame filtering and variational regularization, allowing us to use results from variational regularization to derive the convergence of non-linear spectral filtering. In the second approach, as our main theoretical results, we relax the assumptions involved in the variational case while still deriving convergence. Furthermore, we discuss connections between non-linear filtering and plug-and-play regularization and explore potential benefits of this relationship.
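As a concrete special case of the framework discussed above, the sketch below applies a non-linear (soft-thresholding) filter to SVD coefficients. The particular threshold rule, which corresponds coordinate-wise to an ℓ1 penalty on the SVD coefficients, is a standard choice and is not taken from the paper.

```python
# Minimal sketch of non-linear (soft-thresholding) spectral filtering via the
# SVD, the special case of filtered diagonal frame decompositions mentioned
# in the abstract (illustrative; the threshold rule is a common choice).
import numpy as np

def soft_threshold(z, tau):
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def nonlinear_spectral_filter(A, y, alpha):
    """x = sum_k soft(<y, u_k>/s_k, alpha/s_k^2) v_k,
    the coordinate-wise minimizer of 0.5*||A x - y||^2 + alpha*||V^T x||_1."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    coeffs = (U.T @ y) / s                         # unfiltered (unstable) coefficients
    filtered = soft_threshold(coeffs, alpha / s**2)  # stronger shrinkage for small s_k
    return Vt.T @ filtered

rng = np.random.default_rng(1)
A = rng.standard_normal((40, 20)) @ np.diag(1.0 / np.arange(1, 21))  # mildly ill-conditioned
x_true = np.zeros(20); x_true[:3] = [1.0, -2.0, 0.5]                 # sparse ground truth
y = A @ x_true + 0.01 * rng.standard_normal(40)
x_rec = nonlinear_spectral_filter(A, y, alpha=0.05)
print(np.round(x_rec[:5], 3))
```

Replacing the SVD system with a general frame and the soft-thresholding rule with another non-linear filter gives the setting analyzed in the paper.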
{"title":"Convergence of non-linear diagonal frame filtering for regularizing inverse problems","authors":"Andrea Ebner, Markus Haltmeier","doi":"10.1088/1361-6420/ad3333","DOIUrl":"https://doi.org/10.1088/1361-6420/ad3333","url":null,"abstract":"Inverse problems are key issues in several scientific areas, including signal processing and medical imaging. Since inverse problems typically suffer from instability with respect to data perturbations, a variety of regularization techniques have been proposed. In particular, the use of filtered diagonal frame decompositions (DFDs) has proven to be effective and computationally efficient. However, existing convergence analysis applies only to linear filters and a few non-linear filters such as soft thresholding. In this paper, we analyze filtered DFDs with general non-linear filters. In particular, our results generalize singular value decomposition-based spectral filtering from linear to non-linear filters as a special case. As a first approach, we establish a connection between non-linear diagonal frame filtering and variational regularization, allowing us to use results from variational regularization to derive the convergence of non-linear spectral filtering. In the second approach, as our main theoretical results, we relax the assumptions involved in the variational case while still deriving convergence. Furthermore, we discuss connections between non-linear filtering and plug-and-play regularization and explore potential benefits of this relationship.","PeriodicalId":50275,"journal":{"name":"Inverse Problems","volume":"14 1","pages":""},"PeriodicalIF":2.1,"publicationDate":"2024-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140315439","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Inverse spectral problem for the Schrödinger operator on the square lattice
Pub Date: 2024-03-25, DOI: 10.1088/1361-6420/ad3332
Dongjie Wu, Chuan-Fu Yang, Natalia Pavlovna Bondarenko
We consider an inverse spectral problem on a quantum graph associated with the square lattice. Assuming that the potentials on the edges are compactly supported and symmetric, we show that the Dirichlet-to-Neumann map for a boundary value problem on a finite part of the graph uniquely determines the potentials. We obtain a reconstruction procedure, which is based on the reduction of the differential Schrödinger operator to a discrete one. As a corollary of the main results, it is proved that the S-matrix for all energies in any given open set in the continuous spectrum uniquely specifies the potentials on the square lattice.
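For orientation, the standard quantum-graph conventions behind this statement are sketched below in LaTeX; the edge parametrization and the exact form of the vertex conditions are the usual ones and are assumptions here rather than details quoted from the paper.

```latex
\begin{aligned}
  &-y_e''(x) + q_e(x)\,y_e(x) = \lambda\,y_e(x), \qquad x \in (0,\ell_e), \quad e \in \mathcal{E}
    \quad \text{(Schr\"odinger equation on each edge)},\\
  &y_e(v) = y_{e'}(v) \quad \text{for all edges } e, e' \text{ incident to a vertex } v
    \quad \text{(continuity)},\\
  &\sum_{e \sim v} \partial y_e(v) = 0
    \quad \text{(Kirchhoff condition at interior vertices)},\\
  &\Lambda(\lambda)\colon\; y\big|_{\partial\Omega} \;\longmapsto\; \partial_\nu y\big|_{\partial\Omega}
    \quad \text{(Dirichlet-to-Neumann map of a finite part } \Omega \text{ of the graph)}.
\end{aligned}
```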
{"title":"Inverse spectral problem for the Schrödinger operator on the square lattice","authors":"Dongjie Wu, Chuan-Fu Yang, Natalia Pavlovna Bondarenko","doi":"10.1088/1361-6420/ad3332","DOIUrl":"https://doi.org/10.1088/1361-6420/ad3332","url":null,"abstract":"We consider an inverse spectral problem on a quantum graph associated with the square lattice. Assuming that the potentials on the edges are compactly supported and symmetric, we show that the Dirichlet-to-Neumann map for a boundary value problem on a finite part of the graph uniquely determines the potentials. We obtain a reconstruction procedure, which is based on the reduction of the differential Schrödinger operator to a discrete one. As a corollary of the main results, it is proved that the S-matrix for all energies in any given open set in the continuous spectrum uniquely specifies the potentials on the square lattice.","PeriodicalId":50275,"journal":{"name":"Inverse Problems","volume":"63 1","pages":""},"PeriodicalIF":2.1,"publicationDate":"2024-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140315583","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Towards optimal sensor placement for inverse problems in spaces of measures
Pub Date: 2024-03-25, DOI: 10.1088/1361-6420/ad2cf8
Phuoc-Truong Huynh, Konstantin Pieper, Daniel Walter
The objective of this work is to quantify the reconstruction error in sparse inverse problems with measures and stochastic noise, motivated by optimal sensor placement. To be useful in this context, the error quantities must be explicit in the sensor configuration and robust with respect to the source, yet relatively easy to compute in practice, compared to a direct evaluation of the error by a large number of samples. In particular, we consider the identification of a measure consisting of an unknown linear combination of point sources from a finite number of measurements contaminated by Gaussian noise. The statistical framework for recovery relies on two main ingredients: first, a convex but non-smooth variational Tikhonov point estimator over the space of Radon measures and, second, a suitable mean-squared error based on its Hellinger–Kantorovich distance to the ground truth. To quantify the error, we employ a non-degenerate source condition as well as careful linearization arguments to derive a computable upper bound. This leads to asymptotically sharp error estimates in expectation that are explicit in the sensor configuration. Thus they can be used to estimate the expected reconstruction error for a given sensor configuration and guide the placement of sensors in sparse inverse problems.
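The sketch below illustrates the general flavor of such sensor-configuration-explicit error quantities with a much cruder surrogate: it linearizes a 1D Gaussian-kernel measurement map around known point-source amplitudes and positions and reports sigma^2 * tr((J^T J)^{-1}) as a proxy for the expected reconstruction error. The kernel, the source configuration, and the proxy itself are assumptions for illustration; the paper's bound is based on the Hellinger–Kantorovich distance and a non-degenerate source condition, neither of which is reproduced here.

```python
# Illustrative linearized error proxy for a sensor configuration
# (a crude stand-in for the paper's Hellinger–Kantorovich-based bound).
import numpy as np

def kernel(sensors, source, width=0.1):
    """Gaussian point-spread response of each sensor to a point source."""
    return np.exp(-((sensors - source) ** 2) / (2 * width ** 2))

def d_kernel(sensors, source, width=0.1):
    """Derivative of the response with respect to the source position."""
    return kernel(sensors, source, width) * (sensors - source) / width ** 2

def error_proxy(sensors, positions, amplitudes, sigma=0.01):
    """sigma^2 * trace((J^T J)^{-1}) over the (amplitude, position) parameters."""
    cols = []
    for a, p in zip(amplitudes, positions):
        cols.append(kernel(sensors, p))        # sensitivity to the amplitude
        cols.append(a * d_kernel(sensors, p))  # sensitivity to the position
    J = np.stack(cols, axis=1)
    return sigma ** 2 * np.trace(np.linalg.inv(J.T @ J))

positions, amplitudes = np.array([0.3, 0.7]), np.array([1.0, -0.5])
uniform = np.linspace(0.0, 1.0, 8)
clustered = np.concatenate([np.linspace(0.25, 0.35, 4), np.linspace(0.65, 0.75, 4)])
for name, sensors in [("uniform", uniform), ("clustered near sources", clustered)]:
    print(name, error_proxy(sensors, positions, amplitudes))
```

Like the quantities studied in the paper, the proxy depends only on the sensor configuration (given a reference source), so it can be compared across candidate placements without repeated sampling.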
{"title":"Towards optimal sensor placement for inverse problems in spaces of measures","authors":"Phuoc-Truong Huynh, Konstantin Pieper, Daniel Walter","doi":"10.1088/1361-6420/ad2cf8","DOIUrl":"https://doi.org/10.1088/1361-6420/ad2cf8","url":null,"abstract":"The objective of this work is to quantify the reconstruction error in sparse inverse problems with measures and stochastic noise, motivated by optimal sensor placement. To be useful in this context, the error quantities must be explicit in the sensor configuration and robust with respect to the source, yet relatively easy to compute in practice, compared to a direct evaluation of the error by a large number of samples. In particular, we consider the identification of a measure consisting of an unknown linear combination of point sources from a finite number of measurements contaminated by Gaussian noise. The statistical framework for recovery relies on two main ingredients: first, a convex but non-smooth variational Tikhonov point estimator over the space of Radon measures and, second, a suitable mean-squared error based on its Hellinger–Kantorovich distance to the ground truth. To quantify the error, we employ a non-degenerate source condition as well as careful linearization arguments to derive a computable upper bound. This leads to asymptotically sharp error estimates in expectation that are explicit in the sensor configuration. Thus they can be used to estimate the expected reconstruction error for a given sensor configuration and guide the placement of sensors in sparse inverse problems.","PeriodicalId":50275,"journal":{"name":"Inverse Problems","volume":"6 1","pages":""},"PeriodicalIF":2.1,"publicationDate":"2024-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140316622","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A stochastic ADMM algorithm for large-scale ptychography with weighted difference of anisotropic and isotropic total variation
Pub Date: 2024-03-20, DOI: 10.1088/1361-6420/ad2cfa
Kevin Bui, Zichao (Wendy) Di
Ptychography, a prevalent imaging technique in fields such as biology and optics, poses substantial challenges in its reconstruction process, characterized by nonconvexity and large-scale requirements. This paper presents a novel approach by introducing a class of variational models that incorporate the weighted difference of anisotropic–isotropic total variation. This formulation enables the handling of measurements corrupted by Gaussian or Poisson noise, effectively addressing the nonconvex challenge. To tackle the large-scale nature of the problem, we propose an efficient stochastic alternating direction method of multipliers, which guarantees convergence under mild conditions. Numerical experiments validate the superiority of our approach by demonstrating its capability to successfully reconstruct complex-valued images, especially in recovering the phase components even in the presence of highly corrupted measurements.
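For reference, the sketch below evaluates the weighted difference of anisotropic and isotropic total variation on a small image; the forward-difference discretization and the weight alpha are common choices and are assumptions here, not necessarily the paper's exact setup.

```python
# Minimal sketch of the weighted anisotropic–isotropic total variation (AITV)
# regularizer appearing in the variational models (illustrative discretization).
import numpy as np

def forward_diff(u):
    """Forward differences with replicate boundary handling."""
    ux = np.diff(u, axis=1, append=u[:, -1:])
    uy = np.diff(u, axis=0, append=u[-1:, :])
    return ux, uy

def aitv(u, alpha=0.5):
    """Weighted difference of anisotropic and isotropic TV:
       ||grad u||_1 - alpha * ||grad u||_{2,1}."""
    ux, uy = forward_diff(u)
    anisotropic = np.sum(np.abs(ux) + np.abs(uy))
    isotropic = np.sum(np.sqrt(ux ** 2 + uy ** 2))
    return anisotropic - alpha * isotropic

rng = np.random.default_rng(2)
u = np.zeros((32, 32)); u[8:24, 8:24] = 1.0  # piecewise-constant test image
print(aitv(u, alpha=0.5), aitv(u + 0.05 * rng.standard_normal(u.shape), alpha=0.5))
```

In the paper this nonconvex penalty is combined with Gaussian or Poisson data-fidelity terms and minimized by a stochastic ADMM; the sketch only shows the regularizer itself.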
{"title":"A stochastic ADMM algorithm for large-scale ptychography with weighted difference of anisotropic and isotropic total variation","authors":"Kevin Bui, Zichao (Wendy) Di","doi":"10.1088/1361-6420/ad2cfa","DOIUrl":"https://doi.org/10.1088/1361-6420/ad2cfa","url":null,"abstract":"Ptychography, a prevalent imaging technique in fields such as biology and optics, poses substantial challenges in its reconstruction process, characterized by nonconvexity and large-scale requirements. This paper presents a novel approach by introducing a class of variational models that incorporate the weighted difference of anisotropic–isotropic total variation. This formulation enables the handling of measurements corrupted by Gaussian or Poisson noise, effectively addressing the nonconvex challenge. To tackle the large-scale nature of the problem, we propose an efficient stochastic alternating direction method of multipliers, which guarantees convergence under mild conditions. Numerical experiments validate the superiority of our approach by demonstrating its capability to successfully reconstruct complex-valued images, especially in recovering the phase components even in the presence of highly corrupted measurements.","PeriodicalId":50275,"journal":{"name":"Inverse Problems","volume":"40 1","pages":""},"PeriodicalIF":2.1,"publicationDate":"2024-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140315608","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}