Pub Date: 2025-11-24 | DOI: 10.1109/TCI.2025.3636750
Jun Li;Maohua Wang;Li Xu;Yifan Wu;Yin Gao
Deep learning has significantly advanced motion deblurring. However, most existing methods do not fully exploit the synergy between spatial and frequency domains, and conventional channel attention mechanisms are suboptimal for frequency-domain feature representation, thereby limiting further progress. To address these limitations, we propose integrating spatial deformation and frequency analysis through a Strip Fourier Transform Module (SFTM). SFTM exploits blur characteristics to enhance motion feature extraction. Additionally, we introduce an Instance Frequency-Domain Channel Attention (IFCA) module, which exploits both low- and high-frequency components to extract salient features. The resulting Strip Fourier Transform Network (SFTNet), combining frequency- and spatial-domain techniques, outperforms existing methods by improving deblurring performance while reducing computational complexity. Extensive experiments on benchmark datasets demonstrate that our method consistently achieves superior results in complex scenarios compared to state-of-the-art approaches.
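The strip-wise frequency idea behind the method can be sketched minimally. The toy below is illustrative only (not the paper's SFTM or IFCA modules; the strip height, cutoff fraction, and box-blur kernel are arbitrary choices): horizontal motion blur suppresses high-frequency content along each row, which a per-strip 1D FFT makes directly measurable.

```python
import numpy as np

def strip_fft_features(img, strip_h=4, cutoff=0.25):
    """Split an image into horizontal strips, take a 1D FFT along each
    strip's width, and report mean low- and high-frequency magnitude
    per strip (illustrative sketch, not the paper's SFTM)."""
    H, W = img.shape
    feats = []
    for top in range(0, H - H % strip_h, strip_h):
        strip = img[top:top + strip_h, :]
        spec = np.fft.rfft(strip, axis=1)          # per-row spectrum
        k = max(1, int(cutoff * spec.shape[1]))    # low-band cutoff index
        low = np.abs(spec[:, :k]).mean()           # low-frequency energy
        high = np.abs(spec[:, k:]).mean()          # high-frequency energy
        feats.append((low, high))
    return np.array(feats)

# a horizontally blurred strip loses high-frequency energy vs. a sharp one
rng = np.random.default_rng(0)
sharp = rng.standard_normal((8, 64))
blurred = np.apply_along_axis(
    lambda r: np.convolve(r, np.ones(9) / 9, mode="same"), 1, sharp)
f_sharp = strip_fft_features(sharp)
f_blur = strip_fft_features(blurred)
```

In this toy, the drop in per-strip high-band energy is the kind of blur signature a frequency-domain module can attend to.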
"SFTNet: Strip Fourier Transform Network for Motion Deblurring," IEEE Transactions on Computational Imaging, vol. 12, pp. 37-45.
Pub Date: 2025-11-24 | DOI: 10.1109/TCI.2025.3636751
Hitesh D. Khunti;Bhaskar D. Rao
This paper presents a unified framework that synergistically combines model-based iterative algorithms with deep learning-based approaches for tomographic image reconstruction. In particular, the proposed method integrates the interpretability and adaptability of model-based techniques with the expressive power of deep learning, enabling sophisticated non-linear and data-driven priors that enhance reconstruction quality. This synergy yields a framework that is interpretable, robust, and generalizable, and that produces higher-quality images, effectively addressing key limitations of model-based and learning-based approaches in isolation. First, we show that there is a simple approach to generalizing and accelerating Expectation Maximization algorithms that can adaptively speed up convergence based on individual voxel values. We then introduce a key re-parametrization that enables viewing multiple reconstruction algorithms as special cases of a general mapping function between iterations. Building on these insights, we propose a novel model-based deep neural network architecture that is effectively a generalized deep unrolling of a family of algorithms. The proposed method learns to reconstruct high-quality images by systematically performing the required trade-off across the represented algorithms, or it can learn a specific algorithm through training without compromising its robustness and generalization. Furthermore, to address the scarcity of PET imaging data, the proposed method can be trained in both supervised and self-supervised regimes. Our approach demonstrates superior adaptation with limited training data across varying noise levels, scan durations, and out-of-distribution data. Experimental results show significant improvements in image quality compared to both existing iterative methods and deep learning approaches, while maintaining computational efficiency and theoretical interpretability. Code is publicly available online.
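The accelerated-EM idea can be illustrated with a toy Poisson/MLEM update in which a single global over-relaxation exponent stands in for the paper's adaptive per-voxel acceleration (the system matrix, sizes, and `power` value are arbitrary; `power=1` recovers plain MLEM):

```python
import numpy as np

def em_step(x, A, y, power=1.0):
    """One MLEM multiplicative update for y ≈ Poisson(A x). The global
    over-relaxation exponent 'power' is a toy stand-in for the paper's
    adaptive per-voxel acceleration; power=1 is plain MLEM."""
    ratio = A.T @ (y / (A @ x + 1e-12))    # backprojected data ratio
    sens = A.T @ np.ones(len(y))           # sensitivity (column sums)
    return x * (ratio / sens) ** power

rng = np.random.default_rng(1)
A = rng.uniform(0.0, 1.0, (40, 10))
x_true = rng.uniform(1.0, 5.0, 10)
y = A @ x_true                             # noise-free data for the sketch
x = np.ones(10)
for _ in range(300):
    x = em_step(x, A, y, power=1.2)        # mildly accelerated update
```

The multiplicative form keeps `x` nonnegative at every iterate, which is why exponent-based acceleration is a natural fit for EM-type reconstruction.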
"Learning Generalized Mapping Functions via Deep-Unrolling for PET Image Reconstruction," IEEE Transactions on Computational Imaging, vol. 11, pp. 1654-1667.
To address the challenge of image degradation caused by light scattering in turbid underwater environments, this study proposes an underwater descattering framework integrating polarization physics with low-rank decomposition. First, a dynamic attenuation-regularized low-rank decomposition model is established, enabling adaptive parameter adjustment to separate background scattered light from target signals. Then, a nonlinear correlation equation for polarization-driven transmittance based on the Beer-Lambert law is developed; it is combined with isotropic intensity-attenuation transmittance through an adaptive weighting mechanism to form a composite transmittance that better matches underwater optical and physical characteristics. Finally, a dual-constraint optimization architecture is designed to effectively suppress descattering noise. Experimental results demonstrate that our method achieves better imaging results and significant improvements in key metrics. This research establishes an innovative “underwater imaging-cross-domain migration” paradigm for scattering-environment imaging, showing promising applications in marine exploration and intelligent navigation.
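The polarization physics the framework builds on can be sketched with the classic two-polarizer descattering model (a Schechner-style baseline, not the paper's low-rank method; `p_scat` and `A_inf` are assumed known here, whereas estimating transmittance adaptively is the paper's contribution):

```python
import numpy as np

def polar_descatter(I_max, I_min, p_scat=0.9, A_inf=1.0):
    """Two-polarizer descattering sketch: backscatter is partially
    polarized (degree p_scat), the target signal is not. Transmittance
    follows the Beer-Lambert relation t = 1 - B / A_inf."""
    I_total = I_max + I_min
    B = (I_max - I_min) / p_scat              # estimated backscatter
    t = np.clip(1.0 - B / A_inf, 0.05, 1.0)   # transmittance estimate
    J = (I_total - B) / t                     # descattered radiance
    return J, t

# synthetic scene obeying the model exactly (p = 0.9, A_inf = 1)
rng = np.random.default_rng(2)
J_true = rng.uniform(0.2, 0.8, (16, 16))
t_true = rng.uniform(0.3, 0.9, (16, 16))
B_true = 1.0 * (1.0 - t_true)
I_max = 0.5 * J_true * t_true + 0.5 * B_true * 1.9   # (1+p)/2 = 0.95
I_min = 0.5 * J_true * t_true + 0.5 * B_true * 0.1   # (1-p)/2 = 0.05
J_hat, t_hat = polar_descatter(I_max, I_min, p_scat=0.9, A_inf=1.0)
```

On model-consistent data the recovery is exact; real turbid scenes violate these assumptions, which motivates the paper's adaptive weighting and low-rank separation.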
"Low-Rank Decomposition and Polarization-Driven Transmittance Synergy for Underwater Descattering With Cross-Domain Generalization," Weifeng Kong;Guanying Huo;Chao Peng;Yong Su;Zhen Cheng, IEEE Transactions on Computational Imaging, vol. 11, pp. 1668-1681, 2025-11-24. DOI: 10.1109/TCI.2025.3636757
In medical and industrial measurements, limited-angle reconstruction is an ill-posed problem in computed tomography (CT). CT images reconstructed with conventional analytical algorithms exhibit structural distortions and artifacts. Recently, deep learning-based methods have been used to address the limited-angle reconstruction problem. However, most of these methods are supervised and require full-scan CT images for training, and such training labels are often unavailable in real-world scenarios. In this work, we propose a self-supervised limited-angle reconstruction method for CT that entirely eliminates the reliance on external supervision signals. It requires only limited-angle data acquired via a limited-angle acquisition protocol, enabling computationally efficient and rapid reconstruction. The method uses the reconstruction results from Tuy's inversion method as training labels. To bridge the distribution gap between the regions that satisfy and fail to satisfy Tuy's data sufficiency condition, a dedicated data synthesis process is designed. The method was validated using both numerical simulations and real experimental data. Results demonstrate that the proposed method effectively suppresses limited-angle artifacts without any full-scan CT labels, and by visual inspection its performance approaches that of supervised methods. The proposed method is also computationally efficient, enabling real-time limited-angle CT reconstruction.
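The core of label-free reconstruction — fitting an image against incomplete measurements only — can be sketched with a masked FFT standing in for the limited-angle CT projector (an illustrative substitute; the paper's cone-beam forward operator and Tuy-based labels are not reproduced, and the masked band is an arbitrary choice):

```python
import numpy as np

# Self-supervision from incomplete data: enforce fidelity only on the
# measured coefficients; nothing constrains the missing band.
rng = np.random.default_rng(3)
x_true = rng.standard_normal((32, 32))
mask = np.ones((32, 32), dtype=bool)
mask[:, 12:21] = False                              # "missing angles" band
y = np.fft.fft2(x_true, norm="ortho")[mask]         # measured coefficients

x = np.zeros((32, 32))
for _ in range(20):
    r = np.zeros((32, 32), complex)
    r[mask] = np.fft.fft2(x, norm="ortho")[mask] - y   # residual on data only
    x = x - np.real(np.fft.ifft2(r, norm="ortho"))     # adjoint gradient step
```

The fit becomes exactly consistent with the measured coefficients, while the missing band stays unconstrained — the ambiguity a learned prior (here, the paper's synthesized Tuy-consistent labels) must resolve.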
"TIF-Net: Self-Supervised Net via Tuy's Inversion Formula for Limited Angle Reconstruction," Guojun Zhu;Xinyun Zhong;Wenhui Huang;Guotao Quan;Yan Xi;Shipeng Xie;Yikun Zhang;Xu Ji;Yang Chen, IEEE Transactions on Computational Imaging, vol. 11, pp. 1559-1571, 2025-10-31. DOI: 10.1109/TCI.2025.3627134
Pub Date: 2025-10-31 | DOI: 10.1109/TCI.2025.3627065
Bailin Zhuang;Li Liu;Jinxiang Du;Lei Zhong;Haoyang Liang;Ming Gong;Qihang Zhang;Honggang Gu;Shiyuan Liu
Axial misalignment, one of the most critical systematic errors in ptychographic imaging systems, may cause inconsistencies between the inversion reconstruction and the physical experiment, leading to reconstruction artifacts. Here, we propose a precise and robust autofocused ptychographic imaging algorithm based on a mid-frequency discrete cosine transform operator that assesses image sharpness across virtual planes within the depth of field. This algorithm demonstrates unprecedented sensitivity to defocus blur, enabling it to escape from local minima and suppress convergence oscillations without the need for complex cross-domain computations. Both simulations and experiments on amplitude and biological specimens indicate that the proposed approach stably converges within an uncertainty on the order of the depth of field, effectively eliminating reconstruction artifacts, and delivers a several-fold to orders-of-magnitude improvement in convergence speed, calibration accuracy, and uncertainty compared to conventional autofocused ptychographic imaging algorithms based on total-variation models. Furthermore, its simple-to-implement architecture ensures excellent compatibility with diverse ptychographic reconstruction frameworks, significantly expanding its applicability to a wide range of coherent diffraction imaging techniques, such as multi-plane phase retrieval, in-line holography, and coherent tomography.
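A mid-frequency DCT sharpness metric can be sketched directly: defocus blur drains mid-band DCT energy, so the metric peaks at the in-focus plane. This is a toy (the band limits and the box-blur model of defocus are arbitrary assumptions, not the paper's calibrated operator):

```python
import numpy as np

def dct_mat(N):
    """Unnormalized DCT-II basis matrix, C[k, n] = cos(pi*(2n+1)*k/(2N))."""
    n = np.arange(N)
    return np.cos(np.pi * (2 * n + 1) * n[:, None] / (2 * N))

def mid_freq_sharpness(img, lo=0.15, hi=0.5):
    """Fraction of 2D DCT energy in a mid-frequency band; band limits
    lo/hi are illustrative choices."""
    N, M = img.shape
    D = dct_mat(N) @ img @ dct_mat(M).T            # 2D DCT-II
    u, v = np.meshgrid(np.arange(M) / M, np.arange(N) / N)
    band = (u + v > 2 * lo) & (u + v < 2 * hi)
    return np.sum(D[band] ** 2) / np.sum(D ** 2)

# defocus (here: separable box blur) drains mid-band energy
rng = np.random.default_rng(4)
sharp = rng.standard_normal((32, 32))
kern = np.ones(5) / 5
blur = np.apply_along_axis(lambda r: np.convolve(r, kern, "same"), 1, sharp)
blur = np.apply_along_axis(lambda c: np.convolve(c, kern, "same"), 0, blur)
```

An autofocus loop would evaluate this metric on reconstructions propagated to candidate axial planes and keep the maximizer.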
"Autofocused Ptychographic Imaging Based on Mid-Frequency Discrete Cosine Transform," IEEE Transactions on Computational Imaging, vol. 11, pp. 1644-1653.
Pub Date: 2025-10-28 | DOI: 10.1109/TCI.2025.3626235
Yufeng Liu;Yong Wang;Mei Liu
Inverse Synthetic Aperture Radar (ISAR) imaging faces challenges in cross-range scaling for maneuvering targets due to nonuniform rotation, which complicates the estimation of rotational parameters. Though traditional methods are theoretically effective, they are constrained by high computational complexity, limiting real-time applications. Meanwhile, the monopulse technique, which is derived from monopulse radar and known for high angular measurement accuracy, has been integrated with ISAR to enhance imaging performance. However, traditional methods directly apply the monopulse technique after ISAR imaging, which is prone to inducing angle glint phenomena that degrade the accuracy of scatterer projection. This paper integrates the monopulse technique into the ISAR imaging process itself and proposes a novel cross-range scaling method for maneuvering targets. By establishing a relationship between the monopulse angle and the ISAR equivalent rotation angle, the method decouples the estimation of Equivalent Rotation Velocity (ERV) and Equivalent Rotation Acceleration (ERA): amplitude information is used to derive the ERV, while phase information is used to obtain the ERA, which significantly reduces computational complexity. The proposed algorithm offers high estimation accuracy and is well-suited for real-time, high-precision applications. Simulation results validate the effectiveness of the method and demonstrate its potential to enhance ISAR cross-range scaling for maneuvering targets.
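The monopulse principle the paper builds on can be sketched with amplitude-comparison monopulse: two squinted beams observe the same target, and near boresight the difference-to-sum ratio is nearly linear in the off-boresight angle. The Gaussian beam shape, squint, and slope calibration below are illustrative assumptions, not the paper's processor:

```python
import numpy as np

def beam(theta, offset, width=2.0):
    """Gaussian beam gain pattern (illustrative shape, not a real antenna)."""
    return np.exp(-((theta - offset) / width) ** 2)

def monopulse_angle(v1, v2, slope):
    """Amplitude-comparison monopulse: invert the difference/sum ratio
    through a slope calibrated from the known beam patterns."""
    return (v1 - v2) / (v1 + v2) / slope

squint = 1.0                      # beam squint about boresight
eps = 1e-3                        # calibrate the ratio slope near boresight
slope = ((beam(eps, squint) - beam(eps, -squint))
         / (beam(eps, squint) + beam(eps, -squint)) / eps)

theta_true = 0.3
v1, v2 = beam(theta_true, squint), beam(theta_true, -squint)
theta_hat = monopulse_angle(v1, v2, slope)
```

For these Gaussian beams the ratio is tanh(theta/2), so the linear inversion is accurate close to boresight and degrades gracefully farther out.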
"A Novel ISAR Imaging Scaling Approach for Maneuvering Targets Based on Monopulse Radar," IEEE Transactions on Computational Imaging, vol. 11, pp. 1521-1533.
Pub Date: 2025-10-27 | DOI: 10.1109/TCI.2025.3626239
Dongrui Dai;Xiang Zou;Wuliang Shi;Yuxiang Xing
Computed Laminography (CL) has significant advantages in 3D imaging of plate-like objects. However, interlayer aliasing artifacts due to incomplete data acquisition limit its application. In this work, we propose a Two-stage Anisotropic Gaussian Splatting method for CL reconstruction (TAG-Splat). Specifically, the scanned object is modeled as a series of 3D Gaussian kernels with learnable parameters. In the first stage, we employ the FDK algorithm to obtain a preliminary analytical reconstruction, with which we fit a Gaussian kernel representation (GKR) in a supervised manner. In the second stage, we model the cone-beam X-ray scanner as a pinhole camera and apply the differentiable rasterization technique from 3D Gaussian Splatting (3DGS) to generate rendered projections of the GKR at arbitrary angles. To accommodate various CL imaging geometries, we incorporate a virtual detector plane and establish a mapping to the real projection data so that the camera-model conditions are satisfied. Additionally, we design a novel anisotropic Gaussian regularization tailored to the characteristics of CL, which effectively suppresses aliasing artifacts and restores axial resolution. The Gaussian parameters are iteratively optimized according to data fidelity, and the volumetric reconstruction is finally obtained through voxelization of the Gaussians. Both simulations and real experiments on rotational CL demonstrate that the proposed TAG-Splat achieves reconstruction performance superior to traditional analytical and iterative methods.
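The final voxelization step admits a direct sketch: evaluate a weighted sum of anisotropic 3D Gaussians on a voxel grid. This is a toy stand-in for the paper's pipeline (no differentiable rasterization or parameter optimization; the grid, kernel count, and covariances are arbitrary):

```python
import numpy as np

def voxelize_gaussians(means, inv_covs, weights, grid):
    """Evaluate a sum of weighted anisotropic 3D Gaussian kernels at the
    given grid points (grid: (n_voxels, 3) array of coordinates)."""
    vol = np.zeros(len(grid))
    for mu, P, w in zip(means, inv_covs, weights):
        d = grid - mu                                        # offsets
        vol += w * np.exp(-0.5 * np.einsum("ni,ij,nj->n", d, P, d))
    return vol

# toy 8^3 grid and two kernels, one elongated along z (anisotropic)
ax = np.linspace(-1, 1, 8)
grid = np.stack(np.meshgrid(ax, ax, ax, indexing="ij"), -1).reshape(-1, 3)
means = np.array([[0.0, 0.0, 0.0], [0.5, 0.5, 0.5]])
inv_covs = [np.diag([25.0, 25.0, 1.0]), np.diag([25.0, 25.0, 25.0])]
weights = [1.0, 0.5]
vol = voxelize_gaussians(means, inv_covs, weights, grid).reshape(8, 8, 8)
```

Making the inverse covariances learnable and penalizing their anisotropy along the under-sampled axis is, roughly, where a CL-specific regularizer would act.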
"TAG-Splat: Two-Stage Anisotropic Gaussian Splatting for CL Reconstruction," IEEE Transactions on Computational Imaging, vol. 11, pp. 1572-1584.
Pub Date: 2025-10-23 | DOI: 10.1109/TCI.2025.3625054
Yu Gong;Yicheng Jiang;Zitao Liu;Yun Zhang
With the advancement of modern radar systems, there are increasingly stringent requirements for reconstructing Inverse Synthetic Aperture Radar (ISAR) images from gap missing sampling (GMS) data. Compressed sensing (CS), while a conventional approach for sparse reconstruction, suffers from inherent discrete-dictionary mismatch issues that degrade reconstruction accuracy. Matrix completion (MC) methods, leveraging the low-rank properties of matrices, avoid the grid mismatch problem by directly recovering the missing data. Although existing Hankel transformation methods can address GMS reconstruction, they remain computationally slow. To achieve fast ISAR imaging with azimuth GMS, we propose a fast imaging algorithm based on a structured Toeplitz matrix, which exploits the enhanced low-rank property conferred by the data structure. Numerical simulations reveal that the Toeplitz transformation achieves superior accuracy relative to the Hankel transformation. For high-efficiency, high-precision image reconstruction, we further develop a reconstruction algorithm based on the fast Alternating Direction Method of Multipliers (ADMM). In contrast to the SLR+S algorithm using the Hankel transformation, our proposed algorithm significantly reduces computational time while maintaining reconstruction accuracy. Finally, experimental results further validate the effectiveness of the proposed algorithm, providing substantial support for its engineering applications.
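The structured low-rank idea can be sketched with a Cadzow-style alternating projection: lift the gapped azimuth signal to a Toeplitz matrix, truncate its rank, project back by diagonal averaging, and re-impose the measured samples. This is a generic stand-in, not the paper's fast ADMM solver or the SLR+S baseline; the signal model, window length `L`, and iteration count are illustrative:

```python
import numpy as np

def toeplitz_complete(s_meas, known, rank, L, iters=200):
    """Fill a sampling gap via low-rank Toeplitz structured projection.
    s_meas: signal with arbitrary values in the gap; known: bool mask of
    measured samples. T[i, j] = s[L-1+i-j] is the Toeplitz lift."""
    N = len(s_meas)
    s = s_meas.copy()
    i, j = np.meshgrid(np.arange(N - L + 1), np.arange(L), indexing="ij")
    idx = L - 1 + i - j
    for _ in range(iters):
        U, sv, Vh = np.linalg.svd(s[idx], full_matrices=False)
        T = (U[:, :rank] * sv[:rank]) @ Vh[:rank]   # rank-r truncation
        num = np.zeros(N, complex)
        cnt = np.zeros(N)
        np.add.at(num, idx.ravel(), T.ravel())      # diagonal averaging
        np.add.at(cnt, idx.ravel(), 1.0)
        s = num / cnt
        s[known] = s_meas[known]                    # keep measured data
    return s

# single complex exponential with a 6-sample azimuth gap
n = np.arange(32)
s_true = np.exp(1j * 0.7 * n)
known = np.ones(32, bool)
known[12:18] = False
s_rec = toeplitz_complete(np.where(known, s_true, 0), known, rank=1, L=12)
```

A sum of p scatterer exponentials lifts to a rank-p Toeplitz matrix, which is the low-rank property the reconstruction exploits.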
"Fast ISAR Imaging Algorithm for Azimuth Gapped Data Based on Structured Toeplitz Matrix," IEEE Transactions on Computational Imaging, vol. 11, pp. 1548-1558.
Pub Date: 2025-10-23 | DOI: 10.1109/TCI.2025.3625052
Tao Hong;Zhaoyi Xu;Se Young Chun;Luis Hernandez-Garcia;Jeffrey A. Fessler
In compressed sensing (CS) MRI, model-based methods are pivotal to achieving accurate reconstruction. One of the main challenges in model-based methods is finding an effective prior to describe the statistical distribution of the target image. Plug-and-Play (PnP) and REgularization by Denoising (RED) are two general frameworks that use denoisers as the prior. While PnP/RED methods with convolutional neural network (CNN) based denoisers outperform classical hand-crafted priors in CS MRI, their convergence theory relies on assumptions that do not hold for practical CNN models. The recently developed gradient-driven denoisers offer a framework that bridges the gap between practical performance and theoretical guarantees. However, the numerical solvers for the associated minimization problem remain slow for CS MRI reconstruction. This paper proposes a complex quasi-Newton proximal method that achieves faster convergence than existing approaches. To address the complex domain in CS MRI, we propose a modified Hessian estimation method that guarantees Hermitian positive definiteness. Furthermore, we provide a rigorous convergence analysis of the proposed method for nonconvex settings. Numerical experiments on both Cartesian and non-Cartesian sampling trajectories demonstrate the effectiveness and efficiency of our approach.
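The baseline such proximal methods accelerate can be sketched as plain complex-domain ISTA for min_x 0.5||Ax - y||^2 + lam*||x||_1, with complex soft-thresholding as the proximal map (a first-order sketch, not the paper's quasi-Newton scheme or its gradient-driven denoiser; the sensing matrix, sparsity, and lam are arbitrary):

```python
import numpy as np

def soft_complex(z, tau):
    """Complex soft-thresholding: shrink magnitudes, preserve phases."""
    mag = np.abs(z)
    return np.where(mag > tau, (1 - tau / np.maximum(mag, 1e-12)) * z, 0)

def ista(A, y, lam, iters=500):
    """Proximal-gradient (ISTA) in the complex domain with step 1/L,
    where L is the Lipschitz constant of the data-term gradient."""
    L = np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1], complex)
    for _ in range(iters):
        grad = A.conj().T @ (A @ x - y)     # Wirtinger gradient of data term
        x = soft_complex(x - grad / L, lam / L)
    return x

rng = np.random.default_rng(5)
A = (rng.standard_normal((60, 100))
     + 1j * rng.standard_normal((60, 100))) / np.sqrt(120)
x_true = np.zeros(100, complex)
x_true[[7, 30, 71]] = [2.0, -1.5j, 1.0 + 1.0j]
y = A @ x_true
x_hat = ista(A, y, lam=0.01)
```

Replacing the fixed 1/L step with a Hermitian positive definite Hessian estimate is, roughly, what upgrades this scheme toward the paper's quasi-Newton proximal method.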
{"title":"Convergent Complex Quasi-Newton Proximal Methods for Gradient-Driven Denoisers in Compressed Sensing MRI Reconstruction","authors":"Tao Hong;Zhaoyi Xu;Se Young Chun;Luis Hernandez-Garcia;Jeffrey A. Fessler","doi":"10.1109/TCI.2025.3625052","DOIUrl":"https://doi.org/10.1109/TCI.2025.3625052","url":null,"abstract":"In compressed sensing (CS) MRI, model-based methods are pivotal to achieving accurate reconstruction. One of the main challenges in model-based methods is finding an effective prior to describe the statistical distribution of the target image. Plug-and-Play (PnP) and REgularization by Denoising (RED) are two general frameworks that use denoisers as the prior. While PnP/RED methods with convolutional neural network (CNN) based denoisers outperform classical hand-crafted priors in CS MRI, their convergence theory relies on assumptions that do not hold for practical CNN models. The recently developed gradient-driven denoisers offer a framework that bridges the gap between practical performance and theoretical guarantees. However, the numerical solvers for the associated minimization problem remain slow for CS MRI reconstruction. This paper proposes a complex quasi-Newton proximal method that achieves faster convergence than existing approaches. To address the complex domain in CS MRI, we propose a modified Hessian estimation method that guarantees Hermitian positive definiteness. Furthermore, we provide a rigorous convergence analysis of the proposed method for nonconvex settings. 
Numerical experiments on both Cartesian and non-Cartesian sampling trajectories demonstrate the effectiveness and efficiency of our approach.","PeriodicalId":56022,"journal":{"name":"IEEE Transactions on Computational Imaging","volume":"11 ","pages":"1534-1547"},"PeriodicalIF":4.8,"publicationDate":"2025-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145455811","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Neural-network-based approaches have shown promising performance in the field of electrical impedance tomography (EIT), but their effectiveness in multi-resolution reconstruction tasks within this domain still requires further validation. We introduce MR-EIT, a dual-mode (data-driven and unsupervised) multi-resolution method for EIT reconstruction. MR-EIT integrates an ordered feature extraction module and an unordered coordinate feature expression module. The former learns the mapping from voltage to two-dimensional conductivity features through pre-training, while the latter realizes multi-resolution reconstruction independent of the order and size of the input sequence by utilizing symmetric functions and local feature extraction mechanisms. In the data-driven mode, MR-EIT reconstructs high-resolution images from low-resolution data of finite element meshes through two stages of pre-training and joint training and demonstrates excellent performance in simulation experiments. In the unsupervised learning mode, MR-EIT does not require pre-training data and performs iterative optimization solely based on measured voltages to rapidly achieve image reconstruction from low to high resolution. It shows robustness to noise and efficient super-resolution reconstruction capabilities in both simulation and real water tank experiments. Experimental results indicate that MR-EIT outperforms the comparison methods in terms of Structural Similarity (SSIM) and Relative Image Error (RIE), especially in the unsupervised learning mode, where it can significantly reduce the number of iterations and improve image reconstruction quality.
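The abstract's claim that reconstruction is "independent of the order and size of the input sequence by utilizing symmetric functions" can be sketched with a PointNet-style aggregation over mesh-node coordinates: a per-point network plus a permutation-invariant pool lets the same weights be evaluated on meshes of any resolution. The layer sizes, the max-pool choice, and random weights below are illustrative assumptions, not the MR-EIT architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((2, 16)), np.zeros(16)   # per-point MLP (toy weights)
W2, b2 = rng.standard_normal((32, 1)), np.zeros(1)    # head on [local, global] features

def predict(coords):
    """coords: (N, 2) node positions -> (N,) conductivity estimates.

    Works for any N and is equivariant to input ordering, because the
    global feature g is produced by a symmetric function (max over points).
    """
    h = np.tanh(coords @ W1 + b1)                     # local, per-point features
    g = h.max(axis=0)                                 # symmetric: order-invariant
    z = np.concatenate([h, np.tile(g, (len(h), 1))], axis=1)
    return (z @ W2 + b2).ravel()
```

Because `predict` accepts any number of coordinates, a low-resolution finite-element mesh and a fine evaluation grid can share the same trained weights, which is the mechanism behind resolution-independent reconstruction.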
{"title":"MR-EIT: Multi-Resolution Reconstruction for Electrical Impedance Tomography via Data-Driven and Unsupervised Dual-Mode Neural Networks","authors":"Fangming Shi;Jinzhen Liu;Xiangqian Meng;Yapeng Zhou;Hui Xiong","doi":"10.1109/TCI.2025.3625049","DOIUrl":"https://doi.org/10.1109/TCI.2025.3625049","url":null,"abstract":"Neural-network-based approaches have shown promising performance in the field of electrical impedance tomography (EIT), but their effectiveness in multi-resolution reconstruction tasks within this domain still requires further validation. We introduce MR-EIT, a dual-mode (data-driven and unsupervised) multi-resolution method for EIT reconstruction. MR-EIT integrates an ordered feature extraction module and an unordered coordinate feature expression module. The former learns the mapping from voltage to two-dimensional conductivity features through pre-training, while the latter realizes multi-resolution reconstruction independent of the order and size of the input sequence by utilizing symmetric functions and local feature extraction mechanisms. In the data-driven mode, MR-EIT reconstructs high-resolution images from low-resolution data of finite element meshes through two stages of pre-training and joint training and demonstrates excellent performance in simulation experiments. In the unsupervised learning mode, MR-EIT does not require pre-training data and performs iterative optimization solely based on measured voltages to rapidly achieve image reconstruction from low to high resolution. It shows robustness to noise and efficient super-resolution reconstruction capabilities in both simulation and real water tank experiments. 
Experimental results indicate that MR-EIT outperforms the comparison methods in terms of Structural Similarity (SSIM) and Relative Image Error (RIE), especially in the unsupervised learning mode, where it can significantly reduce the number of iterations and improve image reconstruction quality.","PeriodicalId":56022,"journal":{"name":"IEEE Transactions on Computational Imaging","volume":"12 ","pages":"1-10"},"PeriodicalIF":4.8,"publicationDate":"2025-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145766169","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
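The Relative Image Error (RIE) reported above is commonly defined as the normalized L2 distance between the reconstructed and ground-truth conductivity; the paper may use a slightly different normalization, so this sketch shows only the standard form.

```python
import numpy as np

def relative_image_error(sigma_rec, sigma_true):
    """RIE = ||sigma_rec - sigma_true||_2 / ||sigma_true||_2 (common definition)."""
    return np.linalg.norm(sigma_rec - sigma_true) / np.linalg.norm(sigma_true)
```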