Pub Date : 2026-01-19
DOI: 10.1109/TCI.2026.3655460
Tao Liu;Yu Wang;Biao Tian;Shiyou Xu;Zengping Chen
Effective and accurate translational motion compensation is crucial for inverse synthetic aperture radar (ISAR) imaging. Traditional data-based translational motion compensation methods are inapplicable to uniformly accelerated targets in low signal-to-noise ratio (SNR) scenarios. In this paper, a parametric approach is proposed based on a joint modified phase difference and two-step least squares (MPD-TSLS) scheme. The method employs a second-order polynomial model for translational motion, whereby the MPD operation concentrates the energy of all scatterers into a single range cell and a single Doppler cell, respectively. The acceleration and velocity estimates are then obtained by TSLS. To enhance precision, a least-squares (LS) optimization step that refines the polynomial parameters is also employed. The proposed method achieves a substantial increase in SNR, ensuring precise compensation while maintaining high computational efficiency by relying exclusively on the fast Fourier transform (FFT) and matrix operations. Experimental results obtained from both simulated and real datasets verify that the proposed method outperforms the other implemented methods.
{"title":"Joint Translational Motion Compensation of ISAR Imaging for Uniformly Accelerated Motion Targets Based on MPD-TSLS Under Low SNR","authors":"Tao Liu;Yu Wang;Biao Tian;Shiyou Xu;Zengping Chen","doi":"10.1109/TCI.2026.3655460","DOIUrl":"https://doi.org/10.1109/TCI.2026.3655460","url":null,"abstract":"Effective and accurate translational motion compensation is crucial for inverse synthetic aperture radar (ISAR) imaging. Traditional data-based translational motion compensation methods are inapplicable to uniformly accelerated targets in low signal-to-noise ratio (SNR) scenarios. In this paper, a parametric approach is proposed based on joint modified phase difference and two-step least squares (MPD-TSLS). The method employs a second-order polynomial model for translational motion, whereby the energy of all scatterers is converted to a single range and Doppler cell through MPD operation, respectively. The estimated acceleration and velocity are then obtained by TSLS. To enhance precision, an optimization approach based on LS for refining polynomial parameters is also employed. The proposed method achieves a substantial increase in SNR, ensuring precise compensation accuracy while maintaining high computational efficiency by relying exclusively on fast Fourier transform (FFT) and matrix operations. The experimental results obtained from both simulated and real datasets fully verify that the proposed method exhibits superior performance compared with the other implemented methods.","PeriodicalId":56022,"journal":{"name":"IEEE Transactions on Computational Imaging","volume":"12 ","pages":"349-363"},"PeriodicalIF":4.8,"publicationDate":"2026-01-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146082115","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Dual-energy material decomposition is widely used in clinical diagnosis, especially for material characterization. However, conventional image-domain methods suffer from noise amplification, which reduces the signal-to-noise ratio and compromises diagnostic accuracy. Although deep learning approaches have shown significant progress, they often require high-quality paired or unpaired labels, limiting their clinical application. To address these issues, this work explores the feasibility of weakly supervised methods and proposes a denoising-prior-guided weakly supervised learning framework, DPD-DEMD, to achieve high-accuracy image-domain dual-energy material decomposition. DPD-DEMD utilizes pretrained CT denoising models to construct robust priors for the dual-energy material decomposition task. Furthermore, we propose an adaptive confidence mask mechanism for pseudo-label generation and a multi-prior fusion strategy, thereby substantially improving the stability and reliability of the weakly supervised learning process. In addition, we fully exploit the correlation between dual-energy images and further propose a global-local regularization loss to improve the material decomposition accuracy. Extensive experiments conducted on both simulated and clinical datasets verify the superior performance and robustness of the proposed method, demonstrating its potential clinical value in material decomposition.
{"title":"DPD-DEMD: Denoising Prior Guided Weakly Supervised Image Domain Dual-Energy Material Decomposition","authors":"Xinyun Zhong;Xu Zhuo;Tianling Lyu;Yikun Zhang;Qianjin Feng;Guotao Quan;Xu Ji;Yang Chen","doi":"10.1109/TCI.2026.3654807","DOIUrl":"https://doi.org/10.1109/TCI.2026.3654807","url":null,"abstract":"Dual-energy material decomposition is widely used in clinical diagnosis, especially for material characterization. However, conventional image-domain methods suffer from noise amplification, thus reducing signal-to-noise ratio and compromising diagnostic accuracy. Although deep learning approaches have shown significant progress, they often require high-quality paired or unpaired labels, limiting their clinical application. To address these issues, this work explores the feasibility of weakly supervised methods and proposes a denoising prior guided weakly supervised learning framework, DPD-DEMD, to achieve high-accuracy image-domain dual-energy material decomposition. DPD-DEMD utilizes pretrained CT denoising models to construct robust priors for our dual energy material decomposition task. Furthermore, we propose an adaptive confidence mask mechanism for pseudo label generation and a multi-prior fusion strategy, thereby substantially improving the stability and reliability of the weakly supervised learning process. In addition, we fully exploit the correlation between dual energy images and further propose global-local regularization loss to improve the material decomposition accuracy. Extensive experiments conducted on both simulated and clinical datasets verify the superior performance and robustness of the proposed method, thereby demonstrating its potential clinical value in material decomposition.","PeriodicalId":56022,"journal":{"name":"IEEE Transactions on Computational Imaging","volume":"12 ","pages":"334-348"},"PeriodicalIF":4.8,"publicationDate":"2026-01-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146082092","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Virtual monoenergetic images (VMIs), reconstructed from dual-energy CT (DECT) by capturing photon attenuation data at two distinct energy levels, can reduce beam-hardening artifacts and provide more quantitatively accurate attenuation measurements. Data-driven deep learning approaches have demonstrated the feasibility of synthesizing VMIs from conventional single-energy CT (SECT) scans. However, the lack of incorporation of physics-related information in such methods compromises their interpretability and robustness. Here we propose a novel hybrid data-driven framework that synergizes convolutional neural networks with physics-based material decomposition derived from DECT principles. This approach directly yields high-quality VMIs across various keV levels from SECT acquisitions. Through rigorous validation on 130 clinical cases spanning diverse anatomical regions and pathological conditions, our method demonstrates significant improvements over conventional purely data-driven approaches, as evidenced by enhanced anatomical visualization and superior performance on quantitative metrics. By eliminating dependence on DECT hardware while maintaining computational efficiency and incorporating physics-guided constraints, our framework leverages the widespread availability of SECT to provide a cost-effective, high-performance solution for diagnostic imaging in routine clinical practice.
{"title":"Data-Driven Multi-keV Virtual Monoenergetic Images Generation From Single-Energy CT Guided by Image-Domain Material Decomposition","authors":"Wenwen Zhang;Zihan Chai;Yantao Niu;Zhijie Zhang;Linxuan Li;Baohua Sun;Junfang Xian;Wei Zhao","doi":"10.1109/TCI.2026.3653309","DOIUrl":"https://doi.org/10.1109/TCI.2026.3653309","url":null,"abstract":"Virtual monoenergetic images (VMIs), reconstructed from dual-energy CT (DECT) by capturing photon attenuation data at two distinct energy levels, can reduce beam-hardening artifacts and provide more quantitatively accurate attenuation measurements. Data-driven deep learning approaches have demonstrated the feasibility of synthesizing VMIs from conventional single-energy CT (SECT) scans. However, the lack of incorporation of physics-related information in such methods compromises their interpretability and robustness. Here we propose a novel hybrid data-driven framework that synergizes convolutional neural networks with physics-based material decomposition derived from DECT principles. This approach directly yields high-quality VMIs across various keV levels from SECT acquisitions. Through rigorous validation on 130 clinical cases spanning diverse anatomical regions and pathological conditions, our method demonstrates significant improvements over conventional purely data-driven approaches, as evidenced by enhanced anatomical visualization and superior performance on quantitative metrics. By eliminating dependence on DECT hardware while maintaining computational efficiency and incorporating physics-guided constraints, our framework leverages the widespread availability of SECT to provide a cost-effective, high-performance solution for diagnostic imaging in routine clinical practice.","PeriodicalId":56022,"journal":{"name":"IEEE Transactions on Computational Imaging","volume":"12 ","pages":"321-333"},"PeriodicalIF":4.8,"publicationDate":"2026-01-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146026483","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Hyperspectral images (HSIs) are applicable in many fields owing to their ability to discriminate different materials. However, collecting HSIs usually requires expensive hardware and long acquisition times. Reconstructing an HSI from an RGB image, also called spectral super-resolution (SSR), is therefore an affordable and feasible way to acquire HSIs. Despite the SSR results achieved by existing deep unfolding networks (DUNs), they still face challenges in: 1) recovering fine-grained and realistic details; and 2) suppressing spectral distortion. Diffusion models have advantages in generating diverse and realistic content, but their fidelity is limited by inherent randomness. In this study, to reconstruct a faithful and realistic HSI, we integrate a diffusion model into a DUN and propose a degradation-aware unrolling diffusion model for SSR (deDiff-SSR). The generative diffusion prior is leveraged jointly with spectral degradation and deep prior learning. Specifically, we first pre-train a channel-attention-enhanced denoising diffusion probabilistic model (DDPM), in which the spectral correlation is exploited to learn the diffusion prior of the HSI. To make the sampling degradation-aware, we optimize an HSI SSR model regularized by diffusion and deep priors and propose a degradation-aware diffusion sampling method, in which the spectral degradation is learned to refine each diffusion sampling step. By unrolling the degradation-aware diffusion sampling steps, we build the deDiff-SSR network. It contains diffusion and deep proximal operators that represent the diffusion and deep priors, respectively. We implement the diffusion proximal operator with one sampling step of the pre-trained DDPM. Moreover, we design a state-space Transformer as the deep proximal operator, so that the spectral-spatial long-range relationships of the HSI can be efficiently captured. Experiments on several indoor and remote sensing datasets demonstrate the effectiveness of deDiff-SSR.
{"title":"Learning Degradation-Aware Diffusion Prior for Hyperspectral Reconstruction From RGB Image","authors":"Jingxiang Yang;Haifeng Xu;Heyuan Yin;Hongyi Liu;Liang Xiao","doi":"10.1109/TCI.2025.3650359","DOIUrl":"https://doi.org/10.1109/TCI.2025.3650359","url":null,"abstract":"Hyperspectral image (HSI) is applicable in many fields due to the ability in discriminating different materials. Collecting HSI usually requires expensive hardware and long period. Reconstructing HSI from RGB image, also called spectral super-resolution (SSR), is an affordable and feasible way for HSI acquisition. Despite the SSR results achieved by existing deep unfolding networks (DUNs), they still face challenges in: 1) recovering the fine-grained and realistic details; 2) suppressing the spectral distortion. Diffusion model has advantages in generating diverse and realistic contents, while its fidelity is limited due to the inherent randomness. In this study, to reconstruct a faithful and realistic HSI, we integrate the diffusion model in DUN, and propose a degradation-aware unrolling diffusion model for SSR (deDiff-SSR). The generative diffusion prior is jointly leveraged with the spectral degradation and deep prior learning. Specifically, we first pre-train a channel attention enhanced denoising diffusion probabilistic model (DDPM), the spectral correlation is exploited for learning the diffusion prior of HSI. To aware the degradation, by optimizing a diffusion and deep priors regularized HSI SSR model, we propose a degradation-aware diffusion sampling method, the spectral degradation is learned to refine each diffusion sampling step. Via unrolling the degradation-aware diffusion sampling steps, we build the deDiff-SSR network. It contains diffusion and deep proximal operators to represent the diffusion and deep priors, respectively. We implement the diffusion proximal operator with one sampling step of the pre-trained DDPM. Moreover, we design a state-space Transformer as the deep proximal operator, the spectral-spatial long-range relationship of HSI can be efficiently captured. The experiments on several indoor and remote sensing datasets demonstrate the effectiveness of deDiff-SSR.","PeriodicalId":56022,"journal":{"name":"IEEE Transactions on Computational Imaging","volume":"12 ","pages":"256-269"},"PeriodicalIF":4.8,"publicationDate":"2026-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145982258","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-12-30
DOI: 10.1109/TCI.2025.3649390
Suwoong Yeom;Hosung Son;Chanhee Kang;Eunho Shin;Joonsoo Kim;Kug-jin Yun;Suk-Ju Kang
Novel view synthesis for dynamic scenes is a critical challenge in computer vision and computational imaging. Despite significant advancements, generating realistic images from monocularly captured dynamic scenes remains a complex task. Recent methods leveraging neural radiance fields and 3D Gaussian splatting in canonical spaces have made notable progress. However, these approaches often estimate both color and geometric information within a single space, limiting their effectiveness in handling large deformations of dynamic objects or significant color variations. To address these challenges, we propose ICSD-NeRF, which optimizes color and geometry in separate canonical spaces. Additionally, we introduce decision fields to effectively distinguish and optimize static and dynamic objects, enabling dynamic regions to be disentangled from the static background during training. To further enhance the representation of geometric structures in static regions, we employ an MLP to refine geometric features. We validate our approach on widely used dynamic scene novel view synthesis datasets, demonstrating that ICSD-NeRF outperforms existing methods by achieving higher rendering accuracy. Notably, our method achieves higher PSNR scores on benchmark datasets than current state-of-the-art techniques.
{"title":"ICSD-NeRF: Independent Canonical Spaces for Enhanced Dynamic Scene Modeling in Neural Radiance Fields","authors":"Suwoong Yeom;Hosung Son;Chanhee Kang;Eunho Shin;Joonsoo Kim;Kug-jin Yun;Suk-Ju Kang","doi":"10.1109/TCI.2025.3649390","DOIUrl":"https://doi.org/10.1109/TCI.2025.3649390","url":null,"abstract":"Novel view synthesis for dynamic scenes is a critical challenge in computer vision and computational imaging. Despite significant advancements, generating realistic images from monocularly captured dynamic scenes remains a complex task. Recent methods leveraging neural radiance fields and 3D Gaussian splatting in canonical spaces have made notable progress. However, these approaches often estimate both color and geometric information within a single space, limiting their effectiveness in handling large deformations of dynamic objects or significant color variations. To address these challenges, we propose ICSD-NeRF, which optimizes color and geometry in separate canonical spaces. Additionally, we introduce decision fields to effectively distinguish and optimize static and dynamic objects, enabling dynamic regions to be disentangled from the static background during training. To further enhance the representation of geometric structures in static regions, we employ an MLP to refine geometric features. We validate our approach on widely used dynamic scene novel view synthesis datasets, demonstrating that ICSD-NeRF outperforms existing methods by achieving higher rendering accuracy. Notably, our method achieves higher PSNR scores on benchmark datasets than current state-of-the-art techniques.","PeriodicalId":56022,"journal":{"name":"IEEE Transactions on Computational Imaging","volume":"12 ","pages":"242-255"},"PeriodicalIF":4.8,"publicationDate":"2025-12-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145982194","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-12-30
DOI: 10.1109/TCI.2025.3649389
Canberk Ekmekci;Mujdat Cetin
Uncertainty quantification plays an important role in achieving trustworthy and reliable learning-based computational imaging. Recent advances in generative modeling and Bayesian neural networks have enabled the development of uncertainty-aware image reconstruction methods. Current generative model-based methods seek to quantify the inherent (aleatoric) uncertainty on the underlying image for given measurements by learning to sample from the posterior distribution of the underlying image. On the other hand, Bayesian neural network-based approaches aim to quantify the model (epistemic) uncertainty on the parameters of a deep neural network-based reconstruction method by approximating the posterior distribution of those parameters. Unfortunately, there remains a need for an inversion method that can jointly quantify complex aleatoric and epistemic uncertainty patterns. In this paper, we present a scalable framework that can quantify both aleatoric and epistemic uncertainties. The proposed framework accepts an existing generative model-based posterior sampling method as an input and introduces an epistemic uncertainty quantification capability through Bayesian neural networks with latent variables and deep ensembling. Furthermore, by leveraging the conformal prediction methodology, the proposed framework can be easily calibrated to ensure rigorous uncertainty quantification. We evaluated the proposed framework on magnetic resonance imaging, computed tomography, and image inpainting problems and showed that the epistemic and aleatoric uncertainty estimates produced by the proposed framework display the characteristic features of true epistemic and aleatoric uncertainties. Furthermore, our results demonstrated that the use of conformal prediction on top of the proposed framework enables marginal coverage guarantees consistent with frequentist principles.
{"title":"Conformalized Generative Bayesian Imaging: An Uncertainty Quantification Framework for Computational Imaging","authors":"Canberk Ekmekci;Mujdat Cetin","doi":"10.1109/TCI.2025.3649389","DOIUrl":"https://doi.org/10.1109/TCI.2025.3649389","url":null,"abstract":"Uncertainty quantification plays an important role in achieving trustworthy and reliable learning-based computational imaging. Recent advances in generative modeling and Bayesian neural networks have enabled the development of uncertainty-aware image reconstruction methods. Current generative model-based methods seek to quantify the inherent (aleatoric) uncertainty on the underlying image for given measurements by learning to sample from the posterior distribution of the underlying image. On the other hand, Bayesian neural network-based approaches aim to quantify the model (epistemic) uncertainty on the parameters of a deep neural network-based reconstruction method by approximating the posterior distribution of those parameters. Unfortunately, an ongoing need for an inversion method that can jointly quantify complex aleatoric uncertainty and epistemic uncertainty patterns still persists. In this paper, we present a scalable framework that can quantify both aleatoric and epistemic uncertainties. The proposed framework accepts an existing generative model-based posterior sampling method as an input and introduces an epistemic uncertainty quantification capability through Bayesian neural networks with latent variables and deep ensembling. Furthermore, by leveraging the conformal prediction methodology, the proposed framework can be easily calibrated to ensure rigorous uncertainty quantification. We evaluated the proposed framework on magnetic resonance imaging, computed tomography, and image inpainting problems and showed that the epistemic and aleatoric uncertainty estimates produced by the proposed framework display the characteristic features of true epistemic and aleatoric uncertainties. Furthermore, our results demonstrated that the use of conformal prediction on top of the proposed framework enables marginal coverage guarantees consistent with frequentist principles.","PeriodicalId":56022,"journal":{"name":"IEEE Transactions on Computational Imaging","volume":"12 ","pages":"216-229"},"PeriodicalIF":4.8,"publicationDate":"2025-12-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145929459","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-12-25
DOI: 10.1109/TCI.2025.3648531
Roser Viñals;Jean-Philippe Thiran
Sparse arrays offer a promising solution for reducing the data volume required to reconstruct ultrasound images, making them well-suited for portable and wireless devices. However, the quality of images beamformed from a limited number of transducer elements is significantly degraded. This study proposes a deep learning-based inpainting technique to estimate complete radio frequency (RF) signals (before beamforming) from downsampled RF signals obtained using a subset of transducer elements. This approach allows for quality enhancement without the need to beamform images, offering flexibility that is particularly beneficial for applications such as speed-of-sound estimation algorithms. We introduce a model-based loss function that combines the signal and image domains by incorporating a measurement model associated with image reconstruction. We then compare this loss with another that accounts solely for the RF signal domain. Additionally, we train our network exclusively in the RF image domain, mapping images beamformed from downsampled RF signals to those beamformed from complete signals. We compare these approaches qualitatively and quantitatively, and all of them enhance image quality. The proposed method with the model-based loss achieves superior detail and quality metrics. Although trained on downsampled RF signals simulating sparse arrays in reception, all methods, especially our inpainting approach with the model-based loss, demonstrate strong adaptability to ultrafast acquisitions with reduced transducer elements in both transmission and reception. This highlights their potential for reducing the number of transducer elements in ultrasound probes. Furthermore, the proposed method exhibits superior generalization performance when evaluated on a different probe than the one used to acquire the training dataset.
{"title":"Deep Learning-Based Inpainting for Sparse Arrays in Ultrafast Ultrasound Imaging","authors":"Roser Viñals;Jean-Philippe Thiran","doi":"10.1109/TCI.2025.3648531","DOIUrl":"https://doi.org/10.1109/TCI.2025.3648531","url":null,"abstract":"Sparse arrays offer a promising solution for reducing the data volume required to reconstruct ultrasound images, making them well-suited for portable and wireless devices. However, the quality of images beamformed from a limited number of transducer elements is significantly degraded. This study proposes a deep learning-based inpainting technique to estimate complete radio frequency (RF) signals (before beamforming) from downsampled RF signals obtained using a subset of transducer elements. This approach allows for quality enhancement without the need to beamform images, offering flexibility particularly beneficial for applications such as speed-of-sound estimation algorithms. We introduce a model-based loss function that combines the signal and image domains by incorporating a measurement model associated with image reconstruction. We then compare this loss with another that accounts solely for the RF signal domain. Additionally, we train our network exclusively in the RF image domain, mapping images beamformed from downsampled RF signals to those from complete signals. We compare these approaches qualitatively and quantitatively, with all enhancing image quality. The proposed method with the model-based loss achieves superior detail and quality metrics. Although trained on downsampled RF signals simulating sparse arrays in reception, all methods — especially our inpainting approach with the model-based loss — demonstrate strong adaptability to ultrafast acquisitions with reduced transducer elements in both transmission and reception. This highlights their potential for reducing the number of transducer elements in ultrasound probes. Furthermore, the proposed method exhibits superior generalization performance when evaluated on a different probe than the one used to acquire the training dataset.","PeriodicalId":56022,"journal":{"name":"IEEE Transactions on Computational Imaging","volume":"12 ","pages":"187-202"},"PeriodicalIF":4.8,"publicationDate":"2025-12-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11316255","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145929582","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this study, we propose a novel calibration method for magnetic particle imaging (MPI), in which a field-based calibration procedure is employed directly in the sinogram domain for a dynamic 3D field free line (FFL) sinusoidal trajectory. Unlike the conventional voxel-wise calibration in the image domain, our approach performs calibration measurements in the sinogram domain with alternating magnetic field offsets. The resulting sinogram-based calibration data are then used to synthesize the 3D system matrix (SM) in the image domain by exploiting the shift-invariance property of MPI systems. With the proposed method, frequency mixing terms originating from the simultaneous translation and rotation of the FFL can be acquired without the use of an external calibration device, such as a precise 3-axis robot. The proposed sinogram-based calibration method for the 3D FFL trajectory with 100 ms duration achieved acquisition of a 57 × 57 × 11 SM in approximately 1 minute with minimal image degradation. In contrast, the conventional system calibration method requires about 1 h for the same spatial resolution.
{"title":"Fast Sinogram-Based System Calibration for Field Free Line Magnetic Particle Imaging","authors":"Serhat Ilbey;Justin Ackers;Heinrich Lehr;Matthias Graeser;Jochen Franke","doi":"10.1109/TCI.2025.3647229","DOIUrl":"https://doi.org/10.1109/TCI.2025.3647229","url":null,"abstract":"In this study, we propose a novel calibration method for magnetic particle imaging (MPI), in which a field-based calibration procedure is employed directly in the sinogram domain for a dynamic 3D field free line (FFL) sinusoidal trajectory. Unlike the conventional voxel-wise calibration in the image-domain, our approach involves performing calibration measurements in sinogram domain with alternating magnetic field offsets. The resulting sinogram-based calibration data are then used to synthesize the 3D system matrix (SM) in the image domain by exploiting the shift invariance property of MPI systems. With the proposed method, frequency mixing terms originating from the simultaneous translation and rotation of the FFL can be acquired without the use of an external calibration device, such as a precise 3D-axis robot. The proposed sinogram-based calibration method for the 3D FFL trajectory with 100 ms duration achieved acquisition of a 57 × 57 × 11 SM in approximately 1 minute with minimal image degradation. In contrast, the conventional system calibration method requires about 1 h for the same spatial resolution.","PeriodicalId":56022,"journal":{"name":"IEEE Transactions on Computational Imaging","volume":"12 ","pages":"270-281"},"PeriodicalIF":4.8,"publicationDate":"2025-12-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11313552","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145982191","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-12-19
DOI: 10.1109/TCI.2025.3646325
Michael Jaeger;Vera H.J. van Hal;Richard Lopata;Hans-Martin Schwab
Knowledge of the spatial distribution of speed of sound (SoS) would greatly improve the diagnostic power of abdominal ultrasound (US) imaging. It can be used to account for wavefront aberrations and to improve B-mode image quality, but SoS is also an important disease marker, as it can reflect disease-related changes in tissue composition. However, the relatively small aperture size compared to the desired image depth in abdominal US imaging inherently limits the axial resolution and accuracy of SoS estimation. Fundamentally, a larger physical aperture would produce images with improved contrast and resolution. In this study, we assess the impact on SoS estimation accuracy of combining two probes into a larger aperture. Two modes of dual-probe operation are investigated, monostatic (where only one probe transmits and receives at a time) and bistatic (where both probes receive on each transmit), alongside classical single-probe operation. The quality of SoS maps is compared in digital and physical phantoms mimicking abdominal imaging. The primary result is that the axial resolution of bistatic operation clearly outperforms that of monostatic dual-probe operation, both in simulations (by 74% and 96%) and in the experiment (by 103%), indicating that the larger angle differences provided by trans-aperture data (where one probe receives and the other transmits) are essential for the superior axial resolution. Furthermore, bistatic dual-probe operation produced SoS maps with improved background uniformity and performed best at recovering layer contrast, whereas the latter was substantially more underestimated in monostatic dual-probe and classical single-probe operation. This study emphasizes the importance of larger effective apertures for future ultrasound device designs.
{"title":"Dual-Probe Pulse-Echo Speed-of-Sound Imaging","authors":"Michael Jaeger;Vera H.J. van Hal;Richard Lopata;Hans-Martin Schwab","doi":"10.1109/TCI.2025.3646325","DOIUrl":"https://doi.org/10.1109/TCI.2025.3646325","url":null,"abstract":"Knowledge of the spatial distribution of speed of sound (SoS) will greatly improve the diagnostic power of abdominal ultrasound (US) imaging. It can be used to account for wavefront aberrations and improving the B-mode image quality, but SoS is also an important disease marker as it can reflect disease-related changes in tissue composition. However, the relatively small aperture size compared to the desired image depth in abdominal US imaging inherently limits the axial resolution and the estimation accuracy of determining SoS. Fundamentally, a larger physical aperture would produce images with improved contrast and resolution. In this study, we assess the impact of combining two probes to a larger aperture on the estimation accuracy of SoS. Two ways of dual-probe operation are investigated, monostatic (where only one probe transmits and receives at a time) and bistatic (where both probes receive on each transmit), as well as classical single-probe operation. The quality of SoS maps is compared in digital and physical phantoms mimicking abdominal imaging. The primary result is that axial resolution of bistatic operation clearly outperforms the one of monostatic dual-probe operation, both in simulations (by 74<inline-formula><tex-math>$%$</tex-math></inline-formula> and 96<inline-formula><tex-math>$%$</tex-math></inline-formula>) and in the experiment (by 103<inline-formula><tex-math>$%$</tex-math></inline-formula>), indicating that the larger angle differences provided by trans-aperture data (where one probe receives and the other transmits) is essential for the superior axial resolution. Furthermore, bistatic dual-probe operation produced SoS maps with improved background uniformity and performed best at recovering layer contrast, while the latter was substantially more underestimated in monostatic dual-probe and classical single-probe operation. This study emphasizes the importance of larger effective apertures for future ultrasound device designs.","PeriodicalId":56022,"journal":{"name":"IEEE Transactions on Computational Imaging","volume":"12 ","pages":"230-241"},"PeriodicalIF":4.8,"publicationDate":"2025-12-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145982185","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}