
Latest publications in IEEE Transactions on Computational Imaging

Vignetting Correction Through Color-Intensity Map Entropy Optimization
IF 4.2 | CAS Tier 2, Computer Science | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2025-06-26 | DOI: 10.1109/TCI.2025.3583465
Zhuang He;Hai-Miao Hu;Likun Gao;Haoxin Hu;Xinhui Xue;Zhenglin Tang;Difeng Zhu;Haowen Zheng;Chongze Wang
Vignetting correction is an essential step in image signal processing and an important part of obtaining high-quality images, yet research in this field has not been fully emphasized. Mainstream methods are calibration-based, with complex procedures, and many achieve low accuracy and poor robustness in practice. In this paper, we analyze the optical principle of vignetting and its influence on the image, and then propose an algorithm based on color-intensity map entropy optimization to correct image vignetting. Moreover, because of the lack of vignetting datasets, we propose a method for constructing a vignetting image dataset by capturing real scenes. Compared with datasets generated through simulation, ours is more authentic and reliable. Extensive experiments on this dataset show that the proposed algorithm achieves the best performance.
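The core idea in the abstract, choosing a radial gain that minimizes an entropy measure of the corrected image, can be sketched on a toy grayscale image. This is an illustrative stand-in, not the authors' color-intensity map algorithm: it assumes a hypothetical one-parameter 1/(1 + a·r²) fall-off model and plain histogram entropy, with a grid search in place of a proper optimizer.

```python
import numpy as np

def histogram_entropy(img, bins=64):
    """Shannon entropy of the intensity histogram (lower = tighter exposure)."""
    hist, _ = np.histogram(img, bins=bins, range=(0.0, 2.0))
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def correct_vignetting(img, a_grid):
    """Grid-search one radial-gain coefficient that minimizes histogram entropy."""
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    # Radius squared, normalized so the image corners sit at r2 = 1.
    r2 = ((yy - h / 2) ** 2 + (xx - w / 2) ** 2) / ((h / 2) ** 2 + (w / 2) ** 2)
    best_a, best_img, best_ent = 0.0, img, histogram_entropy(img)
    for a in a_grid:
        cand = img * (1.0 + a * r2)          # invert an assumed 1/(1 + a r2) fall-off
        ent = histogram_entropy(cand)
        if ent < best_ent:
            best_a, best_img, best_ent = a, cand, ent
    return best_a, best_img, best_ent

rng = np.random.default_rng(0)
flat = 0.8 + 0.02 * rng.standard_normal((96, 96))   # near-uniform synthetic scene
h, w = flat.shape
yy, xx = np.mgrid[0:h, 0:w]
r2 = ((yy - h / 2) ** 2 + (xx - w / 2) ** 2) / ((h / 2) ** 2 + (w / 2) ** 2)
vignetted = flat / (1.0 + 0.5 * r2)                 # synthetic fall-off, a_true = 0.5

a_hat, corrected, ent_corr = correct_vignetting(vignetted, np.linspace(0.0, 1.0, 21))
```

Because a = 0 (no correction) is in the grid, the selected entropy can never exceed that of the vignetted input; on a near-uniform scene the search lands close to the true coefficient.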
{"title":"Vignetting Correction Through Color-Intensity Map Entropy Optimization","authors":"Zhuang He;Hai-Miao Hu;Likun Gao;Haoxin Hu;Xinhui Xue;Zhenglin Tang;Difeng Zhu;Haowen Zheng;Chongze Wang","doi":"10.1109/TCI.2025.3583465","DOIUrl":"https://doi.org/10.1109/TCI.2025.3583465","url":null,"abstract":"Vignetting correction is an essential process of image signal processing. It is an important part for obtaining high-quality images, but the research in this field has not been fully emphasized. The mainstream methods are based on calibration which processes are complex. And many methods get low accuracy and poor robustness in practical. In this paper, we analyzed the optical principle of vignetting and its influence on the image. Then, we proposed an algorithm based on color-intensity map entropy optimization to correct image vignetting. Moreover, because of the lack of dataset of vignetting, we proposed a method for constructing vignetting image dataset through capturing the real scenes. Compared with the dataset generated through simulation, our dataset is more authentic and reliable. Many experiments have been carried out on this dataset, and the results proved that the proposed algorithm achieved the best performance.","PeriodicalId":56022,"journal":{"name":"IEEE Transactions on Computational Imaging","volume":"11 ","pages":"911-925"},"PeriodicalIF":4.2,"publicationDate":"2025-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144623935","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 0
G2L-Stereo: Global to Local Two-Stage Real-Time Stereo Matching Network
IF 4.2 | CAS Tier 2, Computer Science | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2025-06-19 | DOI: 10.1109/TCI.2025.3581105
Jie Tang;Gaofeng Peng;Jialu Liu;Bo Yu
Developing fast and accurate stereo matching algorithms is crucial for real-world embedded vision applications. Depth information plays a significant role in scene understanding, and depth calculated through stereo matching is generally considered to be more precise and reliable than that obtained from monocular depth estimation. However, speed-oriented stereo matching methods often suffer from poor feature representation due to sparse sampling and detail loss caused by unreasonable disparity allocation during upsampling. To address these issues, we propose G2L-Stereo, a two-stage real-time stereo matching network that combines global disparity range prediction and local disparity range prediction. In the global disparity range prediction stage, we introduce feature-guided connections for cost aggregation, enhancing the expressive power of sparse features by aligning the feature space across different scales of cost volumes. We also incorporate confidence estimation into the upsampling algorithm to reduce the propagation of inaccurate disparities during upsampling, yielding more precise disparity maps. In the local disparity range prediction stage, we develop a disparity refinement module guided by neighborhood similarity. This module aggregates similar neighboring costs to estimate disparity residuals and refine disparities, restoring lost details in the low-resolution disparity map and further enhancing disparity accuracy. Extensive experiments on the SceneFlow and KITTI datasets validate the effectiveness of our model, showing that G2L-Stereo achieves fast inference while maintaining accuracy comparable to state-of-the-art methods.
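The disparity-regression step that speed-oriented stereo networks like the one above build on can be illustrated with a soft-argmin over a cost volume: a softmax over negated matching costs gives per-pixel weights, and the expected disparity index is the prediction. A minimal NumPy sketch under a toy quadratic cost, not G2L-Stereo's actual modules:

```python
import numpy as np

def soft_argmin(cost_volume, tau=1.0):
    """Differentiable disparity regression: softmax over negated costs,
    then the expectation over disparity indices."""
    d = np.arange(cost_volume.shape[0], dtype=float)       # (D,)
    w = np.exp(-cost_volume / tau)
    w /= w.sum(axis=0, keepdims=True)                      # softmax over the D axis
    return np.tensordot(d, w, axes=1)                      # (H, W) expected disparity

# Toy cost volume: quadratic matching cost centered on a known disparity map.
H, W, D = 8, 8, 32
true_disp = np.full((H, W), 10.3)
d = np.arange(D, dtype=float).reshape(D, 1, 1)
cost = (d - true_disp[None]) ** 2

est = soft_argmin(cost, tau=1.0)
```

Because the expectation interpolates between integer disparity indices, the regressed map recovers the sub-pixel value 10.3 that a hard argmin would round away.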
{"title":"G2L-Stereo: Global to Local Two-Stage Real-Time Stereo Matching Network","authors":"Jie Tang;Gaofeng Peng;Jialu Liu;Bo Yu","doi":"10.1109/TCI.2025.3581105","DOIUrl":"https://doi.org/10.1109/TCI.2025.3581105","url":null,"abstract":"Developing fast and accurate stereo matching algorithms is crucial for real-world embedded vision applications. Depth information plays a significant role in scene understanding, and depth calculated through stereo matching is generally considered to be more precise and reliable than that obtained from monocular depth estimation. However, speed-oriented stereo matching methods often suffer from poor feature representation due to sparse sampling and detail loss caused by unreasonable disparity allocation during upsampling. To address these issues, we propose G2L-Stereo, a two-stage real-time stereo matching network that combines global disparity range prediction and local disparity range prediction. In the global disparity range prediction stage, we introduce feature-guided connections for cost aggregation, enhancing the expressive power of sparse features by aligning the feature space across different scales of cost volumes. We also incorporate confidence estimation into the upsampling algorithm to reduce the propagation of inaccurate disparities during upsampling, yielding more precise disparity maps. In the local disparity range prediction stage, we develop a disparity refinement module guided by neighborhood similarity. This module aggregates similar neighboring costs to estimate disparity residuals and refine disparities, restoring lost details in the low-resolution disparity map and further enhancing disparity accuracy. 
Extensive experiments on the SceneFlow and KITTI datasets validate the effectiveness of our model, showing that G2L-Stereo achieves fast inference while maintaining accuracy comparable to state-of-the-art methods.","PeriodicalId":56022,"journal":{"name":"IEEE Transactions on Computational Imaging","volume":"11 ","pages":"852-863"},"PeriodicalIF":4.2,"publicationDate":"2025-06-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144524418","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 0
Energy-Coded Spectral CT Imaging Method Based on Projection Mix Separation
IF 4.2 | CAS Tier 2, Computer Science | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2025-06-11 | DOI: 10.1109/TCI.2025.3578762
Xiaojie Zhao;Yihong Li;Yan Han;Ping Chen;Jiaotong Wei
Spectral CT can be used to perform material decomposition from polychromatic attenuation data, generate virtual monochromatic or virtual narrow-energy-width images in which beam hardening artifacts are suppressed, and provide detailed energy attenuation coefficients for material characterization. We propose an energy-coded spectral CT imaging method that is based on projection mix separation, which enables simultaneous energy decoding and image reconstruction. An X-ray energy-coded forward model is then constructed. Leveraging the Poisson statistical properties of the measurement data, we formulate a constrained optimization problem for both the energy-coded coefficient matrix and the material decomposition coefficient matrix, which is solved using a block coordinate descent algorithm. Simulations and experimental results demonstrate that the decoded energy spectrum distribution and virtual narrow-energy-width CT images are accurate and effective. The proposed method suppresses beam hardening artifacts and enhances the material identification capabilities of traditional CT.
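The block coordinate descent strategy the abstract describes, alternating updates of the energy-coded coefficient matrix and the material decomposition coefficient matrix, can be sketched on a much-simplified noiseless linear factorization. This assumes a plain least-squares objective in place of the paper's Poisson likelihood and constraints; each block solve is exact, so the loss is non-increasing by construction.

```python
import numpy as np

rng = np.random.default_rng(1)
E, M, P = 6, 2, 40                     # energy bins, materials, pixels
S_true = rng.random((E, M))            # energy-coded coefficient matrix (ground truth)
C_true = rng.random((M, P))            # material decomposition coefficients
Y = S_true @ C_true                    # noiseless coded measurements, exactly rank M

# Block coordinate descent: solve for one matrix block with the other held fixed.
S = rng.random((E, M))
C = rng.random((M, P))
losses = []
for _ in range(50):
    C = np.linalg.lstsq(S, Y, rcond=None)[0]            # C-block: exact least squares
    S = np.linalg.lstsq(C.T, Y.T, rcond=None)[0].T      # S-block: exact least squares
    losses.append(np.linalg.norm(Y - S @ C) ** 2)
```

Note the factorization is only identifiable up to an invertible mixing of the material basis; the real method resolves this with physical constraints on the spectra.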
{"title":"Energy-Coded Spectral CT Imaging Method Based on Projection Mix Separation","authors":"Xiaojie Zhao;Yihong Li;Yan Han;Ping Chen;Jiaotong Wei","doi":"10.1109/TCI.2025.3578762","DOIUrl":"https://doi.org/10.1109/TCI.2025.3578762","url":null,"abstract":"Spectral CT can be used to perform material decomposition from polychromatic attenuation data, generate virtual monochromatic or virtual narrow-energy-width images in which beam hardening artifacts are suppressed, and provide detailed energy attenuation coefficients for material characterization. We propose an energy-coded spectral CT imaging method that is based on projection mix separation, which enables simultaneous energy decoding and image reconstruction. An X-ray energy-coded forward model is then constructed. Leveraging the Poisson statistical properties of the measurement data, we formulate a constrained optimization problem for both the energy-coded coefficient matrix and the material decomposition coefficient matrix, which is solved using a block coordinate descent algorithm. Simulations and experimental results demonstrate that the decoded energy spectrum distribution and virtual narrow-energy-width CT images are accurate and effective. The proposed method suppresses beam hardening artifacts and enhances the material identification capabilities of traditional CT.","PeriodicalId":56022,"journal":{"name":"IEEE Transactions on Computational Imaging","volume":"11 ","pages":"839-851"},"PeriodicalIF":4.2,"publicationDate":"2025-06-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144492200","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 0
Iterative Collaboration Network Guided by Reconstruction Prior for Medical Image Super-Resolution
IF 4.2 | CAS Tier 2, Computer Science | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2025-06-06 | DOI: 10.1109/TCI.2025.3577340
Xiaoyan Kui;Zexin Ji;Beiji Zou;Yang Li;Yulan Dai;Liming Chen;Pierre Vera;Su Ruan
High-resolution medical images provide more detailed information for better diagnosis. Conventional medical image super-resolution relies on a single task that first extracts features and then upscales based on them; the extracted features may not be complete for super-resolution. Recent multi-task learning, combining reconstruction and super-resolution, is a good way to obtain additional relevant information, but the interaction between the two tasks is often insufficient, which still leads to incomplete and less relevant deep features. To address these limitations, we propose an iterative collaboration network (ICONet) that improves communication between tasks by progressively incorporating the reconstruction prior into the super-resolution learning procedure in an iterative, collaborative way. It consists of a reconstruction branch, a super-resolution branch, and an SR-Rec fusion module. The reconstruction branch generates an artifact-free image as the prior, which is followed by a super-resolution branch for prior-knowledge-guided super-resolution. Unlike the widely used convolutional neural networks for extracting local features, and Transformers, whose quadratic computational complexity burdens the modeling of long-range dependencies, we develop a new residual spatial-channel feature learning (RSCFL) module with two branches to efficiently establish feature relationships in the spatial and channel dimensions. Moreover, the designed SR-Rec fusion module adaptively fuses the reconstruction prior and the super-resolution features. Our ICONet is built with multi-stage models that iteratively upscale low-resolution images in $2\times$ steps while the two branches interact under multi-stage supervision. Quantitative and qualitative experimental results on the benchmarking dataset show that our ICONet outperforms most state-of-the-art approaches.
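The "adaptive fusion" of two feature streams mentioned in the abstract can be illustrated with a minimal channel-gating sketch: a per-channel weight derived from global average pooling blends the reconstruction-prior features with the super-resolution features. The function `adaptive_fuse` below is hypothetical and only conveys the gating idea; it is not the authors' RSCFL or SR-Rec design.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def adaptive_fuse(feat_rec, feat_sr):
    """Blend two (C, H, W) feature maps with per-channel weights derived
    from global average pooling of the reconstruction-prior features."""
    gap = feat_rec.mean(axis=(1, 2))          # channel descriptor, shape (C,)
    gate = sigmoid(gap - gap.mean())          # weights in (0, 1)
    # Convex combination per channel: gate * prior + (1 - gate) * SR features.
    return gate[:, None, None] * feat_rec + (1.0 - gate)[:, None, None] * feat_sr

rng = np.random.default_rng(2)
a = rng.standard_normal((4, 16, 16))          # stand-in reconstruction features
b = rng.standard_normal((4, 16, 16))          # stand-in super-resolution features
fused = adaptive_fuse(a, b)
```

Because the gate lies strictly in (0, 1), the fused map is elementwise bounded by the two inputs, a sanity property any convex fusion should satisfy.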
{"title":"Iterative Collaboration Network Guided by Reconstruction Prior for Medical Image Super-Resolution","authors":"Xiaoyan Kui;Zexin Ji;Beiji Zou;Yang Li;Yulan Dai;Liming Chen;Pierre Vera;Su Ruan","doi":"10.1109/TCI.2025.3577340","DOIUrl":"https://doi.org/10.1109/TCI.2025.3577340","url":null,"abstract":"High-resolution medical images can provide more detailed information for better diagnosis. Conventional medical image super-resolution relies on a single task which first performs the extraction of the features and then upscaling based on the features. The features extracted may not be complete for super-resolution. Recent multi-task learning, including reconstruction and super-resolution, is a good solution to obtain additional relevant information. The interaction between the two tasks is often insufficient, which still leads to incomplete and less relevant deep features. To address above limitations, we propose an iterative collaboration network (ICONet) to improve communications between tasks by progressively incorporating reconstruction prior to the super-resolution learning procedure in an iterative collaboration way. It consists of a reconstruction branch, a super-resolution branch, and a SR-Rec fusion module. The reconstruction branch generates the artifact-free image as prior, which is followed by a super-resolution branch for prior knowledge-guided super-resolution. Unlike the widely-used convolutional neural networks for extracting local features and Transformers with quadratic computational complexity for modeling long-range dependencies, we develop a new residual spatial-channel feature learning (RSCFL) module of two branches to efficiently establish feature relationships in spatial and channel dimensions. Moreover, the designed SR-Rec fusion module fuses the reconstruction prior and super-resolution features with each other in an adaptive manner. 
Our ICONet is built with multi-stage models to iteratively upscale the low-resolution images using steps of <inline-formula> <tex-math>${2 times }$</tex-math></inline-formula> and simultaneously interact between two branches in multi-stage supervisions. Quantitative and qualitative experimental results on the benchmarking dataset show that our ICONet outperforms most state-of-the-art approaches.","PeriodicalId":56022,"journal":{"name":"IEEE Transactions on Computational Imaging","volume":"11 ","pages":"827-838"},"PeriodicalIF":4.2,"publicationDate":"2025-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144336075","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 0
$L^{2}$FMamba: Lightweight Light Field Image Super-Resolution With State Space Model
IF 4.2 | CAS Tier 2, Computer Science | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2025-06-06 | DOI: 10.1109/TCI.2025.3577338
Zeqiang Wei;Kai Jin;Zeyi Hou;Kuan Song;Xiuzhuang Zhou
Transformers bring significantly improved performance to the light field image super-resolution task due to their long-range dependency modeling capability. However, the inherently high computational complexity of their core self-attention mechanism has increasingly hindered their advancement in this task. To address this issue, we first introduce the LF-VSSM block, a novel module inspired by progressive feature extraction, to efficiently capture critical long-range spatial-angular dependencies in light field images. LF-VSSM successively extracts spatial features within sub-aperture images, spatial-angular features between sub-aperture images, and spatial-angular features between light field image pixels. On this basis, we propose a lightweight network, $L^{2}$FMamba (Lightweight Light Field Mamba), which integrates the LF-VSSM block to leverage light field features for super-resolution tasks while overcoming the computational challenges of Transformer-based approaches. Extensive experiments on multiple light field datasets demonstrate that our method reduces the number of parameters and complexity while achieving superior super-resolution performance with faster inference speed.
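The linear-time advantage of state-space blocks over self-attention comes from a recurrent scan, h_t = A h_{t-1} + B x_t, y_t = C h_t, which is mathematically equivalent to a global convolution y_t = sum over s of C A^(t-s) B x_s. A toy NumPy sketch of the generic linear SSM recurrence (not the selective/Mamba parameterization used by LF-VSSM) verifying that equivalence:

```python
import numpy as np

def ssm_scan(A, B, C, x):
    """Linear-time recurrent scan of a discrete linear state space model:
    h_t = A h_{t-1} + B x_t,  y_t = C h_t."""
    h = np.zeros(A.shape[0])
    ys = []
    for x_t in x:
        h = A @ h + B * x_t
        ys.append(C @ h)
    return np.array(ys)

rng = np.random.default_rng(3)
N, T = 4, 64
A = 0.2 * rng.standard_normal((N, N))   # small spectral radius -> stable dynamics
B = rng.standard_normal(N)
C = rng.standard_normal(N)
x = rng.standard_normal(T)

y = ssm_scan(A, B, C, x)

# Equivalent global-convolution view with kernel K[k] = C A^k B.
K = np.array([C @ np.linalg.matrix_power(A, k) @ B for k in range(T)])
y_conv = np.array([np.sum(K[:t + 1][::-1] * x[:t + 1]) for t in range(T)])
```

The scan costs O(T) per output channel, whereas self-attention's pairwise interactions cost O(T^2), which is the complexity gap the abstract targets.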
{"title":"$L^{2}$FMamba: Lightweight Light Field Image Super-Resolution With State Space Model","authors":"Zeqiang Wei;Kai Jin;Zeyi Hou;Kuan Song;Xiuzhuang Zhou","doi":"10.1109/TCI.2025.3577338","DOIUrl":"https://doi.org/10.1109/TCI.2025.3577338","url":null,"abstract":"Transformers bring significantly improved performance to the light field image super-resolution task due to their long-range dependency modeling capability. However, the inherently high computational complexity of their core self-attention mechanism has increasingly hindered their advancement in this task. To address this issue, we first introduce the LF-VSSM block, a novel module inspired by progressive feature extraction, to efficiently capture critical long-range spatial-angular dependencies in light field images. LF-VSSM successively extracts spatial features within sub-aperture images, spatial-angular features between sub-aperture images, and spatial-angular features between light field image pixels. On this basis, we propose a lightweight network, <inline-formula><tex-math>$L^{2}$</tex-math></inline-formula>FMamba (Lightweight Light Field Mamba), which integrates the LF-VSSM block to leverage light field features for super-resolution tasks while overcoming the computational challenges of Transformer-based approaches. 
Extensive experiments on multiple light field datasets demonstrate that our method reduces the number of parameters and complexity while achieving superior super-resolution performance with faster inference speed.","PeriodicalId":56022,"journal":{"name":"IEEE Transactions on Computational Imaging","volume":"11 ","pages":"816-826"},"PeriodicalIF":4.2,"publicationDate":"2025-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144323022","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 0
Laser Ultrasonic Imaging Via the Time Domain Linear Sampling Method
IF 4.2 | CAS Tier 2, Computer Science | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2025-06-06 | DOI: 10.1109/TCI.2025.3577405
Jian Song;Fatemeh Pourahmadian;Todd W. Murray;Venkatalakshmi V. Narumanchi
This study investigates the imaging ability of the time-domain linear sampling method (TLSM) when applied to laser ultrasonic (LU) tomography of subsurface defects from limited-aperture measurements. In this vein, the TLSM indicator and its spectral counterpart known as the multifrequency LSM are formulated within the context of LU testing. The affiliated imaging functionals are then computed using synthetic and experimental data germane to LU inspection of aluminum alloy specimens with manufactured defects. Hyperparameters of inversion are computationally analyzed. We demonstrate using synthetic data that the TLSM indicator has the unique ability to recover weak (or hard-to-reach) scatterers and has the potential to generate higher quality images compared to LSM. Provided high-SNR measurements, this advantage may be preserved in reconstructions from LU test data.
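At the core of linear sampling indicators is a Tikhonov-regularized solve of an ill-posed linear system, with the indicator 1/||g_z|| flagging sampling points where the equation is approximately solvable; the abstract's "hyperparameters of inversion" include the regularization weight. The sketch below uses a synthetic ill-conditioned operator, not LU data or the actual near-field equation, to show the solve and the known monotone dependence of the solution norm on the regularization parameter alpha.

```python
import numpy as np

rng = np.random.default_rng(4)
m = 40
# Synthetic ill-conditioned operator standing in for the discretized scattering map.
U, _ = np.linalg.qr(rng.standard_normal((m, m)))
V, _ = np.linalg.qr(rng.standard_normal((m, m)))
s = 10.0 ** np.linspace(0, -8, m)            # rapidly decaying singular values
F = U @ np.diag(s) @ V.T
phi = rng.standard_normal(m)                  # right-hand side (a test function)

def tikhonov(F, phi, alpha):
    """g = argmin ||F g - phi||^2 + alpha ||g||^2, via the normal equations."""
    n = F.shape[1]
    return np.linalg.solve(alpha * np.eye(n) + F.T @ F, F.T @ phi)

alphas = [1e-2, 1e-4, 1e-6, 1e-8]
norms = [np.linalg.norm(tikhonov(F, phi, a)) for a in alphas]
# ||g(alpha)|| grows as alpha shrinks; in LSM-type imaging the reciprocal
# of this norm, evaluated per sampling point z, forms the indicator map.
```

This monotone blow-up is exactly why the regularization weight must be tuned: too small and the indicator saturates in noise, too large and the contrast between interior and exterior sampling points washes out.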
{"title":"Laser Ultrasonic Imaging Via the Time Domain Linear Sampling Method","authors":"Jian Song;Fatemeh Pourahmadian;Todd W. Murray;Venkatalakshmi V. Narumanchi","doi":"10.1109/TCI.2025.3577405","DOIUrl":"https://doi.org/10.1109/TCI.2025.3577405","url":null,"abstract":"This study investigates the imaging ability of the time-domain linear sampling method (TLSM) when applied to laser ultrasonic (LU) tomography of subsurface defects from limited-aperture measurements. In this vein, the TLSM indicator and its spectral counterpart known as the multifrequency LSM are formulated within the context of LU testing. The affiliated imaging functionals are then computed using synthetic and experimental data germane to LU inspection of aluminum alloy specimens with manufactured defects. Hyperparameters of inversion are computationally analyzed. We demonstrate using synthetic data that the TLSM indicator has the unique ability to recover weak (or hard-to-reach) scatterers and has the potential to generate higher quality images compared to LSM. Provided high-SNR measurements, this advantage may be preserved in reconstructions from LU test data.","PeriodicalId":56022,"journal":{"name":"IEEE Transactions on Computational Imaging","volume":"11 ","pages":"803-815"},"PeriodicalIF":4.2,"publicationDate":"2025-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144323023","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 0
Robust Correspondence Imaging Against Random Disturbances With Single-Pixel Detection
IF 4.2 | CAS Tier 2, Computer Science | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2025-06-06 | DOI: 10.1109/TCI.2025.3577334
Zhihan Xu;Yin Xiao;Wen Chen
Random disturbance has become a great challenge for correspondence imaging (CI) due to dynamic and nonlinear scaling factors. In this paper, we propose a robust CI method against random disturbances for high-quality object reconstruction. To remove the effect of the dynamic scaling factors induced by random disturbance, a wavelet and total variation (WATV) algorithm is developed to estimate a series of varying thresholds. Then, the light intensities collected by a single-pixel detector are processed using the series of estimated varying thresholds. To realize high-quality object reconstruction, the binarized light intensities and a series of random patterns are fed into a plug-and-play priors (PnP) algorithm with an iteration framework and a general denoiser, called CI-PnP. Detailed theoretical descriptions are given to reveal the formation mechanism of CI under random disturbance. Optical measurements are conducted to verify the robustness of the proposed CI against random disturbances. It is demonstrated that the proposed method can remove the effect of dynamic scaling factors induced by random disturbance and can realize high-quality object reconstruction.
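The basic correspondence-imaging step, selecting illumination patterns by thresholding the single-pixel (bucket) intensities and averaging the selected ensemble, can be sketched in a few lines of NumPy. This toy uses a simple above-the-mean threshold and a multiplicative disturbance on the bucket signal; it is not the paper's WATV threshold estimation or the CI-PnP reconstruction.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 16                                     # image side length
K = 6000                                   # number of random illumination patterns
obj = np.zeros((n, n))
obj[4:12, 6:10] = 1.0                      # simple binary object

P = rng.random((K, n, n))                  # random illumination patterns
S = P.reshape(K, -1) @ obj.ravel()         # bucket (single-pixel) intensities
S *= 1.0 + 0.1 * rng.standard_normal(K)    # multiplicative random disturbance

# Correspondence imaging: average only the patterns whose bucket value exceeds
# the ensemble mean, then subtract the mean pattern to remove the background.
sel = S > S.mean()
recon = P[sel].mean(axis=0) - P.mean(axis=0)

corr = np.corrcoef(recon.ravel(), obj.ravel())[0, 1]
```

Even under the 10% multiplicative disturbance, the thresholded ensemble average correlates strongly with the object, which is the robustness property the paper pushes much further with adaptive thresholds and a PnP denoiser.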
{"title":"Robust Correspondence Imaging Against Random Disturbances With Single-Pixel Detection","authors":"Zhihan Xu;Yin Xiao;Wen Chen","doi":"10.1109/TCI.2025.3577334","DOIUrl":"https://doi.org/10.1109/TCI.2025.3577334","url":null,"abstract":"Random disturbance has become a great challenge for correspondence imaging (CI) due to dynamic and nonlinear scaling factors. In this paper, we propose a robust CI against random disturbances for high-quality object reconstruction. To remove the effect of dynamic scaling factors induced by random disturbance, a wavelet and total variation (WATV) algorithm is developed to estimate a series of varying thresholds. Then, light intensities collected by a single-pixel detector are processed by using the series of estimated varying thresholds. To realize high-quality object reconstruction, the binarized light intensities and a series of random patterns are fed into a plug-and-play priors (PnP) algorithm with an iteration framework and a general denoiser, called as CI-PnP. Theoretical descriptions are given in detail to reveal the formation mechanism in CI under random disturbance. Optical measurements are conducted to verify robustness of the proposed CI against random disturbances. It is demonstrated that the proposed method can remove the effect of dynamic scaling factors induced by random disturbance, and can realize high-quality object reconstruction. 
The proposed method provides a promising solution to achieving ultra-high robustness against random disturbances in CI, and is promising in various applications.","PeriodicalId":56022,"journal":{"name":"IEEE Transactions on Computational Imaging","volume":"11 ","pages":"901-910"},"PeriodicalIF":4.2,"publicationDate":"2025-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144581738","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 0
A Physics-Inspired Deep Learning Framework With Polar Coordinate Attention for Ptychographic Imaging
IF 4.2 | CAS Tier 2, Computer Science | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2025-06-06 | DOI: 10.1109/TCI.2025.3572250
Han Yue;Jun Cheng;Yu-Xuan Ren;Chien-Chun Chen;Grant A. van Riessen;Philip Heng Wai Leong;Steve Feng Shu
Ptychographic imaging confronts inherent challenges in applying deep learning for phase retrieval from diffraction patterns. Conventional neural architectures, both convolutional neural networks and Transformer-based methods, are optimized for natural images with Euclidean spatial neighborhood-based inductive biases that exhibit geometric mismatch with the concentric coherent patterns characteristic of diffraction data in reciprocal space. In this paper, we present PPN, a physics-inspired deep learning network with Polar Coordinate Attention (PoCA) for ptychographic imaging that aligns neural inductive biases with diffraction physics through a dual-branch architecture separating local feature extraction from non-local coherence modeling. It consists of a PoCA mechanism that replaces Euclidean spatial priors with physically consistent radial-angular correlations. PPN outperforms existing end-to-end models, with spectral and spatial analysis confirming its greater preservation of high-frequency details.
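The radial-angular view that motivates polar-coordinate attention can be illustrated by resampling a concentric diffraction-like pattern onto an (r, theta) grid, where the concentric structure becomes separable along the axes. The function `to_polar` below is a hypothetical, self-contained bilinear resampler for the illustration, not PPN's PoCA mechanism.

```python
import numpy as np

def to_polar(img, n_r=64, n_theta=128):
    """Bilinearly resample a square image onto an (r, theta) grid centered
    on the array midpoint, giving a polar-coordinate view of the pattern."""
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r = np.linspace(0, min(cy, cx), n_r)
    t = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
    yy = cy + r[:, None] * np.sin(t)[None, :]
    xx = cx + r[:, None] * np.cos(t)[None, :]
    # Manual bilinear interpolation at the sample coordinates.
    y0, x0 = np.floor(yy).astype(int), np.floor(xx).astype(int)
    y1, x1 = np.clip(y0 + 1, 0, h - 1), np.clip(x0 + 1, 0, w - 1)
    fy, fx = yy - y0, xx - x0
    top = img[y0, x0] * (1 - fx) + img[y0, x1] * fx
    bot = img[y1, x0] * (1 - fx) + img[y1, x1] * fx
    return top * (1 - fy) + bot * fy

# A radially symmetric "diffraction ring" should become constant along theta.
h = w = 129
yy, xx = np.mgrid[0:h, 0:w]
rr = np.hypot(yy - (h - 1) / 2, xx - (w - 1) / 2)
ring = np.exp(-((rr - 30.0) ** 2) / (2 * 4.0 ** 2))
polar = to_polar(ring)
```

In the polar view the ring collapses to a single bright row, so correlations that are concentric in Cartesian space become local along one axis, which is the geometric alignment the abstract argues for.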
Han Yue; Jun Cheng; Yu-Xuan Ren; Chien-Chun Chen; Grant A. van Riessen; Philip Heng Wai Leong; Steve Feng Shu, "A Physics-Inspired Deep Learning Framework With Polar Coordinate Attention for Ptychographic Imaging," IEEE Transactions on Computational Imaging, vol. 11, pp. 888-900, 2025. DOI: 10.1109/TCI.2025.3572250
Citations: 0
RGMLN: Residual Graph Model Learning Network for Bioluminescence Tomography
IF 4.2 CAS Tier 2 (Computer Science) Q2 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date : 2025-06-02 DOI: 10.1109/TCI.2025.3572727
De Wei;Yizhe Zhao;Shuangchen Li;Heng Zhang;Beilei Wang;Xiaowei He;Jingjing Yu;Huangjian Yi;Xuelei He;Hongbo Guo
For bioluminescence tomography reconstruction, regularization algorithms and deep learning frameworks have been widely studied and have achieved impressive results. However, parameter selection in regularization algorithms and the poor interpretability of deep learning methods remain key factors that degrade reconstruction quality and hinder applicability. To mitigate this problem, in this paper we propose a novel residual graph model learning network (RGMLN) for bioluminescence tomography reconstruction that combines the advantages of regularization methods and deep learning. RGMLN is based on the inference process of the iterative shrinkage-thresholding algorithm. The difference is that the penalty term of the regularization method is replaced by a learnable nonlinear mapping between the residual and source distributions, which keeps the network interpretable. Meanwhile, considering the non-Euclidean structure of the finite element mesh, a graph convolution operation based on Laplacian graph theory aggregates features of mesh nodes using the topological information of the tetrahedral mesh. Lastly, based on residual learning and auto-encoder strategies, gradient descent and prox mapping modules are designed to structure the model-driven RGMLN method, exploiting both the interpretability of iterative techniques and the flexibility of learning methods. Both numerical and in vivo experiments confirmed that the proposed network has excellent positioning accuracy and can be applied to different meshes and wavelengths.
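The inference process the network unrolls, the iterative shrinkage-thresholding algorithm (ISTA), can be made concrete. Below is a minimal numpy sketch (ours, not the paper's code) in which a hand-crafted l1 soft-threshold plays the role of the penalty term; in RGMLN this prox is replaced by a learnable nonlinear mapping, and the dense matrix products would act through graph convolutions over the finite-element mesh. Each loop iteration corresponds to one unrolled layer: a gradient-descent module followed by a prox-mapping module.

```python
import numpy as np

def soft_threshold(x, lam):
    """Proximal operator of lam * ||x||_1. In an RGMLN-style network this
    fixed mapping is what gets replaced by a learnable one."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def unrolled_ista(A, y, n_iters=50, lam=0.1):
    """Plain ISTA for min_x 0.5*||A x - y||^2 + lam*||x||_1:
        x_{k+1} = prox_{lam/L}( x_k - (1/L) A^T (A x_k - y) ),
    where L = ||A||_2^2 is a Lipschitz constant of the gradient.
    Hypothetical sketch of the unrolled iteration, not the paper's model."""
    L = np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        grad = A.T @ (A @ x - y)                  # gradient-descent module
        x = soft_threshold(x - grad / L, lam / L)  # prox-mapping module
    return x
```

On a small compressed-sensing-style problem (random Gaussian forward operator, sparse source), the iteration recovers the sparse vector, which is the behavior the learnable prox generalizes beyond simple sparsity.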
De Wei; Yizhe Zhao; Shuangchen Li; Heng Zhang; Beilei Wang; Xiaowei He; Jingjing Yu; Huangjian Yi; Xuelei He; Hongbo Guo, "RGMLN: Residual Graph Model Learning Network for Bioluminescence Tomography," IEEE Transactions on Computational Imaging, vol. 11, pp. 790-802, 2025. DOI: 10.1109/TCI.2025.3572727
Citations: 0
Multispectral and Hyperspectral Image Fusion With Spectrally Varying Blurs and MM Algorithm
IF 4.2 CAS Tier 2 (Computer Science) Q2 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date : 2025-04-28 DOI: 10.1109/TCI.2025.3565138
Dan Pineau;François Orieux;Alain Abergel
The fusion of multispectral and hyperspectral data allows data to be restored with enhanced spatial and spectral resolution. In the case of spectrally varying spatial blurs, the current approach is to solve an ill-posed inverse problem by minimizing a mixed criterion, commonly with an iterative gradient-based method. This paper proposes a new algorithm based on the Majorize-Minimize approach to compute the minimizer of a semi-quadratic convex edge-preserving criterion. The proposition relies on an explicit, computable solution of the quadratic majorant that does not require solving a Sylvester equation, and for which we develop the proof of existence that was missing in a previous work. We conduct experiments on realistic synthetic measurements for the James Webb Space Telescope and show that our proposed solutions outperform the state of the art in both computation time, achieving a 7000-fold speedup with the closed-form solution, and reconstruction quality, with a 2 dB PSNR improvement for the MM-based solution.
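The Majorize-Minimize principle the paper builds on can be illustrated on a toy 1-D denoising problem with the convex edge-preserving penalty phi(t) = sqrt(t^2 + eps): at each iteration phi is replaced by a quadratic majorant whose minimizer has a closed form, so no inner gradient iterations are needed and the criterion decreases monotonically. This is a hedged sketch of the generic MM mechanism under our own choice of criterion, not the paper's fusion model or its explicit majorant solution.

```python
import numpy as np

def mm_denoise_1d(y, mu=1.0, eps=1e-3, n_iters=30):
    """Majorize-Minimize for J(x) = 0.5*||y - x||^2 + mu * sum_i phi((Dx)_i),
    with phi(t) = sqrt(t^2 + eps) and D the first-difference operator.
    At iterate x_k, phi is majorized by a quadratic with curvature weights
    w_i = 1 / sqrt((Dx_k)_i^2 + eps); the majorant's minimizer solves
    (I + mu * D^T W D) x = y, a closed-form (linear) update.
    Toy illustration of MM, not the paper's criterion."""
    n = len(y)
    D = np.diff(np.eye(n), axis=0)                 # (n-1, n) difference operator
    x = y.copy()
    for _ in range(n_iters):
        w = 1.0 / np.sqrt((D @ x) ** 2 + eps)      # majorant curvature weights
        A = np.eye(n) + mu * D.T @ (w[:, None] * D)
        x = np.linalg.solve(A, y)                  # exact minimizer of majorant
    return x
```

Because each update exactly minimizes a majorant of J, the sequence J(x_k) is non-increasing, the property that replaces the usual convergence bookkeeping of gradient-based inner loops.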
Dan Pineau; François Orieux; Alain Abergel, "Multispectral and Hyperspectral Image Fusion With Spectrally Varying Blurs and MM Algorithm," IEEE Transactions on Computational Imaging, vol. 11, pp. 704-716, 2025. DOI: 10.1109/TCI.2025.3565138
Citations: 0