
Latest Publications in IEEE Transactions on Computational Imaging

Convergent Primal-Dual Plug-and-Play Image Restoration: A General Algorithm and Applications
IF 4.8 | Region 2, Computer Science | Q2, Engineering, Electrical & Electronic | Pub Date: 2025-12-15 | DOI: 10.1109/TCI.2025.3644248
Yodai Suzuki;Ryosuke Isono;Shunsuke Ono
We propose a general deep plug-and-play (PnP) algorithm with a theoretical convergence guarantee. PnP strategies have demonstrated outstanding performance in various image restoration tasks by exploiting the powerful priors underlying Gaussian denoisers. However, existing PnP methods often lack theoretical convergence guarantees under realistic assumptions due to their ad-hoc nature, resulting in inconsistent behavior. Moreover, even when convergence guarantees are provided, they are typically designed for specific settings or require a considerable computational cost in handling non-quadratic data-fidelity terms and additional constraints, which are key components in many image restoration scenarios. To tackle these challenges, we integrate the PnP paradigm with primal-dual splitting (PDS), an efficient proximal splitting methodology for solving a wide range of convex optimization problems, and develop a general convergent PnP framework. Specifically, we establish theoretical conditions for the convergence of the proposed PnP algorithm under a reasonable assumption. Furthermore, we show that the problem solved by the proposed PnP algorithm is not a standard convex optimization problem but a more general monotone inclusion problem, where we provide a mathematical representation of the solution set. Our approach efficiently handles a broad class of image restoration problems with guaranteed theoretical convergence. Numerical experiments on specific image restoration tasks validate the practicality and effectiveness of our theoretical results.
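For readers who want a concrete picture of how a denoiser can be plugged into primal-dual splitting, the following minimal Python sketch runs a PnP-PDS iteration on a toy inpainting problem. The step sizes, the masking forward operator, and the use of `scipy.ndimage.gaussian_filter` as a stand-in for a deep Gaussian denoiser are illustrative assumptions, not the settings or networks used in the paper.

```python
# Hedged sketch of a plug-and-play primal-dual splitting (PnP-PDS) iteration for
# image inpainting. The Gaussian filter is a stand-in denoiser; step sizes and the
# masking operator are illustrative choices, not the paper's settings.
import numpy as np
from scipy.ndimage import gaussian_filter

def pnp_pds_inpaint(b, mask, n_iter=200, tau=0.5, sigma=0.5, denoise_strength=1.0):
    """Solve  min_x  R(x) + 0.5 * ||A x - b||^2  with A = diagonal mask,
    replacing prox_{tau R} by a denoiser (the PnP substitution)."""
    x = b.copy()
    y = np.zeros_like(b)                              # dual variable
    for _ in range(n_iter):
        # primal step: the denoiser replaces prox_{tau R}
        x_new = gaussian_filter(x - tau * mask * y, denoise_strength)
        # dual step: prox of the conjugate of 0.5 * ||. - b||^2
        v = y + sigma * mask * (2 * x_new - x)
        y = (v - sigma * b) / (1.0 + sigma)
        x = x_new
    return x

# toy usage: recover a smooth image from 50% observed pixels
rng = np.random.default_rng(0)
truth = gaussian_filter(rng.standard_normal((64, 64)), 4)
mask = (rng.random((64, 64)) < 0.5).astype(float)
b = mask * truth
restored = pnp_pds_inpaint(b, mask)
print("error of observed data:", np.linalg.norm(b - truth),
      "| error after PnP-PDS:", np.linalg.norm(restored - truth))
```

Handling the data-fidelity term in the dual step, via the proximal operator of its conjugate, is what allows non-quadratic fidelities or hard constraints to be swapped in without changing the primal (denoising) step.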
Citations: 0
High Temporal-Lateral Resolution Photoacoustic Microscopy Imaging With Dual Branch Graph Induced Fusion Network
IF 4.8 | Region 2, Computer Science | Q2, Engineering, Electrical & Electronic | Pub Date: 2025-12-12 | DOI: 10.1109/TCI.2025.3643726
Zhengyuan Zhang;Xiangjun Yin;Zhuoyi Lin;Wenwen Zhang;Ze Feng;Arunima Sharma;Manojit Pramanik;Chia-Wen Lin;Yuanjin Zheng
Photoacoustic microscopy (PAM) is a novel implementation of photoacoustic imaging (PAI) for visualizing 3D bio-structure, realized by raster scanning of the tissue. However, temporal resolution, lateral resolution, and penetration depth, the three critical imaging parameters involved, mutually affect one another: improving one parameter degrades the other two, which constrains the overall performance of the PAM system. In this work, we propose to break these limitations through hardware and software co-design. Starting from low-lateral-resolution, low-sampling-rate AR-PAM imaging, which possesses deep penetration capability, we aim to enhance the lateral resolution and upscale the images, so that high temporal resolution (scanning speed), high lateral resolution, and deep penetration can be achieved for the PAM system. Considering the huge information gap between the input image and the target image, a dedicated dual-branch network is proposed, which includes a high-resolution branch and a high-speed branch to fully extract embedded information from the training data. Moreover, to effectively fuse the high-level semantic information and low-level spatial details from these two branches, a novel self-attention and graph-induced fusion module is designed. This module significantly suppresses blurring to enhance imaging resolution and amplifies the true signal to increase imaging contrast, as demonstrated by comparisons with existing algorithms. As a result, the imaging speed is increased 16× and the lateral resolution is improved 5×, while the deep-penetration merit of the AR-PAM modality is preserved.
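As a rough illustration of how two feature branches can be fused, the PyTorch sketch below lets features from a high-resolution branch attend to features from a high-speed branch with standard cross-attention. It is a simplified stand-in for the paper's self-attention and graph-induced fusion module; the channel width, head count, and 1×1 projection are assumptions.

```python
# Minimal sketch of fusing a high-resolution branch with a high-speed branch via
# cross-attention (a simplified stand-in for the paper's fusion module).
import torch
import torch.nn as nn

class DualBranchFusion(nn.Module):
    def __init__(self, channels=64, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.proj = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, feat_hr, feat_hs):
        # feat_hr, feat_hs: (B, C, H, W) features from the two branches
        b, c, h, w = feat_hr.shape
        q = feat_hr.flatten(2).transpose(1, 2)      # (B, HW, C) queries from HR branch
        kv = feat_hs.flatten(2).transpose(1, 2)     # keys/values from high-speed branch
        fused, _ = self.attn(q, kv, kv)             # HR features attend to the other branch
        fused = fused.transpose(1, 2).reshape(b, c, h, w)
        return self.proj(torch.cat([feat_hr, fused], dim=1))

# toy usage on random feature maps
x1, x2 = torch.randn(1, 64, 32, 32), torch.randn(1, 64, 32, 32)
print(DualBranchFusion()(x1, x2).shape)   # torch.Size([1, 64, 32, 32])
```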
Citations: 0
DANG: Data Augmentation Based on NIR-II Guided Diffusion Model for Fluorescence Molecular Tomography
IF 4.8 | Region 2, Computer Science | Q2, Engineering, Electrical & Electronic | Pub Date: 2025-12-11 | DOI: 10.1109/TCI.2025.3643313
Qiushi Huang;Chunzhao Li;Anqi Xiao;Jie Tian;Zhenhua Hu
Fluorescence molecular tomography (FMT), particularly within the second near-infrared window (NIR-II, 1000-1700 nm), is a sophisticated imaging technique for numerous medical applications, enabling reconstruction of the three-dimensional distribution of internal tumors from surface fluorescence signals. Recent studies have demonstrated the effectiveness of deep learning methods in FMT reconstruction tasks; however, their performance heavily relies on large-scale, diverse labeled datasets. Existing research primarily focuses on datasets with static tumor characteristics, including fixed tumor numbers, locations, and sizes, which show insufficient pattern diversity, limiting the neural networks' generalization ability in complex real-world scenarios. To address this limitation, we draw inspiration from the similarity between Monte Carlo photon simulation and the sampling process of diffusion models to propose a diffusion-based data augmentation strategy. Further, we introduce a novel NIR-II-specific guidance mechanism to enhance sample fidelity and diversity by incorporating spectral optical properties. Quantitative analysis validated that high-quality NIR-II fluorescence signal samples are synthesized, where the proposed NIR-II guidance achieved a 56.7% reduction in Fréchet Inception Distance and a 21.5% improvement in Inception Score, covering a broad spectrum of patterns. Since the synthetic samples are unlabeled, they are integrated with the original dataset to train FMT neural networks using semi-supervised learning. By combining the pattern-diversifying strengths of diffusion models with semi-supervised learning, the proposed strategy maximizes the utility of limited datasets. Both simulation and in vivo experiments confirmed that data augmentation significantly enhances the network's reconstruction performance in precisely localizing tumor sources and reconstructing complex morphologies.
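To make the guidance idea concrete, the sketch below shows one generic guided reverse-diffusion step in which an extra gradient term nudges the DDPM posterior mean. The `eps_model` and `guidance_grad` callables are hypothetical placeholders for a trained noise predictor and a differentiable NIR-II spectral-property score; neither the noise schedule nor the guidance functional is taken from the paper.

```python
# Illustrative sketch of one guided reverse-diffusion step: a generic DDPM update
# whose mean is shifted by a guidance gradient. All models below are placeholders.
import numpy as np

def guided_reverse_step(x_t, t, alphas_cumprod, eps_model, guidance_grad,
                        scale=1.0, rng=np.random.default_rng(0)):
    a_t, a_prev = alphas_cumprod[t], alphas_cumprod[t - 1]
    beta_t = 1.0 - a_t / a_prev                               # per-step noise level
    eps = eps_model(x_t, t)                                   # predicted noise
    x0_hat = (x_t - np.sqrt(1.0 - a_t) * eps) / np.sqrt(a_t)  # current clean estimate
    mean = (np.sqrt(a_prev) * beta_t / (1.0 - a_t)) * x0_hat \
         + (np.sqrt(1.0 - beta_t) * (1.0 - a_prev) / (1.0 - a_t)) * x_t
    mean = mean + scale * beta_t * guidance_grad(x_t, t)      # guidance term steers sampling
    noise = rng.standard_normal(x_t.shape) if t > 1 else 0.0
    return mean + np.sqrt(beta_t) * noise

# dummy usage with placeholder models
T = 10
alphas_cumprod = np.cumprod(np.linspace(0.99, 0.9, T + 1))
x = np.random.default_rng(1).standard_normal((16, 16))
x = guided_reverse_step(x, T, alphas_cumprod,
                        eps_model=lambda x, t: np.zeros_like(x),
                        guidance_grad=lambda x, t: -x)        # toy guidance toward zero
print(x.shape)
```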
Citations: 0
M3Depth: Wavelet-Enhanced Depth Estimation on Mars via Mutual Boosting of Dual-Modal Data
IF 4.8 | Region 2, Computer Science | Q2, Engineering, Electrical & Electronic | Pub Date: 2025-12-10 | DOI: 10.1109/TCI.2025.3642761
Junjie Li;Jiawei Wang;Miyu Li;Yu Liu;Yumei Wang;Haitao Xu
Depth estimation has great potential for obstacle avoidance and navigation in future Mars exploration missions. Compared to traditional stereo matching, learning-based stereo depth estimation provides a data-driven approach to infer dense and precise depth maps from stereo image pairs. However, these methods often suffer performance degradation in environments with sparse textures and weak geometric constraints, such as the unstructured terrain of Mars. To address these challenges, we propose M3Depth, a depth estimation model tailored for Mars rovers. Considering the sparse and smooth texture of Martian terrain, which is dominated by low-frequency features, our model incorporates a convolutional kernel based on the wavelet transform that effectively captures the low-frequency response and expands the receptive field. Additionally, we introduce a consistency loss that explicitly models the complementary relationship between the depth map and the surface normal map, utilizing the surface normal as a geometric constraint to enhance the accuracy of depth estimation. Furthermore, a pixel-wise refinement module with a mutual boosting mechanism is designed to iteratively refine both depth and surface normal predictions. Experimental results on synthetic Mars datasets with depth annotations show that M3Depth achieves a 16% improvement in depth estimation accuracy compared to other state-of-the-art methods. The model also demonstrates strong applicability in real-world Martian scenarios, offering a promising solution for future Mars exploration missions.
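A minimal version of the depth/surface-normal consistency idea can be written as follows: normals derived from the predicted depth map by finite differences are pushed toward the predicted normal map via a cosine penalty. The uncalibrated finite-difference normal model and the unweighted loss are simplifying assumptions rather than the paper's exact formulation.

```python
# Sketch of a depth / surface-normal consistency loss: normals computed from the
# predicted depth by finite differences should agree with the predicted normal map.
import torch
import torch.nn.functional as F

def normals_from_depth(depth):
    # depth: (B, 1, H, W); gradients approximate the local surface slope
    dzdx = depth[:, :, :, 1:] - depth[:, :, :, :-1]
    dzdy = depth[:, :, 1:, :] - depth[:, :, :-1, :]
    dzdx = F.pad(dzdx, (0, 1, 0, 0))                 # pad back to (H, W)
    dzdy = F.pad(dzdy, (0, 0, 0, 1))
    n = torch.cat([-dzdx, -dzdy, torch.ones_like(depth)], dim=1)
    return F.normalize(n, dim=1)                     # unit normals, (B, 3, H, W)

def consistency_loss(pred_depth, pred_normals):
    return 1.0 - F.cosine_similarity(normals_from_depth(pred_depth),
                                     F.normalize(pred_normals, dim=1), dim=1).mean()

d = torch.rand(2, 1, 64, 64, requires_grad=True)
n = torch.rand(2, 3, 64, 64)
print(consistency_loss(d, n))    # scalar loss, differentiable w.r.t. both predictions
```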
Citations: 0
Self-Supervised Learning-Based Reconstruction of High-Resolution 4D Light Fields
IF 4.8 | Region 2, Computer Science | Q2, Engineering, Electrical & Electronic | Pub Date: 2025-12-10 | DOI: 10.1109/TCI.2025.3642236
Jianxin Lei;Dongze Wu;Chengcai Xu;Hongcheng Gu;Guangquan Zhou;Junhui Hou;Ping Zhou
Hand-held light field (LF) cameras often exhibit low spatial resolution due to the inherent trade-off between spatial and angular dimensions. Existing supervised learning-based LF spatial super-resolution (SR) methods, which rely on pre-defined image degradation models, struggle to overcome the domain gap between the training phase, where LFs at their native resolution are used as ground truth, and the inference phase, which aims to reconstruct higher-resolution LFs, especially when applied to real-world data. To address this challenge, this paper introduces a novel self-supervised learning-based method for LF spatial SR, which can produce LF images of higher spatial resolution than the originally captured ones without pre-defined image degradation models. The self-supervised method incorporates a hybrid LF imaging prototype, a real-world hybrid LF dataset, and a self-supervised LF spatial SR framework. The prototype forms reference image pairs between low-resolution central-view sub-aperture images and high-resolution (HR) images. The self-supervised framework consists of a well-designed LF spatial SR network with hybrid input, a central-view synthesis network with an HR-aware loss that enables side-view sub-aperture images to learn high-frequency information from the single HR central-view reference image, and a backward degradation network with an epipolar-plane image gradient loss to preserve LF parallax structures. Extensive experiments on both simulated and real-world datasets demonstrate the significant superiority of our approach over state-of-the-art ones in reconstructing higher spatial resolution LF images without pre-defined degradation.
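The sketch below illustrates one ingredient, an epipolar-plane-image (EPI) gradient penalty that compares angular and spatial gradients of a reconstructed light field against a reference to preserve parallax structure. The (B, U, V, H, W) tensor layout, the restriction to horizontal-EPI gradients, and the L1 penalty are assumptions made for brevity, not the paper's exact loss.

```python
# Sketch of an epipolar-plane-image (EPI) gradient loss: penalize differences in
# angular (u) and spatial (x) gradients, which together encode the EPI slopes.
import torch

def epi_gradient_loss(lf_pred, lf_ref):
    # light fields with layout (B, U, V, H, W); horizontal EPIs are (u, x) slices
    def grads(lf):
        du = lf[:, 1:, :, :, :] - lf[:, :-1, :, :, :]   # gradient along the angular u axis
        dx = lf[:, :, :, :, 1:] - lf[:, :, :, :, :-1]   # gradient along the spatial x axis
        return du, dx
    du_p, dx_p = grads(lf_pred)
    du_r, dx_r = grads(lf_ref)
    return (du_p - du_r).abs().mean() + (dx_p - dx_r).abs().mean()

lf_a, lf_b = torch.rand(1, 5, 5, 32, 32), torch.rand(1, 5, 5, 32, 32)
print(epi_gradient_loss(lf_a, lf_b))
```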
Citations: 0
List of Reviewers
IF 4.8 | Region 2, Computer Science | Q2, Engineering, Electrical & Electronic | Pub Date: 2025-12-10 | DOI: 10.1109/TCI.2025.3641749
{"title":"List of Reviewers","authors":"","doi":"10.1109/TCI.2025.3641749","DOIUrl":"https://doi.org/10.1109/TCI.2025.3641749","url":null,"abstract":"","PeriodicalId":56022,"journal":{"name":"IEEE Transactions on Computational Imaging","volume":"11 ","pages":"1682-1685"},"PeriodicalIF":4.8,"publicationDate":"2025-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11296854","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145729404","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Distribution-Adaptive Hierarchical Quantization Enhanced Binary Networks for Spectral Compressive Imaging
IF 4.8 | Region 2, Computer Science | Q2, Engineering, Electrical & Electronic | Pub Date: 2025-12-08 | DOI: 10.1109/TCI.2025.3641031
Mengying Jin;Liang Xiao;Zhihui Wei
Hyperspectral image processing faces significant challenges in storage and computation. Snapshot Compressive Imaging (SCI) effectively encodes three-dimensional data into two-dimensional measurements, facilitating efficient data acquisition. However, reconstructing high-quality data from these compressed measurements remains a formidable task. Binary Neural Networks (BNNs) have gained attention for their ability to reduce storage requirements and computational costs, yet they often struggle with accuracy loss, fixed quantization limits, and limited use of domain knowledge. To overcome these limitations, distribution-adaptive hierarchical quantization-enhanced binary networks are proposed to achieve efficient SCI reconstruction. First, an adaptive distribution strategy and a binary weight evaluation branch are proposed to improve representation accuracy. Second, a hierarchical quantization scheme is presented to enhance multiscale feature extraction while maintaining efficiency. Third, domain-specific priors and a novel sparsity constraint are incorporated to capture fine details and improve training stability. The experimental results demonstrate the superiority of our approach, achieving an increase of 1.98 dB in PSNR and an improvement of 0.055 in SSIM compared to state-of-the-art BNNs.
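For context, the standard binary-network primitive that such methods build on is sketched below: weights are binarized with sign() in the forward pass, gradients flow through a straight-through estimator, and a per-filter scale preserves magnitude. The paper's distribution-adaptive and hierarchical quantization components are not reproduced here; this is only the baseline building block.

```python
# Sketch of the basic BNN building block: sign() binarization with a
# straight-through estimator (STE) and a per-filter scaling factor.
import torch
import torch.nn as nn

class BinarizeSTE(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return torch.sign(x)

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        return grad_out * (x.abs() <= 1).float()     # pass gradients only where |x| <= 1

class BinaryConv2d(nn.Conv2d):
    def forward(self, x):
        w_bin = BinarizeSTE.apply(self.weight)
        scale = self.weight.abs().mean(dim=(1, 2, 3), keepdim=True)  # per-filter scale
        return nn.functional.conv2d(x, w_bin * scale, self.bias,
                                    self.stride, self.padding, self.dilation, self.groups)

layer = BinaryConv2d(8, 16, 3, padding=1)
print(layer(torch.randn(1, 8, 32, 32)).shape)    # torch.Size([1, 16, 32, 32])
```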
Citations: 0
Moving Targets Imaging by SVD of a Space-Velocity MIMO Radar Data Driven Matrix
IF 4.8 | Region 2, Computer Science | Q2, Engineering, Electrical & Electronic | Pub Date: 2025-12-05 | DOI: 10.1109/TCI.2025.3640864
Liliana Borcea;Josselin Garnier
We introduce a method for Multiple Input Multiple Output (MIMO) radar imaging of moving targets in a strongly reflecting, complex stationary scenery (clutter). The radar system has fixed nearby antennas that play the dual role of sources and receivers. It gathers data either by emitting probing pulses from one antenna at a time, or by sending from all the antennas non-coherent, possibly orthogonal, waveforms. We show how to obtain from the measurements an imaging function that depends on search position and velocity and is approximately separable in these variables, for a single moving target. For multiple moving targets in clutter, the imaging function is a sum of separable functions. By sampling this imaging function on a position-velocity grid we obtain an imaging matrix whose Singular Value Decomposition (SVD) allows the separation of the clutter and the targets moving at different velocities. The decomposition also leads directly to estimates of the locations and motion of the targets. The imaging method is designed to work in strong clutter, with unknown and possibly heterogeneous statistics. It does not require prior estimation of the covariance matrix of the clutter response or of its rank. We give an analysis of the imaging method and illustrate how it works with numerical simulations.
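The rank-revealing idea can be illustrated with a few lines of NumPy: when each target contributes an (approximately) separable term, the sampled position-velocity matrix is a sum of outer products, so its leading singular vectors localize the targets. The Gaussian position/velocity profiles and the additive noise used as a clutter stand-in below are synthetic assumptions, not the paper's MIMO radar data model.

```python
# Toy illustration: a position-velocity matrix built from separable target terms
# is (nearly) low rank, and its SVD recovers each target's position and velocity.
import numpy as np

rng = np.random.default_rng(0)
positions = np.linspace(-50.0, 50.0, 200)       # search positions (m)
velocities = np.linspace(-10.0, 10.0, 100)      # search velocities (m/s)

def target_matrix(x0, v0, width_x=2.0, width_v=0.5):
    px = np.exp(-0.5 * ((positions - x0) / width_x) ** 2)   # position profile
    pv = np.exp(-0.5 * ((velocities - v0) / width_v) ** 2)  # velocity profile
    return np.outer(px, pv)                                 # separable contribution

M = 1.0 * target_matrix(-20.0, 3.0) + 0.8 * target_matrix(15.0, -6.0)
M += 0.05 * rng.standard_normal(M.shape)                    # noise as a clutter stand-in

U, s, Vt = np.linalg.svd(M, full_matrices=False)
for k in range(2):                                          # two dominant targets
    est_x = positions[np.argmax(np.abs(U[:, k]))]
    est_v = velocities[np.argmax(np.abs(Vt[k]))]
    print(f"target {k}: position ~ {est_x:.1f} m, velocity ~ {est_v:.1f} m/s")
```

With real data the clutter is far from white noise, but the same decomposition can separate it from targets moving at different velocities as long as the corresponding singular components remain distinct.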
Citations: 0
Convergence-Guaranteed Spectral CT Reconstruction via Internal and External Prior Mining
IF 4.8 | Region 2, Computer Science | Q2, Engineering, Electrical & Electronic | Pub Date: 2025-11-24 | DOI: 10.1109/TCI.2025.3636743
Chunyan Liu;Dianlin Hu;Jiangjun Peng;Hong Wang;Qianyu Shu;Jianjun Wang
Spectral computed tomography (CT) is an imaging technology that exploits the energy-dependent absorption of X-rays to obtain the attenuation characteristics of objects in different energy ranges. However, the limited number of photons detected by spectral CT under a specific X-ray spectrum leads to pronounced noise in the projection data. Making full use of the various properties of the original data is an effective way to recover a clean image from a small amount of noisy projection data. This paper proposes a spectral CT reconstruction method based on denoising representative coefficient images under a low-rank decomposition framework. The method integrates model-driven internal low-rank and nonlocal priors with data-driven external deep priors, aiming to fully exploit the inherent spectral correlation, nonlocal self-similarity, and deep spatial features of spectral CT images. Specifically, we use low-rank decomposition to characterize the global low-rankness of spectral CT images under a plug-and-play framework, and jointly utilize nonlocal low-rankness and smoothness as well as deep image priors to denoise the representative coefficient images. The proposed method therefore faithfully represents the true underlying image content by combining internal and external, nonlocal and local priors. Meanwhile, we design an effective proximal alternating minimization (PAM) algorithm to solve the proposed reconstruction model and establish a theoretical convergence guarantee for it. Experimental results show that, compared with existing popular algorithms, the proposed method significantly reduces running time while improving spectral CT image quality.
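The low-rank decomposition step can be pictured with the toy sketch below: the spectral channels are stacked as a matrix, a truncated SVD yields a few representative coefficient images, those images are denoised, and the channels are recomposed. The Gaussian filter stands in for the paper's nonlocal and deep priors, and the chosen rank is an assumption.

```python
# Toy sketch of low-rank decomposition + denoising of representative coefficient
# images for correlated spectral channels (the denoiser and rank are stand-ins).
import numpy as np
from scipy.ndimage import gaussian_filter

def lowrank_denoise(spectral_stack, rank=3, strength=1.0):
    # spectral_stack: (n_channels, H, W) noisy channel images
    c, h, w = spectral_stack.shape
    X = spectral_stack.reshape(c, h * w)                  # channels x pixels
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    coeff_imgs = (np.diag(s[:rank]) @ Vt[:rank]).reshape(rank, h, w)
    coeff_imgs = np.stack([gaussian_filter(ci, strength) for ci in coeff_imgs])
    X_hat = U[:, :rank] @ coeff_imgs.reshape(rank, h * w)  # recompose the channels
    return X_hat.reshape(c, h, w)

rng = np.random.default_rng(0)
base = gaussian_filter(rng.standard_normal((64, 64)), 5)
clean = np.stack([(k + 1) * base for k in range(8)])      # strongly correlated channels
noisy = clean + 0.1 * rng.standard_normal(clean.shape)
print("noisy error:", np.linalg.norm(noisy - clean),
      "| denoised error:", np.linalg.norm(lowrank_denoise(noisy, rank=1) - clean))
```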
Citations: 0
Fast Correction for Geometric Distortion in PFA Wavefront Curvature Compensation
IF 4.8 | Region 2, Computer Science | Q2, Engineering, Electrical & Electronic | Pub Date: 2025-11-24 | DOI: 10.1109/TCI.2025.3636746
Yining Zhang;Jixia Fan;Yanqi Liu;Xinhua Mao
In large-scene synthetic aperture radar (SAR) imaging, the selection of an appropriate algorithm is crucial, as it directly impacts processing efficiency and image fidelity. The Polar Format Algorithm (PFA) is widely used for its high-speed image formation capability. However, its reliance on the planar wavefront approximation inevitably introduces phase errors. A primary challenge arising from the linear components of these errors is geometric distortion, which manifests as space-variant shifts from the actual positions. The traditional inverse-warping correction method based on two-dimensional (2-D) interpolation suffers from high computational cost. To address this limitation, this paper proposes a separable 2-D interpolation framework that decouples the correction process into two one-dimensional (1-D) interpolations along the azimuth and range axes. Through inverse solutions of the geometric distortion functions, it is demonstrated that applying this framework to geometric distortion correction effectively reduces complexity while preserving image precision. Simulations and real-data comparisons validate that the proposed fast geometric distortion correction method significantly improves correction speed, thus boosting overall computational efficiency.
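The separable scheme itself is easy to picture: a two-pass correction applies a 1-D interpolation along azimuth and then a 1-D interpolation along range, as in the NumPy sketch below. The smooth per-axis shifts used as a toy distortion are an assumption standing in for the actual inverse distortion functions derived in the paper.

```python
# Sketch of separable 2-D correction: two 1-D interpolation passes (azimuth, then
# range) in place of a single full 2-D interpolation. The distortion is a toy model.
import numpy as np

rng = np.random.default_rng(0)
n_az, n_rg = 256, 256
img = rng.standard_normal((n_az, n_rg)).cumsum(0).cumsum(1)   # smooth test image
az, rg = np.arange(n_az, dtype=float), np.arange(n_rg, dtype=float)

# per-axis inverse distortion maps (where each corrected sample comes from)
az_src = az + 3.0 * np.sin(2 * np.pi * az / n_az)     # azimuth-only shift
rg_src = rg + 2.0 * np.cos(2 * np.pi * rg / n_rg)     # range-only shift

# pass 1: 1-D interpolation along azimuth, column by column
tmp = np.empty_like(img)
for j in range(n_rg):
    tmp[:, j] = np.interp(az_src, az, img[:, j])
# pass 2: 1-D interpolation along range, row by row
out = np.empty_like(img)
for i in range(n_az):
    out[i, :] = np.interp(rg_src, rg, tmp[i, :])

print(out.shape)   # corrected image built from two cheap 1-D interpolation passes
```

Each output sample then costs two short 1-D interpolation kernels instead of one dense 2-D kernel, which is where the complexity reduction of the separable framework comes from.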
Citations: 0