
Latest publications from IEEE transactions on image processing : a publication of the IEEE Signal Processing Society

Reviewer Summary for Transactions on Image Processing
IF 13.7 Pub Date : 2026-01-12 DOI: 10.1109/TIP.2025.3650664
{"title":"Reviewer Summary for Transactions on Image Processing","authors":"","doi":"10.1109/TIP.2025.3650664","DOIUrl":"10.1109/TIP.2025.3650664","url":null,"abstract":"","PeriodicalId":94032,"journal":{"name":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","volume":"34 ","pages":"8684-8708"},"PeriodicalIF":13.7,"publicationDate":"2026-01-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11346802","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145955219","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Reflectance Prediction-based Knowledge Distillation for Robust 3D Object Detection in Compressed Point Clouds.
IF 13.7 Pub Date : 2026-01-02 DOI: 10.1109/TIP.2025.3648203
Hao Jing, Anhong Wang, Yifan Zhang, Donghan Bu, Junhui Hou

In intelligent transportation systems, low-bitrate transmission via lossy point cloud compression is vital for facilitating real-time collaborative perception among connected agents, such as vehicles and infrastructure, under restricted bandwidth. In existing compression transmission systems, the sender lossily compresses point coordinates and reflectance to generate a transmission code stream, which incurs a transmission burden from reflectance encoding and suffers limited detection robustness due to information loss. To address these issues, this paper proposes a 3D object detection framework with reflectance prediction-based knowledge distillation (RPKD). We compress point coordinates while discarding reflectance during low-bitrate transmission, and feed the decoded non-reflectance compressed point clouds into a student detector. The discarded reflectance is then reconstructed by a geometry-based reflectance prediction (RP) module within the student detector for precise detection. A teacher detector with the same structure as the student detector is designed for performing reflectance knowledge distillation (RKD) and detection knowledge distillation (DKD) from raw to compressed point clouds. Our cross-source distillation training strategy (CDTS) equips the student detector with robustness to low-quality compressed data while preserving the accuracy benefits of raw data through transferred distillation knowledge. Experimental results on the KITTI and DAIR-V2X-V datasets demonstrate that our method can boost detection accuracy for compressed point clouds across multiple code rates. We will release the code publicly at https://github.com/HaoJing-SX/RPKD.
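As a rough illustration of the two distillation terms described above (not the authors' released RPKD code), the sketch below pairs a small reflectance-prediction head with an RKD loss on predicted reflectance and a DKD loss on softened detection logits; the module names, layer sizes, and temperature are assumptions made for this example.

```python
# Illustrative sketch only, not the RPKD implementation; shapes and hyperparameters are assumed.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ReflectancePredictionHead(nn.Module):
    """Predicts per-point reflectance from geometry-only point features (hypothetical head)."""
    def __init__(self, feat_dim=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim, 64), nn.ReLU(),
            nn.Linear(64, 1), nn.Sigmoid())          # reflectance normalized to [0, 1]

    def forward(self, point_feats):                  # (N, feat_dim)
        return self.mlp(point_feats)                 # (N, 1)

def distillation_losses(stu_refl, tea_refl, stu_logits, tea_logits, temp=2.0):
    # RKD: the student's predicted reflectance should match the teacher's raw-data reflectance.
    rkd = F.smooth_l1_loss(stu_refl, tea_refl)
    # DKD: align softened detection logits between student (compressed input) and teacher (raw input).
    dkd = F.kl_div(F.log_softmax(stu_logits / temp, dim=-1),
                   F.softmax(tea_logits / temp, dim=-1),
                   reduction="batchmean") * temp ** 2
    return rkd, dkd
```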

Citations: 0
Token Calibration for Transformer-Based Domain Adaptation
IF 13.7 Pub Date : 2026-01-01 DOI: 10.1109/TIP.2025.3647367
Xiaowei Fu;Shiyu Ye;Chenxu Zhang;Fuxiang Huang;Xin Xu;Lei Zhang
Unsupervised Domain Adaptation (UDA) aims to transfer knowledge from a labeled source domain to an unlabeled target domain by learning domain-invariant representations. Motivated by the recent success of Vision Transformers (ViTs), several UDA approaches have adopted ViT architectures to exploit fine-grained patch-level representations, which are unified as Transformer-based Domain Adaptation (TransDA), independent of CNN-based approaches. However, we have a key observation in TransDA: due to inherent domain shifts, patches (tokens) from different semantic categories across domains may exhibit abnormally high similarities, which can mislead the self-attention mechanism and degrade adaptation performance. To solve that, we propose a novel Patch-Adaptation Transformer (PATrans), which first identifies similarity-anomalous patches and then adaptively suppresses their negative impact on domain alignment, i.e., token calibration. Specifically, we introduce a Patch-Adaptation Attention (PAA) mechanism to replace the standard self-attention mechanism, which consists of a weight-shared triple-branch mixed attention mechanism and a patch-level domain discriminator. The mixed attention integrates self-attention and cross-attention to enhance intra-domain feature modeling and inter-domain similarity estimation. Meanwhile, the patch-level domain discriminator quantifies the anomaly probability of each patch, enabling dynamic reweighting to mitigate the impact of unreliable patch correspondences. Furthermore, we introduce a contrastive attention regularization strategy, which leverages category-level information in a contrastive learning framework to promote class-consistent attention distributions. Extensive experiments on four benchmark datasets demonstrate that PATrans attains significant improvements over existing state-of-the-art UDA methods (e.g., 89.2% on VisDA-2017). Code is available at: https://github.com/YSY145/PATrans
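To make the token-calibration idea concrete, here is a minimal sketch (dimensions, module names, and the reweighting rule are assumptions, not the released PATrans code): a patch-level domain discriminator scores each token, and tokens judged unreliable are down-weighted after a cross-attention branch.

```python
# Minimal sketch of patch-level token calibration; not the PATrans implementation.
import torch
import torch.nn as nn

class PatchCalibratedAttention(nn.Module):
    def __init__(self, dim=384, heads=6):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        # Patch-level domain discriminator: probability that a token correspondence is unreliable.
        self.discriminator = nn.Sequential(nn.Linear(dim, 128), nn.ReLU(),
                                           nn.Linear(128, 1), nn.Sigmoid())

    def forward(self, src_tokens, tgt_tokens):                   # both (B, N, dim)
        anomaly_prob = self.discriminator(tgt_tokens)            # (B, N, 1)
        weight = 1.0 - anomaly_prob                              # reliable tokens keep weight close to 1
        out, _ = self.attn(tgt_tokens, src_tokens, src_tokens)   # cross-attention: query=target, key/value=source
        return weight * out + (1.0 - weight) * tgt_tokens        # suppress unreliable cross-domain aggregation
```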
{"title":"Token Calibration for Transformer-Based Domain Adaptation","authors":"Xiaowei Fu;Shiyu Ye;Chenxu Zhang;Fuxiang Huang;Xin Xu;Lei Zhang","doi":"10.1109/TIP.2025.3647367","DOIUrl":"10.1109/TIP.2025.3647367","url":null,"abstract":"Unsupervised Domain Adaptation (UDA) aims to transfer knowledge from a labeled source domain to an unlabeled target domain by learning domain-invariant representations. Motivated by the recent success of Vision Transformers (ViTs), several UDA approaches have adopted ViT architectures to exploit fine-grained patch-level representations, which are unified as <italic>Trans</i>former-based <inline-formula> <tex-math>$D$ </tex-math></inline-formula>omain <inline-formula> <tex-math>$A$ </tex-math></inline-formula>daptation (TransDA) independent of CNN-based. However, we have a key observation in TransDA: due to inherent domain shifts, patches (tokens) from different semantic categories across domains may exhibit abnormally high similarities, which can mislead the self-attention mechanism and degrade adaptation performance. To solve that, we propose a novel <inline-formula> <tex-math>$P$ </tex-math></inline-formula>atch-<inline-formula> <tex-math>$A$ </tex-math></inline-formula>daptation <italic>Trans</i>former (PATrans), which first <italic>identifies</i> similarity-anomalous patches and then adaptively <italic>suppresses</i> their negative impact to domain alignment, i.e. <italic>token calibration</i>. Specifically, we introduce a <inline-formula> <tex-math>$P$ </tex-math></inline-formula>atch-<inline-formula> <tex-math>$A$ </tex-math></inline-formula>daptation <inline-formula> <tex-math>$A$ </tex-math></inline-formula>ttention (<italic>PAA</i>) mechanism to replace the standard self-attention mechanism, which consists of a weight-shared triple-branch mixed attention mechanism and a patch-level domain discriminator. The mixed attention integrates self-attention and cross-attention to enhance intra-domain feature modeling and inter-domain similarity estimation. Meanwhile, the patch-level domain discriminator quantifies the anomaly probability of each patch, enabling dynamic reweighting to mitigate the impact of unreliable patch correspondences. Furthermore, we introduce a contrastive attention regularization strategy, which leverages category-level information in a contrastive learning framework to promote class-consistent attention distributions. Extensive experiments on four benchmark datasets demonstrate that PATrans attains significant improvements over existing state-of-the-art UDA methods (e.g., 89.2% on the VisDA-2017). Code is available at: <uri>https://github.com/YSY145/PATrans</uri>","PeriodicalId":94032,"journal":{"name":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","volume":"35 ","pages":"57-68"},"PeriodicalIF":13.7,"publicationDate":"2026-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145890754","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Coupled Diffusion Posterior Sampling for Unsupervised Hyperspectral and Multispectral Images Fusion
IF 13.7 Pub Date : 2026-01-01 DOI: 10.1109/TIP.2025.3647207
Yang Xu;Jian Zhu;Danfeng Hong;Zhihui Wei;Zebin Wu
The fusion of hyperspectral images (HSIs) and multispectral images (MSIs) is a hot topic in the remote sensing community. A high-resolution HSI (HR-HSI) can be obtained by fusing a low-resolution HSI (LR-HSI) and a high-resolution MSI (HR-MSI) or RGB image. However, most deep learning-based methods require a large number of HR-HSIs for supervised training, which are very rare in practice. In this paper, we propose a coupled diffusion posterior sampling (CDPS) method for HSI and MSI fusion in which HR-HSIs are no longer required in the training process. Because the LR-HSI contains the spectral information and the HR-MSI contains the spatial information of the captured scene, we design an unsupervised strategy that learns the required diffusion priors directly and solely from the input test image pair (the LR-HSI and HR-MSI themselves). Then, a coupled diffusion posterior sampling method is proposed to introduce the two priors into the diffusion posterior sampling, which leverages the observed LR-HSI and HR-MSI as fidelity terms. Experimental results demonstrate that the proposed method outperforms other state-of-the-art unsupervised HSI and MSI fusion methods. Additionally, the method uses smaller networks that are simpler and easier to train, without requiring additional data.
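As a hedged sketch of how the two observations can act as fidelity terms during posterior sampling (the degradation operators, step sizes, and function names are placeholders, not the authors' code), one data-consistency correction might look like this:

```python
# Conceptual data-fidelity step for coupled diffusion posterior sampling; operators are placeholders.
import torch

def cdps_step(x0_hat, lr_hsi, hr_msi, blur_downsample, spectral_response,
              eta_spatial=1.0, eta_spectral=1.0):
    """Apply one correction to the denoised estimate x0_hat (B, C, H, W) using both observations."""
    x = x0_hat.detach().requires_grad_(True)
    # Spatial fidelity: degrading the estimate should reproduce the observed LR-HSI.
    loss_spatial = torch.sum((blur_downsample(x) - lr_hsi) ** 2)
    # Spectral fidelity: applying the spectral response should reproduce the observed HR-MSI.
    loss_spectral = torch.sum((spectral_response(x) - hr_msi) ** 2)
    grad = torch.autograd.grad(eta_spatial * loss_spatial + eta_spectral * loss_spectral, x)[0]
    return x0_hat - grad   # pull the sample toward consistency with both observations
```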
{"title":"Coupled Diffusion Posterior Sampling for Unsupervised Hyperspectral and Multispectral Images Fusion","authors":"Yang Xu;Jian Zhu;Danfeng Hong;Zhihui Wei;Zebin Wu","doi":"10.1109/TIP.2025.3647207","DOIUrl":"10.1109/TIP.2025.3647207","url":null,"abstract":"Hyperspectral images (HSIs) and multispectral images (MSIs) fusion is a hot topic in the remote sensing society. A high-resolution HSI (HR-HSI) can be obtained by fusing a low-resolution HSI (LR-HSI) and a high-resolution MSI (HR-MSI) or RGB image. However, most deep learning-based methods require a large amount of HR-HSIs for supervised training, which is very rare in practice. In this paper, we propose a coupled diffusion posterior sampling (CDPS) method for HSI and MSI fusion in which the HR-HSIs are no longer required in the training process. Because the LR-HSI contains the spectral information and HR-MSI contains the spatial information of the captured scene, we design an unsupervised strategy that learns the required diffusion priors directly and solely from the input test image pair (the LR-HSI and HR-MSI themselves). Then, a coupled diffusion posterior sampling method is proposed to introduce the two priors in the diffusion posterior sampling which leverages the observed LR-HSI and HR-MSI as fidelity terms. Experimental results demonstrate that the proposed method outperforms other state-of-the-art unsupervised HSI and MSI fusion methods. Additionally, this method utilizes smaller networks that are simpler and easier to train without other data.","PeriodicalId":94032,"journal":{"name":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","volume":"35 ","pages":"69-84"},"PeriodicalIF":13.7,"publicationDate":"2026-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145890776","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Implicit Neural Compression of Point Clouds.
IF 13.7 Pub Date : 2026-01-01 DOI: 10.1109/TIP.2025.3648141
Hongning Ruan, Yulin Shao, Qianqian Yang, Liang Zhao, Zhaoyang Zhang, Dusit Niyato

Point clouds have gained prominence across numerous applications due to their ability to accurately represent 3D objects and scenes. However, efficiently compressing unstructured, high-precision point cloud data remains a significant challenge. In this paper, we propose NeRC3, a novel point cloud compression framework that leverages implicit neural representations (INRs) to encode both geometry and attributes of dense point clouds. Our approach employs two coordinate-based neural networks: one maps spatial coordinates to voxel occupancy, while the other maps occupied voxels to their attributes, thereby implicitly representing the geometry and attributes of a voxelized point cloud. The encoder quantizes and compresses network parameters alongside auxiliary information required for reconstruction, while the decoder reconstructs the original point cloud by inputting voxel coordinates into the neural networks. Furthermore, we extend our method to dynamic point cloud compression through techniques that reduce temporal redundancy, including a 4D spatio-temporal representation termed 4D-NeRC3. Experimental results validate the effectiveness of our approach: For static point clouds, NeRC3 outperforms the octree-based G-PCC standard and existing INR-based methods. For dynamic point clouds, 4D-NeRC3 achieves superior geometry compression performance compared to the latest G-PCC and V-PCC standards, while matching state-of-the-art learning-based methods. It also demonstrates competitive performance in joint geometry and attribute compression.
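The core idea of the two coordinate-based networks can be sketched as follows (a toy illustration with assumed widths and thresholds, not the NeRC3 release): one MLP maps voxel coordinates to occupancy, a second maps occupied voxels to attributes, and the decoder queries both.

```python
# Toy illustration of INR-based point cloud decoding; network widths and threshold are assumptions.
import torch
import torch.nn as nn

def coord_mlp(out_dim):
    return nn.Sequential(nn.Linear(3, 256), nn.ReLU(),
                         nn.Linear(256, 256), nn.ReLU(),
                         nn.Linear(256, out_dim))

occupancy_net = coord_mlp(1)   # (x, y, z) -> occupancy logit
attribute_net = coord_mlp(3)   # occupied (x, y, z) -> e.g. RGB attributes

def decode(voxel_coords, occ_threshold=0.0):
    """Decoder side: query candidate voxels, keep occupied ones, then query their attributes."""
    occ_logit = occupancy_net(voxel_coords).squeeze(-1)    # (N,)
    occupied = voxel_coords[occ_logit > occ_threshold]     # reconstructed geometry
    attrs = torch.sigmoid(attribute_net(occupied))         # reconstructed attributes in [0, 1]
    return occupied, attrs
```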

Citations: 0
Task-Driven Underwater Image Enhancement via Hierarchical Semantic Refinement
IF 13.7 Pub Date : 2026-01-01 DOI: 10.1109/TIP.2025.3647323
Meng Yu;Liquan Shen;Yihan Yu;Yu Zhang;Rui Le
Underwater image enhancement (UIE) is crucial for robust marine exploration, yet existing methods prioritize perceptual quality while overlooking irreversible semantic corruption that impairs downstream tasks. Unlike terrestrial images, underwater semantics exhibit layer-specific degradations: shallow features suffer from color shifts and edge erosion, while deep features face semantic ambiguity. These distortions entangle with semantic content across feature hierarchies, where direct enhancement amplifies interference in downstream tasks. Even if distortions are removed, the damaged semantic structures cannot be fully recovered, making it imperative to further enhance corrupted content. To address these challenges, we propose a task-driven UIE framework that redefines enhancement as machine-interpretable semantic recovery rather than mere distortion removal. First, we introduce a multi-scale underwater distortion-aware generator to perceive distortions across feature levels and provide a prior for distortion removal. Second, leveraging this prior and the absence of clean underwater references, we propose a stable self-supervised disentanglement strategy to explicitly separate distortions from corrupted content through CLIP-based semantic constraints and identity consistency. Finally, to compensate for the irreversible semantic loss, we design a task-aware hierarchical enhancement module that refines shallow details via spatial-frequency fusion and strengthens deep semantics through multi-scale context aggregation, aligning results with machine vision requirements. Extensive experiments on segmentation, detection, and saliency tasks demonstrate the superiority of our method in restoring machine-friendly semantics from degraded underwater images. Our code is available at https://github.com/gemyumeng/HSRUIE
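As one possible reading of the spatial-frequency fusion used to refine shallow details (a minimal sketch with an assumed module structure, not the released HSRUIE code), a block could combine a spatial convolution branch with a learnable filter applied in the Fourier domain:

```python
# Hypothetical spatial-frequency fusion block; channel count and gating scheme are assumptions.
import torch
import torch.nn as nn

class SpatialFrequencyFusion(nn.Module):
    def __init__(self, channels=64):
        super().__init__()
        self.spatial = nn.Conv2d(channels, channels, 3, padding=1)
        self.freq_gate = nn.Parameter(torch.ones(channels, 1, 1))   # learnable per-channel spectral gate
        self.fuse = nn.Conv2d(2 * channels, channels, 1)

    def forward(self, x):                                    # (B, C, H, W)
        spec = torch.fft.rfft2(x, norm="ortho")              # complex spectrum
        freq = torch.fft.irfft2(spec * self.freq_gate, s=x.shape[-2:], norm="ortho")
        return self.fuse(torch.cat([self.spatial(x), freq], dim=1))
```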
{"title":"Task-Driven Underwater Image Enhancement via Hierarchical Semantic Refinement","authors":"Meng Yu;Liquan Shen;Yihan Yu;Yu Zhang;Rui Le","doi":"10.1109/TIP.2025.3647323","DOIUrl":"10.1109/TIP.2025.3647323","url":null,"abstract":"Underwater image enhancement (UIE) is crucial for robust marine exploration, yet existing methods prioritize perceptual quality while overlooking irreversible semantic corruption that impairs downstream tasks. Unlike terrestrial images, underwater semantics exhibit layer-specific degradations: shallow features suffer from color shifts and edge erosion, while deep features face semantic ambiguity. These distortions entangle with semantic content across feature hierarchies, where direct enhancement amplifies interference in downstream tasks. Even if distortions are removed, the damaged semantic structures cannot be fully recovered, making it imperative to further enhance corrupted content. To address these challenges, we propose a task-driven UIE framework that redefines enhancement as machine-interpretable semantic recovery rather than mere distortion removal. First, we introduce a multi-scale underwater distortion-aware generator to perceive distortions across feature levels and provide a prior for distortion removal. Second, leveraging this prior and the absence of clean underwater references, we propose a stable self-supervised disentanglement strategy to explicitly separate distortions from corrupted content through CLIP-based semantic constraints and identity consistency. Finally, to compensate for the irreversible semantic loss, we design a task-aware hierarchical enhancement module that refines shallow details via spatial-frequency fusion and strengthens deep semantics through multi-scale context aggregation, aligning results with machine vision requirements. Extensive experiments on segmentation, detection, and saliency tasks demonstrate the superiority of our method in restoring machine-friendly semantics from degraded underwater images. Our code is available at <uri>https://github.com/gemyumeng/HSRUIE</uri>","PeriodicalId":94032,"journal":{"name":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","volume":"35 ","pages":"42-56"},"PeriodicalIF":13.7,"publicationDate":"2026-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145890756","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
M3D: A Benchmark Dataset and Model for Microscopic 3D Shape Reconstruction
IF 13.7 Pub Date : 2025-12-31 DOI: 10.1109/TIP.2025.3646889
Tao Yan;Yingying Wang;Yuhua Qian;Jiangfeng Zhang;Feijiang Li;Peng Wu;Lu Chen;Jieru Jia;Xiaoying Guo
Microscopic 3D shape reconstruction using depth from focus (DFF) is crucial in precision manufacturing for 3D modeling and quality control. However, the absence of high-precision microscopic DFF datasets and the significant differences between existing DFF datasets and microscopic DFF data in optical design, imaging principles and scene characteristics hinder the performance of current DFF models in microscopic tasks. To address this, we introduce M3D, a novel microscopic DFF dataset, constructed using a self-developed microscopic device. It includes multi-focus image sequences of 1,952 scenes across five categories, with depth labels obtained through the 3D TFT algorithm applied to dense image sequences for initial depth estimation and calibration. All labels are then compared and analyzed against the design values, and those with large errors are eliminated. We also propose M3DNet, a frequency-aware end-to-end network, to tackle challenges like shallow depth-of-field (DoF) and weak textures. Results show that M3D compensates for the limitations of macroscopic DFF datasets and extends DFF applications to microscopic scenarios. M3DNet effectively captures rapid focus decay and improves performance on public DFF datasets by leveraging superior global feature extraction. Additionally, it exhibits strong robustness even in extreme conditions. Dataset and code are available at https://github.com/jiangfeng-Z/M3D
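For context, the classical depth-from-focus baseline that such datasets are built around can be written in a few lines (this is the generic DFF recipe, not M3DNet or the 3D TFT labeling pipeline; the focus measure and window size are assumptions): score each slice of the focal stack with a local sharpness measure and take the per-pixel argmax.

```python
# Generic depth-from-focus baseline for a focal stack; not the M3D labeling pipeline or M3DNet.
import numpy as np
from scipy.ndimage import laplace, uniform_filter

def depth_from_focus(stack, window=9):
    """stack: (S, H, W) grayscale multi-focus sequence -> (H, W) index of the sharpest slice."""
    focus = np.stack([uniform_filter(laplace(img.astype(np.float64)) ** 2, size=window)
                      for img in stack])       # local energy of the Laplacian as a focus measure
    return np.argmax(focus, axis=0)            # per-pixel best-focus slice ~ relative depth
```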
{"title":"M3D: A Benchmark Dataset and Model for Microscopic 3D Shape Reconstruction","authors":"Tao Yan;Yingying Wang;Yuhua Qian;Jiangfeng Zhang;Feijiang Li;Peng Wu;Lu Chen;Jieru Jia;Xiaoying Guo","doi":"10.1109/TIP.2025.3646889","DOIUrl":"10.1109/TIP.2025.3646889","url":null,"abstract":"Microscopic 3D shape reconstruction using depth from focus (DFF) is crucial in precision manufacturing for 3D modeling and quality control. However, the absence of high-precision microscopic DFF datasets and the significant differences between existing DFF datasets and microscopic DFF data in optical design, imaging principles and scene characteristics hinder the performance of current DFF models in microscopic tasks. To address this, we introduce M3D, a novel microscopic DFF dataset, constructed using a self-developed microscopic device. It includes multi-focus image sequences of 1,952 scenes across five categories, with depth labels obtained through the 3D TFT algorithm applied to dense image sequences for initial depth estimation and calibration. All labels are then compared and analyzed against the design values, and those with large errors are eliminated. We also propose M3DNet, a frequency-aware end-to-end network, to tackle challenges like shallow depth-of-field (DoF) and weak textures. Results show that M3D compensates for the limitations of macroscopic DFF datasets and extends DFF applications to microscopic scenarios. M3DNet effectively captures rapid focus decay and improves performance on public DFF datasets by leveraging superior global feature extraction. Additionally, it exhibits strong robustness even in extreme conditions. Dataset and code are available at <uri>https://github.com/jiangfeng-Z/M3D</uri>","PeriodicalId":94032,"journal":{"name":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","volume":"35 ","pages":"181-193"},"PeriodicalIF":13.7,"publicationDate":"2025-12-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145879783","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Cross-Modality Feature Aggregation for Cross-Domain Point Cloud Representation Learning
IF 13.7 Pub Date : 2025-12-31 DOI: 10.1109/TIP.2025.3646890
Guoqing Wang;Chao Ma;Xiaokang Yang
Existing methods for learning 3D point cloud representations often use a single dataset-specific training and testing approach, leading to performance drops due to significant domain shifts between training and testing data. While recent cross-domain methods have made promising progress, the lack of inherent semantic information in point clouds makes models prone to overfitting specific datasets. As such, we introduce 3D-CFA, a simple yet effective cross-modality feature aggregation method for cross-domain 3D point cloud representation learning. 3D-CFA aggregates the geometry tokens with semantic tokens derived from multi-view images, which are projected from the point cloud, thus generating more transferable features for cross-domain 3D point cloud representation learning. Specifically, 3D-CFA consists of two main components: a cross-modality feature aggregation module and an elastic domain alignment module. The cross-modality feature aggregation module converts unordered points into multi-view images using the modality transformation module. Then, the geometry tokens and semantic tokens extracted from the geometry encoder and semantic encoder are fed into the cross-modal projector to get the transferable 3D tokens. A key insight of this design is that the semantic tokens can serve as a bridge between the 3D point cloud model and the 2D foundation model, greatly promoting the generalization of cross-domain models facing severe domain shifts. Finally, the elastic domain alignment module learns the hierarchical domain-invariant features of different training domains for either domain adaptation or domain generalization protocols. 3D-CFA finds a better way to transfer the knowledge of the 2D foundation model pre-trained at scale while introducing only a few extra trainable parameters. Comprehensive experiments on several cross-domain point cloud benchmarks demonstrate the effectiveness of the proposed method compared to state-of-the-art methods.
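A simplified sketch of the cross-modal projector idea (dimensions and module names are assumptions, not the 3D-CFA release): project geometry tokens and image-derived semantic tokens into a shared space and let the geometry tokens attend to the semantic tokens.

```python
# Simplified cross-modality token aggregation; not the 3D-CFA implementation.
import torch
import torch.nn as nn

class CrossModalProjector(nn.Module):
    def __init__(self, geo_dim=384, sem_dim=768, dim=384, heads=6):
        super().__init__()
        self.geo_proj = nn.Linear(geo_dim, dim)
        self.sem_proj = nn.Linear(sem_dim, dim)    # e.g. tokens from a frozen 2D foundation model
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, geo_tokens, sem_tokens):     # (B, Ng, geo_dim), (B, Ns, sem_dim)
        q = self.geo_proj(geo_tokens)
        kv = self.sem_proj(sem_tokens)
        fused, _ = self.cross_attn(q, kv, kv)      # geometry tokens attend to semantic tokens
        return q + fused                           # transferable 3D tokens
```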
{"title":"Cross-Modality Feature Aggregation for Cross-Domain Point Cloud Representation Learning","authors":"Guoqing Wang;Chao Ma;Xiaokang Yang","doi":"10.1109/TIP.2025.3646890","DOIUrl":"10.1109/TIP.2025.3646890","url":null,"abstract":"Existing methods for learning 3D point cloud representation often use a single dataset-specific training and testing approach, leading to performance drops due to significant domain shifts between training and testing data. While recent cross-domain methods have made promising progress, the lack of inherent semantic information in point clouds makes models prone to overfitting specific datasets. As such, we introduce 3D-CFA, a simple yet effective cross-modality feature aggregation method for cross-domain 3D point cloud representation learning. 3D-CFA aggregates the geometry tokens with semantic tokens derived from multi-view images, which are projected from the point cloud, thus generating more transferable features for cross-domain 3D point cloud representation learning. Specifically, 3D-CFA consists of two main components: a cross-modality feature aggregation module and an elastic domain alignment module. The cross-modality feature aggregation module converts unordered points into multi-view images using the modality transformation module. Then, the geometry tokens and semantic tokens extracted from the geometry encoder and semantic encoder are fed into the cross-modal projector to get the transferable 3D tokens. A key insight of this design is that the semantic tokens can serve as a bridge between the 3D point cloud model and the 2D foundation model, greatly promoting the generalization of cross-domain models facing the severe domain shift. Finally, the elastic domain alignment module learns the hierarchical domain-invariant features of different training domains for either domain adaptation or domain generalization protocols. 3D-CFA finds a better way to transfer the knowledge of the 2D foundation model pre-trained at scale, meanwhile only introducing a few extra trainable parameters. Comprehensive experiments on several cross-domain point cloud benchmarks demonstrate the effectiveness of the proposed method compared to the state-of-the-art methods.","PeriodicalId":94032,"journal":{"name":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","volume":"35 ","pages":"166-180"},"PeriodicalIF":13.7,"publicationDate":"2025-12-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145879817","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
FocusPatch AD: Few-Shot Multi-Class Anomaly Detection With Unified Keywords Patch Prompts
IF 13.7 Pub Date : 2025-12-30 DOI: 10.1109/TIP.2025.3646861
Xicheng Ding;Xiaofan Li;Mingang Chen;Jingyu Gong;Yuan Xie
Industrial few-shot anomaly detection (FSAD) requires identifying various abnormal states by leveraging as few normal samples as possible (abnormal samples are unavailable during training). However, current methods often require training a separate model for each category, leading to increased computation and storage overhead. Thus, designing a unified anomaly detection model that supports multiple categories remains a challenging task, as such a model must recognize anomalous patterns across diverse objects and domains. To tackle these challenges, this paper introduces FocusPatch AD, a unified anomaly detection framework based on vision-language models, achieving anomaly detection under few-shot multi-class settings. FocusPatch AD links anomaly state keywords to highly relevant discrete local regions within the image, guiding the model to focus on cross-category anomalies while filtering out background interference. This approach mitigates the false detection issues caused by global semantic alignment in vision-language models. We evaluate the proposed method on the MVTec, VisA, and Real-IAD datasets, comparing them against several prevailing anomaly detection methods. In both image-level and pixel-level anomaly detection tasks, FocusPatch AD achieves significant gains in classification and localization performance, demonstrating excellent generalization and adaptability.
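The keyword-to-patch matching idea can be illustrated with a small sketch (the scoring rule, temperature, and function name are assumptions, not the FocusPatch AD code): score every patch embedding against normal and anomalous keyword embeddings from a CLIP-style text encoder and read an anomaly map from the softmax over the two similarities.

```python
# Hedged sketch of patch-keyword anomaly scoring; not the FocusPatch AD implementation.
import torch
import torch.nn.functional as F

def patch_anomaly_map(patch_embeds, normal_text_embed, abnormal_text_embed, tau=0.07):
    """patch_embeds: (N, D) patch features; text embeds: (D,). Returns per-patch anomaly scores."""
    patches = F.normalize(patch_embeds, dim=-1)
    texts = F.normalize(torch.stack([normal_text_embed, abnormal_text_embed]), dim=-1)  # (2, D)
    sims = patches @ texts.t() / tau              # (N, 2) scaled cosine similarities
    return F.softmax(sims, dim=-1)[:, 1]          # probability mass on the "anomalous" keyword
```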
{"title":"FocusPatch AD: Few-Shot Multi-Class Anomaly Detection With Unified Keywords Patch Prompts","authors":"Xicheng Ding;Xiaofan Li;Mingang Chen;Jingyu Gong;Yuan Xie","doi":"10.1109/TIP.2025.3646861","DOIUrl":"10.1109/TIP.2025.3646861","url":null,"abstract":"Industrial few-shot anomaly detection (FSAD) requires identifying various abnormal states by leveraging as few normal samples as possible (abnormal samples are unavailable during training). However, current methods often require training a separate model for each category, leading to increased computation and storage overhead. Thus, designing a unified anomaly detection model that supports multiple categories remains a challenging task, as such a model must recognize anomalous patterns across diverse objects and domains. To tackle these challenges, this paper introduces FocusPatch AD, a unified anomaly detection framework based on vision-language models, achieving anomaly detection under few-shot multi-class settings. FocusPatch AD links anomaly state keywords to highly relevant discrete local regions within the image, guiding the model to focus on cross-category anomalies while filtering out background interference. This approach mitigates the false detection issues caused by global semantic alignment in vision-language models. We evaluate the proposed method on the MVTec, VisA, and Real-IAD datasets, comparing them against several prevailing anomaly detection methods. In both image-level and pixel-level anomaly detection tasks, FocusPatch AD achieves significant gains in classification and localization performance, demonstrating excellent generalization and adaptability.","PeriodicalId":94032,"journal":{"name":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","volume":"35 ","pages":"112-123"},"PeriodicalIF":13.7,"publicationDate":"2025-12-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145866716","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Few-Shot Fine-Grained Classification With Foreground-Aware Kernelized Feature Reconstruction Network
IF 13.7 Pub Date : 2025-12-30 DOI: 10.1109/TIP.2025.3646940
Yangfan Li;Wei Li
Feature reconstruction networks have achieved remarkable performance in few-shot fine-grained classification tasks. Nonetheless, traditional feature reconstruction networks rely on linear regression. This linearity may cause the loss of subtle discriminative cues, ultimately resulting in less precise reconstructed features. Moreover, in situations where the background predominantly occupies the image, the background reconstruction errors tend to overshadow foreground reconstruction errors, resulting in inaccurate reconstruction errors. In order to address the two key issues, a novel approach called the Foreground-Aware Kernelized Feature Reconstruction Network (FKFRN) is proposed. Specifically, to address the problem of imprecise reconstructed features, we introduce kernel methods into linear feature reconstruction, extending it to nonlinear feature reconstruction, thus enabling the reconstruction of richer, finer-grained discriminative features. To tackle the issue of inaccurate reconstruction errors, the foreground-aware reconstruction error is proposed. Specifically, the model assigns higher weights to features containing more foreground information and lower weights to those dominated by background content, which reduces the impact of background errors on the overall reconstruction. To estimate these weights accurately, we design two complementary strategies: an explicit probabilistic graphical model and an implicit neural network–based approach. Extensive experimental results on eight datasets validate the effectiveness of the proposed approach for few-shot fine-grained classification.
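To make the kernelized, foreground-weighted reconstruction concrete, here is a short sketch (an RBF kernel, ridge parameter, and weighting rule chosen for illustration, not the FKFRN code): reconstruct query features from support features with kernel ridge regression and weight the per-location error by a foreground probability map.

```python
# Illustrative kernel ridge reconstruction with foreground-aware error; not the FKFRN code.
import torch

def rbf_kernel(a, b, gamma=0.1):
    return torch.exp(-gamma * torch.cdist(a, b) ** 2)

def weighted_reconstruction_error(support, query, fg_weight, lam=0.1, gamma=0.1):
    """support: (Ns, D) class features; query: (Nq, D); fg_weight: (Nq,) foreground probabilities."""
    K_ss = rbf_kernel(support, support, gamma)                      # (Ns, Ns)
    K_qs = rbf_kernel(query, support, gamma)                        # (Nq, Ns)
    eye = torch.eye(len(support), dtype=support.dtype, device=support.device)
    alpha = torch.linalg.solve(K_ss + lam * eye, support)           # kernel ridge coefficients
    recon = K_qs @ alpha                                            # nonlinear reconstruction of the query
    err = ((query - recon) ** 2).mean(dim=-1)                       # per-location reconstruction error
    return (fg_weight * err).sum() / fg_weight.sum().clamp(min=1e-6)  # foreground-aware class score
```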
{"title":"Few-Shot Fine-Grained Classification With Foreground-Aware Kernelized Feature Reconstruction Network","authors":"Yangfan Li;Wei Li","doi":"10.1109/TIP.2025.3646940","DOIUrl":"10.1109/TIP.2025.3646940","url":null,"abstract":"Feature reconstruction networks have achieved remarkable performance in few-shot fine-grained classification tasks. Nonetheless, traditional feature reconstruction networks rely on linear regression. This linearity may cause the loss of subtle discriminative cues, ultimately resulting in less precise reconstructed features. Moreover, in situations where the background predominantly occupies the image, the background reconstruction errors tend to overshadow foreground reconstruction errors, resulting in inaccurate reconstruction errors. In order to address the two key issues, a novel approach called the Foreground-Aware Kernelized Feature Reconstruction Network (FKFRN) is proposed. Specifically, to address the problem of imprecise reconstructed features, we introduce kernel methods into linear feature reconstruction, extending it to nonlinear feature reconstruction, thus enabling the reconstruction of richer, finer-grained discriminative features. To tackle the issue of inaccurate reconstruction errors, the foreground-aware reconstruction error is proposed. Specifically, the model assigns higher weights to features containing more foreground information and lower weights to those dominated by background content, which reduces the impact of background errors on the overall reconstruction. To estimate these weights accurately, we design two complementary strategies: an explicit probabilistic graphical model and an implicit neural network–based approach. Extensive experimental results on eight datasets validate the effectiveness of the proposed approach for few-shot fine-grained classification.","PeriodicalId":94032,"journal":{"name":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","volume":"35 ","pages":"150-165"},"PeriodicalIF":13.7,"publicationDate":"2025-12-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145866581","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0