
Latest articles from IEEE Transactions on Image Processing (a publication of the IEEE Signal Processing Society)

Toward Generative Understanding: Incremental Few-Shot Semantic Segmentation With Diffusion Models
IF 13.7 Pub Date: 2026-01-14 DOI: 10.1109/TIP.2026.3652357
Qun Li;Lu Huang;Fu Xiao;Na Zhao;Bir Bhanu
Incremental Few-shot Semantic Segmentation (iFSS) aims to learn novel classes with limited samples while preserving segmentation capability for base classes, addressing the challenge of continual learning of novel classes and catastrophic forgetting of previously seen classes. Existing methods mainly rely on techniques such as knowledge distillation and background learning, which, while partially effective, still suffer from issues such as feature drift and limited generalization to real-world novel classes, primarily due to a bidirectional coupling bottleneck between the learning of base classes and novel classes. To address these challenges, we propose, for the first time, a diffusion-based generative framework for iFSS. Specifically, we bridge the gap between generative and discriminative tasks through an innovative binary-to-RGB mask mapping mechanism, enabling pre-trained diffusion models to focus on target regions via class-specific semantic embedding optimization while sharpening foreground-background contrast with color embeddings. A lightweight post-processor then refines the generated images into high-quality binary masks. Crucially, by leveraging diffusion priors, our framework avoids complex training strategies. The optimization of class-specific semantic embeddings decouples the embedding spaces of base and novel classes, inherently preventing feature drift, mitigating catastrophic forgetting, and enabling rapid novel-class adaptation. Experimental results show that our method achieves state-of-the-art performance on the PASCAL-$5^{i}$ and COCO-$20^{i}$ datasets using much less data than other methods, while exhibiting competitive results in cross-domain few-shot segmentation tasks. Project page: https://ifss-diff.github.io/
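As a rough illustration of the binary-to-RGB mask mapping described above, the following PyTorch sketch maps a binary mask to a two-color RGB target that a diffusion model could be trained to generate, and then thresholds a generated RGB image back into a binary mask. This is a minimal toy, not the authors' implementation: the reference colors and the nearest-color post-processing rule are assumptions, and the paper's post-processor is a learned lightweight module rather than this hard rule.

import torch

# Hypothetical reference colors used to sharpen foreground-background contrast.
FG_COLOR = torch.tensor([1.0, 0.0, 0.0])   # foreground rendered as red
BG_COLOR = torch.tensor([0.0, 0.0, 1.0])   # background rendered as blue

def binary_to_rgb(mask: torch.Tensor) -> torch.Tensor:
    """Map a (H, W) binary mask to a (3, H, W) RGB target for the diffusion model."""
    h, w = mask.shape
    rgb = torch.where(mask.bool().unsqueeze(-1),
                      FG_COLOR.expand(h, w, 3),
                      BG_COLOR.expand(h, w, 3))
    return rgb.permute(2, 0, 1)

def rgb_to_binary(rgb: torch.Tensor) -> torch.Tensor:
    """Toy post-processor: assign each pixel to the closer reference color."""
    pixels = rgb.permute(1, 2, 0)
    d_fg = (pixels - FG_COLOR).pow(2).sum(-1)
    d_bg = (pixels - BG_COLOR).pow(2).sum(-1)
    return (d_fg < d_bg).float()

mask = (torch.rand(64, 64) > 0.5).float()
assert torch.equal(rgb_to_binary(binary_to_rgb(mask)), mask)   # lossless round trip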
Citations: 0
EinsPT: Efficient Instance-Aware Pre-Training of Vision Foundation Models
IF 13.7 Pub Date: 2026-01-14 DOI: 10.1109/TIP.2026.3652371
Zhaozhi Wang;Yunjie Tian;Lingxi Xie;Yaowei Wang;Qixiang Ye
In this study, we introduce EinsPT, an efficient instance-aware pre-training paradigm designed to reduce the transfer gap between vision foundation models and downstream instance-level tasks. Unlike conventional image-level pre-training that relies solely on unlabeled images, EinsPT leverages both image reconstruction and instance annotations to learn representations that are spatially coherent and instance discriminative. To achieve this efficiently, we propose a proxy–foundation architecture that decouples high-resolution and low-resolution learning: the foundation model processes masked low-resolution images for global semantics, while a lightweight proxy model operates on complete high-resolution images to preserve fine-grained details. The two branches are jointly optimized through reconstruction and instance-level prediction losses on fused features. Extensive experiments demonstrate that EinsPT consistently enhances recognition accuracy across various downstream tasks with substantially reduced computational cost, while qualitative results further reveal improved instance perception and completeness in visual representations. Code is available at github.com/feufhd/EinsPT
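The proxy-foundation decoupling can be pictured with a small sketch. The module below is a hypothetical stand-in, not EinsPT itself: a "foundation" branch sees a masked, downsampled image, a lightweight "proxy" branch sees the full-resolution image, and the fused features feed a reconstruction loss plus a toy instance-level loss. The plain convolutions, masking ratio, and loss targets are assumptions made only to keep the example short.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ProxyFoundationSketch(nn.Module):
    """Illustrative two-branch pre-training step with placeholder sub-modules."""
    def __init__(self, dim: int = 256):
        super().__init__()
        self.foundation = nn.Conv2d(3, dim, kernel_size=16, stride=16)   # masked low-res branch
        self.proxy = nn.Conv2d(3, dim, kernel_size=32, stride=32)        # full high-res branch
        self.decoder = nn.Conv2d(dim, 3 * 16 * 16, kernel_size=1)        # pixel reconstruction head
        self.instance_head = nn.Conv2d(dim, 2, kernel_size=1)            # toy instance/background head

    def forward(self, img_hr: torch.Tensor, instance_map: torch.Tensor) -> torch.Tensor:
        # Foundation branch: global semantics from a masked, downsampled view.
        img_lr = F.interpolate(img_hr, scale_factor=0.5, mode="bilinear", align_corners=False)
        keep = (torch.rand_like(img_lr[:, :1]) > 0.6).float()            # crude random masking
        feat_lr = self.foundation(img_lr * keep)
        # Proxy branch: fine-grained detail from the complete high-resolution image.
        feat_hr = self.proxy(img_hr)
        fused = feat_lr + feat_hr                          # both are (B, dim, 8, 8) for 256x256 input
        # Joint objectives on the fused features.
        recon = F.pixel_shuffle(self.decoder(fused), 16)   # back to (B, 3, 128, 128)
        loss_recon = F.mse_loss(recon, img_lr)
        loss_inst = F.cross_entropy(self.instance_head(fused), instance_map)
        return loss_recon + loss_inst

model = ProxyFoundationSketch()
loss = model(torch.randn(2, 3, 256, 256), torch.randint(0, 2, (2, 8, 8)))
loss.backward()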
Citations: 0
Harnessing Group-Oriented Consistency Constraints for Semi-Supervised Semantic Segmentation in CdZnTe Semiconductors
IF 13.7 Pub Date: 2026-01-14 DOI: 10.1109/TIP.2025.3646474
Peihao Li;Yan Fang;Man Liu;Huihui Bai;Anhong Wang;Yunchao Wei;Yao Zhao
Labeling Cadmium Zinc Telluride (CdZnTe) semiconductor images is challenging due to the low-contrast defect boundaries, necessitating annotators to cross-reference multiple views. These views share a single ground truth (GT), forming a unique “many-to-one” relationship. This characteristic renders advanced semi-supervised semantic segmentation (SSS) methods suboptimal, as they are generally limited by a “one-to-one” relationship, where each image is independently associated with its GT. Such limitation may lead to error accumulation in low-contrast regions, further exacerbating confirmation bias. To address this issue, we revisit the SSS pipeline from a group-oriented perspective and propose a human-inspired solution: the Intra-group Consistency Augmentation Framework (ICAF). First, we experimentally validate the inherent consistency constraints within CdZnTe groups, establishing a group-oriented baseline using the Intra-group View Sampling (IVS). Building on this insight, we introduce the Pseudo-label Correction Network (PCN) to enhance consistency representation, which consists of two key modules. The View Augmentation Module (VAM) improves boundary details by dynamically synthesizing a boundary-aware view through the aggregation of multiple views. In the View Correction Module (VCM), this synthesized view is paired with other views for information interaction, effectively emphasizing salient regions while minimizing noise. Extensive experiments demonstrate the effectiveness of our solution for CdZnTe materials. Leveraging DeepLabV3+ with a ResNet-101 backbone as our segmentation model, we achieve a 70.6% mIoU on the CdZnTe dataset using only 2 group-annotated data (5‰). The code is available at https://github.com/pipixiapipi/ICAF
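The "many-to-one" group structure can be made concrete with a short sketch: several views of the same CdZnTe sample share one ground-truth mask, and predictions on views sampled from the same group are pushed to agree. This is a hypothetical simplification of the group-oriented idea, not the ICAF/IVS code; the stand-in segmenter, the pseudo-label rule, and the equal loss weighting are all assumptions.

import random
import torch
import torch.nn.functional as F

def intra_group_step(model, group_views, gt_mask=None):
    """One toy training step under a 'many-to-one' group: several views of the
    same physical sample share a single ground-truth mask (possibly absent).
    group_views: list of (3, H, W) tensors; gt_mask: (H, W) long tensor or None."""
    v1, v2 = random.sample(group_views, 2)           # intra-group view sampling
    logits1 = model(v1.unsqueeze(0))                 # (1, C, H, W)
    logits2 = model(v2.unsqueeze(0))
    loss = 0.0
    if gt_mask is not None:
        # The single GT supervises every sampled view in the group.
        loss = loss + F.cross_entropy(logits1, gt_mask.unsqueeze(0))
        loss = loss + F.cross_entropy(logits2, gt_mask.unsqueeze(0))
    # Group-oriented consistency: the two views depict the same defects, so one
    # prediction serves as a pseudo-label for the other.
    pseudo = logits1.detach().argmax(dim=1)
    loss = loss + F.cross_entropy(logits2, pseudo)
    return loss

model = torch.nn.Conv2d(3, 2, kernel_size=1)         # stand-in segmenter
views = [torch.randn(3, 64, 64) for _ in range(4)]
gt = torch.randint(0, 2, (64, 64))
intra_group_step(model, views, gt).backward()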
Citations: 0
Diagnosing and Improving Vector-Quantization-Based Blind Image Restoration
IF 13.7 Pub Date: 2026-01-13 DOI: 10.1109/TIP.2026.3651985
Hongyu Li;Tianyi Xu;Zengyou Wang;Xiantong Zhen;Ran Gu;David Zhang;Jun Xu
Vector-Quantization (VQ) based discrete generative models are widely used to learn powerful high-quality (HQ) priors for blind image restoration (BIR). In this paper, we diagnose the side effects of the discrete VQ process that is essential to VQ-based BIR methods: 1) confining the representation capacity of the HQ codebook, 2) being error-prone for code index prediction on low-quality (LQ) images, and 3) under-valuing the importance of the input LQ image. These motivate us to learn a continuous feature representation of the HQ codebook for better restoration performance than using the discrete VQ process. To further improve the restoration fidelity, we propose a new Self-in-Cross-Attention (SinCA) module to augment the HQ codebook with the feature of the input LQ image, and perform cross-attention between the LQ feature and the input-augmented codebook. In this way, our SinCA leverages the input LQ image to enhance the representation of the codebook for restoration fidelity. Experiments on four typical VQ-based BIR methods demonstrate that, by replacing the VQ process with a transformer using our SinCA, they achieve better quantitative and qualitative performance on blind image super-resolution and blind face restoration. The code and pre-trained models are publicly released at https://github.com/lhy-85/SinCA
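A minimal sketch of the input-augmented cross-attention idea is given below, assuming token-shaped features: the learnable codebook is concatenated with the LQ feature tokens, and the LQ features then cross-attend to this augmented set. This is one plausible reading of the abstract rather than the released SinCA module; the dimensions and the use of nn.MultiheadAttention are assumptions.

import torch
import torch.nn as nn

class InputAugmentedCrossAttention(nn.Module):
    """Toy version of attending from LQ features to a codebook that has been
    augmented with tokens from the LQ input itself (dimensions are assumptions)."""
    def __init__(self, dim: int = 256, codebook_size: int = 1024, heads: int = 8):
        super().__init__()
        self.codebook = nn.Parameter(torch.randn(codebook_size, dim))
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, lq_tokens: torch.Tensor) -> torch.Tensor:
        # lq_tokens: (B, N, dim) features of the low-quality input.
        b = lq_tokens.size(0)
        codebook = self.codebook.unsqueeze(0).expand(b, -1, -1)   # (B, K, dim)
        # Augment the HQ codebook with the LQ tokens, then cross-attend:
        # queries come from the LQ feature, keys/values from the augmented codebook.
        augmented = torch.cat([codebook, lq_tokens], dim=1)       # (B, K + N, dim)
        restored, _ = self.attn(query=lq_tokens, key=augmented, value=augmented)
        return restored                                           # (B, N, dim)

module = InputAugmentedCrossAttention()
out = module(torch.randn(2, 196, 256))   # e.g. 14x14 feature tokens
print(out.shape)                          # torch.Size([2, 196, 256])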
Citations: 0
Self-Supervised Unfolding Network With Shared Reflectance Learning for Low-Light Image Enhancement
IF 13.7 Pub Date: 2026-01-13 DOI: 10.1109/TIP.2026.3652021
Jia Liu;Yu Luo;Guanghui Yue;Jie Ling;Liang Liao;Chia-Wen Lin;Guangtao Zhai;Wei Zhou
Recently, incorporating Retinex theory with unfolding networks has attracted increasing attention in the low-light image enhancement (LIE) field. However, existing methods have two limitations: they ignore the modeling of the physical prior of Retinex theory, and they rely on a large amount of paired data. To advance this field, we propose a novel self-supervised unfolding network, named S2UNet, for the LIE task. Specifically, we formulate a novel optimization model based on the principle that content-consistent images under different illumination should share the same reflectance. The model simultaneously decomposes two illumination-different images into a shared reflectance component and two independent illumination components. Due to the absence of the normal-light image, we process the low-light image with gamma correction to create the illumination-different image pair. Then, we translate this model into a multi-stage unfolding network, in which each stage alternately optimizes the shared reflectance component and the respective illumination components of the two images. During progressive multi-stage optimization, the network inherently encodes the reflectance consistency prior by jointly estimating an optimal reflectance across varying illumination conditions. Finally, considering the presence of noise in low-light images and to suppress noise amplification, we propose a self-supervised denoising mechanism. Extensive experiments on nine benchmark datasets demonstrate that our proposed S2UNet outperforms state-of-the-art unsupervised methods in terms of both quantitative metrics and visual quality, while achieving competitive performance compared to supervised methods. The source code will be available at https://github.com/J-Liu-DL/S2UNet
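The self-supervised pairing and shared-reflectance constraint can be sketched as follows, under strong simplifications: gamma correction creates a brighter, content-identical counterpart of the low-light input, a toy network decomposes both into reflectance and illumination, and the loss enforces Retinex-style reconstruction plus a shared reflectance. The single-convolution decomposer, the gamma value, and the L1 losses are placeholders, not the S2UNet unfolding architecture.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyDecomposer(nn.Module):
    """Stand-in decomposition net: predicts reflectance (3 ch) and illumination (1 ch)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Conv2d(3, 4, kernel_size=3, padding=1)

    def forward(self, x):
        out = torch.sigmoid(self.net(x))
        return out[:, :3], out[:, 3:]                    # reflectance, illumination

def shared_reflectance_loss(decomposer, low, gamma: float = 0.4):
    # Gamma correction yields a brighter, content-identical counterpart of the
    # low-light input, so no normal-light image is needed.
    bright = low.clamp(min=1e-6) ** gamma
    r1, l1 = decomposer(low)
    r2, l2 = decomposer(bright)
    recon = F.l1_loss(r1 * l1, low) + F.l1_loss(r2 * l2, bright)   # Retinex reconstruction
    shared = F.l1_loss(r1, r2)                           # both views share one reflectance
    return recon + shared

decomposer = ToyDecomposer()
low_img = torch.rand(2, 3, 64, 64) * 0.2                 # synthetic dark image in [0, 0.2]
shared_reflectance_loss(decomposer, low_img).backward()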
Citations: 0
SAMURAI: Motion-Aware Memory for Training-Free Visual Object Tracking With SAM 2
IF 13.7 Pub Date: 2026-01-13 DOI: 10.1109/TIP.2026.3651835
Cheng-Yeng Yang;Hsiang-Wei Huang;Wenhao Chai;Zhongyu Jiang;Jenq-Neng Hwang
The Segment Anything Model 2 (SAM 2) has demonstrated exceptional performance in object segmentation tasks but encounters challenges in visual object tracking, particularly in handling crowded scenes with fast-moving or self-occluding objects. Additionally, its fixed-window memory mechanism indiscriminately retains past frames, leading to error accumulation. This issue results in incorrect memory retention during occlusions, causing the model to condition future predictions on unreliable features and leading to identity switches or drift in crowded scenes. This paper introduces SAMURAI, an enhanced adaptation of SAM 2 that integrates temporal motion cues with a novel motion-aware memory selection strategy. SAMURAI effectively predicts object motion and refines mask selection, achieving robust and precise tracking without requiring retraining or fine-tuning. It demonstrates strong training-free performance across multiple VOT benchmark datasets, underscoring its generalization capability. SAMURAI achieves state-of-the-art performance on LaSOText, GOT-10k, and TrackingNet, while also delivering competitive results on LaSOT, VOT2020-ST, VOT2022-ST, and VOS benchmarks such as SA-V. These results highlight SAMURAI’s robustness in complex tracking scenarios and its potential for real-world applications in dynamic environments with an optimized memory selection mechanism. Code and results are available at https://github.com/yangchris11/samurai
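The motion-aware memory selection can be illustrated with a deliberately simplified gate: a constant-velocity prediction from past boxes is compared with the current mask's box, and a frame enters memory only when motion agreement and mask confidence are both high. The constant-velocity rule, the IoU comparison, and the thresholds below are assumptions for illustration only and do not reproduce SAMURAI's actual motion modelling or scoring.

import torch

def box_iou(a: torch.Tensor, b: torch.Tensor) -> float:
    """IoU between two boxes given as (x1, y1, x2, y2) tensors."""
    lt = torch.maximum(a[:2], b[:2])
    rb = torch.minimum(a[2:], b[2:])
    wh = (rb - lt).clamp(min=0)
    inter = wh[0] * wh[1]
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return float(inter / (area_a + area_b - inter + 1e-6))

def select_memory(history, mask_box, mask_score, iou_thresh=0.5, score_thresh=0.7):
    """Toy motion-aware gate: admit the current frame into memory only if the
    segmentation is confident and agrees with a constant-velocity motion prediction.
    history: list of past boxes, most recent last."""
    if len(history) >= 2:
        predicted = history[-1] + (history[-1] - history[-2])   # constant-velocity guess
        motion_ok = box_iou(predicted, mask_box) >= iou_thresh
    else:
        motion_ok = True                                        # not enough history yet
    return motion_ok and mask_score >= score_thresh

past = [torch.tensor([10., 10., 50., 50.]), torch.tensor([12., 10., 52., 50.])]
keep = select_memory(past, torch.tensor([14., 10., 54., 50.]), mask_score=0.9)
print(keep)   # True: the new box matches the motion prediction and is confident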
Citations: 0
Reviewer Summary for Transactions on Image Processing
IF 13.7 Pub Date: 2026-01-12 DOI: 10.1109/TIP.2025.3650664
Citations: 0
TSCCD: Temporal Self-Construction Cross-Domain Learning for Unsupervised Hyperspectral Change Detection
IF 13.7 Pub Date: 2026-01-12 DOI: 10.1109/TIP.2025.3650387
Tianyuan Zhou;Fulin Luo;Chuan Fu;Tan Guo;Bo Du;Xinbo Gao;Liangpei Zhang
Multi-temporal hyperspectral imagery (HSI) has become a powerful tool for change detection (CD) owing to its rich spectral signatures and detailed spatial information. Nevertheless, the application of paired HSIs is constrained by the scarcity of annotated training data. While unsupervised domain adaptation (UDA) offers a potential solution by transferring change detection knowledge from source to target domains, two critical limitations persist: 1) the labor-intensive process of acquiring and annotating source-domain paired samples, and 2) the suboptimal transfer performance caused by substantial cross-domain distribution discrepancies. To address these challenges, we present a Temporal Self-Construction Cross-Domain learning (TSCCD) framework for UDA-based HSI-CD. Our TSCCD framework introduces an innovative temporal self-construction mechanism that synthesizes bi-temporal source-domain data from existing HSI classification datasets while simultaneously performing initial data-level alignment. Furthermore, we develop a reweighted amplitude maximum mean discrepancy (MMD) metric to enhance feature-level domain adaptation. The proposed architecture incorporates an attention-based Kolmogorov-Arnold network (KAN) with high-frequency feature augmentation within an encoder-decoder structure to effectively capture change characteristics. Comprehensive experiments conducted on three benchmark HSI datasets demonstrate that TSCCD achieves superior performance compared to current state-of-the-art methods in HSI change detection tasks. Codes are available at https://github.com/Zhoutya/TSCCD.
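As background for the feature-level alignment step, the sketch below computes a weighted RBF-kernel MMD between source- and target-domain features; with uniform weights it reduces to the standard MMD. It does not reproduce TSCCD's exact reweighted amplitude formulation, and the kernel bandwidth and weighting scheme are assumptions.

import torch

def rbf_kernel(x, y, sigma=1.0):
    # x: (n, d), y: (m, d) -> (n, m) Gaussian kernel matrix.
    d2 = torch.cdist(x, y).pow(2)
    return torch.exp(-d2 / (2 * sigma ** 2))

def weighted_mmd(source, target, w_s=None, w_t=None, sigma=1.0):
    """Weighted MMD^2 between source- and target-domain features.
    Weights default to uniform; a reweighting scheme would supply them."""
    n, m = source.size(0), target.size(0)
    w_s = torch.full((n,), 1.0 / n) if w_s is None else w_s / w_s.sum()
    w_t = torch.full((m,), 1.0 / m) if w_t is None else w_t / w_t.sum()
    k_ss = rbf_kernel(source, source, sigma)
    k_tt = rbf_kernel(target, target, sigma)
    k_st = rbf_kernel(source, target, sigma)
    return (w_s @ k_ss @ w_s) + (w_t @ k_tt @ w_t) - 2 * (w_s @ k_st @ w_t)

src = torch.randn(32, 128)            # source-domain features
tgt = torch.randn(48, 128) + 0.5      # shifted target-domain features
print(weighted_mmd(src, tgt).item())  # grows as the two domains drift apart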
Citations: 0
IAP: Improving Continual Learning of Vision-Language Models via Instance-Aware Prompting
IF 13.7 Pub Date: 2026-01-12 DOI: 10.1109/TIP.2025.3650045
Hao Fu;Hanbin Zhao;Jiahua Dong;Henghui Ding;Chao Zhang;Hui Qian
Recent pre-trained vision-language models (PT-VLMs) often face a Multi-Domain Task Incremental Learning (MTIL) scenario in practice, where several classes and domains of multi-modal tasks arrive incrementally. Without access to previously seen tasks and unseen tasks, memory-constrained MTIL suffers from forward and backward forgetting. To alleviate the above challenges, parameter-efficient fine-tuning (PEFT) techniques, such as prompt tuning, are employed to adapt the PT-VLM to the diverse incrementally learned tasks. To achieve effective new task adaptation, existing methods only consider the effect of PEFT strategy selection, but neglect the influence of PEFT parameter setting (e.g., prompting). In this paper, we tackle the challenge of optimizing prompt designs for diverse tasks in MTIL and propose an Instance-Aware Prompting (IAP) framework. Specifically, our Instance-Aware Gated Prompting (IA-GP) strategy enhances adaptation to new tasks while mitigating forgetting by adaptively assigning prompts across transformer layers at the instance level. Our Instance-Aware Class-Distribution-Driven Prompting (IA-CDDP) improves the task adaptation process by determining an accurate task-label-related confidence score for each instance. Experimental evaluations across 11 datasets, using three performance metrics, demonstrate the effectiveness of our proposed method. The source codes are available at https://github.com/FerdinandZJU/IAP
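A toy version of instance-aware gated prompting is sketched below: a small gate reads each instance's pooled feature and scales a learnable prompt before it is prepended to that instance's token sequence at one layer. The gating network, prompt length, and pooling choice are assumptions; the actual IA-GP module adaptively assigns prompts across transformer layers rather than merely scaling a single prompt.

import torch
import torch.nn as nn

class GatedPromptLayer(nn.Module):
    """Toy instance-aware gate: decide, per input instance, how strongly a
    learnable prompt is injected at one transformer layer."""
    def __init__(self, dim: int = 512, prompt_len: int = 4):
        super().__init__()
        self.prompt = nn.Parameter(torch.randn(prompt_len, dim) * 0.02)
        self.gate = nn.Sequential(nn.Linear(dim, 1), nn.Sigmoid())

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (B, N, dim); the gate looks at each instance's mean feature.
        g = self.gate(tokens.mean(dim=1))                    # (B, 1), in (0, 1)
        prompts = self.prompt.unsqueeze(0).expand(tokens.size(0), -1, -1)
        prompts = prompts * g.unsqueeze(-1)                  # scale per instance
        return torch.cat([prompts, tokens], dim=1)           # prepend gated prompts

layer = GatedPromptLayer()
x = torch.randn(2, 197, 512)                                 # e.g. ViT tokens
print(layer(x).shape)                                        # torch.Size([2, 201, 512])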
Citations: 0
Reflectance Prediction-Based Knowledge Distillation for Robust 3D Object Detection in Compressed Point Clouds.
IF 13.7 Pub Date: 2026-01-01 DOI: 10.1109/TIP.2025.3648203
Hao Jing, Anhong Wang, Yifan Zhang, Donghan Bu, Junhui Hou

In intelligent transportation systems, low-bitrate transmission via lossy point cloud compression is vital for facilitating real-time collaborative perception among connected agents, such as vehicles and infrastructure, under restricted bandwidth. In existing compression-transmission systems, the sender lossily compresses point coordinates and reflectance to generate a transmission code stream, which faces transmission burdens from reflectance encoding and limited detection robustness due to information loss. To address these issues, this paper proposes a 3D object detection framework with reflectance prediction-based knowledge distillation (RPKD). We compress point coordinates while discarding reflectance during low-bitrate transmission, and feed the decoded non-reflectance compressed point clouds into a student detector. The discarded reflectance is then reconstructed by a geometry-based reflectance prediction (RP) module within the student detector for precise detection. A teacher detector with the same structure as the student detector is designed for performing reflectance knowledge distillation (RKD) and detection knowledge distillation (DKD) from raw to compressed point clouds. Our cross-source distillation training strategy (CDTS) equips the student detector with robustness to low-quality compressed data while preserving the accuracy benefits of raw data through transferred distillation knowledge. Experimental results on the KITTI and DAIR-V2X-V datasets demonstrate that our method can boost detection accuracy for compressed point clouds across multiple code rates. We will release the code publicly at https://github.com/HaoJing-SX/RPKD.
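The geometry-based reflectance prediction can be pictured with a small stand-in module: for each decoded point, offsets to its nearest neighbours summarize local geometry, and an MLP regresses a reflectance value; a teacher trained on raw point clouds could then supervise both the predicted reflectance and intermediate features of the compressed-data student. The k-NN aggregation, the MLP, and the loss terms are assumptions, not the RP/RKD/DKD modules from the paper.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ReflectancePredictor(nn.Module):
    """Toy geometry-only reflectance head: for each point, aggregate the offsets
    to its k nearest neighbours and regress a reflectance value in [0, 1]."""
    def __init__(self, k: int = 8, hidden: int = 64):
        super().__init__()
        self.k = k
        self.mlp = nn.Sequential(nn.Linear(3, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1), nn.Sigmoid())

    def forward(self, xyz: torch.Tensor) -> torch.Tensor:
        # xyz: (N, 3) decoded, reflectance-free point coordinates.
        dists = torch.cdist(xyz, xyz)                                  # (N, N)
        idx = dists.topk(self.k + 1, largest=False).indices[:, 1:]     # drop self-match
        offsets = xyz[idx] - xyz.unsqueeze(1)                          # (N, k, 3) local geometry
        return self.mlp(offsets.mean(dim=1)).squeeze(-1)               # (N,) predicted reflectance

predictor = ReflectancePredictor()
points = torch.rand(1024, 3)
pred_ref = predictor(points)
# A raw-data teacher could supervise both the reflectance and intermediate
# features of the compressed-data student; the targets below are placeholders.
teacher_feat, student_feat = torch.randn(1024, 64), torch.randn(1024, 64)
distill_loss = F.mse_loss(student_feat, teacher_feat) + F.l1_loss(pred_ref, torch.rand(1024))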

Citations: 0