
IEEE Transactions on Image Processing (a publication of the IEEE Signal Processing Society): Latest Publications

Boosting HDR Image Reconstruction via Semantic Knowledge Transfer.
IF 13.7 Pub Date : 2026-01-16 DOI: 10.1109/TIP.2026.3652360
Tao Hu, Longyao Wu, Wei Dong, Peng Wu, Jinqiu Sun, Xiaogang Xu, Qingsen Yan, Yanning Zhang

Recovering High Dynamic Range (HDR) images from multiple Standard Dynamic Range (SDR) images becomes challenging when the SDR images exhibit noticeable degradation and missing content. Leveraging scene-specific semantic priors offers a promising solution for restoring heavily degraded regions. However, these priors are typically extracted from sRGB SDR images, and the domain/format gap poses a significant challenge when applying them to HDR imaging. To address this issue, we propose a general framework that transfers semantic knowledge derived from the SDR domain via self-distillation to boost existing HDR reconstruction methods. Specifically, the proposed framework first introduces the Semantic Priors Guided Reconstruction Model (SPGRM), which leverages SDR image semantic knowledge to address ill-posed problems in the initial HDR reconstruction results. Subsequently, we leverage a self-distillation mechanism that constrains the color and content information with semantic knowledge, aligning the external outputs between the baseline and the SPGRM. Furthermore, to transfer the semantic knowledge of the internal features, we utilize a Semantic Knowledge Alignment Module (SKAM) to fill in the missing semantic content with complementary masks. Extensive experiments demonstrate that our framework significantly boosts HDR imaging quality for existing methods without altering the network architecture.
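The complementary-mask idea of SKAM and the output-alignment step of the self-distillation can be illustrated with a minimal sketch; the function names, tensor shapes, and the L1 alignment below are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def fill_with_complementary_mask(baseline_feat, semantic_feat, mask):
    """Fill regions the baseline handles poorly with semantic-prior features,
    using a binary mask and its complement (illustrative only).

    baseline_feat, semantic_feat: (B, C, H, W); mask: (B, 1, H, W) in {0, 1},
    where 1 marks regions the baseline reconstructs reliably.
    """
    return mask * baseline_feat + (1.0 - mask) * semantic_feat

def self_distillation_loss(baseline_out, spgrm_out):
    """Align the baseline output with the semantic-guided SPGRM output;
    the SPGRM branch is detached so gradients only update the baseline."""
    return F.l1_loss(baseline_out, spgrm_out.detach())

if __name__ == "__main__":
    b, c, h, w = 2, 64, 32, 32
    fused = fill_with_complementary_mask(
        torch.randn(b, c, h, w), torch.randn(b, c, h, w),
        (torch.rand(b, 1, h, w) > 0.5).float())
    loss = self_distillation_loss(torch.rand(b, 3, h, w), torch.rand(b, 3, h, w))
    print(fused.shape, loss.item())
```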

Citations: 0
Particle Diffusion Matching: Random Walk Correspondence Search for the Alignment of Standard and Ultra-Widefield Fundus Images
IF 13.7 Pub Date : 2026-01-16 DOI: 10.1109/TIP.2026.3653198
Kang Geon Lee;Soochahn Lee;Kyoung Mu Lee
We propose a robust alignment technique for Standard Fundus Images (SFIs) and Ultra-Widefield Fundus Images (UWFIs), which are challenging to align due to differences in scale, appearance, and the scarcity of distinctive features. Our method, termed Particle Diffusion Matching (PDM), performs alignment through an iterative Random Walk Correspondence Search (RWCS) guided by a diffusion model. At each iteration, the model estimates displacement vectors for particle points by considering local appearance, the structural distribution of particles, and an estimated global transformation, enabling progressive refinement of correspondences even under difficult conditions. PDM achieves state-of-the-art performance across multiple retinal image alignment benchmarks, showing substantial improvement on a primary dataset of SFI-UWFI pairs and demonstrating its effectiveness in real-world clinical scenarios. By providing accurate and scalable correspondence estimation, PDM overcomes the limitations of existing methods and facilitates the integration of complementary retinal image modalities. This diffusion-guided search strategy offers a new direction for improving downstream supervised learning, disease diagnosis, and multi-modal image analysis in ophthalmology.
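As a rough illustration of the iterative refinement loop described above, the sketch below alternates per-particle displacement updates with a least-squares estimate of a global 2D affine transform; the displacement predictor is a hypothetical stand-in for the diffusion model, and all names are assumptions rather than the paper's implementation.

```python
import torch

def fit_affine(src, dst):
    """Least-squares 2D affine transform mapping src -> dst.
    src, dst: (N, 2). Returns a (2, 3) matrix."""
    ones = torch.ones(src.shape[0], 1)
    a = torch.cat([src, ones], dim=1)            # (N, 3) homogeneous coordinates
    sol = torch.linalg.lstsq(a, dst).solution    # (3, 2)
    return sol.T                                 # (2, 3)

def iterative_matching(particles, predict_step, num_iters=10):
    """Random-walk-style refinement: at each iteration a predictor proposes
    per-particle displacements (the stand-in for the diffusion model), and a
    global affine transform is re-fit to the current correspondences."""
    src = particles.clone()
    cur = particles.clone()
    for _ in range(num_iters):
        cur = cur + predict_step(cur)            # local displacement update
        affine = fit_affine(src, cur)            # global transform estimate
    return cur, affine

if __name__ == "__main__":
    pts = torch.rand(100, 2) * 256
    # Hypothetical predictor: a small random step standing in for the model.
    noisy_step = lambda p: 0.5 * torch.randn_like(p)
    matched, affine = iterative_matching(pts, noisy_step)
    print(matched.shape, affine.shape)
```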
Citations: 0
A Complex-valued SAR Foundation Model Based on Physically Inspired Representation Learning.
IF 13.7 Pub Date : 2026-01-16 DOI: 10.1109/TIP.2026.3652417
Mengyu Wang, Hanbo Bi, Yingchao Feng, Linlin Xin, Shuo Gong, Tianqi Wang, Zhiyuan Yan, Peijin Wang, Wenhui Diao, Xian Sun

Vision foundation models in remote sensing have been extensively studied due to their superior generalization on various downstream tasks. Synthetic Aperture Radar (SAR) offers all-day, all-weather imaging capabilities, providing significant advantages for Earth observation. However, establishing a foundation model for SAR image interpretation inevitably encounters the challenges of insufficient information utilization and poor interpretability. In this paper, we propose a remote sensing foundation model based on complex-valued SAR data, which simulates the polarimetric decomposition process for pre-training, i.e., characterizing pixel scattering intensity as a weighted combination of scattering bases and scattering coefficients, thereby endowing the foundation model with physical interpretability. Specifically, we construct a series of scattering queries, each representing an independent and meaningful scattering basis; these queries interact with SAR features in the scattering query decoder and output the corresponding scattering coefficients. To guide the pre-training process, a polarimetric decomposition loss and a power self-supervised loss are constructed. The former aligns the predicted coefficients with Yamaguchi coefficients, while the latter reconstructs power from the predicted coefficients and compares it to the input image's power. The performance of our foundation model is validated on nine typical downstream tasks, achieving state-of-the-art results. Notably, the foundation model can extract stable feature representations and exhibits strong generalization, even in data-scarce conditions.
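The two pre-training losses described above can be written down compactly. The sketch below assumes per-pixel coefficient maps, a simplified per-basis power term, and L1 penalties; these are illustrative choices, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def polarimetric_decomposition_loss(pred_coeffs, yamaguchi_coeffs):
    """Align predicted scattering coefficients with reference Yamaguchi
    decomposition coefficients (illustrative L1 alignment)."""
    return F.l1_loss(pred_coeffs, yamaguchi_coeffs)

def power_self_supervised_loss(pred_coeffs, scattering_bases, input_power):
    """Reconstruct per-pixel power as a weighted combination of scattering
    bases and compare it against the power of the input SAR image.

    pred_coeffs: (B, K, H, W); scattering_bases: (K,) per-basis power terms
    (a simplification); input_power: (B, 1, H, W)."""
    recon_power = (pred_coeffs * scattering_bases.view(1, -1, 1, 1)).sum(
        dim=1, keepdim=True)
    return F.l1_loss(recon_power, input_power)

if __name__ == "__main__":
    b, k, h, w = 2, 4, 16, 16
    coeffs = torch.rand(b, k, h, w)
    loss = (polarimetric_decomposition_loss(coeffs, torch.rand(b, k, h, w))
            + power_self_supervised_loss(coeffs, torch.rand(k),
                                         torch.rand(b, 1, h, w)))
    print(loss.item())
```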

Citations: 0
Hippocampal Memory-Like Separation-Completion Collaborative Network for Unbiased Scene Graph Generation
IF 13.7 Pub Date : 2026-01-15 DOI: 10.1109/TIP.2025.3650668
Ruonan Zhang;Gaoyun An;Yiqing Hao;Dapeng Oliver Wu
Scene Graph Generation (SGG) is a challenging cross-modal task that aims to identify entities and relationships in a scene simultaneously. Due to the highly skewed long-tailed distribution, the generated scene graphs are dominated by the relation categories of head samples. Current works address this problem by designing re-balancing strategies at the data level or refining relation representations at the feature level. Different from them, we attribute this impact to catastrophic interference, that is, the subsequent learning of dominant relations tends to overwrite the earlier learning of rare relations. To address it at the modeling level, a Hippocampal Memory-Like Separation-Completion Collaborative Network (HMSC2) is proposed here, which imitates the hippocampal encoding and retrieval process. Inspired by the pattern separation of the dentate gyrus during memory encoding, a Gradient Separation Classifier and a Prototype Separation Learning module are proposed to relieve the catastrophic interference on tail categories by modeling separated classifiers and prototypes. In addition, inspired by the pattern completion of area CA3 of the hippocampus during memory retrieval, a Prototype Completion Module is designed to supplement the incomplete information of prototypes by introducing relation representations as cues. Finally, the completed prototype and relation representations are connected within a hypersphere space by a Contrastive Connected Module. Experimental results on the Visual Genome and GQA datasets show that our HMSC2 achieves state-of-the-art performance on the unbiased SGG task, effectively relieving the long-tailed problem. The source code is released on GitHub: https://github.com/Nora-Zhang98/HMSC2
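As a minimal illustration of contrasting relation features against class prototypes on a hypersphere, the sketch below uses an InfoNCE-style loss over L2-normalized features; it is a generic stand-in, not the paper's Gradient Separation Classifier or Contrastive Connected Module, and all names and shapes are assumptions.

```python
import torch
import torch.nn.functional as F

def prototype_contrastive_loss(relation_feats, prototypes, labels, tau=0.1):
    """Contrast L2-normalized relation features against per-class prototypes
    on the unit hypersphere (InfoNCE-style, illustrative).

    relation_feats: (N, D); prototypes: (C, D); labels: (N,) with class ids."""
    feats = F.normalize(relation_feats, dim=-1)
    protos = F.normalize(prototypes, dim=-1)
    logits = feats @ protos.T / tau      # cosine similarity to each class prototype
    return F.cross_entropy(logits, labels)

if __name__ == "__main__":
    n, c, d = 32, 50, 128
    loss = prototype_contrastive_loss(
        torch.randn(n, d), torch.randn(c, d), torch.randint(0, c, (n,)))
    print(loss.item())
```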
Citations: 0
Fast Track Anything With Sparse Spatio-Temporal Propagation for Unified Video Segmentation
IF 13.7 Pub Date : 2026-01-15 DOI: 10.1109/TIP.2025.3649365
Jisheng Dang;Huicheng Zheng;Zhixuan Chen;Zhang Li;Yulan Guo;Tat-Seng Chua
Recent advances in “track-anything” models have significantly improved fine-grained video understanding by simultaneously handling multiple video segmentation and tracking tasks. However, existing models often struggle with robust and efficient temporal propagation. To address these challenges, we propose the Sparse Spatio-Temporal Propagation (SSTP) method, which achieves robust and efficient unified video segmentation by selectively leveraging key spatio-temporal features in videos. Specifically, we design a dynamic 3D spatio-temporal convolution to aggregate global multi-frame spatio-temporal information into memory frames during memory construction. Additionally, we introduce a spatio-temporal aggregation reading strategy to efficiently aggregate the relevant spatio-temporal features from multiple memory frames during memory retrieval. By combining SSTP with an image segmentation foundation model, such as the segment anything model, our method effectively addresses multiple data-scarce video segmentation tasks. Our experimental results demonstrate state-of-the-art performance on five video segmentation tasks across eleven datasets, outperforming both task-specific and unified methods. Notably, SSTP exhibits strong robustness in handling sparse, low-frame-rate videos, making it well-suited for real-world applications.
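One plausible way to realize a sparse spatio-temporal memory read, in the spirit of the aggregation reading described above, is top-k attention over tokens pooled from several memory frames. The sketch below is an assumption-laden illustration, not the paper's exact operator.

```python
import torch

def sparse_memory_read(query, mem_keys, mem_values, top_k=16):
    """Read from a spatio-temporal memory by attending only to the top-k
    most similar memory entries per query token (illustrative sketch).

    query: (B, Nq, D); mem_keys / mem_values: (B, Nm, D), where Nm pools
    tokens from several memory frames."""
    scores = query @ mem_keys.transpose(1, 2) / query.shape[-1] ** 0.5   # (B, Nq, Nm)
    topk_scores, topk_idx = scores.topk(top_k, dim=-1)
    attn = torch.softmax(topk_scores, dim=-1)                            # (B, Nq, k)
    # Gather the selected memory values: (B, Nq, k, D)
    gathered = torch.gather(
        mem_values.unsqueeze(1).expand(-1, query.shape[1], -1, -1),
        2, topk_idx.unsqueeze(-1).expand(-1, -1, -1, mem_values.shape[-1]))
    return (attn.unsqueeze(-1) * gathered).sum(dim=2)                    # (B, Nq, D)

if __name__ == "__main__":
    b, nq, nm, d = 1, 64, 4 * 256, 32   # e.g., 4 memory frames of 256 tokens each
    out = sparse_memory_read(torch.randn(b, nq, d),
                             torch.randn(b, nm, d), torch.randn(b, nm, d))
    print(out.shape)  # torch.Size([1, 64, 32])
```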
Citations: 0
FourierSR: A Fourier Token-Based Plugin for Efficient Image Super-Resolution
IF 13.7 Pub Date : 2026-01-15 DOI: 10.1109/TIP.2025.3648872
Wenjie Li;Heng Guo;Yuefeng Hou;Zhanyu Ma
Image super-resolution (SR) aims to recover high-resolution images from low-resolution inputs, and improving SR efficiency is a high-profile challenge. However, commonly used units in SR, like convolutions and window-based Transformers, have limited receptive fields, making it challenging to apply them to improve SR under extremely limited computational cost. To address this issue, inspired by modeling the convolution theorem through token mixing, we propose a Fourier token-based plugin called FourierSR to improve SR uniformly, which avoids the instability or inefficiency of existing token-mixing technologies when applied as plug-ins. Furthermore, compared to convolutions and window-based Transformers, our FourierSR only utilizes Fourier transforms and multiplication operations, greatly reducing complexity while having global receptive fields. Experimental results show that our FourierSR, as a plug-and-play unit, brings an average PSNR gain of 0.34 dB for existing efficient SR methods on the Manga109 test set at the $\times 4$ scale, while the average increase in the number of Params and FLOPs is only 0.6% and 1.5% of the original sizes. We will release our codes upon acceptance.
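The core operation, global token mixing via the convolution theorem (FFT, element-wise multiplication with a learnable filter, inverse FFT), can be sketched as below; the layer shape, initialization, and normalization are assumptions, not the released FourierSR code.

```python
import torch
import torch.nn as nn

class FourierTokenMix(nn.Module):
    """Global token mixing via FFT + element-wise multiplication (sketch).

    Assumes input features of shape (B, C, H, W). By the convolution theorem,
    the learnable frequency-domain filter acts as a global convolution."""
    def __init__(self, channels: int, height: int, width: int):
        super().__init__()
        # rfft2 keeps width // 2 + 1 frequency bins along the last axis.
        self.filter = nn.Parameter(
            torch.randn(channels, height, width // 2 + 1, 2) * 0.02)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        x_freq = torch.fft.rfft2(x, dim=(-2, -1), norm="ortho")
        weight = torch.view_as_complex(self.filter)
        x_freq = x_freq * weight          # global receptive field, no sliding window
        return torch.fft.irfft2(x_freq, s=(h, w), dim=(-2, -1), norm="ortho")

if __name__ == "__main__":
    layer = FourierTokenMix(channels=64, height=48, width=48)
    out = layer(torch.randn(2, 64, 48, 48))
    print(out.shape)  # torch.Size([2, 64, 48, 48])
```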
Citations: 0
Toward Generative Understanding: Incremental Few-Shot Semantic Segmentation With Diffusion Models
IF 13.7 Pub Date : 2026-01-14 DOI: 10.1109/TIP.2026.3652357
Qun Li;Lu Huang;Fu Xiao;Na Zhao;Bir Bhanu
Incremental Few-shot Semantic Segmentation (iFSS) aims to learn novel classes with limited samples while preserving segmentation capability for base classes, addressing the challenge of continual learning of novel classes and catastrophic forgetting of previously seen classes. Existing methods mainly rely on techniques such as knowledge distillation and background learning, which, while partially effective, still suffer from issues such as feature drift and limited generalization to real-world novel classes, primarily due to a bidirectional coupling bottleneck between the learning of base classes and novel classes. To address these challenges, we propose, for the first time, a diffusion-based generative framework for iFSS. Specifically, we bridge the gap between generative and discriminative tasks through an innovative binary-to-RGB mask mapping mechanism, enabling pre-trained diffusion models to focus on target regions via class-specific semantic embedding optimization while sharpening foreground-background contrast with color embeddings. A lightweight post-processor then refines the generated images into high-quality binary masks. Crucially, by leveraging diffusion priors, our framework avoids complex training strategies. The optimization of class-specific semantic embeddings decouples the embedding spaces of base and novel classes, inherently preventing feature drift, mitigating catastrophic forgetting, and enabling rapid novel-class adaptation. Experimental results show that our method achieves state-of-the-art performance on the PASCAL-$5^{i}$ and COCO-$20^{i}$ datasets using much less data than other methods, while exhibiting competitive results in cross-domain few-shot segmentation tasks. Project page: https://ifss-diff.github.io/
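The binary-to-RGB mask mapping can be illustrated with a tiny encode/decode pair: a binary mask is rendered as a two-color RGB image that a diffusion model can generate, and a simple threshold maps it back. The colors and threshold below are assumptions, and the thresholding merely stands in for the paper's lightweight post-processor.

```python
import torch

# Hypothetical foreground/background colors in [-1, 1] RGB space; the actual
# mapping used by the paper may differ.
FG_COLOR = torch.tensor([1.0, 1.0, 1.0])
BG_COLOR = torch.tensor([-1.0, -1.0, -1.0])

def binary_to_rgb(mask):
    """Map a binary mask (B, 1, H, W) to an RGB image (B, 3, H, W) so that a
    diffusion model trained to produce RGB outputs can represent masks."""
    mask = mask.float()
    fg = FG_COLOR.view(1, 3, 1, 1)
    bg = BG_COLOR.view(1, 3, 1, 1)
    return mask * fg + (1.0 - mask) * bg

def rgb_to_binary(rgb, threshold=0.0):
    """Post-process a generated RGB mask image back into a binary mask by
    thresholding the channel mean (a simple stand-in for a learned refiner)."""
    return (rgb.mean(dim=1, keepdim=True) > threshold).float()

if __name__ == "__main__":
    m = (torch.rand(2, 1, 64, 64) > 0.5).float()
    rgb = binary_to_rgb(m)
    recovered = rgb_to_binary(rgb)
    print(torch.equal(m, recovered))  # True for this round trip
```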
Citations: 0
EinsPT: Efficient Instance-Aware Pre-Training of Vision Foundation Models
IF 13.7 Pub Date : 2026-01-14 DOI: 10.1109/TIP.2026.3652371
Zhaozhi Wang;Yunjie Tian;Lingxi Xie;Yaowei Wang;Qixiang Ye
In this study, we introduce EinsPT, an efficient instance-aware pre-training paradigm designed to reduce the transfer gap between vision foundation models and downstream instance-level tasks. Unlike conventional image-level pre-training that relies solely on unlabeled images, EinsPT leverages both image reconstruction and instance annotations to learn representations that are spatially coherent and instance discriminative. To achieve this efficiently, we propose a proxy–foundation architecture that decouples high-resolution and low-resolution learning: the foundation model processes masked low-resolution images for global semantics, while a lightweight proxy model operates on complete high-resolution images to preserve fine-grained details. The two branches are jointly optimized through reconstruction and instance-level prediction losses on fused features. Extensive experiments demonstrate that EinsPT consistently enhances recognition accuracy across various downstream tasks with substantially reduced computational cost, while qualitative results further reveal improved instance perception and completeness in visual representations. Code is available at github.com/feufhd/EinsPT
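A toy version of the decoupled proxy–foundation idea is sketched below: a heavier branch sees a masked low-resolution view while a lightweight proxy sees the full high-resolution image, and their features are fused for the downstream losses. The encoders, masking scheme, and fusion by addition are placeholders, not EinsPT's architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyTwoBranch(nn.Module):
    """Toy proxy-foundation pair: the heavy branch sees a masked low-res
    image, the light proxy sees the full high-res image, and their feature
    maps are fused at the proxy resolution (illustrative only)."""
    def __init__(self, dim=64):
        super().__init__()
        self.foundation = nn.Sequential(            # stand-in for a large encoder
            nn.Conv2d(3, dim, 3, stride=2, padding=1), nn.GELU(),
            nn.Conv2d(dim, dim, 3, stride=2, padding=1))
        self.proxy = nn.Conv2d(3, dim, 3, stride=4, padding=1)  # lightweight branch

    def forward(self, hires, mask_ratio=0.6):
        lowres = F.interpolate(hires, scale_factor=0.5, mode="bilinear",
                               align_corners=False)
        # Crude masking stand-in: zero out a random fraction of pixels.
        keep = (torch.rand_like(lowres[:, :1]) > mask_ratio).float()
        f_found = self.foundation(lowres * keep)
        f_proxy = self.proxy(hires)
        f_found = F.interpolate(f_found, size=f_proxy.shape[-2:], mode="bilinear",
                                align_corners=False)
        return f_found + f_proxy        # fused features for reconstruction / instance losses

if __name__ == "__main__":
    fused = ToyTwoBranch()(torch.randn(2, 3, 128, 128))
    print(fused.shape)
```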
Citations: 0
Harnessing Group-Oriented Consistency Constraints for Semi-Supervised Semantic Segmentation in CdZnTe Semiconductors
IF 13.7 Pub Date : 2026-01-14 DOI: 10.1109/TIP.2025.3646474
Peihao Li;Yan Fang;Man Liu;Huihui Bai;Anhong Wang;Yunchao Wei;Yao Zhao
Labeling Cadmium Zinc Telluride (CdZnTe) semiconductor images is challenging due to the low-contrast defect boundaries, requiring annotators to cross-reference multiple views. These views share a single ground truth (GT), forming a unique “many-to-one” relationship. This characteristic renders advanced semi-supervised semantic segmentation (SSS) methods suboptimal, as they are generally limited by a “one-to-one” relationship, where each image is independently associated with its GT. Such a limitation may lead to error accumulation in low-contrast regions, further exacerbating confirmation bias. To address this issue, we revisit the SSS pipeline from a group-oriented perspective and propose a human-inspired solution: the Intra-group Consistency Augmentation Framework (ICAF). First, we experimentally validate the inherent consistency constraints within CdZnTe groups, establishing a group-oriented baseline using Intra-group View Sampling (IVS). Building on this insight, we introduce the Pseudo-label Correction Network (PCN) to enhance consistency representation, which consists of two key modules. The View Augmentation Module (VAM) improves boundary details by dynamically synthesizing a boundary-aware view through the aggregation of multiple views. In the View Correction Module (VCM), this synthesized view is paired with other views for information interaction, effectively emphasizing salient regions while minimizing noise. Extensive experiments demonstrate the effectiveness of our solution for CdZnTe materials. Leveraging DeepLabV3+ with a ResNet-101 backbone as our segmentation model, we achieve a 70.6% mIoU on the CdZnTe dataset using only 2 groups of annotated data (5‰). The code is available at https://github.com/pipixiapipi/ICAF
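The "many-to-one" relationship can be exploited with a simple intra-group consistency term: each view's prediction is pulled toward the group-aggregated prediction, alongside supervision from the shared GT. The sketch below is a generic baseline in that spirit, not the paper's VAM/VCM modules, and all shapes and weightings are assumptions.

```python
import torch
import torch.nn.functional as F

def intra_group_consistency_loss(view_logits, shared_gt=None):
    """Encourage per-view predictions to agree with the group consensus.

    view_logits: (V, B, C, H, W) logits for V views that share one GT.
    shared_gt: optional (B, H, W) label map for the supervised part."""
    probs = view_logits.softmax(dim=2)
    consensus = probs.mean(dim=0).detach()           # group-aggregated pseudo-label
    consistency = F.mse_loss(probs, consensus.unsqueeze(0).expand_as(probs))
    if shared_gt is None:
        return consistency
    supervised = sum(F.cross_entropy(v, shared_gt) for v in view_logits) / len(view_logits)
    return supervised + consistency

if __name__ == "__main__":
    v, b, c, h, w = 3, 2, 4, 32, 32
    loss = intra_group_consistency_loss(torch.randn(v, b, c, h, w),
                                        torch.randint(0, c, (b, h, w)))
    print(loss.item())
```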
Citations: 0
Diagnosing and Improving Vector-Quantization-Based Blind Image Restoration
IF 13.7 Pub Date : 2026-01-13 DOI: 10.1109/TIP.2026.3651985
Hongyu Li;Tianyi Xu;Zengyou Wang;Xiantong Zhen;Ran Gu;David Zhang;Jun Xu
Vector-Quantization (VQ) based discrete generative models are widely used to learn powerful high-quality (HQ) priors for blind image restoration (BIR). In this paper, we diagnose the side effects of the discrete VQ process essential to VQ-based BIR methods: 1) confining the representation capacity of the HQ codebook, 2) being error-prone in code index prediction on low-quality (LQ) images, and 3) under-valuing the importance of the input LQ image. These observations motivate us to learn a continuous feature representation of the HQ codebook for better restoration performance than the discrete VQ process. To further improve the restoration fidelity, we propose a new Self-in-Cross-Attention (SinCA) module to augment the HQ codebook with the features of the input LQ image and perform cross-attention between the LQ features and the input-augmented codebook. In this way, our SinCA leverages the input LQ image to enhance the representation of the codebook for restoration fidelity. Experiments on four typical VQ-based BIR methods demonstrate that, by replacing the VQ process with a transformer using our SinCA, they achieve better quantitative and qualitative performance on blind image super-resolution and blind face restoration. The code and pre-trained models are publicly released at https://github.com/lhy-85/SinCA
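The central idea, cross-attention from LQ features to an HQ codebook augmented with the LQ input's own features, can be sketched with a standard multi-head attention layer; augmenting the codebook by concatenation is an assumption here, and the module below is not the paper's exact SinCA.

```python
import torch
import torch.nn as nn

class CodebookCrossAttention(nn.Module):
    """Cross-attention from LQ features (queries) to an HQ codebook that has
    been augmented with the LQ features themselves (keys/values). A sketch of
    the idea described in the abstract, not the released SinCA module."""
    def __init__(self, dim=256, num_heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, lq_tokens, codebook):
        # lq_tokens: (B, N, D); codebook: (K, D) shared HQ code entries.
        b = lq_tokens.shape[0]
        book = codebook.unsqueeze(0).expand(b, -1, -1)      # (B, K, D)
        augmented = torch.cat([book, lq_tokens], dim=1)     # input-augmented codebook
        out, _ = self.attn(query=lq_tokens, key=augmented, value=augmented)
        return out                                          # continuous HQ-prior features

if __name__ == "__main__":
    layer = CodebookCrossAttention(dim=256)
    out = layer(torch.randn(2, 64, 256), torch.randn(1024, 256))
    print(out.shape)  # torch.Size([2, 64, 256])
```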
Citations: 0