
Latest articles in IEEE Transactions on Image Processing (a publication of the IEEE Signal Processing Society)

Imbalanced Multiclassification Challenges in Whole Slide Image: Cross-Patient Pseudo Bags Generation and Curriculum Contrastive Learning With Dynamic Rebalancing
IF 13.7 | Pub Date: 2026-01-21 | DOI: 10.1109/TIP.2026.3654402 | Vol. 35, pp. 904-914
Yonghuang Wu;Xuan Xie;Chengqian Zhao;Pengfei Song;Feiyu Yin;Guoqing Wu;Jinhua Yu
The multi-classification of histopathological images under imbalanced sample conditions remains a long-standing unresolved challenge in computational pathology. In this paper, we propose for the first time a cross-patient pseudo-bag generation technique to address this challenge. Our key innovation lies in a cross-patient pseudo-bag generation framework that extracts complementary pathological features to construct distributionally consistent pseudo-bags. To resolve the critical challenge of distributional alignment in pseudo-bag generation, we propose an affinity-driven curriculum contrastive learning strategy, integrating sample affinity metrics with progressive training to stabilize representation learning. Unlike prior methods focused on bag-level embeddings, our framework pioneers a paradigm shift toward multi-instance feature distribution mining, explicitly modeling inter-bag heterogeneity to address class imbalance. Our method demonstrates significant performance improvements on three datasets with multiple classification difficulties, outperforming the second-best method by an average of 1.95 percentage points in F1 score and 2.07 percentage points in ACC.
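The cross-patient idea is easiest to picture as pooling patch-level features from several patients of the same minority class and resampling them into new bags. The NumPy sketch below illustrates only that resampling step under uniform sampling; the function name `make_pseudo_bags`, the bag sizes, and the omission of the paper's affinity metric and curriculum schedule are assumptions of this illustration, not the authors' implementation.

```python
import numpy as np

def make_pseudo_bags(patient_bags, n_pseudo, bag_size, rng=None):
    """Pool instance features from several patients of the same class and
    resample them into fixed-size pseudo-bags.  The affinity-driven selection
    and curriculum scheduling of the paper are replaced by uniform sampling."""
    rng = rng or np.random.default_rng(0)
    pool = np.concatenate(patient_bags, axis=0)          # (sum_i n_i, d)
    pseudo_bags = []
    for _ in range(n_pseudo):
        idx = rng.choice(len(pool), size=bag_size, replace=False)
        pseudo_bags.append(pool[idx])
    return pseudo_bags

# e.g. three patients contributing 200 / 80 / 50 patch embeddings of dimension 512
bags = [np.random.default_rng(i).standard_normal((n, 512)) for i, n in enumerate((200, 80, 50))]
extra_bags = make_pseudo_bags(bags, n_pseudo=10, bag_size=64)
```

A downstream multiple-instance classifier would then treat such pseudo-bags as additional training bags for the under-represented class.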
Citations: 0
A Variational Multi-Scale Model for Multi-Exposure Image Fusion
IF 13.7 | Pub Date: 2026-01-19 | DOI: 10.1109/TIP.2025.3650052 | Vol. 35, pp. 701-716
Yuming Yang;Wei Wang
Multi-exposure image fusion (MEF) is the main method to obtain High Dynamic Range (HDR) images by fusing multiple images taken under various exposure values. In this paper, we propose and develop a novel variational model based on detail-base decomposition for MEF. The main idea is to incorporate the decomposition procedure and the reconstruction procedure into a unified framework, and to let the detail information and the base information interact at the same time. Specifically, we make use of Tikhonov regularization to model the base layer, and we present an efficient design to obtain the detail layer, which is able to capture more detailed information effectively. Meanwhile, we incorporate multi-scale techniques to remove halo artifacts. Numerically, we apply the alternating direction method of multipliers (ADMM) to solve the proposed minimization problem. Theoretically, we study the existence of the solution of the proposed model and the convergence of the proposed ADMM algorithm. Experimental examples demonstrate that the proposed model outperforms other tested methods in terms of visual quality and several criteria; e.g., it gives the best Natural Image Quality Evaluator (NIQE) values, with a 1%-10% improvement, in real image fusion experiments, and the best PSNR values, with a 13%-20% improvement, in the synthetic image fusion experiment.
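For context, a generic detail-base split with a Tikhonov-regularized base layer can be written as below; this is an illustrative single-exposure energy, not the paper's full variational model, which additionally couples decomposition, reconstruction, and multi-scale terms.

```latex
% Illustrative decomposition of exposure I_k into a smooth base B_k and a detail layer D_k.
\min_{B_k} \; \frac{1}{2}\,\lVert B_k - I_k \rVert_2^2
          + \frac{\lambda}{2}\,\lVert \nabla B_k \rVert_2^2,
\qquad D_k = I_k - B_k .
```

This simplified sub-problem is quadratic and reduces to solving $(I - \lambda\Delta)B_k = I_k$; ADMM becomes relevant once the decomposition is coupled with the fusion and reconstruction terms of the full model.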
Citations: 0
Selecting and Pruning: A Differentiable Causal Sequentialized State-Space Model for Two-View Correspondence Learning
IF 13.7 | Pub Date: 2026-01-16 | DOI: 10.1109/TIP.2026.3653189 | Vol. 35, pp. 816-829
Xiang Fang;Shihua Zhang;Hao Zhang;Xiaoguang Mei;Huabing Zhou;Jiayi Ma
Two-view correspondence learning aims to discern true and false correspondences between image pairs by recognizing their underlying different information. Previous methods either treat the information equally or require the explicit storage of the entire context, tending to be laborious in real-world scenarios. Inspired by Mamba’s inherent selectivity, we propose CorrMamba, a Correspondence filter leveraging Mamba’s ability to selectively mine information from true correspondences while mitigating interference from false ones, thus achieving adaptive focus at a lower cost. To prevent Mamba from being potentially impacted by unordered keypoints that obscured its ability to mine spatial information, we customize a causal sequential learning approach based on the Gumbel-Softmax technique to establish causal dependencies between features in a fully autonomous and differentiable manner. Additionally, a local-context enhancement module is designed to capture critical contextual cues essential for correspondence pruning, complementing the core framework. Extensive experiments on relative pose estimation, visual localization, and analysis demonstrate that CorrMamba achieves state-of-the-art performance. Notably, in outdoor relative pose estimation, our method surpasses the previous SOTA by 2.58 absolute percentage points in AUC@20°, highlighting its practical superiority. Our code is publicly available at https://github.com/ShineFox/CorrMamba
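As a rough picture of how a Gumbel-Softmax step can make the ordering of unordered keypoints differentiable, the sketch below lets every output slot softly select one keypoint from the pool. The score head, the slot-wise construction, and the soft (non-hard) sampling are assumptions of this toy example; CorrMamba's actual causal-ordering rule is described in the paper.

```python
import torch
import torch.nn.functional as F

def soft_sequentialize(feats, score_head, tau=1.0):
    """Differentiable 'ordering' of unordered keypoint features: every output
    slot softly selects one keypoint via Gumbel-Softmax, so gradients flow
    through the selection before a sequential (Mamba-style) model is applied."""
    n = feats.size(0)
    logits = score_head(feats).squeeze(-1)              # (N,) keypoint scores
    slot_logits = logits.unsqueeze(0).expand(n, -1)     # (N slots, N keypoints)
    select = F.gumbel_softmax(slot_logits, tau=tau, hard=False, dim=-1)
    return select @ feats                               # (N, D) softly re-ordered features

feats = torch.randn(128, 256)                           # 128 keypoint descriptors
ordered = soft_sequentialize(feats, torch.nn.Linear(256, 1))
```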
Citations: 0
Vision Enhancing LLMs: Empowering Multimodal Knowledge Storage and Sharing in LLMs
IF 13.7 | Pub Date: 2026-01-16 | DOI: 10.1109/TIP.2025.3649356 | Vol. 35, pp. 858-871
Yunxin Li;Zhenyu Liu;Baotian Hu;Wei Wang;Yuxin Ding;Xiaochun Cao;Min Zhang
Recent advancements in multimodal large language models (MLLMs) have achieved significant multimodal generation capabilities, akin to GPT-4. These models predominantly map visual information into language representation space, leveraging the vast knowledge and powerful text generation abilities of LLMs to produce multimodal instruction-following responses. We could term this method as LLMs for Vision because of its employing LLMs for visual understanding and reasoning, yet observe that these MLLMs neglect the potential of harnessing visual knowledge to enhance the overall capabilities of LLMs, which could be regarded as Vision Enhancing LLMs. In this paper, we propose an approach called MKS2, aimed at enhancing LLMs through empowering Multimodal Knowledge Storage and Sharing in LLMs. Specifically, we introduce Modular Visual Memory (MVM), a component integrated into the internal blocks of LLMs, designed to store open-world visual information efficiently. Additionally, we present a soft Mixture of Multimodal Experts (MoMEs) architecture in LLMs to invoke multimodal knowledge collaboration during text generation. Our comprehensive experiments demonstrate that MKS2 substantially augments the reasoning capabilities of LLMs in contexts necessitating physical or commonsense knowledge. It also delivers competitive results on image-text understanding multimodal benchmarks. The codes will be available at: https://github.com/HITsz-TMG/MKS2-Multimodal-Knowledge-Storage-and-Sharing
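A soft mixture-of-experts layer, in the usual sense, routes every token to all experts and blends their outputs with softmax gate weights. The PyTorch block below is a minimal sketch of that pattern; the expert count, hidden width, and placement inside an LLM block are placeholders rather than MKS2's configuration.

```python
import torch
import torch.nn as nn

class SoftMoE(nn.Module):
    """Minimal soft mixture-of-experts block: every token is sent to all
    experts and their outputs are blended with softmax gate weights (no hard
    top-k routing).  Sizes and expert count are illustrative only."""
    def __init__(self, dim, n_experts=4, hidden=2048):
        super().__init__()
        self.gate = nn.Linear(dim, n_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, hidden), nn.GELU(), nn.Linear(hidden, dim))
            for _ in range(n_experts)
        )

    def forward(self, x):                                # x: (B, T, dim)
        w = torch.softmax(self.gate(x), dim=-1)          # (B, T, E) gate weights
        out = torch.stack([e(x) for e in self.experts], dim=-1)   # (B, T, dim, E)
        return (out * w.unsqueeze(-2)).sum(dim=-1)       # blend experts per token

y = SoftMoE(dim=1024)(torch.randn(2, 16, 1024))
```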
Citations: 0
Boosting HDR Image Reconstruction via Semantic Knowledge Transfer
IF 13.7 | Pub Date: 2026-01-16 | DOI: 10.1109/TIP.2026.3652360 | Vol. 35, pp. 1910-1922
Tao Hu;Longyao Wu;Wei Dong;Peng Wu;Jinqiu Sun;Xiaogang Xu;Qingsen Yan;Yanning Zhang
Recovering High Dynamic Range (HDR) images from multiple Standard Dynamic Range (SDR) images becomes challenging when the SDR images exhibit noticeable degradation and missing content. Leveraging scene-specific semantic priors offers a promising solution for restoring heavily degraded regions. However, these priors are typically extracted from sRGB SDR images, and the domain/format gap poses a significant challenge when applying them to HDR imaging. To address this issue, we propose a general framework that transfers semantic knowledge derived from the SDR domain via self-distillation to boost existing HDR reconstruction. Specifically, the proposed framework first introduces the Semantic Priors Guided Reconstruction Model (SPGRM), which leverages SDR image semantic knowledge to address ill-posed problems in the initial HDR reconstruction results. Subsequently, we leverage a self-distillation mechanism that constrains the color and content information with semantic knowledge, aligning the external outputs between the baseline and SPGRM. Furthermore, to transfer the semantic knowledge of the internal features, we utilize a Semantic Knowledge Alignment Module (SKAM) to fill in the missing semantic contents with complementary masks. Extensive experiments demonstrate that our framework significantly boosts HDR imaging quality for existing methods without altering the network architecture.
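The output-level alignment between the baseline and SPGRM can be pictured as a standard self-distillation term. The function below is a generic sketch: the L1 distance, the detached teacher, and the `alpha` weighting are assumptions, not the paper's exact loss design.

```python
import torch
import torch.nn.functional as F

def self_distillation_loss(baseline_out, priors_out, target=None, alpha=0.5):
    """Generic output-alignment term: the baseline HDR network is pulled toward
    the semantic-priors-guided reconstruction (treated as a frozen teacher),
    optionally combined with a supervised term against the ground truth."""
    distill = F.l1_loss(baseline_out, priors_out.detach())
    if target is None:
        return distill
    return alpha * distill + (1.0 - alpha) * F.l1_loss(baseline_out, target)

loss = self_distillation_loss(torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64))
```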
Citations: 0
Particle Diffusion Matching: Random Walk Correspondence Search for the Alignment of Standard and Ultra-Widefield Fundus Images
IF 13.7 | Pub Date: 2026-01-16 | DOI: 10.1109/TIP.2026.3653198 | Vol. 35, pp. 943-954
Kang Geon Lee;Soochahn Lee;Kyoung Mu Lee
We propose a robust alignment technique for Standard Fundus Images (SFIs) and Ultra-Widefield Fundus Images (UWFIs), which are challenging to align due to differences in scale, appearance, and the scarcity of distinctive features. Our method, termed Particle Diffusion Matching (PDM), performs alignment through an iterative Random Walk Correspondence Search (RWCS) guided by a diffusion model. At each iteration, the model estimates displacement vectors for particle points by considering local appearance, the structural distribution of particles, and an estimated global transformation, enabling progressive refinement of correspondences even under difficult conditions. PDM achieves state-of-the-art performance across multiple retinal image alignment benchmarks, showing substantial improvement on a primary dataset of SFI-UWFI pairs and demonstrating its effectiveness in real-world clinical scenarios. By providing accurate and scalable correspondence estimation, PDM overcomes the limitations of existing methods and facilitates the integration of complementary retinal image modalities. This diffusion-guided search strategy offers a new direction for improving downstream supervised learning, disease diagnosis, and multi-modal image analysis in ophthalmology.
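The iterative refinement loop can be sketched as alternating between a learned per-particle step and a global transform fit. In the NumPy code below, `predict_step` stands in for the diffusion model, the transform family is a plain affine fit, and the blending weight is arbitrary; none of these choices are taken from the paper.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine fit dst ≈ src @ A.T + t for two (N, 2) point sets."""
    X = np.hstack([src, np.ones((len(src), 1))])          # (N, 3)
    P, *_ = np.linalg.lstsq(X, dst, rcond=None)           # (3, 2)
    return P[:2].T, P[2]                                   # A: (2, 2), t: (2,)

def refine_particles(src_pts, predict_step, n_iters=8, blend=0.5):
    """Schematic random-walk refinement: a user-supplied model proposes a
    displacement for every particle, and each iterate is blended with a freshly
    fitted global affine so the particle set stays coherent."""
    pts = src_pts.copy()
    for _ in range(n_iters):
        pts = pts + predict_step(pts)                      # local random-walk step
        A, t = fit_affine(src_pts, pts)                    # current global estimate
        pts = blend * pts + (1 - blend) * (src_pts @ A.T + t)
    return pts, (A, t)

rng = np.random.default_rng(0)
src = rng.uniform(0, 1, size=(200, 2))
aligned, (A, t) = refine_particles(src, lambda p: 0.01 * rng.standard_normal(p.shape))
```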
Citations: 0
A Complex-Valued SAR Foundation Model Based on Physically Inspired Representation Learning
IF 13.7 | Pub Date: 2026-01-16 | DOI: 10.1109/TIP.2026.3652417 | Vol. 35, pp. 2094-2109
Mengyu Wang;Hanbo Bi;Yingchao Feng;Linlin Xin;Shuo Gong;Tianqi Wang;Zhiyuan Yan;Peijin Wang;Wenhui Diao;Xian Sun
Vision foundation models in remote sensing have been extensively studied due to their superior generalization on various downstream tasks. Synthetic Aperture Radar (SAR) offers all-day, all-weather imaging capabilities, providing significant advantages for Earth observation. However, establishing a foundation model for SAR image interpretation inevitably encounters the challenges of insufficient information utilization and poor interpretability. In this paper, we propose a remote sensing foundation model based on complex-valued SAR data, which simulates the polarimetric decomposition process for pre-training, i.e., characterizing pixel scattering intensity as a weighted combination of scattering bases and scattering coefficients, thereby endowing the foundation model with physical interpretability. Specifically, we construct a series of scattering queries, each representing an independent and meaningful scattering basis, which interact with SAR features in the scattering query decoder and output the corresponding scattering coefficient. To guide the pre-training process, polarimetric decomposition loss and power self-supervised loss are constructed. The former aligns the predicted coefficients with Yamaguchi coefficients, while the latter reconstructs power from the predicted coefficients and compares it to the input image’s power. The performance of our foundation model is validated on nine typical downstream tasks, achieving state-of-the-art results. Notably, the foundation model can extract stable feature representations and exhibits strong generalization, even in data-scarce conditions.
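The power self-supervised idea can be illustrated with a small reconstruction loss that rebuilds pixel power from predicted coefficients and per-basis powers and compares it with the power of the complex input. The tensor layout, the log-domain L2 distance, and the omission of the Yamaguchi-alignment term are assumptions of this sketch.

```python
import torch

def power_selfsup_loss(coeffs, basis_power, input_power, eps=1e-6):
    """Sketch of a power-reconstruction term: pixel power is modelled as a
    weighted sum of per-scattering-basis powers and compared (in log domain)
    to the power computed from the complex SAR input.

    coeffs:       (B, K, H, W)  predicted scattering coefficients
    basis_power:  (B, K, H, W)  power contribution of each scattering basis
    input_power:  (B, 1, H, W)  |complex input|^2
    """
    recon = (coeffs * basis_power).sum(dim=1, keepdim=True)
    return torch.mean((torch.log(recon + eps) - torch.log(input_power + eps)) ** 2)

loss = power_selfsup_loss(torch.rand(2, 4, 64, 64), torch.rand(2, 4, 64, 64),
                          torch.rand(2, 1, 64, 64))
```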
Citations: 0
Hippocampal Memory-Like Separation-Completion Collaborative Network for Unbiased Scene Graph Generation
IF 13.7 | Pub Date: 2026-01-15 | DOI: 10.1109/TIP.2025.3650668 | Vol. 35, pp. 770-785
Ruonan Zhang;Gaoyun An;Yiqing Hao;Dapeng Oliver Wu
Scene Graph Generation (SGG) is a challenging cross-modal task, which aims to identify entities and relationships in a scene simultaneously. Due to the highly skewed long-tailed distribution, the generated scene graphs are dominated by relation categories of head samples. Current works address this problem by designing re-balancing strategies at the data level or refining relation representations at the feature level. Different from them, we attribute this impact to catastrophic interference, that is, the subsequent learning of dominant relations tends to overwrite the earlier learning of rare relations. To address it at the modeling level, a Hippocampal Memory-Like Separation-Completion Collaborative Network (HMSC2) is proposed here, which imitates the hippocampal encoding and retrieval process. Inspired by the pattern separation of dentate gyrus during memory encoding, a Gradient Separation Classifier and a Prototype Separation Learning module are proposed to relieve the catastrophic interference of tail categories by modeling the separated classifier and prototypes. In addition, inspired by the pattern completion of area CA3 of the hippocampus during memory retrieval, a Prototype Completion Module is designed to supplement the incomplete information of prototypes by introducing relation representations as cues. Finally, the completed prototype and relation representations are connected within a hypersphere space by a Contrastive Connected Module. Experimental results on the Visual Genome and GQA datasets show our HMSC2 achieves state-of-the-art performance on the unbiased SGG task, effectively relieving the long-tailed problem. The source codes are released on GitHub: https://github.com/Nora-Zhang98/HMSC2
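For intuition about the hypersphere prototype space mentioned above, the block below implements a bare-bones cosine-similarity prototype classifier; the separation and completion mechanisms that define HMSC2 are deliberately left out, and the class count, feature dimension, and scale factor are placeholders.

```python
import torch
import torch.nn.functional as F

class HyperspherePrototypes(torch.nn.Module):
    """Bare-bones prototype classifier on the unit hypersphere: relation
    features and per-class prototypes are L2-normalised and scored by scaled
    cosine similarity."""
    def __init__(self, n_classes, dim, scale=10.0):
        super().__init__()
        self.prototypes = torch.nn.Parameter(torch.randn(n_classes, dim))
        self.scale = scale

    def forward(self, relation_feats):                   # (N, dim)
        f = F.normalize(relation_feats, dim=-1)
        p = F.normalize(self.prototypes, dim=-1)
        return self.scale * f @ p.t()                    # (N, n_classes) logits

logits = HyperspherePrototypes(n_classes=50, dim=512)(torch.randn(8, 512))
```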
Citations: 0
Fast Track Anything With Sparse Spatio-Temporal Propagation for Unified Video Segmentation
IF 13.7 | Pub Date: 2026-01-15 | DOI: 10.1109/TIP.2025.3649365 | Vol. 35, pp. 955-969
Jisheng Dang;Huicheng Zheng;Zhixuan Chen;Zhang Li;Yulan Guo;Tat-Seng Chua
Recent advances in “track-anything” models have significantly improved fine-grained video understanding by simultaneously handling multiple video segmentation and tracking tasks. However, existing models often struggle with robust and efficient temporal propagation. To address these challenges, we propose the Sparse Spatio-Temporal Propagation (SSTP) method, which achieves robust and efficient unified video segmentation by selectively leveraging key spatio-temporal features in videos. Specifically, we design a dynamic 3D spatio-temporal convolution to aggregate global multi-frame spatio-temporal information into memory frames during memory construction. Additionally, we introduce a spatio-temporal aggregation reading strategy to efficiently aggregate the relevant spatio-temporal features from multiple memory frames during memory retrieval. By combining SSTP with an image segmentation foundation model, such as the segment anything model, our method effectively addresses multiple data-scarce video segmentation tasks. Our experimental results demonstrate state-of-the-art performance on five video segmentation tasks across eleven datasets, outperforming both task-specific and unified methods. Notably, SSTP exhibits strong robustness in handling sparse, low-frame-rate videos, making it well-suited for real-world applications.
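The memory-construction step can be pictured as a 3D convolution mixing information across frames before collapsing time into a memory frame. The module below is a toy stand-in: the kernel size, the mean pooling over time, and the absence of SSTP's dynamic weighting and sparse selection are simplifications.

```python
import torch
import torch.nn as nn

class TemporalAggregate(nn.Module):
    """Toy stand-in for aggregating multi-frame context into a memory frame
    with a 3-D convolution over (time, height, width)."""
    def __init__(self, channels):
        super().__init__()
        self.conv3d = nn.Conv3d(channels, channels, kernel_size=3, padding=1)

    def forward(self, frames):                 # frames: (B, C, T, H, W)
        mixed = self.conv3d(frames)            # spatio-temporal mixing
        return mixed.mean(dim=2)               # collapse time -> (B, C, H, W)

memory = TemporalAggregate(64)(torch.randn(1, 64, 5, 32, 32))
```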
Citations: 0
FourierSR: A Fourier Token-Based Plugin for Efficient Image Super-Resolution
IF 13.7 | Pub Date: 2026-01-15 | DOI: 10.1109/TIP.2025.3648872 | Vol. 35, pp. 732-742
Wenjie Li;Heng Guo;Yuefeng Hou;Zhanyu Ma
Image super-resolution (SR) aims to recover high-resolution images from low-resolution inputs, and improving SR efficiency is a high-profile challenge. However, commonly used units in SR, like convolutions and window-based Transformers, have limited receptive fields, making it challenging to apply them to improve SR under extremely limited computational cost. To address this issue, inspired by modeling the convolution theorem through token mixing, we propose a Fourier token-based plugin called FourierSR to improve SR uniformly, which avoids the instability or inefficiency of existing token-mixing technologies when applied as plug-ins. Furthermore, compared to convolutions and window-based Transformers, our FourierSR only utilizes Fourier transform and multiplication operations, greatly reducing complexity while having global receptive fields. Experimental results show that our FourierSR, as a plug-and-play unit, brings an average PSNR gain of 0.34 dB for existing efficient SR methods on the Manga109 test set at the ×4 scale, while the average increase in the number of Params and FLOPs is only 0.6% and 1.5% of the original sizes. We will release our codes upon acceptance.
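By the convolution theorem, an element-wise product in the frequency domain corresponds to a global (circular) convolution in the spatial domain, which is the property the abstract appeals to. The layer below is a minimal FFT-based mixer built only from Fourier transforms and multiplications; the per-channel complex filter and its shape are assumptions, not the published FourierSR design.

```python
import torch
import torch.nn as nn

class FourierMix(nn.Module):
    """Minimal FFT-based global token mixer: a learnable complex-valued filter
    multiplies the rFFT of the feature map, giving a global receptive field
    without any spatial convolution or attention."""
    def __init__(self, channels, height, width):
        super().__init__()
        # one learnable complex filter per channel over the rFFT grid
        self.weight = nn.Parameter(
            torch.randn(channels, height, width // 2 + 1, 2) * 0.02
        )

    def forward(self, x):                           # x: (B, C, H, W)
        X = torch.fft.rfft2(x, norm="ortho")        # (B, C, H, W//2+1), complex
        W = torch.view_as_complex(self.weight)      # (C, H, W//2+1)
        Y = X * W                                   # frequency-domain product = global mixing
        return torch.fft.irfft2(Y, s=x.shape[-2:], norm="ortho")

mix = FourierMix(channels=64, height=48, width=48)
out = mix(torch.randn(1, 64, 48, 48))
```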
Citations: 0