
Pattern Recognition: Latest Publications

Generative model-based mixed-semantic enhancement for transductive zero-shot learning
IF 7.6 | CAS Tier 1 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-08-01 | Epub Date: 2026-01-24 | DOI: 10.1016/j.patcog.2026.113124
Huaizhou Qi , Yang Liu , Jungong Han , Lei Zhang
Zero-shot learning (ZSL) addresses the critical challenge of recognizing and classifying instances from categories not seen during training. Although generative model-based approaches have achieved notable success in ZSL, their predominant reliance on forward generation strategies coupled with excessive dependence on auxiliary information hampers model generalization and robustness. To overcome these limitations, we propose a Mixed-Semantic Enhancement framework inspired by interpolation-based feature extraction. This novel approach is designed to synthesize enriched auxiliary information through integrating authentic semantic cues, thereby refining the mapping from semantic descriptions to visual features. The enhanced feature synthesis capability enables better discrimination of ambiguous classes while preserving inter-class relationships. In addition, we establish bidirectional alignment between visual features and auxiliary information. This cross-modal interaction mechanism not only strengthens the generator’s training process through feature consistency constraints but also facilitates dynamic information exchange between modalities. Extensive experiments in a transductive setting across four benchmark datasets demonstrate significant performance gains, highlighting the robustness and effectiveness of our approach in advancing generative ZSL models.
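The mixed-semantic enhancement is described only at a high level, as interpolation-inspired synthesis of auxiliary information from authentic semantic cues. A minimal, hedged sketch of mixup-style interpolation over class-attribute vectors (the Beta-sampled coefficient and the helper name mix_semantics are assumptions, not the paper's formulation):

```python
import torch

def mix_semantics(attr_a: torch.Tensor, attr_b: torch.Tensor,
                  alpha: float = 0.2) -> torch.Tensor:
    """Mixup-style interpolation of two class-attribute vectors."""
    # Assumption: a Beta-distributed mixing coefficient; the abstract
    # does not specify the interpolation scheme.
    lam = torch.distributions.Beta(alpha, alpha).sample()
    return lam * attr_a + (1.0 - lam) * attr_b
```

The mixed vector could then be fed to the generator as enriched auxiliary information alongside the authentic class attributes.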
Citations: 0
Enhancing graph learning interpretability through modulating cluster information flow
IF 7.6 | CAS Tier 1 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-08-01 | Epub Date: 2026-01-29 | DOI: 10.1016/j.patcog.2026.113178
Jiayi Yang , Wei Ye , Xin Sun , Rui Fan , Jungong Han
Interpretable graph learning is essential for scientific applications that depend on learning models to extract reliable insights from graph-structured data. Recent efforts to explain GNN predictions focus on identifying vital substructures, such as subgraphs. However, existing approaches tend to misclassify the neighboring irrelevant nodes as part of the vital subgraphs. To address this, we propose Cluster Information Flow Graph Neural Networks (CIFlow-GNN), a built-in model-level method that provides accurate interpretable subgraph explanations by modulating the cluster information flow. CIFlow-GNN incorporates two modules, i.e., the graph clustering module and the cluster prototype module. The graph clustering module partitions the nodes according to their connectivity in the graph topology and their similarity in cluster features. Specifically, we introduce a cluster feature loss to regulate information flow at the cluster level. We prove that the proposed cluster feature loss is a lower bound of the InfoNCE loss. Optimizing the cluster feature loss reduces the mutual information among clusters and achieves the modulation of cluster information flow. Subsequently, the cluster prototype module uses prototypes as a bridge to select important clusters as vital subgraphs by integrating information across all graphs. To ensure accurate correspondence between clusters and prototypes, we further modulate the cluster information flow at the prototype level. Experimental studies on both synthetic and real-world datasets demonstrate that our proposed CIFlow-GNN can identify vital subgraphs effectively and efficiently.
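The one quantitative claim in the abstract is that the cluster feature loss lower-bounds InfoNCE and that optimizing it reduces mutual information among clusters. As an illustration of that idea only (not CIFlow-GNN's actual loss), an InfoNCE-style objective over cluster centroid features:

```python
import torch
import torch.nn.functional as F

def cluster_feature_loss(cluster_feats: torch.Tensor,
                         temperature: float = 0.5) -> torch.Tensor:
    """InfoNCE-style objective over K cluster centroids of shape (K, d)."""
    z = F.normalize(cluster_feats, dim=1)
    logits = z @ z.t() / temperature                   # pairwise similarities
    labels = torch.arange(z.size(0), device=z.device)  # each cluster is its own positive
    # The diagonal is fixed at 1/temperature after normalisation, so
    # minimising the loss can only push down inter-cluster similarity.
    return F.cross_entropy(logits, labels)
```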
Citations: 0
A communication efficient boosting method for distributed spectral clustering
IF 7.6 | CAS Tier 1 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-08-01 | Epub Date: 2026-01-28 | DOI: 10.1016/j.patcog.2026.113168
Yingqiu Zhu , Danyang Huang
Spectral clustering is one of the most popular clustering techniques in statistical inference. When applied to large-scale datasets, distributed spectral clustering typically faces two major challenges. First, distributed storage may disrupt the original network structure. Second, communication among computers within a distributed system results in high communication costs. In this work, we propose a communication-efficient algorithm for distributed spectral clustering. Our motivation stems from a theoretical comparison between spectral clustering on the entire dataset (global spectral clustering) and on a subsample (local spectral clustering), where we analyze the key factors underlying their performance differences. Based on the comparison, we propose a communication-efficient distributed spectral clustering (CEDSC) method, which iteratively aggregates intermediate outputs from local spectral clustering to approximate the corresponding global quantity. In this process, only low-dimensional vectors are exchanged between computers, which is shown to be communication efficient. Simulation studies and real-data applications show that CEDSC attains higher clustering accuracy than existing distributed spectral clustering methods while requiring only modest communication. When clustering 10,000 objects, CEDSC improves clustering accuracy by about 37% over the best baseline, with a communication time below 0.4 seconds, comparable to the most communication-efficient method.
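Only low-dimensional vectors leave each machine. The sketch below illustrates that communication pattern under strong assumptions: a single round, a naive stack of local embeddings, and no alignment between the workers' eigenbases. CEDSC's iterative boosting and its approximation of the global quantity are not reproduced here.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.manifold import spectral_embedding
from sklearn.metrics.pairwise import rbf_kernel

def local_spectral_output(X_local: np.ndarray, k: int) -> np.ndarray:
    """Worker side: build a local affinity graph and return only the
    (n_local x k) spectral embedding -- the vectors that get shipped."""
    A = rbf_kernel(X_local)
    return spectral_embedding(A, n_components=k, drop_first=False)

def aggregate(local_embeddings: list[np.ndarray], k: int) -> np.ndarray:
    """Coordinator side: cluster the stacked low-dimensional outputs."""
    Z = np.vstack(local_embeddings)
    return KMeans(n_clusters=k, n_init=10).fit_predict(Z)
```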
Citations: 0
Learning robust descriptors with probabilistic embedding and reliability-aware triplet loss
IF 7.6 | CAS Tier 1 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-08-01 | Epub Date: 2026-02-07 | DOI: 10.1016/j.patcog.2026.113213
Weichao Bao , Xiaohui Wei , Caixia Zhou , Haibo Liu
Learning local descriptors from image patches is vital for many downstream tasks such as image matching and 3D reconstruction. Affected by inherent observation uncertainty resulting from factors like sensor noise and geometric variations, existing methods based on deterministic embeddings are limited in their ability to generate robust descriptors for real-world applications. Therefore, we propose a novel robust descriptor learning framework with probabilistic embedding and reliability-aware triplet loss in this paper. Specifically, we use probabilistic embeddings to represent image patches in the latent space, explicitly modeling the uncertainty by predicting a distribution rather than a deterministic point. To further enhance robustness, we propose a reliability-aware triplet loss whose core idea is to adaptively enhance the contribution of reliable samples while reducing the impact of unreliable ones based on the estimated uncertainty. The proposed framework can be seamlessly integrated into existing learning-based descriptor methods. Extensive experimental results demonstrate the effectiveness of the proposed framework, with the derived methods outperforming their original counterparts and other baselines on three different datasets. The code is available at: https://github.com/hnu-VML/bwc/tree/main/UNCERTAINTY_DESC.
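A hedged sketch of the two ingredients the abstract names: a probabilistic descriptor head that outputs a mean and a log-variance per patch, and a triplet loss whose per-sample weight comes from the estimated uncertainty. The weighting function exp(-mean log-variance) is an assumption; the paper's exact reliability score may differ.

```python
import torch
import torch.nn.functional as F

def reliability_aware_triplet(mu_a, mu_p, mu_n, log_var_a,
                              margin: float = 1.0) -> torch.Tensor:
    """Triplet loss on predicted means, down-weighted for uncertain anchors.

    mu_a, mu_p, mu_n: (B, d) anchor/positive/negative descriptor means.
    log_var_a: (B, d) anchor log-variances from the probabilistic head.
    """
    d_pos = F.pairwise_distance(mu_a, mu_p)
    d_neg = F.pairwise_distance(mu_a, mu_n)
    triplet = F.relu(d_pos - d_neg + margin)
    reliability = torch.exp(-log_var_a.mean(dim=1))  # high variance -> low weight
    return (reliability * triplet).mean()
```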
Citations: 0
Manifold regularized non-negative PCA with robust ℓ2,p-norm enhancement
IF 7.6 | CAS Tier 1 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-08-01 | Epub Date: 2026-01-30 | DOI: 10.1016/j.patcog.2026.113195
Minghua Wan , Taotao Chen , Hai Tan , Mingwei Tang , Guowei Yang
High-dimensional datasets are pervaded by noise, and their samples are typically embedded in low-dimensional manifolds; traditional robust NMF algorithms have limitations in noise reduction and in preserving the geometric structure of data. This paper proposes a novel algorithm, Manifold Regularized Non-negative Principal Component Analysis (ℓ2,p-MRNPCA), which enhances the model’s robustness to noise by introducing ℓ2,p norm constraints and maintains the intrinsic geometric structure of the data. The algorithm further incorporates a Laplacian graph regularization term to preserve local manifold structure, and additionally imposes an independent ℓ2,1-norm penalty on the residual matrix to enhance robustness. Compared to ℓ2,p-PCA, ℓ2,p-MRNPCA demonstrates stronger local learning ability in image data processing, more effectively recognizing image details and patterns. The main contribution of this study is the proposal of a new method that integrates ℓ2,p regularization, NMF, and manifold learning, enhancing the model’s robustness and recognition capabilities. During the optimization of the projection matrix, this method effectively reduces the impact of noise and maintains the geometric integrity of the original data, thus obtaining superior part-based representations. Finally, we designed a Lagrangian–KKT multiplicative update framework to solve ℓ2,p-MRNPCA and conducted experiments on three common datasets and the handwritten MNIST dataset, demonstrating optimal performance.
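For reference, the row-wise ℓ2,p matrix (quasi-)norm behind the robust penalty is conventionally defined as

\[
\|A\|_{2,p} = \Bigl( \sum_{i=1}^{n} \|a_i\|_2^{\,p} \Bigr)^{1/p}, \qquad 0 < p \le 2,
\]

where \(a_i\) is the \(i\)-th row of \(A\) (whether the sum runs over rows or columns is a matter of convention). Choosing \(p < 2\) down-weights rows with large residuals, which is the source of the robustness to noise and outliers.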
Citations: 0
Lifelong scene graph generation
IF 7.6 | CAS Tier 1 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-08-01 | Epub Date: 2026-01-30 | DOI: 10.1016/j.patcog.2026.113132
Tao He , Xin Hu , Tongtong Wu , Dongyang Zhang , Ming Li , Yuan-Fang Li , Fei Richard Yu
Scene Graph Generation (SGG) aims to predict visual relationships between object pairs in an image. Existing SGG approaches typically adopt a one-time training paradigm, which requires retraining on the entire dataset when new relationship types emerge, an impractical solution that leads to catastrophic forgetting. In this work, we introduce Lifelong Scene Graph Generation (LSGG), a challenging and practical setting where predicates arrive sequentially in a streaming fashion. We propose ICSGG, a novel in-context learning framework that reformulates visual features into symbolic textual tokens compatible with pre-trained language models. To retain prior knowledge while adapting to new tasks, ICSGG employs a knowledge-aware prompt retrieval strategy that selects relevant exemplars as in-context demonstrations for each query. This enables effective continual learning through prompt-based reasoning. Extensive experiments on two large-scale benchmarks, Visual Genome (VG) and Open Images v6, demonstrate that our method significantly outperforms existing SGG models in both lifelong and conventional settings, e.g., by about 4 to 5 percentage points over the state-of-the-art PGSG.
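Knowledge-aware prompt retrieval is named but not specified in the abstract. A minimal sketch under the assumption that retrieval is nearest-neighbour search in a shared embedding space (the function name and the exemplar bank are hypothetical):

```python
import torch
import torch.nn.functional as F

def retrieve_demonstrations(query_emb: torch.Tensor,
                            bank_embs: torch.Tensor,
                            bank_prompts: list, k: int = 4) -> list:
    """Pick the k exemplars most similar to the query to use as
    in-context demonstrations for the language model."""
    sims = F.cosine_similarity(query_emb.unsqueeze(0), bank_embs, dim=1)
    topk = torch.topk(sims, k).indices
    return [bank_prompts[i] for i in topk]
```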
Citations: 0
A visual-textual mutual guidance fusion network for remote sensing visual question answering
IF 7.6 | CAS Tier 1 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-08-01 | Epub Date: 2026-02-10 | DOI: 10.1016/j.patcog.2026.113258
Haolin Liu , Lei Chen , Xinchao Lu , Hao Wang , Lu Bai , Maoli Wang , Peng Ren
Existing remote sensing visual question answering (RS VQA) methods are challenged by the presence of small objects in extensive backgrounds, limiting the establishment of explicit cross-modal semantic relationships between visual objects and textual questions. In addition, rich visual information in remote sensing images (RSIs) has not been fully utilized during multi-modal feature fusion. To address these limitations, it is essential to strengthen RS VQA with a more effective mechanism for cross-modal semantic representation and integration. To this end, we propose a novel framework based on visual-textual mutual guidance fusion network (VMGN). Specifically, a contrast enhancement module is developed to mitigate the influence of the backgrounds and enhance the visual features of small objects. It allows the objects to occupy a prominent position in the visual features. Additionally, the transformer is used to achieve cross-modal interaction between visual and text features. It effectively models the cross-modal semantic relationship between visual and text features. Furthermore, a visual-textual mutual guidance feature fusion module is developed to explore the rich information contained within the visual features of RSIs. Our proposed framework effectively explores the rich information contained within the visual features of RSIs to establish an explicit cross-modal semantic relationship between small objects and their corresponding text. The experimental results show that our proposed framework performs better than state-of-the-art methods on three publicly available datasets. We release the reproducible code and the datasets used at https://github.com/LiuHL929/VMGN for public evaluation and possible extensive studies.
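A minimal sketch of one direction of the cross-modal interaction described above: text tokens attending to visual tokens through a standard transformer cross-attention block. Swapping the arguments gives the reverse direction of the mutual guidance; VMGN's actual architecture, including the contrast enhancement module, is more elaborate.

```python
import torch
import torch.nn as nn

class CrossModalBlock(nn.Module):
    """Text queries attend to visual keys/values, with a residual
    connection and layer norm."""
    def __init__(self, dim: int = 256, heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, text_tokens: torch.Tensor,
                visual_tokens: torch.Tensor) -> torch.Tensor:
        attended, _ = self.attn(text_tokens, visual_tokens, visual_tokens)
        return self.norm(text_tokens + attended)
```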
Citations: 0
CUDiff: Consistency and uncertainty guided conditional diffusion for infrared and visible image fusion
IF 7.6 | CAS Tier 1 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-08-01 | Epub Date: 2026-02-04 | DOI: 10.1016/j.patcog.2026.113174
Yueying Luo, Kangjian He, Dan Xu
Infrared and visible image fusion aims to integrate complementary information from both modalities to produce more informative and visually coherent images. Although many existing methods focus on incorporating enhancement modules to improve model efficiency, few effectively address the challenges of learning in complex or ambiguous regions. In this paper, we propose CUDiff, a novel framework that leverages the powerful generative capabilities of diffusion models to reformulate the fusion process as a conditional generation task. Specifically, we design a conditional diffusion model that extracts and integrates relevant features from infrared and visible modalities. A content-consistency constraint is introduced to preserve the structural integrity of the source images, ensuring that essential information is retained in the fused output. Moreover, an uncertainty-driven mechanism adaptively refines and enhances uncertain regions, improving the overall quality and expressiveness of the fused images. Extensive experiments demonstrate that CUDiff surpasses 12 state-of-the-art methods in both visual quality and quantitative evaluation. Furthermore, CUDiff achieves superior performance in object detection tasks. The source code is available at: https://github.com/VCMHE/CUDiff
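A hedged sketch of fusion recast as conditional generation: one standard DDPM noise-prediction training step with the two source images concatenated channel-wise as the condition. The linear beta schedule, the conditioning layout, the hypothetical denoiser signature, and the availability of a fusion target are all assumptions; CUDiff's content-consistency constraint and uncertainty-driven refinement are omitted.

```python
import torch
import torch.nn.functional as F

def diffusion_fusion_loss(denoiser, fused, ir, vis, T: int = 1000):
    """One DDPM training step: predict the noise added to the fusion
    target, conditioned on the infrared and visible inputs.

    `denoiser` is a hypothetical network taking (x_t, t, cond).
    """
    b = fused.size(0)
    t = torch.randint(0, T, (b,), device=fused.device)
    betas = torch.linspace(1e-4, 0.02, T, device=fused.device)  # assumed schedule
    alpha_bar = torch.cumprod(1.0 - betas, dim=0)[t].view(b, 1, 1, 1)
    noise = torch.randn_like(fused)
    x_t = alpha_bar.sqrt() * fused + (1.0 - alpha_bar).sqrt() * noise
    cond = torch.cat([ir, vis], dim=1)  # channel-wise conditioning (assumption)
    return F.mse_loss(denoiser(x_t, t, cond), noise)
```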
Citations: 0
BCDnet: Balanced coupling and decoupling network for person search
IF 7.6 | CAS Tier 1 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-08-01 | Epub Date: 2026-02-06 | DOI: 10.1016/j.patcog.2026.113241
Zhengjie Lu , Jinjia Peng , Huibing Wang , Xianping Fu
Person search aims to locate and identify individuals in unaltered scene images simultaneously. This task presents challenges in achieving collaborative optimization due to the intricate interplay between person detection and re-identification. Prior research has focused on the reduction of coupling between detection and re-identification. Nevertheless, some studies have indicated that neither complete decoupling nor complete coupling represents the optimal approach. The key to overcoming this challenge lies in leveraging the intricate interplay between subtasks to achieve a balance between coupling and decoupling. To solve the above problems, this paper proposes a Balanced Coupling and Decoupling network (BCDnet) that decouples conflicts between multiple tasks via Multi-View Decoupling (MVD) and establishes interactions between them via a Correlation Online Instance Matching (COIM) loss. Specifically, the MVD aims to mitigate conflicts between detection and re-identification by decoupling from multiple viewpoints. At the same time, the COIM utilizes distance correlation to adjust the relationship between detection and re-identification. Combining the two leverages the intricate interplay between subtasks, balancing coupling and decoupling among different subtasks, thereby enhancing the performance of person search. The framework proposed in this paper achieves mAPs of 94.5% and 56.0% on the CUHK-SYSU and PRW datasets, demonstrating its effectiveness and practical utility.
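COIM extends the widely used Online Instance Matching (OIM) loss with a distance-correlation term. Below is a sketch of the OIM baseline it builds on; the correlation term is omitted, and full OIM implementations also keep a circular queue for unlabelled identities.

```python
import torch
import torch.nn.functional as F

class OIMLoss(torch.nn.Module):
    """Online Instance Matching: classify features against a lookup
    table of identity prototypes, updated by momentum."""
    def __init__(self, num_ids: int, dim: int = 256,
                 momentum: float = 0.5, temp: float = 0.07):
        super().__init__()
        self.register_buffer("lut", torch.zeros(num_ids, dim))
        self.momentum, self.temp = momentum, temp

    def forward(self, feats: torch.Tensor, ids: torch.Tensor) -> torch.Tensor:
        feats = F.normalize(feats, dim=1)
        logits = feats @ self.lut.t() / self.temp
        loss = F.cross_entropy(logits, ids)
        with torch.no_grad():  # momentum update of the matched prototypes
            self.lut[ids] = F.normalize(
                self.momentum * self.lut[ids] + (1 - self.momentum) * feats,
                dim=1)
        return loss
```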
Citations: 0
Quaternion adaptive approximation normalization graph guided implicit low rank for robust matrix completion
IF 7.6 | CAS Tier 1 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-08-01 | Epub Date: 2026-01-30 | DOI: 10.1016/j.patcog.2026.113210
Yu Guo , Yi Liu , Guoqing Chen , Tieyong Zeng , Qiyu Jin , Michael Kwok-Po Ng
Graph structures are effective for capturing low-dimensional manifolds within high-dimensional data spaces and are frequently utilized as regularization terms to smooth graph signals. A crucial element in this process is the construction of the graph Laplacian. However, the normalization of this Laplacian often necessitates computationally expensive inverse operations. To address this limitation, this paper introduces quaternion graph regularity and proposes the quaternion adaptive approximation normalization graph (QAANG). QAANG offers a computationally efficient solution by requiring only a single adaptive scalar for approximate normalization, thereby circumventing the need for inverse operations. To promote the low rank of the graph, we implicitly embed the low rank into the data fidelity term. This approach not only avoids the significant costs associated with the explicit computation of the low-rank of quaternion matrices, but also eliminates the need to balance multiple regularization terms and adjust hyperparameters. Experimental results demonstrate that QAANG surpasses current state-of-the-art quaternion methods in both completion performance and robustness.
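The stated computational trick is replacing the inverse-based normalization D^{-1/2}(D-A)D^{-1/2} with a single adaptive scalar. A real-valued sketch of that idea (QAANG operates on quaternion matrices, and its choice of scalar is not specified in the abstract; the mean degree used here is an assumption):

```python
import numpy as np

def approx_normalized_laplacian(A: np.ndarray) -> np.ndarray:
    """Scalar-normalised graph Laplacian: L / s instead of
    D^{-1/2} (D - A) D^{-1/2}, avoiding any matrix inverse."""
    d = A.sum(axis=1)
    L = np.diag(d) - A
    s = d.mean()  # one adaptive scalar in place of a matrix inverse
    return L / s
```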
Citations: 0