
Latest Publications in Pattern Recognition

One-step multi-view graph clustering via bottom-up structural learning
IF 7.6 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-01-29 | DOI: 10.1016/j.patcog.2026.113175
Wenzhe Liu , Li Jiang , Huibing Wang , Yong Zhang
In recent years, tensor-based methods have seen considerable success in multi-view clustering. However, current approaches have several limitations: 1) insufficient exploration of underlying similarity information (i.e., latent representations); 2) insufficient exploration of both inter-view and intra-view higher-order structural information; 3) treating clustering learning independently of tensor learning and the overall learning framework. To address these issues, we propose a unified framework called Bottom-up Structural Exploration for One-step Multi-view Graph Clustering (BSE_OMGC). Specifically, we first employ an anchor strategy to build similarity graphs, reducing the complexity of graph learning. To deeply represent the underlying similarity information of the data and mitigate the influence of noise on similarity structures in the original space, BSE_OMGC adaptively separates the noise matrix from the similarity graphs to learn high-quality enhanced graphs. Subsequently, from the bottom up, the enhanced graphs serve as the foundation for constructing high-order tensors. We rotate the constructed tensors and apply the t-TNN to preserve the low-rank properties and to better capture both inter-view and intra-view higher-order structural information. Finally, we introduce a symmetric non-negative matrix factorization-based graph partitioning technique, which learns non-negative embeddings during dynamic optimization to reveal clustering results. This approach unifies clustering learning within the entire learning framework. Extensive experiments on multiple real-world multi-view datasets, along with comparisons to state-of-the-art methods, demonstrate the effectiveness and robustness of the proposed approach.
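As an editorial illustration of two building blocks named here, anchor-based graph construction and the t-TNN low-rank term, consider the following minimal numpy sketch. The Gaussian-kernel affinity, the bandwidth `sigma`, and both function names are our assumptions, not taken from the paper.

```python
import numpy as np

def anchor_similarity_graph(X, A, sigma=1.0):
    # X: (n, d) samples; A: (m, d) anchors with m << n.
    # Row-normalized Gaussian affinities give an (n, m) bipartite graph,
    # so graph learning scales with the anchor count m rather than n.
    d2 = ((X[:, None, :] - A[None, :, :]) ** 2).sum(-1)
    S = np.exp(-d2 / (2.0 * sigma ** 2))
    return S / S.sum(axis=1, keepdims=True)

def t_tnn(T):
    # Tensor nuclear norm under t-SVD: FFT along the third mode, then
    # average the nuclear norms of the frontal slices in the Fourier domain.
    Tf = np.fft.fft(T, axis=2)
    return sum(np.linalg.svd(Tf[:, :, k], compute_uv=False).sum()
               for k in range(T.shape[2])) / T.shape[2]
```

Stacking one enhanced graph per view along the third mode and penalizing `t_tnn` of the (rotated) stack is how a low-rank tensor term of this kind is typically used.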
Citations: 0
Learning generalizable visual representations with causal diffusion model for controllable editing
IF 7.6 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-01-29 | DOI: 10.1016/j.patcog.2026.113162
Shanshan Huang , Lei Wang , Haoxuan Chen , Yuxuan Liang , Li Liu
Representation learning has been widely employed to learn low-dimensional representations composed of multiple independent and interpretable generative factors, such as visual attributes in images, enabling controllable image editing by manipulating specific attributes in the learned representation space. However, in real-world scenarios, generative factors with semantic meanings are often causally related rather than independent. Previous methods built on the independence assumption fail to capture such causal relationships, even in supervised settings. To this end, we propose a diffusion model-based causal representation learning framework, named CausalDiffuser, which models causal prior distributions with structural causal models (SCMs) to explicitly characterize the causal relations among the underlying generative factors. This modeling scheme encourages the framework to learn latent representations of the causality among generative factors. Furthermore, a composite loss function is introduced to ensure causal disentanglement of latent representations by incorporating supervision from the ground-truth factors (i.e., image labels). Empirical evaluations on one synthetic dataset and two real-world benchmark datasets suggest our approach significantly outperforms the state-of-the-art methods. CausalDiffuser effectively edits image attributes by restoring causal relationships among generative factors and generates counterfactual images through intervention operations.
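To make the SCM-based causal prior concrete, here is a toy linear-Gaussian sketch of sampling causally related latent factors and applying a hard intervention (the operation behind counterfactual generation). The linear-Gaussian form and all names are illustrative assumptions; the paper's actual parameterization is not reproduced here.

```python
import numpy as np

def sample_scm_latents(A, n, seed=0):
    # A: (k, k) DAG adjacency; A[i, j] != 0 means factor i causes factor j.
    # Linear SCM z = A^T z + eps  =>  z = (I - A^T)^{-1} eps (row form below).
    rng = np.random.default_rng(seed)
    eps = rng.standard_normal((n, A.shape[0]))
    return eps @ np.linalg.inv(np.eye(A.shape[0]) - A)

def intervene(A, n, idx, value, seed=1):
    # do(z_idx = value): cut the incoming edges of factor idx, pin its value,
    # and propagate through the mutilated graph to obtain intervened factors.
    A_do = A.copy()
    A_do[:, idx] = 0.0
    rng = np.random.default_rng(seed)
    eps = rng.standard_normal((n, A.shape[0]))
    eps[:, idx] = value
    return eps @ np.linalg.inv(np.eye(A.shape[0]) - A_do)
```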
Citations: 0
Adaptive centroid guided hashing for cross-modal retrieval
IF 7.6 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-01-29 | DOI: 10.1016/j.patcog.2026.113186
Zhenqiu Shu, Julong Zhang, Zhengtao Yu
Deep hashing technology is widely used in cross-modal retrieval tasks due to its low storage cost and high computational efficiency. However, most existing supervised hashing methods suffer from the following challenges: (1) relying on manually labeled semantic affinity levels as supervisory information for hash learning may ignore the underlying structure of semantic information, potentially resulting in semantic structure degradation; (2) they fail to consider both the semantic relationships among labels and the relative significance of each label to individual samples. To address these challenges, we propose a novel adaptive centroid guided hashing (ACGH) method for cross-modal retrieval. Specifically, we extract global and local features using Transformer models, and then fuse them to obtain fine-grained feature representations of multimodal data. Subsequently, the hash centroid generation module leverages the category semantic embeddings to construct category hash centers and combines them with learnable Label-Affinity Coefficients (LAC) memory banks to learn adaptive hash centroids. Furthermore, we design a hash centroid guidance module, which employs the hash centroids to guide hash code learning and then updates the hash centers and LAC memory banks through the newly learned hash codes. Extensive experimental results on several benchmark multimodal datasets demonstrate that the proposed ACGH method significantly outperforms other state-of-the-art methods in cross-modal retrieval tasks.
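A minimal sketch of the adaptive-centroid idea, weighting category hash centers by per-sample label affinities, might look as follows; the softmax weighting, tensor shapes, and function name are our assumptions rather than the paper's exact formulation.

```python
import torch

def adaptive_hash_centroids(labels, centers, lac):
    # labels:  (n, c) multi-hot label matrix (float)
    # centers: (c, b) category hash centers in {-1, +1}
    # lac:     (n, c) learnable label-affinity coefficients (LAC memory bank)
    w = labels * torch.softmax(lac, dim=1)  # per-sample importance of each label
    return torch.sign(w @ centers)          # (n, b) adaptive hash centroids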
Citations: 0
Interaction-aware adaptive network for drug-drug interaction prediction
IF 7.6 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-01-29 | DOI: 10.1016/j.patcog.2026.113172
Dongjiang Niu , Xiaofeng Wang , Zengqian Deng , Bowen Tang , Zhen Li
The prediction of drug-drug interactions (DDI) is crucial for drug safety and combination therapies. However, existing computational approaches face significant challenges in modeling drug interactions and effectively integrating multi-view information. To this end, we propose AMIE-DDI, an Adaptive Multi-view Integration framework. First, an Interaction-Enhanced Graph Transformer is designed to model complex relationships between drugs and capture the underlying interaction mechanisms. Second, a Multi-Channel Adaptive Fusion Module (MAF) is introduced to dynamically integrate information from different representations, enhancing feature learning and ensuring efficient multi-view feature integration. Finally, a Dynamic Interaction Scaling Prediction Module (DIS) is developed to adaptively adjust interaction intensity, improving both predictive accuracy and stability. Experimental results on multiple datasets demonstrate that AMIE-DDI outperforms state-of-the-art baselines in both warm-start and cold-start scenarios. Moreover, ablation studies and visualization experiments validate its capability to capture key motifs and enhance DDI prediction accuracy.
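A gated mixture over view embeddings is one plausible reading of multi-channel adaptive fusion; the sketch below is illustrative, and the module name, gate design, and shapes are ours, not the paper's MAF specification.

```python
import torch
import torch.nn as nn

class AdaptiveViewFusion(nn.Module):
    """Softmax-gated fusion of per-view drug embeddings."""
    def __init__(self, dim):
        super().__init__()
        self.gate = nn.Linear(dim, 1)  # shared scorer applied to every view

    def forward(self, views):
        # views: (n, v, dim) stacked embeddings from v representation channels
        w = torch.softmax(self.gate(views), dim=1)  # (n, v, 1) view weights
        return (w * views).sum(dim=1)               # (n, dim) fused embedding
```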
Citations: 0
Generating transferable attacks across large vision-language models using adversarial deformation learning
IF 7.6 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-01-29 | DOI: 10.1016/j.patcog.2026.113194
Daizong Liu , Wangqin Liu , Xiaowen Cai , Pan Zhou , Runwei Guan , Xiaoye Qu , Bo Du
Large Vision-Language Models (LVLMs) have achieved remarkable capabilities in understanding and generating content across diverse modalities, yet their vulnerability to adversarial attacks raises critical security concerns. Traditional adversarial attacks often design noise manipulations specific to a particular LVLM, which suffer from limited transferability and lose effectiveness against unseen LVLMs. To address this challenge, we propose a unified adversarial learning framework that enhances transferability by jointly optimizing robust perturbations for both the vision and language modalities. In addition to producing perturbations on both the input image and the prompt, our approach introduces multi-modal purification/transformation networks within an adversarial learning scheme; these networks learn worst-case distortions that evade the harmfulness of the visual and textual perturbations while enforcing semantic and visual consistency, creating a generalizable training environment for the adversarial examples. Our core insight is to force the adversarial examples to resist the most harmful distortions produced by these networks, thereby improving their transferability. Experiments demonstrate that our attack achieves significantly higher transfer-attack success rates than existing works, revealing critical robustness gaps in LVLMs.
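The min-max training loop, perturbations that must remain adversarial after a learned worst-case purifier, can be sketched as alternating updates. The cross-entropy losses, PGD-style step, and the `purifier`/`model` interfaces are illustrative assumptions (the classifier is assumed frozen, with only the purifier's parameters in `opt_p`).

```python
import torch
import torch.nn.functional as F

def joint_step(x, y, delta, model, purifier, opt_p, eps=8/255, alpha=1/255):
    # 1) Purifier step: learn a worst-case transform that restores correct
    #    predictions on the current adversarial input.
    loss_p = F.cross_entropy(model(purifier(x + delta.detach())), y)
    opt_p.zero_grad(); loss_p.backward(); opt_p.step()

    # 2) Attack step: the perturbation must stay adversarial even after
    #    purification, which is what encourages transferability.
    delta = delta.detach().requires_grad_(True)
    loss_a = F.cross_entropy(model(purifier(x + delta)), y)
    grad = torch.autograd.grad(loss_a, delta)[0]
    return (delta + alpha * grad.sign()).clamp(-eps, eps).detach()
```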
Citations: 0
Dual-teacher fusion with augmented branch for semi-supervised object detection
IF 7.6 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-01-29 | DOI: 10.1016/j.patcog.2026.113176
Xiaolong Xiong, Tingting Leng, Shuzhan Guo, Shanxiong Chen, Jun Zhou
Semi-Supervised Object Detection (SSOD) has gained widespread application by effectively reducing data labeling costs through exploiting the information latent in unlabeled data. However, the performance of SSOD algorithms is often challenged by two key issues: first, confirmation bias caused by pseudo-label noise; and second, a limited consistency learning space that hinders the model's generalization capability. To address these challenges, we propose an innovative framework called Dual-Teacher Fusion with Augmented Branch (DTAB) to improve both the quality of pseudo-labels and the model's generalization capability. DTAB employs dual teacher models to detect different views of the same image and fuses their complementary information through a prediction alignment module, reducing localization errors and enhancing the accuracy of pseudo-labels. In addition, the diversity of predictions from the dual teachers helps mitigate issues arising from data scarcity. Furthermore, we introduce a feature-level perturbation branch that expands the scope of consistency learning by applying DropBlock perturbations in the feature space, further enhancing the generalization capability of the student model. Experimental results on the MS-COCO and PASCAL VOC datasets demonstrate that our proposed approach improves both accuracy and robustness compared to existing two-stage SSOD methods. The source code is available at https://github.com/peng5066/DTAB.
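One simplified reading of the dual-teacher fusion step: average the two teachers' class posteriors from different views of the same image and keep only confident pseudo-labels. The averaging rule and threshold below are our illustrative choices, not the paper's prediction alignment module (which also aligns box locations).

```python
import torch

def fuse_dual_teacher(p1, p2, thresh=0.7):
    # p1, p2: (n, c) class probabilities from the two teachers,
    # each computed on a different augmented view of the same image.
    p = 0.5 * (p1 + p2)            # fuse complementary predictions
    conf, labels = p.max(dim=1)
    keep = conf >= thresh          # filter out noisy pseudo-labels
    return labels[keep], keep
```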
Citations: 0
Deep non-convex tensor and higher-order graph embedding for multi-source domain adaptation
IF 7.6 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-01-29 | DOI: 10.1016/j.patcog.2026.113170
Yiyang Fu , Huiling Fu , Yuwu Lu , Ming Zhao
Multi-source domain adaptation (MSDA) aims to fully leverage knowledge from multiple source domains to enhance the generalization ability of a model to the target domain. Currently, learning low-dimensional representations of data based on matrix decomposition is an important branch of domain adaptation. Nevertheless, the majority of existing methods fail to effectively capture the intricate non-linear structures of the input data, which is crucial for preserving key features of the data and improving domain adaptation performance. Furthermore, in the context of MSDA, learning higher-order correlations between different domains is also crucial. Therefore, we propose a deep non-convex tensor and higher-order graph embedding (DNT-HGE) method for MSDA. DNT-HGE enables precise low-dimensional representation of the input data by identifying non-linear relationships through deep matrix decomposition, while efficiently preserving local structure information through higher-order similarity graph learning. In addition, low-rank tensor approximation is employed to capture higher-order correlations among multiple domains. A non-convex low-rank tensor norm is proposed to replace the classical tensor nuclear norm, better accounting for the differing physical significance of singular values and reducing bias in rank estimation. Finally, an alternating direction method of multipliers (ADMM) algorithm is introduced to solve the DNT-HGE model. Extensive experiments on four public benchmark datasets validate the effectiveness of the proposed method.
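A reweighted singular value thresholding step illustrates how a non-convex surrogate can shrink large, informative singular values less than small ones, reducing the bias of the plain nuclear norm; the specific 1/(sigma + eps) weighting is a common choice we assume here, not necessarily the paper's norm.

```python
import numpy as np

def weighted_svt(M, tau, eps=1e-3):
    # Proximal step for a reweighted (non-convex) nuclear-norm surrogate:
    # each singular value is shrunk by tau / (sigma + eps), so dominant
    # singular values keep most of their energy while small ones vanish.
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s = np.maximum(s - tau / (s + eps), 0.0)
    return (U * s) @ Vt  # equivalent to U @ diag(s) @ Vt
```

In an ADMM solver, a step of this form typically replaces the standard singular value thresholding used for the convex nuclear norm.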
Citations: 0
Diffusion model-based data augmentation for land cover segmentation in Pol-SAR imagery
IF 7.6 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-01-29 | DOI: 10.1016/j.patcog.2026.113171
Keunhoon Choi , Sunok Kim , Kwanghoon Sohn
Polarimetric Synthetic Aperture Radar (Pol-SAR) provides representations that encapsulate the physical texture information of land surfaces, which is useful for land cover segmentation. However, Pol-SAR images and precise segmentation maps are difficult to obtain, limiting public access to large datasets and hindering deep learning methods from achieving optimal performance. To address this, we propose two methods. First, we transform the channel axis to polar coordinates to better exploit the surface information in Pol-SAR data. This allows deep learning models to directly learn polarization angles, which improves segmentation performance and resolves the channel imbalance problem in diffusion models. Second, we introduce a diffusion model-based data augmentation framework to generate Pol-SAR imagery with paired land cover maps. By representing land cover maps in a 2-channel format that exploits the symmetry of the Gaussian distribution, we reduce GPU memory usage compared to one-hot encoding. We also propose a Guided Sampling strategy to generate paired Pol-SAR images when only land cover maps are available. Experimental results validate the effectiveness of our methods on the Pol-SAR dataset.
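The channel-to-polar transform can be sketched for a single pair of Cartesian channels, magnitude plus angle replacing (real, imaginary) so a network can learn angles directly; the 2-channel layout is an assumption for illustration, since real Pol-SAR data carries more polarimetric channels.

```python
import numpy as np

def to_polar_channels(x):
    # x: (h, w, 2) image whose channel axis holds Cartesian components.
    amp = np.hypot(x[..., 0], x[..., 1])      # magnitude
    ang = np.arctan2(x[..., 1], x[..., 0])    # angle in [-pi, pi]
    return np.stack([amp, ang], axis=-1)      # (h, w, 2) polar channels
```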
Citations: 0
Enhancing graph learning interpretability through modulating cluster information flow
IF 7.6 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-01-29 | DOI: 10.1016/j.patcog.2026.113178
Jiayi Yang , Wei Ye , Xin Sun , Rui Fan , Jungong Han
Interpretable graph learning is essential for scientific applications that depend on learning models to extract reliable insights from graph-structured data. Recent efforts to explain GNN predictions focus on identifying vital substructures, such as subgraphs. However, existing approaches tend to misclassify neighboring but irrelevant nodes as part of the vital subgraphs. To address this, we propose Cluster Information Flow Graph Neural Networks (CIFlow-GNN), a built-in model-level method that provides accurate, interpretable subgraph explanations by modulating the cluster information flow. CIFlow-GNN incorporates two modules: the graph clustering module and the cluster prototype module. The graph clustering module partitions the nodes according to their connectivity in the graph topology and their similarity in cluster features. Specifically, we introduce a cluster feature loss to regulate information flow at the cluster level. We prove that the proposed cluster feature loss is a lower bound of the InfoNCE loss. Optimizing the cluster feature loss reduces the mutual information among clusters and achieves the modulation of cluster information flow. Subsequently, the cluster prototype module uses prototypes as a bridge to select important clusters as vital subgraphs by integrating information across all graphs. To ensure accurate correspondence between clusters and prototypes, we further modulate the cluster information flow at the prototype level. Experimental studies on both synthetic and real-world datasets demonstrate that our proposed CIFlow-GNN can identify vital subgraphs effectively and efficiently.
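One plausible instantiation of a cluster feature loss with an InfoNCE flavor: treat each soft-assignment centroid as its own positive and the other centroids as negatives, pushing clusters apart and thus reducing inter-cluster mutual information. The centroid computation and temperature below are our assumptions, not the paper's exact loss.

```python
import torch
import torch.nn.functional as F

def cluster_feature_loss(z, assign, temp=0.5):
    # z: (n, d) node embeddings; assign: (n, k) soft cluster assignments.
    c = assign.t() @ z                        # (k, d) assignment-weighted centroids
    c = F.normalize(c, dim=1)
    logits = c @ c.t() / temp                 # centroid-to-centroid similarities
    targets = torch.arange(c.size(0))         # each centroid is its own positive
    return F.cross_entropy(logits, targets)   # InfoNCE-style contrastive objective
```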
Citations: 0
ST-VA-AR: Learning velocity-aware action representations with mixture of spatiotemporal attention
IF 7.6 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-01-29 | DOI: 10.1016/j.patcog.2026.113200
Jiangning Wei , Bo Yu , Ke Li , Lan Yang , Dandan Xiao , Jun Liu
Action recognition aims to learn discriminative motion representations across diverse scenarios, yet existing skeleton-based methods often suffer from performance degradation under varying action velocities. Through a systematic analysis of eight representative methods on five benchmarks, we reveal that recognition accuracy consistently declines as motion velocity increases, indicating limited robustness to high-speed actions. To address this challenge, we propose SpatioTemporal Velocity-Aware Action Recognition (ST-VA-AR), a unified framework that explicitly models velocity heterogeneity along both temporal and spatial dimensions. Motivated by the observation that fast actions require fine-grained temporal modeling while slow actions benefit from broader temporal context, ST-VA-AR adopts a mixture-based spatiotemporal attention mechanism that dynamically adjusts temporal receptive fields. In addition, spatial velocity heterogeneity across body parts is captured by adaptively emphasizing informative body regions with salient motion dynamics. A key–value sharing strategy is further introduced to reduce model complexity and promote efficient information exchange across spatiotemporal experts. Compared with our preliminary conference version (VA-AR), this journal extension further generalizes velocity-aware modeling to a unified spatiotemporal framework, introduces a more parameter-efficient expert collaboration strategy, and provides substantially enriched experimental analyses for deeper interpretability and robustness validation. Extensive experiments on five widely used datasets demonstrate that ST-VA-AR consistently outperforms existing approaches across a wide range of action velocities, validating its effectiveness and robustness.
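A velocity-gated mixture of two temporal experts with different receptive fields conveys the core idea, fast motion favoring the short kernel and slow motion the long one; the kernel sizes, gate design, and feature layout are illustrative assumptions, not the paper's attention mechanism.

```python
import torch
import torch.nn as nn

class VelocityGatedTemporal(nn.Module):
    """Mixes a fine and a coarse temporal expert via a velocity-driven gate."""
    def __init__(self, ch):
        super().__init__()
        self.fine = nn.Conv1d(ch, ch, kernel_size=3, padding=1)    # fast actions
        self.coarse = nn.Conv1d(ch, ch, kernel_size=9, padding=4)  # slow actions
        self.gate = nn.Linear(1, 2)

    def forward(self, x):
        # x: (n, ch, t) feature sequences; mean frame-to-frame change
        # serves as a crude per-sequence velocity estimate.
        vel = (x[:, :, 1:] - x[:, :, :-1]).abs().mean(dim=(1, 2))  # (n,)
        w = torch.softmax(self.gate(vel.unsqueeze(-1)), dim=-1)    # (n, 2)
        return (w[:, 0].view(-1, 1, 1) * self.fine(x)
                + w[:, 1].view(-1, 1, 1) * self.coarse(x))
```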
Citations: 0