
Latest Publications in Pattern Recognition

HCRT: Hybrid network with correlation-aware region transformer for breast tumor segmentation in DCE-MRI
IF 7.6 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-12-29 | DOI: 10.1016/j.patcog.2025.112934
Lei Zheng , Yuzhong Zhang , Jiadong Zhang , Tao Zhou , Kun Sun , Lei Zhou , Dinggang Shen
Accurate segmentation of breast tumors in Dynamic Contrast-Enhanced Magnetic Resonance Imaging (DCE-MRI) is essential for the early diagnosis of breast cancer. However, existing Transformer and Mamba-based architectures suffer from either excessive computational complexity or inadequate performance, creating an urgent need for structural innovation to achieve an optimal balance between accuracy and efficiency. We propose a dual-branch Hybrid Efficient Transformer Network (HCRT) to address these challenges. HCRT employs Light Ghost Blocks for efficient feature extraction and introduces a Correlation-Aware Region Transformer Block (CART), which utilises Multi-Dconv Channel Attention (MDCA) to capture long-range dependencies efficiently; in the auxiliary branch, a similarity matrix is generated through the Position-Aware Correlation (PAC) mechanism to weight attention maps in MDCA, preserving fine-grained spatial details. This design significantly reduces computational complexity from O(N²) to O(C²). Additionally, we propose Regional Prototype Contrastive Learning (RPCL), which operates solely during training to enhance model generalisation without compromising inference efficiency. Extensive experiments on a large-scale dataset of over 1000 cases and three additional datasets demonstrate that our method achieves superior segmentation accuracy and stronger generalisation ability. The code is available at https://github.com/ZhouL-lab/HCRT.
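The O(N²)-to-O(C²) reduction comes from attending over channels rather than spatial tokens: the attention map is C×C instead of N×N, so its size no longer grows with image resolution. A minimal pure-Python sketch of channel attention in that spirit (the function and the toy feature matrix are illustrative, not the paper's MDCA implementation):

```python
import math

def channel_attention(features):
    """features: C x N matrix (each row is one channel over N spatial positions).
    Builds a C x C attention map, so cost grows with C^2 (times N for the dot
    products) rather than with N^2 as in token-wise self-attention."""
    C = len(features)
    # channel-by-channel similarity: a C x C score matrix, not N x N
    scores = [[sum(a * b for a, b in zip(features[i], features[j]))
               for j in range(C)] for i in range(C)]
    # row-wise softmax turns similarities into attention weights
    attn = []
    for row in scores:
        m = max(row)
        exps = [math.exp(v - m) for v in row]
        s = sum(exps)
        attn.append([v / s for v in exps])
    return attn

feats = [[1.0, 0.0, 2.0],
         [0.5, 1.0, 0.0],
         [2.0, 1.0, 1.0]]          # C = 3 channels, N = 3 positions
A = channel_attention(feats)       # 3 x 3 attention map, rows sum to 1
```

Doubling N here only lengthens the dot products; the attention map stays C×C.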
Citations: 0
Average weight margin-based feature selection with three-way decision
IF 7.6 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-12-29 | DOI: 10.1016/j.patcog.2025.113009
Can Gao , Jiandong Hu , Jie Zhou , Weiping Ding , Witold Pedrycz
Feature selection is a crucial task in machine learning and data mining, aiming at selecting a subset of key features to represent given data. Various feature selection methods have been proposed, yet they usually ignore the type and importance of the samples when evaluating and selecting features, thus affecting the quality of the obtained feature subsets. This study proposes a novel feature selection method based on the average weight margin. Specifically, a weighted neighborhood rough set model is first presented, which incorporates the sample and feature weights to define the lower and upper approximations, providing the ability to capture the intrinsic characteristics of the data. Then, a sample weight function based on Weibull distribution is introduced to map sample margins into weights, and a sample margin-driven three-way gradient optimization algorithm is developed to learn the feature weights adaptively. Finally, a measure of average weight margin is defined with the learned sample and feature weights, and a forward-adding feature selection algorithm is designed to obtain a subset of features with a large margin. Extensive experiments on UCI benchmark datasets demonstrate that the proposed method obtains high-quality feature subsets and outperforms other representative methods across different classifiers.
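The Weibull-based weight function maps each sample's margin to an importance weight. A plausible choice is the Weibull CDF, which is zero at margin 0 and saturates toward 1 for large margins; the shape and scale parameters below are illustrative assumptions, not the paper's settings:

```python
import math

def weibull_weight(margin, k=2.0, lam=1.0):
    """Map a nonnegative sample margin to a weight in [0, 1) via the Weibull
    CDF 1 - exp(-(margin/lam)^k). Shape k and scale lam are assumed values;
    the paper's exact mapping may differ."""
    if margin <= 0.0:
        return 0.0
    return 1.0 - math.exp(-((margin / lam) ** k))

# larger margins (samples far from the decision boundary) get larger weights
weights = [weibull_weight(m) for m in (0.0, 0.5, 1.0, 2.0)]
```

Monotonicity is the key property: boundary samples (small margin) contribute little weight, while confidently classified samples dominate the average weight margin.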
Citations: 0
Incremental sampling hashing for image retrieval with concept drift
IF 7.6 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-12-28 | DOI: 10.1016/j.patcog.2025.112994
Wing W.Y. Ng , Linfei Wang , Qihua Li , Xing Tian
For large-scale image retrieval, deep hashing is widely adopted due to its storage and time efficiency. However, real-world data environments are typically non-stationary, where both the number and distribution of categories may change frequently, leading to concept drift. This degrades retrieval performance and has rarely been addressed in existing deep hashing methods. To tackle this issue, the Incremental Sampling Hashing (ISH) method is proposed in this work. ISH employs a representative sampling strategy to capture representative samples and utilizes knowledge distillation to learn hash codes for new images. This approach mitigates catastrophic forgetting of prior knowledge and reduces the impact of concept drift. Additionally, a loss function is designed to guide the learning of the deep hashing neural network to balance semantic similarity and the preservation of historical information. Experiments on 12 simulated concept drift scenarios demonstrate that ISH effectively handles concept drift in non-stationary environments and yields better retrieval accuracy and applicability compared to existing online hashing methods.
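The balance between semantic similarity and historical preservation can be sketched as a two-term objective: a similarity loss on current data plus a distillation penalty that keeps the new model's hash codes close to the old model's codes on representative samples. The function below is a hedged illustration of that structure, not the paper's actual loss:

```python
def distillation_hash_loss(new_codes, old_codes, sim_loss, alpha=0.5):
    """Combine a semantic-similarity term (sim_loss, computed elsewhere) with
    a distillation term penalizing drift of hash codes on representative
    samples. new_codes / old_codes: equal-length real-valued code vectors
    from the current and previous models. alpha is an assumed trade-off."""
    n = len(new_codes)
    # mean squared deviation between old and new codes, averaged over samples
    distill = sum(
        sum((a - b) ** 2 for a, b in zip(nc, oc)) / len(nc)
        for nc, oc in zip(new_codes, old_codes)
    ) / n
    return sim_loss + alpha * distill

same = distillation_hash_loss([[1.0, 0.0]], [[1.0, 0.0]], sim_loss=0.3)   # no drift
moved = distillation_hash_loss([[1.0, 0.0]], [[0.0, 1.0]], sim_loss=0.3)  # codes drifted
```

When the codes have not moved the distillation term vanishes and only the similarity loss remains; drifted codes incur a penalty proportional to alpha.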
Citations: 0
DpFedFKP: Dynamic personalized federated learning for finger knuckle print recognition
IF 7.6 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-12-28 | DOI: 10.1016/j.patcog.2025.112966
Shuyi Li , Jianian Hu , Bob Zhang , Shanping Yu , Lifang Wu
Finger knuckle print (FKP) recognition has recently attracted significant attention owing to its low cost, high security, and discriminative characteristics. However, existing data-driven approaches, particularly deep learning methods, typically require centralized training with large-scale datasets, raising serious privacy risks. Furthermore, the substantial data heterogeneity among independent datasets often leads to training difficulties and performance degradation. To address these challenges, we propose a dynamic personalized federated learning framework, called DpFedFKP, for FKP recognition. Specifically, each client first performs local training through classification loss constraints to obtain the local model, and the central server then aggregates these local models via Federated Averaging (FedAvg) to generate a generalized global model. Subsequently, each client performs an adaptive gradient analysis that computes gradients and parameter variations between global and local models using domain-specific sub-datasets. Furthermore, we propose a novel server-side dynamic update mechanism that dynamically adjusts the local model ratios within client-specific personalized models, enabling optimal interpolation between the global and local models to achieve robust generalization and personalization. Comprehensive experiments on two widely used FKP datasets demonstrate that the proposed method has achieved significant performance improvement over the state-of-the-art techniques. Specifically, our method achieves 0.03% relative accuracy improvement on Data-fkp and 0.52% relative accuracy improvement on PolyU-fkp, with 12.35% relative reduction in equal error rate (EER) on PolyU-fkp. Furthermore, the cross-client experiments achieve up to 2.80% relative accuracy improvement and 33.48% relative EER reduction. Significantly, the proposed DpFedFKP has strong compatibility with differential privacy techniques, thereby enhancing privacy-preserving capability.
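The two core operations described above are FedAvg aggregation and per-client interpolation between local and global models. A minimal sketch with flat parameter vectors (equal client weights and a fixed interpolation ratio are simplifying assumptions; the paper adjusts the ratio dynamically per client):

```python
def fedavg(client_models):
    """Coordinate-wise average of client parameter vectors (FedAvg with
    equal client weights; a real deployment may weight by client data size)."""
    n = len(client_models)
    return [sum(vals) / n for vals in zip(*client_models)]

def personalize(local, global_, ratio):
    """Interpolate between a client's local model and the aggregated global
    model. 'ratio' stands in for the dynamically adjusted local-model
    proportion; here it is a fixed scalar for illustration."""
    return [ratio * l + (1.0 - ratio) * g for l, g in zip(local, global_)]

clients = [[1.0, 2.0], [3.0, 4.0]]   # two clients, two parameters each
g = fedavg(clients)                  # global model
p = personalize(clients[0], g, 0.5)  # client 0's personalized model
```

Setting the ratio to 1 recovers the purely local model, and to 0 the purely global one; the dynamic update mechanism searches for the point in between that balances generalization and personalization.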
Citations: 0
Cross-domain distillation for unsupervised domain adaptation with large vision-language models
IF 7.6 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-12-28 | DOI: 10.1016/j.patcog.2025.112985
Xingwei Deng , Yangtao Wang , Yanzhao Xie , Xin Tan , Maobin Tang , Meie Fang , Wensheng Zhang
Large vision-language models (VLMs), incorporating the prompt learning mechanism, have achieved promising results in cross-domain tasks. However, leveraging VLMs to transfer knowledge from the source domain to the target domain remains a challenging task for unsupervised domain adaptation (UDA). To this end, we propose Cross-domain Distillation for UDA with VLMs (termed CDU). Firstly, CDU trains a source model by embedding the knowledge of the source domain (including both each sample and its corresponding class category) into VLMs in a lightweight manner. Secondly, CDU makes full use of the image and text semantics from the source model to guide the target model learning, thereby achieving domain alignment to yield semantically consistent representations across domains. We conduct extensive experiments on 3 popular UDA datasets: Office-31, Office-Home, and DomainNet. Experimental results verify that our method consistently surpasses the state-of-the-art (SOTA) UDA methods by a large margin, with higher performance and lower model complexity on various UDA benchmarks. Taking Office-Home as an example, the average accuracy of CDU exceeds existing methods by at least 3%, while its learnable parameters amount to only 17.9% and its inference time to only 4.3% of those of the strongest candidates. The code of this paper is available at GitHub: https://github.com/1d1x1w/CDU.
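A common way a source model guides a target model, and one consistent with the distillation framing above, is to minimize the KL divergence between temperature-softened output distributions. The following is an assumed illustration of that generic mechanism, not CDU's specific loss:

```python
import math

def softmax(logits, t=1.0):
    """Numerically stable softmax with temperature t."""
    m = max(logits)
    e = [math.exp((v - m) / t) for v in logits]
    s = sum(e)
    return [v / s for v in e]

def kl_distill(source_logits, target_logits, t=2.0):
    """KL divergence between temperature-softened source-model and
    target-model class distributions. The temperature t = 2.0 is an
    assumed setting for illustration."""
    p = softmax(source_logits, t)
    q = softmax(target_logits, t)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

loss = kl_distill([2.0, 0.5, -1.0], [1.5, 0.8, -0.5])
```

The loss is zero only when the two softened distributions agree, so minimizing it pulls the target model toward the source model's semantic predictions.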
Citations: 0
Graph-regularized geometric deep learning for motor imagery EEG decoding via multi-domain fusion
IF 7.6 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-12-28 | DOI: 10.1016/j.patcog.2025.112976
Chaoqin Chu , Jianing Shen , Qinkun Xiao
This study addresses key challenges in motor imagery EEG (MI-EEG) decoding, including low signal-to-noise ratios from volume conduction, dynamic non-stationarity arising from inter-subject variability, and complex spatio-spectro-temporal feature interactions, by proposing a novel multi-domain fusion feature representation (MFFR) framework. The primary objective is to propose and validate an integrated paradigm that unifies graph signal processing and Riemannian geometry within a deep learning architecture to enhance decoding robustness and accuracy. Key innovations include: (1) graph-regularized spatio-temporal filters that suppress noise while preserving inter-channel connectivity; (2) an enhanced covariance representation using Riemannian batch normalization to achieve geometric invariance against non-stationarity; and (3) a shallow 3D CNN enabling hierarchical multi-domain fusion with manifold-aware feature learning. Rigorous validation on three public datasets (BCIC_2a/2b, OpenBMI) demonstrates superior performance. MFFR establishes a new paradigm for robust MI-EEG decoding and offers a theoretically grounded approach for high-dimensional biological signal processing.
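Graph regularization of spatial filters typically means penalizing the quadratic form x^T L x with the graph Laplacian L = D - A, which is small when connected channels carry similar values. A toy sketch of that penalty (the 3-node adjacency below is illustrative, not an EEG montage):

```python
def laplacian_penalty(adj, signal):
    """Graph-smoothness penalty 0.5 * sum_{i,j} A[i][j] * (x_i - x_j)^2,
    which equals x^T L x for the Laplacian L = D - A. Connected nodes are
    encouraged to carry similar values."""
    n = len(adj)
    return 0.5 * sum(adj[i][j] * (signal[i] - signal[j]) ** 2
                     for i in range(n) for j in range(n))

A = [[0, 1, 0],
     [1, 0, 1],
     [0, 1, 0]]          # 3-node path graph: 0 -- 1 -- 2
smooth = laplacian_penalty(A, [1.0, 1.0, 1.0])   # constant signal: penalty 0
rough = laplacian_penalty(A, [1.0, -1.0, 1.0])   # alternating signal: penalized
```

Adding such a term to the training loss biases learned filters toward spatially coherent patterns while still allowing sharp differences where the graph has no edge.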
Citations: 0
MORSE: Molecular representation learning via structured semantic extraction across hierarchical and asymmetric biological modalities
IF 7.6 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-12-28 | DOI: 10.1016/j.patcog.2025.112975
Ronghui Zhang , Mengran Li , Wenbin Xing , Bo Li , Chengyang Zhang , Wenxuan Tu , Yongfu Li , Ruxin Wang
Molecular representation learning is essential in computational drug discovery and systems biology. With the increasing availability of high-throughput biological assays, multimodal data (e.g. molecular structure, cellular phenotypes, and gene expression profiles) has become vital for capturing molecular effects across different biological scales. Nevertheless, current methods often overlook the hierarchical dependencies between molecular features and cellular responses, hindering the modeling of complex biological perturbations. In addition, redundant or missing information in asymmetric modalities can introduce noise and degrade predictive performance. In this work, we propose MORSE (MOlecular Representation via Structured Semantics Extraction), a unified framework for multimodal molecular representation learning that couples structured semantic inference with modality reliability modeling to preserve high-confidence and robust learning from asymmetric conditions. MORSE consists of two coordinated components: First, PathMiner performs cross-modal random walks on the biological knowledge graph to extract high-order semantic paths, each of which is used to assemble a hyperedge that provides structured priors for inferring asymmetric modalities. Second, VeilNet encodes a unified molecular representation by integrating multiple modalities into a shared semantic space, resolving cross-modal discrepancies through a masked graph autoencoder, and adaptively regulating each modality’s contribution based on its reliability. Experiments show that MORSE achieves the best performance compared to over 20 baseline methods across different molecular property prediction tasks on 10 datasets. For instance, in the regression-based molecular property prediction of the Biogen3K dataset, MORSE achieves an 8.3% improvement over the baseline method.
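PathMiner's cross-modal random walks can be pictured as fixed-length walks over an adjacency structure, with each walk's visited nodes collected into a hyperedge. The toy knowledge graph and walk policy below are assumptions for illustration, loosely following that idea:

```python
import random

def sample_paths(graph, start, length, num_paths, seed=0):
    """Sample fixed-length random walks from a start node of a knowledge
    graph given as an adjacency dict {node: [neighbors]}. A walk stops
    early at a node with no outgoing edges."""
    rng = random.Random(seed)
    paths = []
    for _ in range(num_paths):
        node, path = start, [start]
        for _ in range(length):
            nbrs = graph.get(node, [])
            if not nbrs:
                break
            node = rng.choice(nbrs)
            path.append(node)
        paths.append(path)
    return paths

# hypothetical molecule -> gene -> phenotype graph
kg = {"mol": ["gene1", "gene2"], "gene1": ["phen"], "gene2": ["phen"]}
walks = sample_paths(kg, "mol", length=2, num_paths=3)
hyperedge = set(n for p in walks for n in p)   # nodes covered by the walks
```

The resulting hyperedge groups the molecule with the genes and phenotypes its walks reach, providing the kind of structured prior the abstract describes for inferring missing modalities.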
Citations: 0
TAGNet-BiLSTM: Transformer augmented network with BiLSTM for skeleton-based gait recognition
IF 7.6 Zone 1 Computer Science Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2025-12-27 DOI: 10.1016/j.patcog.2025.112990
Daniel R. Mesghena , Yanan Diao , Guilan Chen , Zijing You , Xiantai Jiang , Guanglin Li , Guoru Zhao
Gait characterizes a person's walking style and can serve as a biometric feature for identification. In this study, we propose Transformer-Augmented Spatial Graph Neural Networks with BiLSTM (TAGNet-BiLSTM), a novel framework for gait recognition using skeletal data. Our method employs a bidirectional LSTM to capture temporal dependencies in both directions, eliminating the common need for sequence-flipping augmentation and reducing both training feature dimensionality and training time. For spatial processing, we integrate a multi-head transformer and evaluate its placement within the spatial-temporal architecture. Experimental results on widely used gait datasets demonstrate that TAGNet-BiLSTM achieves state-of-the-art performance, improving recognition accuracy by 5% on CASIA-B, 8% on OU-MVLP, 2% on GREW, and 3% on Gait3D, while reducing feature dimensionality to 80% and training time to 50%. To the best of our knowledge, we are the first to show that a BiLSTM outperforms traditional one-directional recurrent and convolutional methods for temporal gait representation. Moreover, although transformers are commonly used for temporal modeling, our findings reveal that incorporating the transformer into the spatial block of the network yields superior performance. We expect this work to inspire the use of bidirectional temporal data processing in future gait recognition tasks.
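The bidirectional-processing claim can be illustrated with a toy single-unit recurrent pass (plain Python, not the authors' BiLSTM): running the same cell over the sequence forward and backward and keeping both final states captures both temporal directions at once, which is why separate flipped-sequence augmentation becomes redundant. The cell form and weights here are illustrative assumptions:

```python
import math

def rnn_pass(seq, w_in=0.5, w_rec=0.8):
    """Minimal single-unit recurrent pass: h_t = tanh(w_in*x_t + w_rec*h_{t-1})."""
    h = 0.0
    for x in seq:
        h = math.tanh(w_in * x + w_rec * h)
    return h

def bidirectional_features(seq):
    """Pair the forward and backward final states, as a BiLSTM layer would
    concatenate its two directional hidden states."""
    return (rnn_pass(seq), rnn_pass(list(reversed(seq))))

# A toy 1-D "joint trajectory": the two directional states differ because
# the sequence is temporally asymmetric.
fwd, bwd = bidirectional_features([0.1, 0.4, -0.2, 0.7])
```

A real skeleton sequence would be a tensor of joint coordinates per frame, but the principle — one pass per direction, no flipped copies of the data — is the same.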
Citations: 0
Deep categorical clustering via symbolization and masking mechanisms
IF 7.6 Zone 1 Computer Science Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2025-12-27 DOI: 10.1016/j.patcog.2025.113002
Wei Xu, Zhenping Xie
Clustering analysis is a fundamental task in pattern recognition, yet most existing deep clustering models are designed for numerical or image data and perform poorly on categorical data. Categorical data are ubiquitous in domains such as bioinformatics, market research, and network security, but their discrete nature and lack of inherent metric structure make conventional distance-based clustering methods ineffective. Moreover, most algorithms require the number of clusters to be specified in advance, which limits their adaptability in real-world applications. To address these challenges, we propose a novel deep clustering algorithm based on symbolization and masking mechanisms (SAMM). SAMM introduces a symbolization layer to transform heterogeneous categorical attributes into symbolic tokens, followed by a masking layer that leverages self-attention to capture contextual dependencies among features. Finally, a learnable pattern matrix adaptively assigns samples to clusters without prior knowledge of cluster numbers. To the best of our knowledge, SAMM is the first work to integrate symbolization and masking mechanisms into categorical clustering. Extensive experiments on 18 benchmark data sets demonstrate that SAMM consistently outperforms both classical methods and recent categorical clustering baselines. Our implementation is publicly available at https://github.com/kcisgroup/SAMM.
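A minimal sketch of what a symbolization step might look like (hypothetical — SAMM's actual symbolization layer is learned, and this helper is not from the paper): each (attribute, value) pair is mapped to a distinct integer token, so heterogeneous categorical attributes share one symbolic vocabulary without value collisions between attributes:

```python
def symbolize(records):
    """Map each (attribute, value) pair to a unique integer token so values
    from different attributes never collide (e.g. color=red vs. flag=red)."""
    vocab = {}
    tokenized = []
    for rec in records:
        row = []
        for attr, value in sorted(rec.items()):  # fixed attribute order
            key = (attr, value)
            if key not in vocab:
                vocab[key] = len(vocab)
            row.append(vocab[key])
        tokenized.append(row)
    return tokenized, vocab

records = [
    {"color": "red", "shape": "round"},
    {"color": "blue", "shape": "round"},
]
tokens, vocab = symbolize(records)
```

The resulting token rows are what a masking layer with self-attention could then consume, analogous to word tokens in a language model.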
Citations: 0
Improving multi-label contrastive learning by leveraging label distribution
IF 7.6 Zone 1 Computer Science Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2025-12-27 DOI: 10.1016/j.patcog.2025.113011
Ning Chen , Shen-Huan Lyu , Tian-Shuang Wu , Yanyan Wang , Bin Tang
In multi-label learning, leveraging contrastive learning to learn better representations faces a key challenge: selecting positive and negative samples and effectively utilizing label information. Previous studies address the former by distinguishing positive from negative samples according to their degree of label overlap, while existing approaches typically employ logical labels for the latter. However, directly using logical labels fails to fully exploit inter-label information, as it ignores the varying importance among labels. To address this problem, we propose a novel method that improves multi-label contrastive learning through label distribution. Specifically, the framework first leverages a contrastive loss to estimate label distributions from logical labels, then integrates label-aware information from these distributions into the loss function. We conduct evaluations on multiple widely used multi-label datasets, including image and vector datasets, and additionally validate the feasibility of learning latent label distributions from logical labels using contrastive loss on label distribution datasets. The results demonstrate that our method outperforms state-of-the-art methods on six evaluation metrics.
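As a toy illustration of weighting sample pairs by label distributions rather than by binary logical-label overlap: below, a logical 0/1 label vector is naively normalized into a distribution (the paper instead learns these distributions via a contrastive loss), and pair similarity is scored by a histogram-intersection-style soft overlap. Both the normalization and the overlap measure are simplifying assumptions, not the paper's formulation:

```python
def label_distribution(logical_labels):
    """Naive label distribution: normalize a logical (0/1) label vector to
    sum to 1. The paper *learns* such distributions with a contrastive loss."""
    total = sum(logical_labels)
    return [v / total for v in logical_labels]

def overlap_weight(dist_a, dist_b):
    """Soft overlap between two label distributions (histogram intersection):
    1.0 for identical distributions, 0.0 for disjoint label supports."""
    return sum(min(a, b) for a, b in zip(dist_a, dist_b))

da = label_distribution([1, 1, 0, 0])  # sample with labels {0, 1}
db = label_distribution([1, 0, 1, 0])  # sample with labels {0, 2}
dc = label_distribution([0, 0, 1, 1])  # sample with labels {2, 3}
w_ab = overlap_weight(da, db)          # partial overlap
```

Such soft weights could then scale the positive terms of a contrastive loss, rather than treating any shared label as an all-or-nothing positive.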
Citations: 0