
Latest Publications in Knowledge-Based Systems

General expert-guided multi-group graph prompt learning
IF 7.6 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-01-09 | DOI: 10.1016/j.knosys.2026.115265
Zhuofeng Luo , Xinyan Huang , Runkai He , Yaming Yang
Prompt learning has recently shown great promise in enhancing Graph Neural Networks (GNNs) by enabling efficient task adaptation and better generalization. However, existing methods typically employ a single prompt, which restricts their expressive power and adaptability across diverse graph tasks. To overcome this limitation, we propose the Multi-Group Graph Prompt (MGGP) framework, which introduces multiple learnable prompt groups working collaboratively within a GNN to capture diverse semantic patterns and task cues. To effectively integrate the diverse outputs from these prompt groups, we further design an expert-guided aggregation mechanism. This expert module dynamically weighs and combines predictions from each group, acting as a meta-reasoner that selects and integrates information in a task-aware manner, significantly outperforming naive aggregation strategies such as voting or averaging. Extensive experiments on various node and graph classification benchmarks under both full supervision and few-shot settings demonstrate that MGGP achieves superior accuracy and robustness. Our approach empowers existing graph prompt learning methods with multi-perspective reasoning capabilities.
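The expert-guided aggregation described in the abstract can be pictured as a softmax-weighted combination of per-group predictions, in contrast to naive voting or averaging. A minimal NumPy sketch; the function name, toy logits, and scoring are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def expert_aggregate(group_logits, expert_scores):
    """Combine per-group predictions with expert-assigned weights.

    group_logits  : (G, C) array, class logits from G prompt groups.
    expert_scores : (G,) array, unnormalized relevance score per group.
    Returns the expert-weighted logits, shape (C,).
    """
    # Softmax over groups turns raw scores into aggregation weights.
    w = np.exp(expert_scores - expert_scores.max())
    w /= w.sum()
    # Weighted combination replaces naive voting/averaging.
    return (w[:, None] * group_logits).sum(axis=0)

# Three prompt groups, four classes; the expert trusts group 0 most.
logits = np.array([[2.0, 0.1, 0.0, 0.0],
                   [0.0, 1.5, 0.0, 0.0],
                   [0.0, 0.0, 1.0, 0.5]])
scores = np.array([3.0, 0.5, 0.5])
fused = expert_aggregate(logits, scores)
```

In the real framework the expert scores would themselves be produced by a learned, task-aware module rather than supplied by hand.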
Citations: 0
HFLMND: Toward robust and efficient hierarchical federated learning via malicious node detection
IF 7.6 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-01-09 | DOI: 10.1016/j.knosys.2026.115270
Qinglin Bi , Lina Ge , Ming Jiang , Lei Tian , Wenbo Lin
Hierarchical federated learning (HFL) has attracted considerable attention owing to its communication efficiency and cost-effectiveness. However, in high-dimensional, large-scale, zero-trust distributed edge networks, HFL is highly susceptible to attacks from malicious nodes across different layers. Most existing federated learning (FL) defense methods focus on two-layer structures and fail to address complex cross-layer attacks in HFL. Security research for HFL is nascent and lacks comprehensive defenses. We address these challenges by proposing HFLMND, a lightweight and robust defense framework designed to enhance the security and resilience of HFL by accurately identifying malicious nodes. In particular, we design a node similarity feature extraction method that extracts multiple features, such as cosine similarity and Euclidean distance, from model parameters submitted by clients and edge servers. Subsequently, we apply a hierarchical clustering strategy based on these features to detect malicious nodes. In addition, we integrate a historical suspicion score correction mechanism that enhances detection accuracy and stability by accumulating historical detection results. Evaluation results indicate that HFLMND effectively detects and defends against various attacks in HFL and achieves an average detection accuracy above 90% across multiple attacks, with negligible impact on global model performance.
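The detection pipeline in the abstract (similarity features followed by hierarchical clustering) can be sketched roughly as follows: extract cosine-similarity and Euclidean-distance features of each node's submitted update relative to the mean update, cluster them, and flag the minority cluster. All names, the minority heuristic, and the Ward linkage choice are illustrative assumptions, not the paper's implementation:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def detect_malicious(updates):
    """Flag suspicious nodes via similarity features + hierarchical clustering.

    updates : (N, D) array of model-parameter updates from N nodes.
    Returns a boolean mask, True where a node looks malicious.
    """
    center = updates.mean(axis=0)
    # Feature 1: cosine similarity of each update to the mean update.
    cos = updates @ center / (
        np.linalg.norm(updates, axis=1) * np.linalg.norm(center) + 1e-12)
    # Feature 2: Euclidean distance to the mean update (normalized).
    dist = np.linalg.norm(updates - center, axis=1)
    feats = np.column_stack([cos, dist / (dist.max() + 1e-12)])
    # Hierarchical clustering into two groups; the minority is suspect.
    labels = fcluster(linkage(feats, method="ward"), 2, criterion="maxclust")
    minority = 1 if (labels == 1).sum() < (labels == 2).sum() else 2
    return labels == minority

rng = np.random.default_rng(0)
benign = rng.normal(0.0, 0.1, size=(8, 16))
poisoned = rng.normal(5.0, 0.1, size=(2, 16))   # scaled/poisoned updates
mask = detect_malicious(np.vstack([benign, poisoned]))
```

The paper additionally smooths such per-round decisions with a historical suspicion score, which this one-shot sketch omits.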
Citations: 0
A lightweight dual-view network for sand-dust degraded image enhancement
IF 7.6 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-01-09 | DOI: 10.1016/j.knosys.2026.115308
Guxue Gao , Chunyun Sun , Xiaopeng Wen , Yang Xiao , Yuanyuan Wang
To address the issue that current supervised sand-dust image enhancement networks have large parameter counts and consume substantial computational resources and storage space, we propose a lightweight dual-view sand-dust image enhancement network. The proposed dual-view sharpening encoder and the original encoder are designed to provide complementary feature information, thereby maximizing the diversity of extracted features. At the encoder stage, a parameter-free feature modulation module is introduced and selectively embedded into the encoder branches to enhance feature extraction capability. In the decoding stage, a contextual attention integration module is designed to improve image contrast and enhance regional details by adaptively leveraging variance-based weighting and long-range pixel dependencies. These modules collectively strengthen feature representation and network reconstruction capacity while significantly reducing parameter overhead. Experimental results demonstrate that the proposed network can effectively enhance sand-dust images with fewer network parameters while ensuring performance. Additionally, the proposed algorithm generalizes well to haze and turbid underwater image enhancement. The processed images also improve the detection accuracy of targets such as vehicles and pedestrians, indicating its strong application potential.
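The variance-based weighting idea can be illustrated with a parameter-free channel reweighting, where channels with higher spatial variance (more contrast and detail) are emphasized without any learned parameters. This is a hedged sketch of the general technique, not the paper's actual module:

```python
import numpy as np

def variance_attention(feat):
    """Parameter-free, variance-based channel reweighting (illustrative).

    feat : (C, H, W) feature map. Channels with higher spatial variance
    receive larger sigmoid weights; no learnable parameters are needed.
    """
    var = feat.var(axis=(1, 2))                      # per-channel variance
    # Standardize variances across channels, then squash to (0, 1).
    w = 1.0 / (1.0 + np.exp(-(var - var.mean()) / (var.std() + 1e-12)))
    return feat * w[:, None, None]

rng = np.random.default_rng(1)
flat = np.full((1, 8, 8), 0.5)                 # low-variance channel
tex = rng.normal(0.5, 1.0, size=(1, 8, 8))     # high-variance channel
out = variance_attention(np.concatenate([flat, tex]))
```

Here the flat channel is suppressed (weight sigmoid(-1)) while the textured channel is amplified, mirroring the contrast-emphasizing intent described above.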
Citations: 0
GBGCN: Adaptive granular-ball graph representation and clarity-aware GCN for multi-focus image fusion
IF 7.6 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-01-08 | DOI: 10.1016/j.knosys.2026.115271
Zhendong Xu, Hao Zhai, Zhi Zeng, Bo Lin, Minyu Deng
Multi-focus image fusion technology aims to combine images taken at different focal lengths into a globally clear all-in-focus image. However, traditional methods and existing deep learning methods still face challenges in balancing global semantic modeling with natural boundary preservation. To address this, this paper proposes a novel method that integrates granular-ball computing with graph convolutional neural networks, constructing a dual-branch hybrid architecture. In the graph convolutional neural network branch, we introduce granular-ball computing theory to represent the image as a series of adaptively generated semantic units (i.e., granular-ball), and employ an iterative optimization strategy guided by a deep clarity map to naturally align the granular-ball distribution with the focused regions in the image. Meanwhile, a clarity-aware graph convolutional network is designed to accurately identify focused areas by integrating multidimensional clarity features with a gating mechanism. In the convolutional neural network branch, a lightweight network is responsible for extracting rich local detail features. The two branches achieve deep collaboration through a multi-level feature interaction mechanism. Experimental results on four public datasets demonstrate that, compared to current mainstream methods, the proposed method shows significant advantages in both qualitative and quantitative evaluations.
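A common granular-ball generation scheme, which the above builds on, adaptively splits a point set into balls until each ball is compact enough. A rough sketch under that assumption (the splitting rule, radius criterion, and all names here are illustrative, not the paper's clarity-guided procedure):

```python
import numpy as np

def granular_balls(points, max_radius=1.0):
    """Adaptively split points into granular balls (illustrative sketch).

    A ball is a set of member indices; any ball whose mean distance to
    its centroid exceeds max_radius is split in two by a small 2-means
    step. Real granular-ball generation uses richer quality criteria.
    """
    def radius(idx):
        c = points[idx].mean(axis=0)
        return np.linalg.norm(points[idx] - c, axis=1).mean()

    queue, balls = [np.arange(len(points))], []
    while queue:
        idx = queue.pop()
        if len(idx) <= 2 or radius(idx) <= max_radius:
            balls.append(idx)
            continue
        # Seed 2-means with the point farthest from the centroid and the
        # point farthest from that seed, then refine a few iterations.
        d = np.linalg.norm(points[idx] - points[idx].mean(axis=0), axis=1)
        j0 = int(np.argmax(d))
        j1 = int(np.argmax(np.linalg.norm(points[idx] - points[idx][j0],
                                          axis=1)))
        seeds = points[idx][[j0, j1]].astype(float)
        for _ in range(10):
            assign = np.linalg.norm(
                points[idx][:, None] - seeds[None], axis=2).argmin(axis=1)
            for k in (0, 1):
                if (assign == k).any():
                    seeds[k] = points[idx][assign == k].mean(axis=0)
        queue += [idx[assign == 0], idx[assign == 1]]
    return balls

# Two tight clusters should yield exactly two granular balls.
pts = np.vstack([np.zeros((5, 2)), np.full((5, 2), 10.0)])
balls = granular_balls(pts, max_radius=1.0)
```

In GBGCN the splitting would additionally be steered by a deep clarity map so that ball boundaries follow focused regions.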
Citations: 0
Multiple feature similarities based heterogeneous graph representation
IF 7.6 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-01-08 | DOI: 10.1016/j.knosys.2025.115232
Lan Huang , Yihang Geng , Chenghao Li , Rui Zhang
Classical graph representation learning methods are challenged by the non-negligible variety between nodes and/or between edges, in which context real-world data must be modeled as heterogeneous graphs. Current research transforms heterogeneous graphs into homogeneous ones either through metapaths or by projecting direct embeddings into latent spaces. This paper proposes a method to transform complex, multi-typed data into multiple homogeneous graphs over the target nodes. It captures the semantic feature information of different neighbor nodes via multiple feature similarity matrices, complemented by structural feature information from the metapaths. Because the method exploits both the semantic and structural features of the original heterogeneous graph to represent the target nodes in the final homogeneous graph, it outperforms most state-of-the-art baseline methods on public datasets.
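One such feature similarity matrix can be turned into a homogeneous graph by linking each target node to its most cosine-similar neighbors. A minimal sketch of that single step (the paper combines several similarity matrices plus metapath structure; the function name and top-k rule are illustrative assumptions):

```python
import numpy as np

def similarity_graph(feats, top_k=2):
    """Build a homogeneous graph over target nodes from feature similarity.

    feats : (N, D) node feature matrix.
    Returns an (N, N) 0/1 adjacency linking each node to its top_k most
    cosine-similar neighbors, then symmetrized.
    """
    norm = feats / (np.linalg.norm(feats, axis=1, keepdims=True) + 1e-12)
    sim = norm @ norm.T                       # cosine similarity matrix
    np.fill_diagonal(sim, -np.inf)            # exclude self-loops
    adj = np.zeros_like(sim)
    idx = np.argsort(-sim, axis=1)[:, :top_k] # top_k neighbors per row
    rows = np.arange(feats.shape[0])[:, None]
    adj[rows, idx] = 1.0
    return np.maximum(adj, adj.T)             # symmetrize

# Two pairs of similar nodes should link within, not across, pairs.
feats = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]])
A = similarity_graph(feats, top_k=1)
```

Euclidean-distance or other similarity matrices can be built the same way, each yielding one homogeneous graph over the same target nodes.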
Citations: 0
BiTrans-CDSR: Bidirectional knowledge transfer for cross-domain sequential recommendation via joint user-item overlap modeling
IF 7.6 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-01-08 | DOI: 10.1016/j.knosys.2026.115282
Tesfaye Fenta Boka , Zhendong Niu
Cross-domain sequential recommendation (CDSR) systems aim to improve accuracy by leveraging knowledge across multiple domains. While existing approaches focus on user or item overlap independently, three crucial scenarios remain unexplored: user partial overlap with item partial overlap, user partial overlap with item full overlap, and user full overlap with item partial overlap. We introduce BiTrans-CDSR, a novel framework that enables bidirectional knowledge transfer through user and item bridges simultaneously. The framework employs large language models (LLMs) to generate pseudo cross-domain interactions. We propose a dual-bridge contrastive learning mechanism to align user behavioral patterns and item semantic relationships, and a bidirectional relevance-aware meta recall network to adaptively weight user-based and item-based signals for retrieving high-quality pseudo-items. Extensive experiments on three real-world datasets demonstrate that BiTrans-CDSR significantly outperforms state-of-the-art baselines across all three scenarios, with average improvements of 13.7% in NDCG@10 and 15.3% in HR@10. BiTrans-CDSR effectively bridges the gap between user-centric and item-centric knowledge transfer, providing a more comprehensive solution for complex cross-domain recommendation.
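Contrastive alignment of paired embeddings across the two bridges is typically implemented with an InfoNCE-style objective, where matched user (or item) pairs are pulled together and in-batch mismatches pushed apart. A generic sketch of that loss, not the paper's exact dual-bridge formulation:

```python
import numpy as np

def info_nce(anchors, positives, temperature=0.1):
    """InfoNCE-style contrastive loss over paired embeddings.

    anchors, positives : (N, D) embeddings where row i of `positives` is
    the positive pair for row i of `anchors`; all other rows act as
    in-batch negatives.
    """
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature                 # (N, N) similarities
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))            # matched pairs on diag

rng = np.random.default_rng(0)
emb = rng.normal(size=(4, 8))
aligned = info_nce(emb, emb)                 # perfectly matched views
shuffled = info_nce(emb, emb[::-1].copy())   # mismatched pairings
```

Well-aligned pairs yield a much lower loss than mismatched ones, which is what drives the user bridge and item bridge toward consistent representations.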
Citations: 0
Improving the Adam optimizer via projection-based gradient correction in deep learning
IF 7.6 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-01-08 | DOI: 10.1016/j.knosys.2026.115267
Alaa Luqman Ibrahim , Bayda Ghanim Fathi , Maiwan Bahjat Abdulrazzaq
Deep neural networks (DNNs) are widely used for large-scale learning tasks because of their ability to model complex relationships within data. The Adaptive Moment Estimation (Adam) optimizer is a popular choice for training DNNs; however, its generalization performance can be suboptimal on challenging datasets. To address this limitation, we propose three modified Adam variants (Adam-V1, Adam-V2, and Adam-V3) that incorporate a projection-based gradient-correction mechanism inspired by quasi-Newton and conjugate gradient methods. This correction introduces curvature awareness without requiring full Hessian computations, improving convergence stability and reducing the tendency to settle at sharp or poorly generalizing minima. The proposed methods were systematically evaluated on both low- and high-dimensional tasks, including one- and two-variable non-convex functions, two-dimensional image segmentation, image classification using CNNs on MNIST, CIFAR-10, and the more challenging CIFAR-100 datasets, as well as ResNet-based architectures on CIFAR-10. In addition, robustness on non-stationary real-world signals was assessed through ECG beat classification using the MIT-BIH Arrhythmia dataset. Experimental results demonstrate consistent improvements over baseline Adam. On CNN models trained on MNIST, Adam-V2 achieved the highest accuracy of 97.93%, surpassing standard Adam (96.48%) and highlighting the benefit of combining gradient correction with adaptive step-size adjustment in lower-dimensional settings. For CNNs trained on CIFAR-10, Adam-V3 attained a validation accuracy of 73.59%, improving generalization relative to Adam (72.44%). On the more complex CIFAR-100 dataset, the proposed variants consistently outperformed baseline Adam and recent adaptive optimizers in terms of accuracy and F1-score. Using a ResNet-50 model on CIFAR-10, Adam-V1 reached the highest accuracy of 79.9%, while Adam-V3 achieved the best F1-score of 0.704, demonstrating strong performance in deeper network architectures. These results show that curvature-aware gradient corrections enhance convergence speed, stability, and generalization in deep learning tasks with minimal additional computational overhead. The proposed optimizers offer practical advantages for both shallow and deep architectures, providing a simple and effective improvement to existing adaptive optimization methods.
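One conjugate-gradient-flavored way to realize a projection-based correction is to damp the component of the current gradient along the previous gradient direction before the usual Adam moment updates. The sketch below assumes that particular rule and a 0.5 damping factor purely for illustration; the paper's exact correction (and its three variants) may differ:

```python
import numpy as np

def adam_projected(grad_fn, x, steps=2000, lr=0.02,
                   beta1=0.9, beta2=0.999, eps=1e-8):
    """Adam with a simple projection-based gradient correction (sketch)."""
    m = np.zeros_like(x)
    v = np.zeros_like(x)
    g_prev = None
    for t in range(1, steps + 1):
        g = grad_fn(x)
        if g_prev is not None:
            # Damp the component of g along the previous gradient.
            d = g_prev / (np.linalg.norm(g_prev) + eps)
            g = g - 0.5 * (g @ d) * d
        g_prev = g
        # Standard Adam moment updates with bias correction.
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g * g
        m_hat = m / (1 - beta1 ** t)
        v_hat = v / (1 - beta2 ** t)
        x = x - lr * m_hat / (np.sqrt(v_hat) + eps)
    return x

# Minimize the quadratic test function f(x, y) = (x-1)^2 + 2*(y+2)^2.
grad = lambda p: np.array([2.0 * (p[0] - 1.0), 4.0 * (p[1] + 2.0)])
x_opt = adam_projected(grad, np.array([5.0, 5.0]))
```

Because the correction only rescales the gradient before the moment updates, its per-step overhead is a couple of inner products, consistent with the "minimal additional computational overhead" claim above.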
Citations: 0
MDR: Memory distillation and reproduction for personalized dialogue generation
IF 7.6 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-01-08 | DOI: 10.1016/j.knosys.2025.115252
Pengli Wu , Xuebing Yang , Yanlong Wen , Wensheng Zhang
Personalized dialogue generation requires chatbots to produce dialogue content that matches users’ personas and aligns with historical interactions. Long conversations make personalized and coherent responses difficult, which becomes more challenging given that most current systems generate responses by directly encoding features derived from various personas. To make better use of the correlation between encoded features and actual responses, in this paper the Memory Distillation and Reproduction (MDR) framework is proposed. For utterance feature encoding, we utilize the student encoder to align with and fit the response features encoded by the teacher encoder through knowledge distillation, enhancing the understanding of underlying personas and complex contexts. For response generation, the decoding process is tailored to accommodate the contribution degree of response tokens. Therefore, MDR integrates users’ historical dialogue and personalized knowledge to construct up-to-date user profiles. Extensive experiments are conducted on the ConvAI2 and Baidu PersonaChat datasets, comparing against 8 strong existing methods via automatic evaluation. The results validate the superiority of MDR in terms of Coherence, Diversity and Consistency. Notably, MDR achieves BLEU-1 20.33 and Coh-Con.S 38.06 on ConvAI2, and ROUGE-L 30.05 and S-Dist-2 91.23 on Baidu PersonaChat.
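Fitting student features to teacher (response) features via distillation is commonly posed as a feature MSE plus a temperature-scaled KL term on their soft distributions. A generic sketch of such an objective (the combination and the temperature value are assumptions for illustration, not MDR's exact loss):

```python
import numpy as np

def distill_loss(student_feat, teacher_feat, tau=2.0):
    """Feature-level distillation loss (illustrative sketch).

    Aligns student utterance features with teacher response features:
    an MSE term on raw features plus a tau-scaled KL term on their
    softmax distributions.
    """
    mse = np.mean((student_feat - teacher_feat) ** 2)

    def soft(x):
        z = x / tau
        e = np.exp(z - z.max())
        return e / e.sum()

    p, q = soft(teacher_feat), soft(student_feat)
    kl = float(np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12))))
    return mse + tau ** 2 * kl   # tau^2 keeps gradient scale comparable

teacher = np.array([1.0, -0.5, 0.2])
perfect = distill_loss(teacher, teacher)   # fully aligned student
drifted = distill_loss(np.zeros(3), teacher)
```

Perfect alignment drives the loss to zero, while any drift between student and teacher features is penalized by both terms.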
Knowledge-Based Systems, Vol. 336, Article 115252.
Citations: 0
Enhancing medical MLLMs with dual vision encoders and MoE-based modality projector
IF 7.6, CAS Zone 1 (Computer Science), Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. Pub Date: 2026-01-07. DOI: 10.1016/j.knosys.2026.115275
Feizhong Zhou, Xingyue Liu, Zhiying Yang, Zhipeng Li, Hanguang Xiao
Recent advancements in Medical Multimodal Large Language Models (Med-MLLMs) have primarily concentrated on improving the large language model (LLM) backbone, constructing high-quality multimodal datasets, and extending model architectures. However, other key components, specifically the vision encoder and modality connector, remain underexplored. Most existing Med-MLLMs rely exclusively on high-level visual features from a single vision encoder, which can lead to the loss of fine-grained details and the introduction of visual bias. Furthermore, employing a single MLP as the modality connector enforces a static, single-path mapping between vision and language. This severely limits the model’s capacity to achieve robust modality alignment, particularly when handling complex and heterogeneous visual features. To address these limitations, we propose DM-Fuse, a novel feature fusion module. DM-Fuse integrates multi-level features from two complementary vision encoders (CLIP and DINOv2) via adaptive weighting and cross-attention, thereby substantially enhancing visual perception. In addition, we introduce MoE-Projector, a novel modality connector built upon a Mixture-of-Experts (MoE) architecture. It employs a dynamic routing mechanism to selectively activate the most relevant sub-projectors, enabling more adaptive and precise vision-language alignment. Building on these innovations, we develop Agamotto, an efficient Med-MLLM with only 4.6B parameters. Experimental results show that Agamotto substantially outperforms state-of-the-art methods across three medical Visual Question Answering (VQA) benchmarks. This underscores the necessity of jointly optimizing vision encoders and modality connectors to advance Med-MLLM performance. The code has been released on Github: https://github.com/NyKxo1/Agamotto.
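The dynamic routing that selectively activates sub-projectors can be illustrated with a generic top-k Mixture-of-Experts gate; the sketch below is a pure-Python illustration of the concept, and the expert count, k, dot-product gate, and softmax renormalization are assumptions rather than Agamotto's exact design:

```python
import math

def moe_route(token, experts, gate_weights, k=2):
    """Route one feature vector through its top-k experts.

    experts: callables mapping a vector to a same-length vector.
    gate_weights: one weight vector per expert; the gate score is a
    dot product, renormalized by softmax over the selected top-k.
    Returns (mixed output, indices of the activated experts).
    """
    scores = [sum(w * x for w, x in zip(wv, token)) for wv in gate_weights]
    top = sorted(range(len(experts)), key=lambda i: scores[i], reverse=True)[:k]
    mx = max(scores[i] for i in top)
    exps = {i: math.exp(scores[i] - mx) for i in top}   # stable softmax
    z = sum(exps.values())
    out = [0.0] * len(token)  # assumes experts preserve dimensionality
    for i in top:
        y = experts[i](token)
        out = [o + (exps[i] / z) * yi for o, yi in zip(out, y)]
    return out, top
```

Only the k selected experts run per token, which is what keeps an MoE connector cheap relative to evaluating every sub-projector.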
Knowledge-Based Systems, Vol. 336, Article 115275.
Citations: 0
Dual contrastive learning with behavior pattern modeling for session-based recommendation
IF 7.6, CAS Zone 1 (Computer Science), Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. Pub Date: 2026-01-07. DOI: 10.1016/j.knosys.2026.115281
Jiarun Sun , Ling Dai , Ren Guan , Liang Duan
Session-based recommendation (SBR), which provides personalized predictions based on anonymous users’ short-term clicks, has recently gained widespread attention. However, many existing SBR models overlook the joint extraction of explicit and implicit feedback, leading to biases in user behavior modeling. Meanwhile, most methods fail to fully leverage the latent information in repeated items and click orders within sessions, exacerbating the negative effects of data sparsity in SBR. To address these issues, we propose the Dual Contrastive Learning with Behavior Pattern Modeling (DCL-BPM) method, which maximizes the use of short-term session information while extracting long-range user dependencies for recommendation. Specifically, we first employ GGNN and E-GNN to extract implicit and explicit feedback separately, effectively combining them to construct an accurate dynamic user profile. We then add filtered session embeddings to prevent information loss caused by gradient mismatch. To better capture user preferences, we design a Dual Contrastive Loss (DCL) framework that constructs negative samples through deduplication and random reshuffling, highlighting the critical role of item frequency and click order in positive samples during training. DCL is not tied to a particular network architecture, making it easily adaptable to diverse SBR scenarios. Extensive experiments on three representative datasets demonstrate the effectiveness of our model and its practical value in real-world applications.
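The two negative-sample constructions named above (deduplication and random reshuffling) can be sketched directly; this is a generic illustration, and the function names and the session-as-item-list representation are assumptions, not DCL-BPM's actual code:

```python
import random

def dedup_negative(session):
    """Drop repeated items, keeping first occurrences.

    Removes the repeat-click frequency signal that the positive
    sample retains, yielding a contrastive negative.
    """
    seen, out = set(), []
    for item in session:
        if item not in seen:
            seen.add(item)
            out.append(item)
    return out

def reshuffle_negative(session, rng=None):
    """Randomly permute the click order.

    Destroys the sequential signal while keeping item content,
    yielding the second kind of contrastive negative.
    """
    rng = rng or random.Random(0)
    out = list(session)
    rng.shuffle(out)
    return out
```

Pairing each session with these corrupted views lets a contrastive loss reward encoders that are sensitive to item frequency and click order.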
Knowledge-Based Systems, Vol. 336, Article 115281.
Citations: 0