
Knowledge-Based Systems: Latest Publications

Multimodal summarization via coarse-and-fine granularity synergy and region counterfactual reasoning filter
IF 7.6 CAS Tier 1 (Computer Science) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2026-01-24 DOI: 10.1016/j.knosys.2026.115356
Rulong Liu , Qing He , Yuji Wang , Nisuo Du , Zhihao Yang
Multimodal Summarization (MS) generates high-quality summaries by integrating textual and visual information. However, existing MS research faces several challenges, including (1) ignoring fine-grained key information shared between the visual and textual modalities and its interaction with coarse-grained information, (2) cross-modal semantic inconsistency, which hinders alignment and fusion of the visual and textual feature spaces, and (3) ignoring the inherent heterogeneity of an image when filtering visual information, which causes excessive filtering or excessive retention. To address these issues, we propose Coarse-and-Fine Granularity Synergy and Region Counterfactual Reasoning Filter (CFCR) for MS. Specifically, we design Coarse-and-Fine Granularity Synergy (CFS) to capture both global (coarse-grained) and important detailed (fine-grained) information in the text and image modalities. Based on this, we design Dual-granularity Contrastive Learning (DCL) for mapping coarse-grained and fine-grained visual features into the text semantic space, thereby reducing semantic inconsistency caused by modality differences at both granularity levels and facilitating cross-modal alignment. To address the issue of excessive filtering or excessive retention in visual information filtering, we design a Region Counterfactual Reasoning Filter (RCF) that employs Counterfactual Reasoning to determine the validity of image regions and generate category labels. These labels are then used to train an Image Region Selector to select regions beneficial for summarization. Extensive experiments on the representative MMSS and MSMO datasets show that CFCR outperforms multiple strong baselines, particularly in terms of selecting and focusing on critical details, demonstrating its effectiveness in MS.
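The dual-granularity contrastive idea in DCL can be illustrated with a symmetric InfoNCE objective applied at both the coarse and fine levels. This is a minimal numpy sketch under assumed details — the function names, the temperature value, and the equal weighting of the two granularities are illustrative, not the paper's exact formulation.

```python
import numpy as np

def info_nce(visual, text, tau=0.1):
    """Symmetric InfoNCE loss pulling paired visual/text features together.

    visual, text: (N, d) arrays; row i of each forms a positive pair.
    """
    # L2-normalise so the dot product is cosine similarity
    v = visual / np.linalg.norm(visual, axis=1, keepdims=True)
    t = text / np.linalg.norm(text, axis=1, keepdims=True)

    def one_direction(a, b):
        logits = a @ b.T / tau                         # (N, N); diagonal = positives
        logits -= logits.max(axis=1, keepdims=True)    # numerical stability
        log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
        return -np.mean(np.diag(log_prob))

    return 0.5 * (one_direction(v, t) + one_direction(t, v))

def dual_granularity_loss(v_coarse, t_coarse, v_fine, t_fine, tau=0.1):
    # contrast at both granularity levels and average (assumed equal weighting)
    return 0.5 * (info_nce(v_coarse, t_coarse, tau) +
                  info_nce(v_fine, t_fine, tau))
```

With well-aligned pairs the loss approaches zero, while mismatched pairs are penalised, which is what drives visual features toward the text semantic space.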
Citations: 0
Temporal Householder transformation embedding for temporal knowledge graph completion
IF 7.6 CAS Tier 1 (Computer Science) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2026-01-24 DOI: 10.1016/j.knosys.2026.115406
Zhiyu Xu , Kai Lin , Pengpeng Qiu , Tong Shen , Fu Zhang
Knowledge Graph Embedding (KGE) has been widely used to address the incompleteness of Knowledge Graph (KG) by predicting missing facts. Temporal Knowledge Graph Embedding (TKGE) extends KGE by incorporating temporal information into fact representations. However, most existing research focuses on static graphs and ignores the temporal dynamics of facts in TKG, which poses significant challenges for link prediction. Furthermore, current TKGE models still struggle with effectively capturing and representing crucial relation patterns, including symmetry, antisymmetry, inversion, composition, and temporal, along with complex relation mapping properties like 1-to-N, N-to-1, and N-to-N. To overcome these challenges, we propose a Temporal Householder Transformation Embedding model called TeHTE, which fuses temporal information with Householder transformation to capture both static and temporal features within TKG effectively. In the static module, TeHTE constructs static entity embeddings by reflecting the head entity through a transfer matrix and represents each relation with a pair of vectors to capture relational semantics. In the temporal module, TeHTE integrates temporal information into the entity representation through the time transfer matrix and shared time window, thereby enhancing its ability to capture temporal features. To further enhance modeling capacity, TeHTE learns a set of Householder transformations parameterized by relations to obtain structural embeddings for entities. Moreover, we theoretically demonstrate the ability of TeHTE to model various relation patterns and mapping properties. Experimental results on four benchmark datasets indicate that TeHTE substantially surpasses most existing TKGE approaches on temporal link prediction tasks. Ablation studies further validate the contribution of each component within the TeHTE framework.
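A Householder transformation is an orthogonal reflection, so applying a chain of them to an entity embedding rotates/reflects it without changing its norm — a useful property for modeling relation patterns. The sketch below shows the basic mechanics only; the composition function `relation_transform` and its parameterisation by per-relation vectors are hypothetical simplifications of TeHTE's actual design.

```python
import numpy as np

def householder(v):
    """Householder reflection H = I - 2 v v^T / (v^T v); H is orthogonal."""
    v = np.asarray(v, dtype=float)
    return np.eye(len(v)) - 2.0 * np.outer(v, v) / (v @ v)

def relation_transform(entity, relation_vecs):
    """Apply a chain of relation-parameterised Householder reflections to an
    entity embedding (illustrative composition, not the paper's exact model)."""
    h = np.asarray(entity, dtype=float)
    for v in relation_vecs:
        h = householder(v) @ h
    return h
```

Because each reflection is orthogonal, the transformed embedding keeps the entity's norm, and composing reflections parameterised by different relations naturally supports patterns such as inversion (a reflection is its own inverse) and composition.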
Citations: 0
Multi-View fusion feature representation learning for drug-target interaction prediction
IF 7.6 CAS Tier 1 (Computer Science) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2026-01-23 DOI: 10.1016/j.knosys.2026.115364
Hua Duan, Junyue Dong, Yufei Zhao, Shiduo Wang, Wenhao Wang
Prediction of Drug-Target Interactions (DTI) is crucial for drug discovery. Heterogeneous graph neural networks (HGNNs) provide an efficient computational approach by modeling complex biological networks, overcoming the high cost and time constraints associated with traditional experimental methods. However, existing HGNNs primarily rely on meta-path-based topological learning, often overlooking attribute similarities between nodes and inherent structural consistency. This single-perspective learning mechanism limits their ability to leverage multi-source heterogeneous information, resulting in poor generalization performance, particularly under sparse data scenarios. To address these issues, this paper proposes MV-HGNN, a multi-view fusion model. It learns comprehensive feature embeddings for drugs and proteins from three complementary perspectives: 1) A View-Specific Topology Embedding Module, which captures topology-driven representations through graph propagation and aggregation; 2) A Structure-Consensus-Aware Cross-Domain Alignment Module, which identifies latent structural consistency by mining original node features, thereby compensating for missing topological information in sparse networks; 3) A Latent Space Semantic Regularization Aggregation Module, which enhances generalization with scarce samples by pulling semantically similar nodes closer in the refined latent embedding space. The complementary features learned from these topological, structural, and semantic views are fused via an adaptive attention mechanism. The DTI prediction task is formulated as a classification problem on a constructed Drug-Protein Pair (DPP) graph. Experimental results demonstrate that MV-HGNN significantly outperforms existing baseline methods across multiple metrics.
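The final fusion step — combining topological, structural, and semantic view embeddings with adaptive attention — can be sketched as a per-node softmax over view scores. The scoring vector `w` and temperature here are placeholders for what the real model would learn; this is a minimal illustration, not MV-HGNN's exact attention design.

```python
import numpy as np

def attention_fuse(views, w, tau=1.0):
    """Fuse per-view node embeddings with softmax attention weights.

    views: list of (N, d) embedding matrices (e.g. topology/structure/semantic).
    w:     (d,) scoring vector (stand-in for a learned attention parameter).
    """
    stacked = np.stack(views)                       # (V, N, d)
    scores = stacked @ w / tau                      # (V, N): one score per view, per node
    scores -= scores.max(axis=0, keepdims=True)     # numerical stability
    alpha = np.exp(scores) / np.exp(scores).sum(axis=0, keepdims=True)
    # weighted sum over views -> one fused embedding per node
    return (alpha[..., None] * stacked).sum(axis=0)  # (N, d)
```

Each node gets its own convex combination of the views, so nodes whose semantic view is more informative than their topology can weight it more heavily.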
Citations: 0
A generalizable anomaly detection framework with dynamic concept drift suppression for non-stationary time series
IF 7.6 CAS Tier 1 (Computer Science) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2026-01-23 DOI: 10.1016/j.knosys.2026.115380
Licheng Yang , Yu Yao , Daoqing Yang , Wei Yang , Yuming Hao
In practical applications, the performance of industrial data stream anomaly detection methods often degrades due to concept drift. The core bottleneck lies in the fact that existing algorithms struggle to dynamically perceive the coupling relationship between data distribution changes and anomaly patterns. This paper proposes a generalized framework for time series anomaly detection based on Dynamic Drift Awareness and Diffusion Enhancement (DDADE). Through real-time distance monitoring and an adaptive model incremental learning mechanism, it achieves collaborative detection of concept drift and anomaly events. Specifically, the innovation of this work is as follows: First, a drift detection module based on the industrial-enhanced Mahalanobis distance is designed to capture the covariate shift in the feature space in real-time. Second, an anomaly detection model based on diffusion enhancement is proposed, which can perform incremental learning or dynamically adjust the threshold according to the drift detection results. Experiments show that in several representative industrial simulation datasets containing drift scenarios, this method outperforms the baseline models.
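A plain Mahalanobis-distance drift monitor conveys the core of the first module: compare a new window's statistics against a reference distribution and flag drift past a threshold. The paper's "industrial-enhanced" variant adds further terms; the threshold value and function name below are assumptions for illustration.

```python
import numpy as np

def mahalanobis_drift(reference, window, threshold=3.0, eps=1e-6):
    """Flag covariate drift when the Mahalanobis distance of the new window's
    mean from the reference distribution exceeds a threshold.

    reference: (M, d) baseline samples; window: (W, d) incoming samples.
    Returns (distance, drift_flag).
    """
    mu = reference.mean(axis=0)
    # regularise the covariance so it is always invertible
    cov = np.cov(reference, rowvar=False) + eps * np.eye(reference.shape[1])
    inv = np.linalg.inv(cov)
    diff = window.mean(axis=0) - mu
    dist = float(np.sqrt(diff @ inv @ diff))
    return dist, dist > threshold
```

When drift is flagged, the second module would either trigger incremental learning or adjust the anomaly threshold; here the flag is simply returned to the caller.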
Citations: 0
GEOMR: Integrating image geographic features and human reasoning knowledge for image geolocalization
IF 7.6 CAS Tier 1 (Computer Science) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2026-01-23 DOI: 10.1016/j.knosys.2026.115391
Jian Fang , Siyi Qian , Shaohui Liu
Worldwide image geolocalization aims to accurately predict the geographic location where a given image was captured. Due to the vast scale of the Earth and the uneven distribution of geographic features, this task remains highly challenging. Traditional methods exhibit clear limitations when handling global-scale data. To address these challenges, we propose GEOMR, an effective and adaptive framework that integrates image geographic features and human reasoning knowledge to enhance global geolocalization accuracy. GEOMR consists of two modules. The first module extracts geographic features from images by jointly learning multimodal features. The second module involves training a multimodal large language model in a two-phase process to enhance its geolocalization reasoning capabilities. The first phase learns human geolocalization reasoning knowledge, enabling the model to utilize geographic cues present in images effectively. The second phase focuses on learning how to use reference information to infer the correct geographic coordinates. Extensive experiments conducted on the IM2GPS3K, YFCC4K, and YFCC26K datasets demonstrate that GEOMR significantly outperforms state-of-the-art methods.
Citations: 0
Mutual masked image consistency and feature adversarial training for semi-supervised medical image segmentation
IF 7.6 CAS Tier 1 (Computer Science) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2026-01-23 DOI: 10.1016/j.knosys.2026.115349
Wei Li , Linye Ma , Wenyi Zhao , Huihua Yang
Semi-supervised medical image segmentation (SSMIS) aims to alleviate the burden of extensive pixel/voxel-wise annotations by effectively leveraging unlabeled data. While prevalent approaches relying on pseudo-labeling or consistency regularization have shown promise, they are often prone to confirmation bias due to limited feature diversity. Furthermore, existing mixed-sampling strategies used to enlarge the training set frequently generate synthetic data that deviates from real-world distributions, potentially misleading the learning process. To address these challenges, we introduce a novel framework called Mutual Masked Image Consistency and Feature Adversarial Training (MCFAT-Net). Our approach enhances model diversity through a multi-perspective strategy, fostering global-local consistency to improve generalization. Specifically, MCFAT-Net comprises a shared encoder and dual classifiers that leverage Mutual Feature Adversarial Training to inject perturbations, ensuring sub-network divergence and decision boundary smoothness. Moreover, we integrate a dual-level data augmentation strategy: Cross-Set CutMix operating at the inter-sample level to capture global dataset structures, and Mutual Masked Image Consistency operating at the intra-sample level to refine fine-grained local representations. This combination enables the simultaneous capture of pairwise structures across the entire dataset and individual part-object relationships. Extensive experiments on three public datasets demonstrate that MCFAT-Net achieves superior performance compared to state-of-the-art methods.
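The inter-sample mixing step can be illustrated with a basic CutMix-style patch swap between a labeled and an unlabeled image. This is a 2-D sketch under assumed conventions (area fraction `lam`, returned pseudo-label mask); the paper's Cross-Set CutMix operates within its full training pipeline.

```python
import numpy as np

def cross_set_cutmix(labeled, unlabeled, rng, lam=0.5):
    """Paste a random patch from an unlabeled image into a labeled one.

    labeled, unlabeled: (H, W) arrays; lam: target patch area fraction.
    Returns the mixed image and a boolean mask of the pasted region
    (where supervision would come from pseudo-labels).
    """
    h, w = labeled.shape
    ph, pw = int(h * np.sqrt(lam)), int(w * np.sqrt(lam))   # patch size
    y = rng.integers(0, h - ph + 1)                          # random top-left corner
    x = rng.integers(0, w - pw + 1)
    mixed = labeled.copy()
    mixed[y:y+ph, x:x+pw] = unlabeled[y:y+ph, x:x+pw]
    mask = np.zeros((h, w), dtype=bool)
    mask[y:y+ph, x:x+pw] = True
    return mixed, mask
```

Training would then supervise the pasted region with pseudo-labels from the unlabeled image and the remainder with the ground-truth labels, so every mixed sample still reflects two real images rather than a fully synthetic blend.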
Citations: 0
Rethinking heterophilic graph learning via graph curvature
IF 7.6 CAS Tier 1 (Computer Science) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2026-01-23 DOI: 10.1016/j.knosys.2026.115409
Jian Wang , Xingcheng Fu , Qingyun Sun , Li-E Wang , Hao Peng , Jiting Li , Xianxian Li , Minglai Shao
The performance of graph neural networks is limited on heterophilic graphs since heterophilic connections hinder the transport of supervision signals related to downstream tasks. In recent years, most existing works based on node-pair heterophily “transform” heterophilic graphs into special homophilic graphs, which often increase homophilic connectivity and remove heterophilic edges, thereby converting highly heterophilic graphs into highly homophilic ones. They only consider the label difference between node pairs while overlooking the change in the label distribution between their neighborhoods. They need to provide some heuristic priors or complex designs to alleviate the lack of underlying understanding of the heterophilic information propagation, which leads to the issue of heterophily inconsistency. To address the issue of heterophily inconsistency, based on optimal transport theory, we extend the definition of curvature and propose the Heterophily Curvature Graph Representation Learning framework (HetCurv) to optimize the information transport structure and learn better node representations simultaneously. HetCurv perceives the variation of supervision signals on heterophilic graphs through heterophily curvature, and learns the optimal information transport pattern for specific downstream tasks. Extensive experiments demonstrate the superiority of the proposed method in comparison to state-of-the-art baselines across various node classification benchmarks.
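The optimal-transport notion of curvature the paper builds on can be made concrete with the standard Ollivier-Ricci curvature: for an edge (x, y), compare the 1-Wasserstein distance between uniform measures on the two neighbourhoods against the edge length, κ(x, y) = 1 − W(m_x, m_y)/d(x, y). The sketch below is this classical definition only — the paper's heterophily curvature extends it with label information — and the uniform neighbourhood measures and function names are illustrative choices.

```python
import numpy as np
from scipy.optimize import linprog

def wasserstein1(p, q, cost):
    """1-Wasserstein distance between discrete distributions p (m,) and
    q (n,) under ground-cost matrix cost (m, n), solved as a linear program."""
    m, n = cost.shape
    A_eq = np.zeros((m + n, m * n))
    for i in range(m):
        A_eq[i, i * n:(i + 1) * n] = 1.0   # row marginals must equal p
    for j in range(n):
        A_eq[m + j, j::n] = 1.0            # column marginals must equal q
    res = linprog(cost.ravel(), A_eq=A_eq, b_eq=np.concatenate([p, q]),
                  bounds=(0, None), method="highs")
    return res.fun

def ollivier_ricci(adj, x, y):
    """Ollivier-Ricci curvature of edge (x, y) with uniform measures on the
    endpoints' neighbourhoods and hop-count ground distance."""
    n = len(adj)

    def bfs(s):  # shortest-path hop distances from node s
        d = np.full(n, np.inf)
        d[s] = 0
        queue = [s]
        while queue:
            u = queue.pop(0)
            for v in range(n):
                if adj[u][v] and d[v] == np.inf:
                    d[v] = d[u] + 1
                    queue.append(v)
        return d

    nx_ = [v for v in range(n) if adj[x][v]]
    ny_ = [v for v in range(n) if adj[y][v]]
    cost = np.array([[bfs(u)[v] for v in ny_] for u in nx_], dtype=float)
    p = np.full(len(nx_), 1.0 / len(nx_))
    q = np.full(len(ny_), 1.0 / len(ny_))
    return 1.0 - wasserstein1(p, q, cost) / bfs(x)[y]
```

Edges inside tightly knit (typically homophilic) regions get positive curvature because the neighbourhood measures overlap, while bridge-like edges get non-positive curvature; a curvature-aware model can use this signal to reshape how information is transported along each edge.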
Citations: 0
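The HetCurv abstract above turns on the change in the *label distribution between neighborhoods* at the two ends of an edge, rather than on the endpoint labels alone. As a minimal illustration of that distinction (not the paper's curvature definition, which builds on optimal transport), the sketch below scores each edge by the total-variation distance between its endpoints' neighbor label distributions; the helper names and the toy graph are made up for this example.

```python
from collections import Counter

def neighbor_label_dist(adj, labels, v):
    """Empirical label distribution over the neighbors of node v."""
    counts = Counter(labels[u] for u in adj[v])
    total = sum(counts.values())
    return {lab: c / total for lab, c in counts.items()}

def edge_label_shift(adj, labels, u, v):
    """Total-variation distance between the neighbor label
    distributions at the two endpoints of edge (u, v)."""
    p = neighbor_label_dist(adj, labels, u)
    q = neighbor_label_dist(adj, labels, v)
    keys = set(p) | set(q)
    return 0.5 * sum(abs(p.get(k, 0.0) - q.get(k, 0.0)) for k in keys)

# Toy graph: nodes 0-2 share label "A"; node 3 carries label "B".
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
labels = {0: "A", 1: "A", 2: "A", 3: "B"}
```

On this toy graph, a node-pair view would call edge (2, 3) maximally heterophilic because the labels differ, while the neighborhood-distribution view yields only a moderate shift, since node 3's entire neighborhood is still labeled "A".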
Multi-modal dual attention graph contrastive learning for recommendation
IF 7.6 Tier 1 Computer Science Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2026-01-23 DOI: 10.1016/j.knosys.2026.115404
Shouxing Ma , Shiqing Wu , Yawen Zeng , Kaize Shi , Guandong Xu
Multi-modal recommender systems, incorporating rich content information (e.g., images and texts) into user behavior modeling, have attracted significant attention recently. Current work has successfully combined graph neural networks (GNNs) and contrastive learning to improve recommendation accuracy and mitigate the inherent sparse-data problem. Yet, view augmentation strategies borrowed from other domains, such as edge or node dropout, tend to distort the original graph structure, leading to unintended semantic drift and suboptimal representation learning. Moreover, prior work has predominantly focused on optimizing inter-modal weights while overlooking user-specific modality preferences and the adaptation of modal features generated by generic models. To tackle the above issues, we propose a novel multi-mOdal dUal aTtention Graph cOntrastive learning framework (OUTGO). Specifically, we first encode user and item representations by utilizing user and item homogeneous GNNs. Then, we employ designed intra- and inter-attention mechanisms, sequentially and adaptively tuning each modal feature value based on the principal loss and fusing the features from different modal perspectives. Additionally, semantic and structural contrastive learning tasks are introduced to alleviate data sparsity without destroying the original data structure. Extensive experiments on real-world datasets demonstrate the superiority of OUTGO compared to state-of-the-art baselines. The code is available at https://github.com/MrShouxingMa/OUTGO.
Knowledge-Based Systems, Volume 337, Article 115404.
Citations: 0
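The semantic contrastive task OUTGO describes belongs to the InfoNCE family, where the two views of the same item (say, its visual and textual embeddings) form a positive pair and every other item in the batch acts as a negative. Below is a pure-Python sketch of a one-direction InfoNCE term on cosine-normalized embeddings; it is illustrative only and not the exact loss used in the paper.

```python
import math

def info_nce(z_a, z_b, tau=0.2):
    """One-direction InfoNCE: row i of z_a should match row i of z_b;
    every other row of z_b serves as a negative. tau is the temperature."""
    def norm(v):
        n = math.sqrt(sum(x * x for x in v))
        return [x / n for x in v]
    a = [norm(v) for v in z_a]
    b = [norm(v) for v in z_b]
    loss = 0.0
    for i, ai in enumerate(a):
        # Cosine similarities of anchor i against all candidates in z_b.
        logits = [sum(x * y for x, y in zip(ai, bj)) / tau for bj in b]
        m = max(logits)  # stabilize the log-sum-exp
        lse = m + math.log(sum(math.exp(l - m) for l in logits))
        loss += lse - logits[i]  # negative log-softmax at the positive
    return loss / len(a)

# Orthonormal toy embeddings: aligned views give a low loss,
# mismatched (reversed) views give a high loss.
eye = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
```

The aligned pairing `info_nce(eye, eye)` is near zero, while the reversed pairing `info_nce(eye, eye[::-1])` is large, which is the behavior the contrastive objective exploits to pull matching modal views together without perturbing the graph itself.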
DOREMI: Optimizing long tail predictions in document-level relation extraction
IF 7.6 Tier 1 Computer Science Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2026-01-22 DOI: 10.1016/j.knosys.2026.115359
Laura Menotti, Stefano Marchesin, Gianmaria Silvello
Document-Level Relation Extraction (DocRE) presents significant challenges due to its reliance on cross-sentence context and the long-tail distribution of relation types, where many relations have scarce training examples. In this work, we introduce DOcument-level Relation Extraction optiMizing the long taIl (DOREMI), an iterative framework that enhances underrepresented relations through minimal yet targeted manual annotations. Unlike previous approaches that rely on large-scale noisy data or heuristic denoising, DOREMI actively selects the most informative examples to improve training efficiency and robustness. DOREMI can be applied to any existing DocRE model and is effective at mitigating long-tail biases, offering a scalable solution to improve generalization on rare relations.
Knowledge-Based Systems, Volume 337, Article 115359.
Citations: 0
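DOREMI's central move, actively selecting a small set of maximally informative examples for manual annotation, can be caricatured with a simple acquisition score that prioritizes rare relation types and low model confidence. The scoring rule below is a hypothetical stand-in for illustration, not the selection criterion from the paper.

```python
from collections import Counter

def select_long_tail(predictions, k):
    """Rank (relation, confidence) predictions so that rare relation
    types with uncertain predictions are sent to annotators first.
    A low score (rare type * low confidence) means annotate early."""
    freq = Counter(rel for rel, _ in predictions)
    return sorted(predictions, key=lambda p: freq[p[0]] * p[1])[:k]

# Hypothetical model outputs: "born_in" is head, the others are tail.
preds = [("born_in", 0.95), ("born_in", 0.90), ("born_in", 0.85),
         ("composed_by", 0.60), ("composed_by", 0.40), ("patented", 0.55)]
```

With a budget of two annotations, this rule skips the three confident head-relation predictions and picks the lone "patented" example plus the least confident "composed_by" one, which is the kind of targeted spending of annotation budget the abstract argues for.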
PDPA: A prompt-based dual persona-aware approach for empathetic response generation
IF 7.6 Tier 1 Computer Science Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2026-01-22 DOI: 10.1016/j.knosys.2026.115390
Wei Zhang , Changhong Jiang , Ming Xia , Lulu Wang , Zhongtian Hu , Jiashi Lin , Ronghan Li
Maintaining personality consistency is essential for improving the performance of empathetic dialogue systems. However, existing approaches to persona-aware empathetic response generation commonly exhibit two fundamental limitations in persona information extraction: (1) an inherent trade-off between the richness of information and contextual consistency, and (2) a unidirectional extraction strategy that considers only one interlocutor in the dialogue history. To address these limitations, this study proposes a method that utilizes Pre-trained Language Models (PLMs) and Large Language Models (LLMs) to extract dense persona information from all historical utterances of each participant in the training set, based on their participant IDs. Building on this, we introduce PDPA, a prompt-driven framework that jointly models user and agent perspectives. Specifically, a novel prompt template with three special tokens is designed to explicitly distinguish persona information from dialogue history during feature extraction. Furthermore, a persona-aware heterogeneous graph is constructed to enhance the aggregation of discourse structure, personality traits, complete dialogue history, and external knowledge. Finally, to ensure the effective use of refined persona information together with essential contextual details during generation, a dialogue decoder equipped with a dynamic pointer network is employed. Experimental evaluations demonstrate that the proposed model consistently outperforms strong baselines on two datasets derived from the EMPATHETICDIALOGUES benchmark. In particular, compared with its backbone BART, PDPA achieves notable improvements in emotion classification accuracy, with an increase of 4.73% when assisted by LLM-generated persona information and 4.36% when assisted by PLM-generated persona information, highlighting the effectiveness of our approach.
Knowledge-Based Systems, Volume 337, Article 115390.
Citations: 0
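PDPA's prompt template uses dedicated special tokens so the encoder can tell persona information apart from dialogue history. A minimal sketch of that idea follows; the token names [USR_P], [AGT_P], and [HIST] are placeholders invented for this example, not the three special tokens defined in the paper.

```python
def build_prompt(user_persona, agent_persona, history):
    """Concatenate persona and history segments behind distinct markers
    so each segment stays separable during feature extraction.
    The marker strings are illustrative placeholders."""
    return "\n".join([
        "[USR_P] " + " ".join(user_persona),
        "[AGT_P] " + " ".join(agent_persona),
        "[HIST] " + " </s> ".join(history),
    ])

prompt = build_prompt(
    ["enjoys hiking", "works as a nurse"],   # user persona (dual-persona: user side)
    ["calm", "supportive"],                  # agent persona (dual-persona: agent side)
    ["I failed my exam today.", "I'm sorry to hear that."],
)
```

Encoding both the user's and the agent's persona segments, rather than only one interlocutor's, mirrors the "dual persona-aware" design the abstract contrasts with unidirectional extraction strategies.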
Book学术 provides a free academic resource search service for scholars in China and abroad. Contact: info@booksci.cn. Copyright © 2023 Book学术. All rights reserved.