
Latest Articles from IEEE Transactions on Knowledge and Data Engineering

Uncertain Priors for Graphical Causal Models: A Multi-Objective Optimization Perspective
IF 10.4 · CAS Tier 2 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-09-11 · DOI: 10.1109/TKDE.2025.3608723
Zidong Wang;Xiaoguang Gao;Qingfu Zhang
Learning graphical causal models from observational data can effectively elucidate the underlying causal mechanisms behind the variables. When datasets are limited, modelers often incorporate prior knowledge, assumed to be correct, as a penalty term in single-objective optimization. However, this approach struggles to accommodate complex and uncertain priors effectively. This paper introduces UpCM, which tackles the issue from a multi-objective optimization perspective. Instead of focusing exclusively on the DAG as the optimization goal, UpCM methodically evaluates the effect of uncertain priors on specific structures, merging data-driven and knowledge-driven objectives. Utilizing the MOEA/D framework, it achieves a balanced trade-off between these objectives. Furthermore, since uncertain priors may introduce erroneous constraints that leave PDAGs without consistent extensions, the minimal non-consistent extension is explored. This extension, which incorporates positive and negative constraints separately, aims to approximate the true causality of the PDAGs. Experimental results demonstrate that UpCM achieves significant improvements in structural accuracy over baseline methods: when incorporating uncertain priors, it reduces the SHD by 7.94%, 13.23%, and 12.8% relative to PC_stable, GES, and MAHC, respectively. In downstream inference tasks, UpCM outperforms domain-expert knowledge graphs, owing to its ability to learn explainable causal relationships that balance data-driven evidence with prior knowledge.
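For readers unfamiliar with the reported metric, SHD (Structural Hamming Distance) counts the edge insertions, deletions, and reversals needed to turn one graph into another. The following is a generic, minimal sketch of SHD over DAG adjacency matrices, not the paper's implementation:

```python
import numpy as np

def shd(a, b):
    """Structural Hamming Distance between two DAG adjacency matrices:
    edge insertions + deletions + reversals needed to turn b into a."""
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    diff = (a != b)
    # A reversed edge (present in both graphs, opposite direction) would be
    # counted twice by diff; subtract once so a reversal costs 1.
    reversed_edges = (a & b.T & ~b).sum()
    return int(diff.sum() - reversed_edges)

true_dag = np.array([[0, 1, 0], [0, 0, 1], [0, 0, 0]])  # edges 0->1, 1->2
est_dag  = np.array([[0, 0, 0], [1, 0, 1], [0, 0, 0]])  # edge 0->1 reversed
print(shd(true_dag, est_dag))  # prints 1
```

A percentage reduction in SHD, as reported above, means the learned graph needs proportionally fewer such edit operations to match the ground truth.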
IEEE Transactions on Knowledge and Data Engineering, vol. 37, no. 12, pp. 7426-7439.
Cited by: 0
SandwichSketch: A More Accurate Sketch for Frequent Object Mining in Data Streams
IF 10.4 · CAS Tier 2 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-09-09 · DOI: 10.1109/TKDE.2025.3607691
Zhuochen Fan;Ruixin Wang;Zihan Jiang;Ruwen Zhang;Tong Yang;Sha Wang;Yuhan Wu;Ruijie Miao;Kaicheng Yang;Bui Cui
Frequent object mining has gained considerable interest in the research community and, depending on the type of object, can be split into frequent item mining and frequent set mining. While existing sketch-based algorithms have made significant progress in addressing these two tasks concurrently, they also have notable limitations: they either support only software platforms with low throughput or trade accuracy for faster processing and better hardware compatibility. In this paper, we make a substantial stride towards supporting frequent object mining by designing SandwichSketch, which draws inspiration from sandwich making and proposes two techniques, double fidelity enhancement and hierarchical hot locking, to guarantee high fidelity on both tasks. We implement SandwichSketch on three platforms (CPU, Redis, and FPGA) and show that it enhances accuracy by $38.4\times$ and $5\times$ on the two tasks across three real-world datasets, respectively. Additionally, it supports a distributed measurement scenario with less than a 0.01% decrease in Average Relative Error (ARE) when the number of nodes increases from 1 to 16.
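SandwichSketch's internals are not reproduced in this abstract. As background on the family of sketch-based frequency estimators it improves upon, a minimal count-min sketch (a classic baseline, not SandwichSketch itself) looks like this:

```python
import random

class CountMinSketch:
    """Classic count-min sketch: sublinear-memory frequency estimation
    that never underestimates a count."""

    def __init__(self, width=1024, depth=4, seed=0):
        rng = random.Random(seed)
        self.width, self.depth = width, depth
        self.tables = [[0] * width for _ in range(depth)]
        # One salt per row simulates independent hash functions.
        self.salts = [rng.getrandbits(32) for _ in range(depth)]

    def _index(self, item, row):
        return hash((self.salts[row], item)) % self.width

    def add(self, item, count=1):
        for r in range(self.depth):
            self.tables[r][self._index(item, r)] += count

    def estimate(self, item):
        # Minimum over rows: collisions only inflate counters, so the
        # smallest cell is the tightest upper bound on the true count.
        return min(self.tables[r][self._index(item, r)]
                   for r in range(self.depth))

cms = CountMinSketch()
for _ in range(100):
    cms.add("heavy")
cms.add("light")
print(cms.estimate("heavy"))  # at least 100
```

Hardware-friendly sketches like the one above motivate the accuracy/throughput trade-off that SandwichSketch addresses.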
IEEE Transactions on Knowledge and Data Engineering, vol. 37, no. 11, pp. 6636-6650.
Cited by: 0
KnobCF: Uncertainty-Aware Knob Tuning
IF 10.4 · CAS Tier 2 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-09-09 · DOI: 10.1109/TKDE.2025.3608030
Yu Yan;Junfang Huang;Hongzhi Wang;Jian Geng;Kaixin Zhang;Tao Yu
Knob tuning aims to optimize database performance by searching for the most effective knob configuration under a given workload. Existing works suffer from two significant problems. First, because knobs differ in their sensitivity under a given workload, knob tuning incurs many useless evaluations even with diverse search methods. Second, a single evaluation of a knob configuration may over- or underestimate it because of query performance uncertainty. To solve these problems, we propose a query uncertainty-aware knob classifier, called ${\sf KnobCF}$, to enhance knob tuning. Our method makes three contributions: (1) We propose uncertainty-aware configuration estimation to improve the tuning process. (2) We design a few-shot uncertainty estimator that requires no extra data collection, ensuring high efficiency in practical tasks. (3) We provide a flexible framework that can be integrated into existing knob tuners and DBMSs without modification. Our experiments on four open-source benchmarks demonstrate that our method effectively reduces useless evaluations and improves tuning results. In TPCC in particular, our method achieves competitive tuning results with only 60% to 70% of the time consumed by full workload evaluations.
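The core pruning idea, skipping expensive workload evaluations whose uncertainty-adjusted estimate cannot beat the incumbent, can be sketched as below. The cost model, the single `cache_mb` knob, and the estimator are hypothetical stand-ins for illustration, not KnobCF's actual classifier:

```python
import random

def expensive_eval(config):
    # Hypothetical stand-in for a full workload benchmark; peak at cache_mb = 512.
    return -(config["cache_mb"] - 512) ** 2

def cheap_estimate(config):
    # Hypothetical stand-in for an uncertainty-aware estimator:
    # noisy point estimate plus an uncertainty (std) for the score.
    mean = -(config["cache_mb"] - 512) ** 2 + random.gauss(0, 1000)
    return mean, 2000.0

random.seed(0)
best, evals = float("-inf"), 0
for _ in range(50):
    cfg = {"cache_mb": random.randrange(64, 1024)}
    mean, std = cheap_estimate(cfg)
    if mean + 2 * std < best:   # optimistic bound still below incumbent: skip
        continue
    evals += 1
    best = max(best, expensive_eval(cfg))
print(f"evaluated {evals}/50 candidate configurations")
```

The larger the estimator's uncertainty, the fewer evaluations can be safely pruned, which is why calibrated uncertainty (rather than a point estimate alone) matters for this filtering.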
IEEE Transactions on Knowledge and Data Engineering, vol. 37, no. 12, pp. 7240-7254.
Cited by: 0
SAQE: Complex Logical Query Answering via Semantic-Aware Representation Learning
IF 10.4 · CAS Tier 2 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-09-05 · DOI: 10.1109/TKDE.2025.3603877
Zongsheng Cao;Qianqian Xu;Zhiyong Yang;Yuan He;Xiaochun Cao;Qingming Huang
Performing complex First-Order Logic (FOL) queries on knowledge graphs is crucial for advancing knowledge reasoning. Knowledge graphs encapsulate rich semantic interactions among entities, encompassing both explicit structural knowledge represented by triples $(e_{1}, r, e_{2})$ and implicit relational knowledge carried by multi-hop paths $(e_{1} \stackrel{r_{1}}{\rightarrow} \cdots e_{3} \cdots \stackrel{r_{2}}{\rightarrow} e_{2})$. Traditional models often focus solely on either triple-level or path-level knowledge, overlooking the benefits of integrating both to enhance logical query answering. This oversight leads to suboptimal representation learning and inefficient query reasoning. To overcome these challenges, we introduce a new Semantic-Aware representation learning model for Query-answering Embeddings (SAQE). Specifically, SAQE employs a joint learning approach that integrates triple-level and path-level knowledge semantics and captures both explicit and implicit contextual nuances within the knowledge graph, yielding more accurate and contextually relevant representations. To efficiently handle the large combinatorial search spaces in FOL reasoning, we propose a novel hierarchical reasoning optimization strategy based on a multi-hop tree, optimizing subqueries rooted at variable nodes in a divide-and-conquer manner. Theoretical analysis confirms that SAQE effectively supports various types of FOL reasoning and generalizes well for query answering. Extensive experiments demonstrate that our model achieves state-of-the-art performance across several established datasets.
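For intuition, the FOL queries in question can be answered symbolically on a toy graph by composing relation projections and set intersections; embedding-based models like SAQE learn continuous analogues of exactly these operators. The graph and entity names below are invented for illustration:

```python
# Toy knowledge graph: a set of (head, relation, tail) triples (invented example).
triples = {
    ("alice", "works_at", "lab"),
    ("bob", "works_at", "lab"),
    ("lab", "located_in", "paris"),
    ("bob", "lives_in", "paris"),
}

def project(entities, relation):
    """Relation projection: follow one hop of `relation` from each entity in the set."""
    return {t for (h, r, t) in triples if r == relation and h in entities}

def intersect(*answer_sets):
    """Conjunction in a FOL query corresponds to set intersection."""
    out = set(answer_sets[0])
    for s in answer_sets[1:]:
        out &= s
    return out

# Multi-hop path query: "In which city does alice's workplace lie?"
city = project(project({"alice"}, "works_at"), "located_in")

# Conjunctive query: "Who works at the lab AND lives in paris?"
who = intersect(
    {h for (h, r, t) in triples if r == "works_at" and t == "lab"},
    {h for (h, r, t) in triples if r == "lives_in" and t == "paris"},
)
print(city, who)  # prints {'paris'} {'bob'}
```

The combinatorial blow-up mentioned in the abstract arises because intermediate sets like `project(...)` can grow with every hop, which is what the hierarchical, divide-and-conquer subquery optimization targets.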
IEEE Transactions on Knowledge and Data Engineering, vol. 37, no. 11, pp. 6651-6665.
Cited by: 0
Incremental Multi-View Clustering: Exploring Stream-View Correlations to Learn Consistency and Diversity
IF 10.4 · CAS Tier 2 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-09-03 · DOI: 10.1109/TKDE.2025.3605594
Yu Feng;Weixuan Liang;Xinhang Wan;Jiyuan Liu;Miaomiao Li;Xinwang Liu
Multi-view clustering (MVC) has demonstrated impressive performance due to its ability to capture both consistency and diversity information among views. However, most existing techniques assume that all views are available in advance, making them inadequate for stream-view data, such as intelligent transportation systems and medical imaging analysis, where memory constraints or privacy concerns prevent storing all previous views. Although some methods attempt to address this issue by capturing consistency information, they often fail to effectively extract both diversity information and cross-view relationships. We argue that these limitations are inherent to incremental multi-view clustering (IMVC), as the inability to retain all previous views inevitably leads to insufficient information utilization, thereby compromising performance. To address these challenges, we propose a novel algorithm, termed Incremental Multi-View Clustering with Cross-View Correlation and Diversity (CDIMVC). Unlike existing methods that only retain consistency information, CDIMVC also preserves diversity information and utilizes similarity matrices to capture cross-view relationships. To implement this method, we develop three key modules: the dynamic view correlation analysis module (DVCAM), the knowledge extraction module (KEM), and the knowledge transfer module (KTM). When a new view arrives, DVCAM first assesses its importance and its correlations with historical views. Subsequently, KEM computes its consistency and diversity information by comparing it to that in the knowledge base. Finally, KTM facilitates the effective transmission of past knowledge, preventing the loss of historical information. By integrating these modules, CDIMVC can effectively capture cross-view relationships and diversity information, facilitating efficient knowledge updating and maintenance. An alternating procedure is also designed to solve the resulting optimization problem.
Experimental results show that CDIMVC outperforms state-of-the-art methods, demonstrating its effectiveness in handling stream-view data.
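The memory constraint that defines the incremental setting can be illustrated with a running fusion of per-view sample-similarity matrices, where each view is discarded immediately after contributing. This is a schematic of the constraint IMVC methods operate under, not CDIMVC's actual update rules:

```python
import numpy as np

def cosine_similarity(view):
    """Row-normalized inner products: an n x n sample-similarity matrix for one view."""
    x = view / np.linalg.norm(view, axis=1, keepdims=True)
    return x @ x.T

rng = np.random.default_rng(0)
n_samples, fused, seen = 5, None, 0
for _ in range(3):                       # views arrive one at a time (streamed)
    view = rng.normal(size=(n_samples, 4))
    s = cosine_similarity(view)
    # Running mean retains only the fused matrix, never the raw past views.
    fused = s if fused is None else (seen * fused + s) / (seen + 1)
    seen += 1
print(fused.shape)
```

A plain running mean like this keeps only consistency information; the abstract's point is that diversity information and cross-view correlations are lost under such a scheme unless explicitly modeled.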
IEEE Transactions on Knowledge and Data Engineering, vol. 37, no. 12, pp. 7226-7239.
Cited by: 0
Semi-Supervised Short Text Stream Classification Based on Drift-Aware Incremental Deep Learning
IF 10.4 · CAS Tier 2 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-09-02 · DOI: 10.1109/TKDE.2025.3605389
Peipei Li;Shiying Yu;Jiajun Li;Xuegang Hu
Real-world applications produce massive short text streams. In contrast to traditional texts, they are short, carry few labels, and arrive at high velocity and high volume with dynamic data distributions, which exacerbates the issues of data sparseness, missing labels, and concept drift. This poses a serious challenge for existing short text (stream) classification algorithms, which typically assume that all short texts are fully labeled and pay little attention to the concept drift hidden in short text streams. Therefore, we propose a novel semi-supervised short text stream classification method based on a drift-aware incremental deep learning ensemble model. Specifically, using a sliding window mechanism, we first fuse three types of information (statistical, semantic, and structural) to address the data sparseness issue. Second, a semi-supervised incremental deep learning ensemble model based on GCN and a refined LSTM is developed to adapt to high-volume, high-velocity, and sparsely labeled short text streams. Third, a label-probability-distribution-based concept drift detector is introduced to detect concept drifts. Finally, extensive comparisons against eleven well-known classification methods demonstrate the effectiveness of the proposed method in handling short text streams with limited labeled data.
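The idea behind a label-probability-distribution drift detector can be sketched as a distance test between the label distributions of two sliding windows. The total-variation distance, threshold, and data below are illustrative choices, not the paper's detector:

```python
from collections import Counter

def label_distribution(labels):
    """Empirical label-probability distribution of one window."""
    n = len(labels)
    return {k: v / n for k, v in Counter(labels).items()}

def total_variation(p, q):
    """Total-variation distance between two discrete distributions."""
    keys = set(p) | set(q)
    return 0.5 * sum(abs(p.get(k, 0.0) - q.get(k, 0.0)) for k in keys)

# Illustrative windows: topic mix shifts between the old and new window.
old_window = ["sports"] * 80 + ["tech"] * 20
new_window = ["sports"] * 30 + ["tech"] * 70

distance = total_variation(label_distribution(old_window),
                           label_distribution(new_window))
drift_detected = distance > 0.2   # illustrative threshold
print(drift_detected)  # prints True
```

When such a test fires, a stream classifier would typically retrain or re-weight its ensemble members on the newer window.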
IEEE Transactions on Knowledge and Data Engineering, vol. 37, no. 11, pp. 6680-6693.
Cited by: 0
TensorMon: A Breakthrough in Sparse Data Gathering Leveraging Tensor-Enhanced Techniques for System and Network Monitoring
IF 10.4 · CAS Tier 2 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-08-21 · DOI: 10.1109/TKDE.2025.3601198
Jiazheng Tian;Kun Xie;Xin Wang;Jigang Wen;Gaogang Xie;Wei Liang;Dafang Zhang;Kenli Li
Sparse data gathering has become a promising solution for reducing measurement costs by leveraging the inherent sparsity of data. However, most existing approaches rely on low-dimensional models such as compressive sensing or matrix completion, which are limited in capturing complex high-dimensional structures. To overcome these limitations, we propose TensorMon, a novel tensor-based sparse data gathering framework. Unlike traditional entry-based or tube-based sampling, TensorMon introduces a cuboid sampling strategy to more effectively exploit multidimensional correlations. We further develop a lightweight sampling scheduling algorithm and a non-iterative inference algorithm to ensure efficient measurement planning and accurate reconstruction of unmeasured data. Theoretical analysis establishes a new performance bound for our sampling strategy, significantly lower than those in the existing literature. To validate our theoretical findings, we conduct extensive experiments on four real-world datasets: two network monitoring datasets, a city-scale crowd flow dataset, and a road traffic speed dataset. Experimental results demonstrate that TensorMon achieves substantial reductions in measurement cost, delivers high inference accuracy, and ensures rapid data recovery, highlighting its effectiveness and practicality across diverse application scenarios.
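The distinction among entry, tube, and cuboid sampling comes down to how many tensor modes a single measurement spans. A NumPy indexing sketch makes the shapes concrete; the tensor dimensions and their interpretation here are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical monitoring tensor, e.g. (source, destination, time) measurements.
T = rng.normal(size=(6, 8, 10))

entry = T[2, 3, 5]          # entry sampling: a single scalar measurement
tube = T[2, 3, :]           # tube sampling: a whole mode-3 fiber, shape (10,)
cuboid = T[1:4, 2:6, 0:5]   # cuboid sampling: a contiguous sub-block, shape (3, 4, 5)

print(entry.shape, tube.shape, cuboid.shape)
```

Because a cuboid spans all three modes at once, a single such measurement exposes correlations along every dimension simultaneously, which is the structural advantage the abstract attributes to this strategy.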
IEEE Transactions on Knowledge and Data Engineering, vol. 37, no. 11, pp. 6708-6722.
Citations: 0
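The cuboid sampling idea in the abstract above — measuring contiguous multidimensional sub-blocks of a tensor rather than individual entries or tubes — can be illustrated with a small mask builder. This is a toy sketch, not TensorMon's actual sampling scheduler; the function name, shapes, and parameters are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def cuboid_mask(shape, n_cuboids, cuboid_size):
    """Build a boolean sampling mask that measures whole cuboids
    (contiguous sub-blocks along every mode) instead of single entries."""
    mask = np.zeros(shape, dtype=bool)
    for _ in range(n_cuboids):
        # pick a random corner so the cuboid fits inside the tensor
        corner = [rng.integers(0, s - c + 1) for s, c in zip(shape, cuboid_size)]
        slices = tuple(slice(o, o + c) for o, c in zip(corner, cuboid_size))
        mask[slices] = True
    return mask

# a 3-way tensor: e.g. (origin, destination, time) in network monitoring
shape = (20, 20, 10)
mask = cuboid_mask(shape, n_cuboids=15, cuboid_size=(4, 4, 2))
ratio = mask.mean()
print(f"measured fraction: {ratio:.2f}")
```

Each sampled cuboid covers correlated entries along every mode at once, which is what lets a tensor-completion model exploit multidimensional structure when reconstructing the unmeasured entries.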
Score-Based Generative Diffusion Models for Social Recommendations
IF 10.4 CAS Tier 2, Computer Science Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-08-19 DOI: 10.1109/TKDE.2025.3600103
Chengyi Liu;Jiahao Zhang;Shijie Wang;Wenqi Fan;Qing Li
With the prevalence of social networks on online platforms, social recommendation has become a vital technique for enhancing personalized recommendations. The effectiveness of social recommendations largely relies on the social homophily assumption, which presumes that individuals with social connections often share similar preferences. However, this foundational premise has been recently challenged due to the inherent complexity and noise present in real-world social networks. In this paper, we tackle the low social homophily challenge from an innovative generative perspective, directly generating optimal user social representations that maximize consistency with collaborative signals. Specifically, we propose the Score-based Generative Model for Social Recommendation (SGSR), which effectively adapts the Stochastic Differential Equation (SDE)-based diffusion models for social recommendations. To better fit the recommendation context, SGSR employs a joint curriculum training strategy to mitigate challenges related to missing supervision signals and leverages self-supervised learning techniques to align knowledge across social and collaborative domains. Extensive experiments on real-world datasets demonstrate the effectiveness of our approach in filtering redundant social information and improving recommendation performance.
IEEE Transactions on Knowledge and Data Engineering, vol. 37, no. 11, pp. 6666–6679. DOI: 10.1109/TKDE.2025.3600103. Pub Date: 2025-08-19.
Citations: 0
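The SDE-based generation that SGSR adapts can be seen in miniature with a reverse-time variance-exploding SDE whose score is available in closed form. This toy assumes the target "representation" distribution is a 2-D Gaussian so we can skip score learning entirely; it illustrates the sampling mechanics only, not the SGSR model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Target distribution N(mu, s0^2 I): its noise-perturbed score is analytic.
mu, s0 = np.array([2.0, -1.0]), 0.1

def score(x, sigma_t):
    # grad_x log p_t(x) for data N(mu, s0^2 I) perturbed by N(0, sigma_t^2 I)
    return (mu - x) / (s0**2 + sigma_t**2)

def reverse_sde_sample(n_steps=500, sigma_max=5.0):
    x = rng.normal(scale=sigma_max, size=2)        # start from the wide prior
    sigmas = np.linspace(sigma_max, 1e-3, n_steps)
    for i in range(n_steps - 1):
        s, s_next = sigmas[i], sigmas[i + 1]
        step = s**2 - s_next**2                    # discretized g(t)^2 dt
        x = x + step * score(x, s)                 # reverse-time drift
        x = x + np.sqrt(step) * rng.normal(size=2) # diffusion term
    return x

samples = np.array([reverse_sde_sample() for _ in range(200)])
print(samples.mean(axis=0))
```

In SGSR the analytic score is replaced by a learned score network conditioned on collaborative signals, so the reverse process generates user social representations rather than toy Gaussian points.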
TokenRec: Learning to Tokenize ID for LLM-Based Generative Recommendations
IF 10.4 CAS Tier 2, Computer Science Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-08-19 DOI: 10.1109/TKDE.2025.3599265
Haohao Qu;Wenqi Fan;Zihuai Zhao;Qing Li
There is a growing interest in utilizing large language models (LLMs) to advance next-generation Recommender Systems (RecSys), driven by their outstanding language understanding and reasoning capabilities. In this scenario, tokenizing users and items becomes essential for ensuring seamless alignment of LLMs with recommendations. While studies have made progress in representing users and items using textual contents or latent representations, challenges remain in capturing high-order collaborative knowledge into discrete tokens compatible with LLMs and generalizing to unseen users/items. To address these challenges, we propose a novel framework called TokenRec, which introduces an effective ID tokenization strategy and an efficient retrieval paradigm for LLM-based recommendations. Our tokenization strategy involves quantizing the masked user/item representations learned from collaborative filtering into discrete tokens, thus achieving smooth incorporation of high-order collaborative knowledge and generalizable tokenization of users and items for LLM-based RecSys. Meanwhile, our generative retrieval paradigm is designed to efficiently recommend top-K items for users, eliminating the need for the time-consuming auto-regressive decoding and beam search processes used by LLMs, thus significantly reducing inference time. Comprehensive experiments validate the effectiveness of the proposed methods, demonstrating that TokenRec outperforms competitive benchmarks, including both traditional recommender systems and emerging LLM-based recommender systems.
IEEE Transactions on Knowledge and Data Engineering, vol. 37, no. 10, pp. 6216–6231. DOI: 10.1109/TKDE.2025.3599265. Pub Date: 2025-08-19.
Citations: 0
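The quantization step described above — turning a continuous user/item embedding into a short sequence of discrete tokens — can be sketched with residual quantization against a stack of codebooks. The codebooks here are random for illustration; in TokenRec they are learned from collaborative-filtering representations, and the function names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)

def rq_tokenize(vec, codebooks):
    """Map an embedding to one codebook index ("token") per level,
    quantizing the residual left over from the previous level."""
    tokens, residual = [], vec.copy()
    for cb in codebooks:
        dists = np.linalg.norm(cb - residual, axis=1)
        idx = int(np.argmin(dists))        # nearest code = discrete token
        tokens.append(idx)
        residual = residual - cb[idx]      # quantize what remains
    return tokens

def rq_reconstruct(tokens, codebooks):
    """Approximate the original embedding by summing the chosen codes."""
    return sum(cb[t] for cb, t in zip(codebooks, tokens))

dim, levels, codes = 8, 3, 64
# coarser-to-finer codebooks: each level covers a smaller scale
codebooks = [rng.normal(scale=1.0 / (2**l), size=(codes, dim)) for l in range(levels)]
emb = rng.normal(size=dim)
tokens = rq_tokenize(emb, codebooks)
approx = rq_reconstruct(tokens, codebooks)
print(tokens, np.linalg.norm(emb - approx))
```

The resulting token sequence is what an LLM can consume and emit in place of raw IDs, and a new user/item gets tokens immediately from its embedding, which is how this style of tokenization generalizes to unseen entities.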
FocusCores of Multilayer Graphs
IF 10.4 CAS Tier 2, Computer Science Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-08-12 DOI: 10.1109/TKDE.2025.3597995
Run-An Wang;Zhaonian Zou;Dandan Liu;Xudong Liu
Mining dense subgraphs on multilayer graphs offers the opportunity for more in-depth discoveries than classical dense subgraph mining on single-layer graphs. However, existing approaches fail to ensure the denseness of a discovered subgraph on the layers of interest to users while simultaneously drawing partial support for that denseness from the other layers. In this paper, we introduce a novel dense subgraph model called FocusCore (FoCore for short) for multilayer graphs, which pays more attention to the layers that users focus on. The FoCore decomposition problem, that is, identifying all nonempty FoCores in a multilayer graph, can be addressed by executing the peeling process with respect to all possible configurations of focus and background layers. Using the nice properties of FoCores, we devise an interleaved peeling algorithm and a vertex-centric algorithm for efficient FoCore decomposition. We further design a novel cache that minimizes the average retrieval time for an arbitrary FoCore without requiring full FoCore decomposition, which significantly improves efficiency in large-scale graph mining tasks. As an application, we propose a FoCore-decomposition-based algorithm that approximates the densest subgraph in a multilayer graph with a provable approximation guarantee. Extensive experiments on real-world datasets verify the effectiveness of the FoCore model and the efficiency of the proposed algorithms.
IEEE Transactions on Knowledge and Data Engineering, vol. 37, no. 10, pp. 5890–5904. DOI: 10.1109/TKDE.2025.3597995. Pub Date: 2025-08-12.
Citations: 0
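The peeling process mentioned above generalizes the classical single-layer core decomposition, which can be sketched in a few lines: repeatedly remove the minimum-degree vertex, and the running maximum of removal degrees gives each vertex's core number. This is the textbook single-layer building block, not the FoCore algorithm itself, which additionally peels with respect to focus/background layer configurations.

```python
from collections import defaultdict

def core_numbers(edges):
    """Core decomposition of an undirected graph by min-degree peeling."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    deg = {v: len(ns) for v, ns in adj.items()}
    core, remaining, k = {}, set(adj), 0
    while remaining:
        v = min(remaining, key=deg.get)   # peel the min-degree vertex
        k = max(k, deg[v])                # core number = running max degree
        core[v] = k
        remaining.remove(v)
        for u in adj[v]:                  # removing v lowers its neighbors' degrees
            if u in remaining:
                deg[u] -= 1
    return core

# a triangle with a pendant vertex: the triangle is the 2-core, the pendant is not
edges = [(1, 2), (2, 3), (1, 3), (3, 4)]
print(core_numbers(edges))
```

A FoCore-style decomposition would run a peeling of this kind once per configuration of focus and background layers, with degree constraints enforced on the focus layers and partial support counted from the background layers.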