
Latest publications from IEEE Transactions on Knowledge and Data Engineering

IMS: Incremental Max-P Regionalization With Statistical Constraints
IF 10.4 · CAS Tier 2 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-10-15 · DOI: 10.1109/TKDE.2025.3621843
Yunfan Kang;Yiyang Bian;Qinma Kang;Amr Magdy
Spatial regionalization is the process of grouping a set of spatial areas into spatially contiguous and homogeneous regions. This paper introduces the Incremental Max-P regionalization with statistical constraints (IMS) problem: a regionalization process that supports enriched user-defined constraints based on statistical aggregate functions and supports incremental updates. In addition to enabling richer constraints, it allows users to employ multiple constraints simultaneously, significantly extending the expressiveness and effectiveness of the existing regionalization literature. The IMS problem is NP-hard and significantly enriches existing regionalization problems. Such a major enrichment introduces several challenges in both feasibility and scalability. To address these challenges, we propose the FaCT algorithm, a three-phase heuristic approach that finds a feasible set of spatial regions satisfying the IMS constraints while scaling to larger datasets than the existing literature supports. FaCT supports local and global incremental updates when attribute values or constraints change. In addition, we incorporate the Iterated Greedy algorithm into FaCT to further improve solution quality for both the IMS problem and the classical max-p regions problem. Our extensive experimental evaluation demonstrates the effectiveness and scalability of our techniques on several real datasets.
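The feasibility side of the problem can be illustrated concretely. The sketch below is illustrative only, not the FaCT algorithm; the grid, adjacency structure, and constraint functions are invented for the example. It checks the two ingredients an IMS region must satisfy: spatial contiguity and user-defined statistical aggregate constraints.

```python
from collections import deque

def is_contiguous(region, adj):
    """A region is spatially contiguous if every area is reachable from a
    seed area through neighbours that are themselves inside the region."""
    region = set(region)
    seed = next(iter(region))
    seen, queue = {seed}, deque([seed])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v in region and v not in seen:
                seen.add(v)
                queue.append(v)
    return seen == region

def satisfies_constraints(region, attrs, constraints):
    """Each constraint is (aggregate_fn, lower_bound) over the region's
    attribute values, e.g. a minimum total population."""
    values = [attrs[a] for a in region]
    return all(fn(values) >= lb for fn, lb in constraints)

# Toy 2x2 grid of areas (0-1 on top, 2-3 below) with a population attribute.
adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
pop = {0: 10, 1: 20, 2: 30, 3: 40}
region = [0, 1, 3]
feasible = is_contiguous(region, adj) and satisfies_constraints(
    region, pop, [(sum, 50), (min, 5)])  # total pop >= 50, each area >= 5
```

A full regionalization heuristic would grow many such regions at once while maximizing their number, but every candidate assignment reduces to checks like these.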
IEEE Transactions on Knowledge and Data Engineering, vol. 38, no. 1, pp. 380-398. Citations: 0.
Survey of Natural Language Processing for Education: Taxonomy, Systematic Review, and Future Trends
IF 10.4 · CAS Tier 2 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-10-14 · DOI: 10.1109/TKDE.2025.3621181
Yunshi Lan;Xinyuan Li;Hanyue Du;Xuesong Lu;Ming Gao;Weining Qian;Aoying Zhou
Natural Language Processing (NLP) aims to analyze text or speech via techniques from computer science. It serves applications in domains such as healthcare, commerce, and education. In particular, NLP has been widely applied to the education domain, where its applications have enormous potential to help teaching and learning. In this survey, we review recent advances in NLP with a focus on solving problems relevant to the education domain. In detail, we begin by introducing the related background and the real-world educational scenarios to which NLP techniques could contribute. Then, we present a taxonomy of NLP in the education domain and highlight typical NLP applications, including question answering, question construction, automated assessment, and error correction. Next, we describe the task definitions, challenges, and corresponding cutting-edge techniques based on this taxonomy. In particular, LLM-based methods are discussed, given the wide usage of LLMs in diverse NLP applications. After that, we showcase some off-the-shelf demonstrations in this domain designed for educators or researchers. Finally, we conclude with five promising directions for future research: generalization over subjects and languages, deployed LLM-based systems for education, adaptive learning for teaching and learning, interpretability for education, and ethical considerations of NLP techniques.
IEEE Transactions on Knowledge and Data Engineering, vol. 38, no. 1, pp. 659-678. Citations: 0.
Secure and Practical Time Series Analytics With Mixed Model
IF 10.4 · CAS Tier 2 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-10-14 · DOI: 10.1109/TKDE.2025.3620577
Songnian Zhang;Hao Yuan;Hui Zhu;Jun Shao;Yandong Zheng;Fengwei Wang
Merging multi-source time series data in cloud servers significantly enhances the effectiveness of analyses. However, privacy concerns are hindering time series analytics in the cloud. In response, numerous secure time series analytics schemes have been designed to address these concerns. Unfortunately, existing schemes suffer from severe performance issues, making them impractical for real-world applications. In this work, we propose novel secure time series analytics schemes that break through the performance bottleneck by substantially improving both communication and computational efficiency without compromising security. To attain this, we open up a new technical roadmap that leverages the idea of a mixed model. Specifically, we design a non-interactive secure Euclidean distance protocol by tailoring homomorphic secret sharing to suit subtractive secret sharing. Additionally, we devise a different approach to securely computing the minimum of three elements, reducing computational and communication costs simultaneously. Moreover, we introduce a rotation concept, design a rotation-based hybrid comparison mode, and finally propose a fast secure top-$k$ protocol that dramatically reduces comparison complexity. With the above secure protocols, we propose a practical secure time series analytics scheme with exceptional performance, as well as a security-enhanced scheme that considers stronger adversaries. Formal security analyses demonstrate that our proposed schemes achieve the desired security requirements, while comprehensive experimental evaluations show that our schemes outperform the state-of-the-art in both computation and communication.
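As a toy illustration of the additive (subtractive) secret sharing that such protocols build on (a textbook sketch, not the paper's protocol), the snippet below shows the key property a secure Euclidean distance design exploits: shares can be subtracted locally, so each party obtains valid shares of $x - y$ without any interaction. Squaring the difference would additionally require a multiplication protocol such as homomorphic secret sharing, which is omitted here.

```python
import random

P = 2**61 - 1  # public modulus; all shares live in Z_P

def share(x):
    """Split x into two additive shares; each share alone is uniform noise."""
    r = random.randrange(P)
    return r, (x - r) % P

def reconstruct(s0, s1):
    return (s0 + s1) % P

# Linearity: each party subtracts its shares of x and y locally, with no
# interaction, and ends up holding a valid share of the difference x - y.
x, y = 42, 17
x0, x1 = share(x)
y0, y1 = share(y)
d0, d1 = (x0 - y0) % P, (x1 - y1) % P
difference = reconstruct(d0, d1)  # equals (x - y) mod P
```

The same linearity extends coordinate-wise to vectors, which is why the expensive part of a secure distance protocol is the squaring step, not the subtraction.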
IEEE Transactions on Knowledge and Data Engineering, vol. 38, no. 1, pp. 588-601. Citations: 0.
TowerDNA: Fast and Accurate Graph Retrieval With Dividing, Contrasting and Alignment
IF 10.4 · CAS Tier 2 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-10-14 · DOI: 10.1109/TKDE.2025.3621493
Junwei Yang;Yiyang Gu;Yifang Qin;Xiao Luo;Zhiping Xiao;Kangjie Zheng;Wei Ju;Xian-Sheng Hua;Ming Zhang
Graph retrieval (GR), a ranking procedure that sorts the graphs in a database by their relevance to a query graph in decreasing order, has wide applications across diverse domains, such as visual object detection and drug discovery. Existing GR approaches usually compare graph pairs at a detailed level and generate quadratic similarity scores. In realistic scenarios, conducting quadratic fine-grained comparisons is costly, yet coarse-grained comparisons result in performance loss. Moreover, label scarcity in real-world data brings extra challenges. To tackle these issues, we investigate a more realistic GR problem, namely efficient graph retrieval (EGR). Our key intuition is that, since realistic scenarios contain numerous underutilized unlabeled pairs, leveraging the additional information they provide lets us achieve a speed-up while simplifying the model without sacrificing performance. Following this intuition, we propose an efficient model called the Dual-Tower Model with Dividing, Contrasting and Alignment (TowerDNA). TowerDNA uses a GNN-based dual-tower model as a backbone to quickly compare graph pairs in a coarse-grained manner. In addition, to effectively utilize unlabeled pairs, TowerDNA first identifies confident pairs among them to expand the labeled dataset, then learns from the remaining unconfident pairs via graph contrastive learning with geometric correspondence. To integrate all semantics with reduced bias, TowerDNA generates prototypes using labeled pairs, which are aligned within both confident and unconfident pairs. Extensive experiments on diverse realistic datasets demonstrate that TowerDNA achieves performance comparable to fine-grained methods while providing a 10× speed-up.
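The efficiency argument for a dual-tower design can be sketched numerically (a toy stand-in, not TowerDNA itself: the mean-pooling encoder and random projection `W` are invented for illustration). Because each tower embeds a graph independently, database embeddings are precomputed once, and ranking a query costs a single matrix-vector product instead of a quadratic number of fine-grained pairwise comparisons.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(graph_feats, W):
    """Stand-in for a GNN tower: mean-pool node features, project, normalise."""
    h = graph_feats.mean(axis=0) @ W
    return h / np.linalg.norm(h)

# Hypothetical database of 1000 graphs, each an (n_nodes, 16) feature matrix.
W = rng.normal(size=(16, 8))
db = [rng.normal(size=(int(rng.integers(5, 20)), 16)) for _ in range(1000)]
db_emb = np.stack([encode(g, W) for g in db])  # precomputed offline, once

query = rng.normal(size=(12, 16))
q_emb = encode(query, W)
scores = db_emb @ q_emb        # one matrix-vector product scores all graphs
ranking = np.argsort(-scores)  # database indices, most relevant first
```

A fine-grained matcher would instead run a cross-graph comparison for every (query, candidate) pair, which is exactly the quadratic cost the dual-tower backbone avoids.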
IEEE Transactions on Knowledge and Data Engineering, vol. 38, no. 2, pp. 1364-1379. Citations: 0.
Efficient $k$-Plex Mining in Temporal Graphs
IF 10.4 · CAS Tier 2 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-10-13 · DOI: 10.1109/TKDE.2025.3620605
Yanping Wu;Renjie Sun;Xiaoyang Wang;Ying Zhang;Lu Qin;Wenjie Zhang;Xuemin Lin
A $k$-plex is a subgraph in which each vertex may miss edges to at most $k$ vertices, including itself. The $k$-plex model has many real-world applications, such as social network analysis and product recommendation. Previous studies of $k$-plexes mainly focus on static graphs. In reality, however, relationships between two entities often occur at specific timestamps, which can be modeled as temporal graphs. Directly extending the $k$-plex model may fail to find certain critical groups in temporal graphs that exhibit frequently occurring patterns. To fill this gap, we develop a novel model, named $(k,l)$-plex, which is a vertex set that exists in no fewer than $l$ timestamps, at each of which the induced subgraph is a $k$-plex. To identify practical results, we propose and investigate two important problems: large maximal $(k,l)$-plex (MalKLP) enumeration and maximum $(k,l)$-plex (MaxKLP) identification. For the MalKLP enumeration problem, a reasonable baseline method is first proposed by extending the Bron-Kerbosch (BK) framework. To overcome the baseline's limitations and scale to large graphs, we develop optimized strategies, including a novel graph reduction approach and search-branch pruning techniques. For the MaxKLP identification task, we first design a baseline method by extending the proposed enumeration framework; additionally, to accelerate the search, we develop a new search framework with efficient branch pruning rules and a refined graph reduction method. Finally, comprehensive experiments on 14 real-world datasets validate the efficiency and effectiveness of the proposed techniques.
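The $k$-plex and $(k,l)$-plex definitions translate directly into a feasibility check (a minimal sketch on a toy graph, not the paper's enumeration algorithms): every vertex of a $k$-plex $S$ must be adjacent to at least $|S| - k$ vertices of $S$, and a $(k,l)$-plex must pass that test in at least $l$ of the temporal snapshots.

```python
def is_k_plex(S, adj, k):
    """S is a k-plex iff every vertex of S is adjacent to at least |S| - k
    vertices of S (i.e. it misses at most k, counting itself)."""
    S = set(S)
    return all(len(adj[v] & S) >= len(S) - k for v in S)

def is_kl_plex(S, snapshots, k, l):
    """(k, l)-plex: S must induce a k-plex in at least l snapshot graphs."""
    return sum(is_k_plex(S, adj_t, k) for adj_t in snapshots) >= l

# Toy temporal graph: a 4-cycle present at both timestamps.
cycle = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
whole = {0, 1, 2, 3}
assert is_k_plex(whole, cycle, 2)      # each vertex misses 2 (incl. itself)
assert not is_k_plex(whole, cycle, 1)  # a 1-plex would have to be a clique
assert is_kl_plex(whole, [cycle, cycle], 2, 2)
```

The hard part the paper addresses is not this check but enumerating maximal such sets efficiently, which is why the BK-style branching with graph reduction and pruning is needed.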
IEEE Transactions on Knowledge and Data Engineering, vol. 37, no. 12, pp. 7105-7119. Citations: 0.
Interest-Aware Graph Contrastive Learning for Recommendation With Diffusion-Based Augmentation
IF 10.4 · CAS Tier 2 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-10-13 · DOI: 10.1109/TKDE.2025.3620600
Mengyuan Jing;Yanmin Zhu;Zhaobo Wang;Jiadi Yu;Feilong Tang
Graph Contrastive Learning (GCL) has recently garnered significant attention for enhancing recommender systems. Most existing GCL-based methods perturb the raw data graph to generate views, performing contrastive learning across these views to learn generalizable representations. However, most of these methods rely on data- or model-based augmentation techniques that may disrupt interest consistency. In this paper, we propose a novel interest-aware augmentation approach based on diffusion models to address this issue. Specifically, we leverage a conditional diffusion model to generate interest-consistent views by conditioning on node interaction information, ensuring that the generated views align with the interests of the nodes. Based on this augmentation method, we introduce DiffCL, a graph contrastive learning framework for recommendation. Furthermore, we propose an easy-to-hard generation strategy. By progressively adjusting the starting point of the reverse denoising process, this strategy further enhances effective contrastive learning. We evaluate DiffCL on three public real-world datasets, and results indicate that our method outperforms state-of-the-art techniques, demonstrating its effectiveness.
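A minimal sketch of the contrastive objective underlying GCL frameworks of this kind (the standard InfoNCE loss, not DiffCL's diffusion-based augmentation; the embeddings here are random stand-ins): each node's embedding in one view is pulled toward its counterpart in the other view and pushed away from every other node's embedding.

```python
import numpy as np

def info_nce(z1, z2, tau=0.2):
    """InfoNCE over two views: row i of z2 is the positive for row i of z1,
    every other row serves as a negative. Inputs are row-normalised (n, d)."""
    sim = z1 @ z2.T / tau                       # (n, n) similarity logits
    sim = sim - sim.max(axis=1, keepdims=True)  # numerical stability
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -float(np.mean(np.diag(log_prob)))   # mean over positive pairs

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 4))
z /= np.linalg.norm(z, axis=1, keepdims=True)

loss_aligned = info_nce(z, z)         # positives match across views
loss_mismatch = info_nce(z, z[::-1])  # positives scrambled: higher loss
```

In a full GCL pipeline, `z1` and `z2` would come from a GNN encoder applied to two augmented views of the interaction graph; the augmentation method is what DiffCL replaces with its interest-conditioned diffusion model.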
IEEE Transactions on Knowledge and Data Engineering, vol. 38, no. 1, pp. 414-427. Citations: 0.
OMCR: An Online Multivariate Forecaster for Cloud Resource Management
IF 10.4 · CAS Tier 2 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-10-08 · DOI: 10.1109/TKDE.2025.3619097
Xu Gao;Xiu Tang;Chang Yao;Sai Wu;Gongsheng Yuan;Wenchao Zhou;Feifei Li;Gang Chen
A precise workload forecaster is key to effective resource management, system scalability, and overall operational efficiency in cloud environments. However, real-world cloud systems frequently operate in dynamic and unpredictable settings, producing workloads that exhibit significant diversity and fluctuation. To address these problems, we introduce OMCR, a novel online multivariate forecaster for cloud resource management that overcomes the limitations of existing static forecasting methods through online learning. OMCR integrates long-term memory with a rapid-response mechanism for short-term changes in cloud systems, while also considering the impact of multivariate relationships on workload prediction. OMCR minimizes its reliance on historical data, thereby reducing training difficulty and maintaining lower prediction loss in the long run. OMCR also offers an adaptive approach to forecasting peak workloads over a given time span, which helps cloud resource management. Experimental results demonstrate the superior performance of our proposed framework compared to state-of-the-art methods in MAE and MSE metrics when forecasting cloud workloads.
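The online-learning principle can be sketched with a simple autoregressive forecaster trained by online gradient descent (an illustrative toy, not OMCR: the model order, learning rate, and drifting workload are invented for the example). Each new observation immediately refines the weights, so the forecaster tracks workload drift without retraining on the full history.

```python
import numpy as np

class OnlineARForecaster:
    """Autoregressive forecaster updated by online gradient descent on the
    squared error of each new observation."""
    def __init__(self, order=4, lr=0.01):
        self.w = np.zeros(order)
        self.lr = lr

    def predict(self, window):
        return float(self.w @ window)

    def update(self, window, actual):
        err = self.predict(window) - actual
        self.w -= self.lr * err * np.asarray(window)  # gradient of 0.5*err^2
        return err

# Drifting toy workload: the level shifts mid-stream; the model adapts.
rng = np.random.default_rng(1)
series = np.concatenate([np.full(200, 1.0), np.full(200, 3.0)])
series += 0.05 * rng.standard_normal(series.size)

f = OnlineARForecaster()
errs = [abs(f.update(series[t-4:t], series[t])) for t in range(4, series.size)]
```

Early prediction errors are large (the weights start at zero) and shrink as the stream is consumed, which is the behaviour an online forecaster relies on in non-stationary cloud settings.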
{"title":"OMCR: An Online Multivariate Forecaster for Cloud Resource Management","authors":"Xu Gao;Xiu Tang;Chang Yao;Sai Wu;Gongsheng Yuan;Wenchao Zhou;Feifei Li;Gang Chen","doi":"10.1109/TKDE.2025.3619097","DOIUrl":"https://doi.org/10.1109/TKDE.2025.3619097","url":null,"abstract":"A precise workload forecaster is the key to effective resource management, system scalability, and overall operational efficiency in cloud environments. However, real-world cloud systems frequently operate in dynamic and unpredictable settings, causing workloads that exhibit significant diversity and fluctuations. To address these problems, we introduce OMCR, a novel online multivariate forecaster for cloud resource management, that overcomes the limitations of existing static forecasting methods through online learning. OMCR integrates long-term memory with a rapid response mechanism to short-term changes in cloud systems, while also considering the impact of multivariate relationships on workload prediction. OMCR minimizes its reliance on historical data, thereby reducing training difficulty and maintaining lower prediction loss in the long run. OMCR also offers an adaptive approach to forecasting peak workloads in a certain time span, which helps cloud resource management. 
Experimental results demonstrate the superior performance of our proposed framework compared to state-of-the-art methods in MAE and MSE metrics when forecasting cloud workloads.","PeriodicalId":13496,"journal":{"name":"IEEE Transactions on Knowledge and Data Engineering","volume":"38 1","pages":"532-545"},"PeriodicalIF":10.4,"publicationDate":"2025-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145705895","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Private-Set-Intersection-Based Medical Data Sharing Scheme With Integrity Auditing for IoMT Cloud Storage Systems
IF 10.4, CAS Region 2 (Computer Science), Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. Pub Date: 2025-10-08. DOI: 10.1109/TKDE.2025.3619426
Zekun Li;Jinyong Chang;Bei Liang;Kaijing Ling;Yifan Dong;Yanyan Ji;Maozhi Xu
In recent years, the medical industry has been generating large amounts of data, and how to securely store and reliably share these medical data has been a hot research topic. Cloud storage technology can be applied in the medical industry to accommodate the rapid growth of medical data. However, cloud-based data storage and sharing systems face a series of security issues: the integrity of outsourced medical data may not be guaranteed, and malicious access between different medical institutions may leak users' privacy. This article proposes a system that simultaneously solves integrity auditing of medical data and secure data sharing between different medical institutions under a terminal-edge-cloud framework. Specifically, patients/doctors are treated as terminal users, medical institutions are viewed as edge nodes, and medical clouds form the central storage layer. In the data auditing process, a third-party auditor can verify the integrity of medical data in cloud storage. Moreover, different medical institutions use private-set-intersection technology to share the electronic medical data of their common users, while the data of users outside the intersection set need not be shared. Finally, security and performance analyses show that our proposed system is provably secure and achieves high computational and communication efficiency.
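The private-set-intersection step can be sketched with a toy Diffie-Hellman-style protocol: each party blinds the hashes of its user identifiers with a secret exponent, and only identifiers held by both parties yield matching double-blinded values. This is an illustrative assumption about the PSI primitive, not the paper's scheme, and the parameters below are nowhere near cryptographically secure:

```python
import hashlib
import secrets

P = 2**127 - 1  # toy prime modulus; a real PSI needs a proper crypto group

def _h(item: str) -> int:
    # Hash an identifier into the group (toy mapping).
    return int(hashlib.sha256(item.encode()).hexdigest(), 16) % P

def blind(items, secret):
    return {pow(_h(x), secret, P) for x in items}

def reblind(blinded, secret):
    return {pow(v, secret, P) for v in blinded}

def psi(set_a, set_b):
    """Toy DH-style PSI: h(x)^(a*b) == h(x)^(b*a), so double-blinded
    values collide exactly for common items. Illustrative only."""
    a = secrets.randbelow(P - 2) + 1
    b = secrets.randbelow(P - 2) + 1
    double_a = reblind(blind(set_a, a), b)   # A's items, blinded by a then b
    double_b = reblind(blind(set_b, b), a)   # B's items, blinded by b then a
    common = double_a & double_b
    # The simulation uses both secrets to map matches back to items;
    # a real protocol instead tracks which blinded value came from which item.
    return {x for x in set_a if pow(_h(x), a * b, P) in common}
```

Only the identifiers in the intersection are revealed; data of users outside the intersection never needs to be exchanged, matching the sharing behavior the abstract describes.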
{"title":"Private-Set-Intersection-Based Medical Data Sharing Scheme With Integrity Auditing for IoMT Cloud Storage Systems","authors":"Zekun Li;Jinyong Chang;Bei Liang;Kaijing Ling;Yifan Dong;Yanyan Ji;Maozhi Xu","doi":"10.1109/TKDE.2025.3619426","DOIUrl":"https://doi.org/10.1109/TKDE.2025.3619426","url":null,"abstract":"In recent years, the medical industry is generating a large amount of data. How to securely store and reliably share these medical data has been a hot research topic. Cloud storage technology can be applied to the medical industry to adapt to the rapid growth of medical data. However, cloud-based data storage and sharing systems face a series of security issues: whether the integrity of outsourced medical data can be guaranteed, and malicious access between different medical institutions may leak user’s privacy. This article proposes a system that simultaneously solves the integrity auditing of medical data and securely data sharing between different medical institutions under the terminal-edge-cloud framework. Specifically, patients/doctors are treated as terminal users, medical institutions are viewed as edge nodes, and medical clouds form the central storage layer. In the process of data auditing, third-party auditor can achieve integrity auditing of medical cloud storage data. Moreover, different medical institutions use private-set-intersection technology to share the common user’s electronic medical data, while for other users not in intersection set, their data does not need to be shared. 
Finally, security and performance analyses show that our proposed system is provable secure and has high computational and communication efficiency.","PeriodicalId":13496,"journal":{"name":"IEEE Transactions on Knowledge and Data Engineering","volume":"37 12","pages":"7402-7413"},"PeriodicalIF":10.4,"publicationDate":"2025-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145455947","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Learning From Graph-Graph Relationship: A New Perspective on Graph-Level Anomaly Detection
IF 10.4, CAS Region 2 (Computer Science), Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. Pub Date: 2025-10-07. DOI: 10.1109/TKDE.2025.3618929
Zhenyu Yang;Ge Zhang;Jia Wu;Jian Yang;Hao Peng;Pietro Lió
Graph-level anomaly detection (GLAD) aims to distinguish anomalous graphs that exhibit significant deviations from others. The graph-graph relationship, which reveals the deviation and similarity between graphs, offers global insight across the entire graph level, highlighting how anomalies diverge from normal graph patterns. Thus, understanding graph-graph relationships is critical to boosting models on GLAD tasks. However, existing deep GLAD algorithms rely heavily on Graph Neural Networks that primarily analyze individual graphs. These methods overlook the significance of graph-graph relationships in telling anomalies apart from normal graphs. In this paper, we propose a novel model for Graph-level Anomaly Detection using the Transformer technique, namely GADTrans. Specifically, GADTrans builds the transformer upon crucial subgraphs mined by a parametrized extractor to model precise graph-graph relationships. The learned graph-graph relationships help distinguish normal from anomalous graphs. In addition, a specific loss is introduced to guide GADTrans in highlighting the deviation between anomalous and normal graphs while underlining the similarities among normal graphs. GADTrans achieves model interpretability by delivering human-interpretable results, namely the learned graph-graph relationships and crucial subgraphs. Extensive experiments on six real-world datasets verify the effectiveness and superiority of GADTrans on GLAD tasks.
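A naive baseline conveys what "learning from graph-graph relationships" buys: embed each graph, then score a graph as anomalous by how far its embedding sits from the others. The mean-pool readout and centroid-distance scoring rule below are illustrative assumptions, not the GADTrans model:

```python
import numpy as np

def graph_embeddings(graphs):
    # Each graph is an (n_nodes, d) node-feature array; mean-pool
    # the nodes into a single d-dimensional graph embedding.
    return np.stack([g.mean(axis=0) for g in graphs])

def anomaly_scores(graphs):
    """Score each graph by its distance to the centroid of all graph
    embeddings -- a naive use of graph-graph relationships, not the
    transformer-on-subgraphs approach proposed in the paper."""
    emb = graph_embeddings(graphs)
    centroid = emb.mean(axis=0)
    return np.linalg.norm(emb - centroid, axis=1)
```

Even this crude global view separates an outlying graph from the rest, which is the intuition GADTrans refines with mined subgraphs and a transformer over graph-graph relationships.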
{"title":"Learning From Graph-Graph Relationship: A New Perspective on Graph-Level Anomaly Detection","authors":"Zhenyu Yang;Ge Zhang;Jia Wu;Jian Yang;Hao Peng;Pietro Lió","doi":"10.1109/TKDE.2025.3618929","DOIUrl":"https://doi.org/10.1109/TKDE.2025.3618929","url":null,"abstract":"Graph-level anomaly detection (GLAD) aims to distinguish anomalous graphs that exhibit significant deviations from others. The graph-graph relationship, revealing the deviation and similarity between graphs, offers global insights into the entire graph level for highlighting the anomalies’ divergence from normal graph patterns. Thus, understanding graph-graph relationships is critical to boosting models on GLAD tasks. However, existing deep GLAD algorithms heavily rely on Graph Neural Networks that primarily focus on analyzing individual graphs. These methods overlook the significance of graph-graph relationships in telling anomalies from normal graphs. In this paper, we propose a novel model for Graph-level Anomaly Detection using the Transformer technique, namely GADTrans. Specifically, GADTrans builds the transformer upon crucial subgraphs mined by a parametrized extractor, for modeling precise graph-graph relationships. The learned graph-graph relationships put effort into distinguishing normal and anomalous graphs. In addition, a specific loss is introduced to guide GADTrans in highlighting the deviation between anomalous and normal graphs while underlining the similarities among normal graphs. GADTrans achieves model interpretability by delivering human-interpretable results, which are learned graph-graph relationships and crucial subgraphs. 
Extensive experiments on six real-world datasets verify the effectiveness and superiority of GADTrans for GLAD tasks.","PeriodicalId":13496,"journal":{"name":"IEEE Transactions on Knowledge and Data Engineering","volume":"38 1","pages":"428-441"},"PeriodicalIF":10.4,"publicationDate":"2025-10-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145705928","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Learning Accurate Representation to Nonstandard Tensors via a Mode-Aware Tucker Network
IF 10.4, CAS Region 2 (Computer Science), Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. Pub Date: 2025-10-03. DOI: 10.1109/TKDE.2025.3617894
Hao Wu;Qu Wang;Xin Luo;Zidong Wang
A nonstandard tensor is frequently adopted to model a large-scale complex dynamic network. A Tensor Representation Learning (TRL) model enables extracting valuable knowledge from a dynamic network by learning a low-dimensional representation of a target nonstandard tensor. Nevertheless, the representation learning ability of existing TRL models is limited for a nonstandard tensor owing to their inability to accurately represent its specific nature, i.e., mode imbalance, high dimensionality, and incompleteness. To address this issue, this study innovatively proposes a Mode-Aware Tucker Network-based Tensor Representation Learning (MTN-TRL) model with three-fold ideas: a) designing a mode-aware Tucker network to accurately represent the imbalanced modes of a nonstandard tensor, b) building an MTN-based highly efficient TRL model that fuses a data-density-oriented modeling principle with an adaptive parameter learning scheme, and c) theoretically proving the MTN-TRL model's convergence. Extensive experiments on eight nonstandard tensors generated from real-world dynamic networks demonstrate that MTN-TRL significantly outperforms state-of-the-art models in terms of representation accuracy.
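A plain truncated HOSVD — the standard way to compute a Tucker decomposition — shows the kind of low-dimensional representation (core tensor plus per-mode factor matrices) that a Tucker-style model learns. The mode-aware network, density-oriented training, and handling of incompleteness from the paper are not reproduced here; this is only the classical baseline:

```python
import numpy as np

def unfold(t, mode):
    # Mode-m unfolding: move the chosen axis to the front and flatten.
    return np.moveaxis(t, mode, 0).reshape(t.shape[mode], -1)

def hosvd(t, ranks):
    """Truncated HOSVD (classical Tucker decomposition). Returns a core
    tensor of shape `ranks` and one orthonormal factor per mode.
    Illustrative baseline only -- not the mode-aware Tucker network."""
    factors = []
    for mode, r in enumerate(ranks):
        u, _, _ = np.linalg.svd(unfold(t, mode), full_matrices=False)
        factors.append(u[:, :r])
    core = t
    for mode, u in enumerate(factors):
        # Contract the core with U_m^T along each mode.
        core = np.moveaxis(np.tensordot(core, u, axes=(mode, 0)), -1, mode)
    return core, factors

def reconstruct(core, factors):
    t = core
    for mode, u in enumerate(factors):
        # Multiply the core back by U_m along each mode.
        t = np.moveaxis(np.tensordot(t, u, axes=(mode, 1)), -1, mode)
    return t
```

With full ranks the reconstruction is exact; truncating the ranks trades accuracy for a compact representation, which is the dial a TRL model tunes.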
{"title":"Learning Accurate Representation to Nonstandard Tensors via a Mode-Aware Tucker Network","authors":"Hao Wu;Qu Wang;Xin Luo;Zidong Wang","doi":"10.1109/TKDE.2025.3617894","DOIUrl":"https://doi.org/10.1109/TKDE.2025.3617894","url":null,"abstract":"A nonstandard tensor is frequently adopted to model a large-sale complex dynamic network. A Tensor Representation Learning (TRL) model enables extracting valuable knowledge form a dynamic network via learning low-dimensional representation of a target nonstandard tensor. Nevertheless, the representation learning ability of existing TRL models are limited for a nonstandard tensor due to its inability to accurately represent the specific nature of the nonstandard tensor, i.e., mode imbalance, high-dimension, and incompleteness. To address this issue, this study innovatively proposes a Mode-Aware Tucker Network-based Tensor Representation Learning (MTN-TRL) model with three-fold ideas: a) designing a mode-aware Tucker network to accurately represent the imbalanced mode of a nonstandard tensor, b) building an MTN-based high-efficient TRL model that fuses both data density-oriented modeling principle and adaptive parameters learning scheme, and c) theoretically proving the MTN-TRL model’s convergence. 
Extensive experiments on eight nonstandard tensors generating from real-world dynamic networks demonstrate that MTN-TRL significantly outperforms state-of-the-art models in terms of representation accuracy.","PeriodicalId":13496,"journal":{"name":"IEEE Transactions on Knowledge and Data Engineering","volume":"37 12","pages":"7272-7285"},"PeriodicalIF":10.4,"publicationDate":"2025-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145455807","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0