
Latest Publications in IEEE Transactions on Knowledge and Data Engineering

Numerical Data Collection Under Input-Discriminative Local Differential Privacy
IF 10.4 · CAS Zone 2 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-09-17 · DOI: 10.1109/TKDE.2025.3610932
Youwen Zhu;Shibo Dai;Pengfei Zhang;Xiqi Kuang
Input-discriminative local differential privacy (ID-LDP) protects user data whose inputs warrant different levels of protection, improving the utility of the estimated data compared to traditional LDP. However, existing ID-LDP methods target categorical data and cannot be directly applied to numerical data. In this paper, we propose a numerical data collection (NDC) framework with ID-LDP that provides discriminative protection for data with different inputs. The framework uses a piecewise mechanism to divide the numerical data into several segments and designs two perturbation methods that minimize the error of the mean estimated from the values submitted by users. We first create an NDC-UE method that encodes the raw data into a binary vector, setting the bit for the uploaded value to 1 and the rest to 0, then perturbing each bit with a given probability. We further propose an NDC-GRR algorithm that perturbs the numerical data under an optimal privacy budget. To reduce the complexity of NDC-GRR, we apply a greedy-algorithm-based spanner to shorten the computation time and improve the accuracy. Theoretical analysis proves that our schemes satisfy the definition of ID-LDP. Experimental results on two real-world datasets and a synthetic dataset show that the proposed schemes achieve lower mean squared error than the benchmarks.
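The unary-encoding step described for NDC-UE (one-hot encode the value into segments, then flip each bit with a given probability) can be sketched as follows. The bucket count and flip probabilities here are illustrative placeholders, not the paper's input-discriminative budget allocation:

```python
import random

def ue_perturb(value, domain_min, domain_max, n_buckets, p_keep=0.75, p_flip=0.25):
    """One-hot encode a numerical value over n_buckets segments, then flip
    each bit independently: a 1-bit stays 1 with probability p_keep, and a
    0-bit becomes 1 with probability p_flip."""
    width = (domain_max - domain_min) / n_buckets
    idx = min(int((value - domain_min) / width), n_buckets - 1)
    vector = [1 if i == idx else 0 for i in range(n_buckets)]
    return [
        bit if random.random() < (p_keep if bit else 1 - p_flip) else 1 - bit
        for bit in vector
    ]
```

With `p_keep = 1.0` and `p_flip = 0.0` the function degenerates to plain one-hot encoding, which is a convenient sanity check; in an actual LDP deployment the two probabilities would be derived from the privacy budget.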
{"title":"Numerical Data Collection Under Input-Discriminative Local Differential Privacy","authors":"Youwen Zhu;Shibo Dai;Pengfei Zhang;Xiqi Kuang","doi":"10.1109/TKDE.2025.3610932","DOIUrl":"https://doi.org/10.1109/TKDE.2025.3610932","url":null,"abstract":"Input-discriminative local differential privacy (ID-LDP) protects user data with a different range of values, which improves the utility of the estimated data compared to traditional LDP. However, the existing ID-LDP methods are used for categorical data and cannot be directly applied to numerical data. In this paper, we propose a numerical data collection (NDC) framework with ID-LDP to provide discriminative protection for the data with different inputs. This framework uses a piecewise mechanism to divide the numerical data into several segments and designs two perturbation methods to minimize the mean value of numerical data based on values submitted by users. We first create an NDC-UE method that encodes the raw data into a binary vector. This method sets the uploaded data bit as 1 and the rest as zero and perturbs each bit with a given probability. We further propose an NDC-GRR algorithm to perturb the numerical data with an optimal privacy budget. To reduce the complexity of NDC-GRR, we apply a greedy algorithm-based spanner to shorten the computation time and improve the accuracy. Theoretical analysis proves that our schemes satisfy the definition of ID-LDP. 
Experimental results based on two real-world datasets and a synthetic dataset show that the proposed schemes have less mean square error compared with the benchmarks.","PeriodicalId":13496,"journal":{"name":"IEEE Transactions on Knowledge and Data Engineering","volume":"37 12","pages":"7346-7361"},"PeriodicalIF":10.4,"publicationDate":"2025-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145455982","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Empowering Explainable Artificial Intelligence Through Case-Based Reasoning: A Comprehensive Exploration
IF 10.4 · CAS Zone 2 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-09-16 · DOI: 10.1109/TKDE.2025.3609825
Preeja Pradeep;Marta Caro-Martínez;Anjana Wijekoon
Artificial intelligence (AI) advancements have significantly broadened its application across various sectors, simultaneously elevating concerns regarding the transparency and understandability of AI-driven decisions. Addressing these concerns, this paper embarks on an exploratory journey into Case-Based Reasoning (CBR) and Explainable Artificial Intelligence (XAI), critically examining their convergence and the potential this synergy holds for demystifying the decision-making processes of AI systems. We employ the concept of an Explainable CBR (XCBR) system, which leverages CBR to acquire case-based explanations or to generate explanations using CBR methodologies, thereby enhancing the explainability of AI decisions. Although the literature contains few surveys on XCBR, recognizing its potential necessitates a detailed exploration of the principles for developing effective XCBR systems. We present a cycle-aligned perspective that examines how explainability functions can be embedded throughout the classical CBR phases: Retrieve, Reuse, Revise, and Retain. Drawing from a comprehensive literature review, we propose a set of six functional goals that reflect key explainability needs. These goals are mapped to six thematic categories, forming the basis of a structured XCBR taxonomy. The discussion extends to the broader challenges and prospects facing the CBR-XAI arena, setting the stage for future research directions. This paper offers design guidance and conceptual grounding for future XCBR research and system development.
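The Retrieve phase that the cycle-aligned perspective starts from can be illustrated with a minimal nearest-neighbor sketch. The case representation below (a `features` vector plus an `outcome` label) is a hypothetical simplification for illustration, not the paper's design:

```python
import math

def retrieve(query_features, case_base, k=1):
    """Retrieve phase of the CBR cycle: return the k stored cases nearest to
    the query by Euclidean distance. The retrieved cases double as
    example-based explanations ("this input resembles case X, which had
    outcome Y")."""
    return sorted(
        case_base,
        key=lambda case: math.dist(case["features"], query_features),
    )[:k]
```

An XCBR system would layer explanation functions on top of each phase; here the explanation is simply the retrieved precedent itself.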
{"title":"Empowering Explainable Artificial Intelligence Through Case-Based Reasoning: A Comprehensive Exploration","authors":"Preeja Pradeep;Marta Caro-Martínez;Anjana Wijekoon","doi":"10.1109/TKDE.2025.3609825","DOIUrl":"https://doi.org/10.1109/TKDE.2025.3609825","url":null,"abstract":"Artificial intelligence (AI) advancements have significantly broadened its application across various sectors, simultaneously elevating concerns regarding the transparency and understandability of AI-driven decisions. Addressing these concerns, this paper embarks on an exploratory journey into Case-Based Reasoning (CBR) and Explainable Artificial Intelligence (XAI), critically examining their convergence and the potential this synergy holds for demystifying the decision-making processes of AI systems. We employ the concept of Explainable CBR (XCBR) system that leverages CBR to acquire case-based explanations or generate explanations using CBR methodologies to enhance AI decision explainability. Though the literature has few surveys on XCBR, recognizing its potential necessitates a detailed exploration of the principles for developing effective XCBR systems. We present a cycle-aligned perspective that examines how explainability functions can be embedded throughout the classical CBR phases: Retrieve, Reuse, Revise, and Retain. Drawing from a comprehensive literature review, we propose a set of six functional goals that reflect key explainability needs. These goals are mapped to six thematic categories, forming the basis of a structured XCBR taxonomy. The discussion extends to the broader challenges and prospects facing the CBR-XAI arena, setting the stage for future research directions. 
This paper offers design guidance and conceptual grounding for future XCBR research and system development.","PeriodicalId":13496,"journal":{"name":"IEEE Transactions on Knowledge and Data Engineering","volume":"37 12","pages":"7120-7139"},"PeriodicalIF":10.4,"publicationDate":"2025-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11165042","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145455857","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Modeling Temporal Dependencies Within the Target for Long-Term Time Series Forecasting
IF 10.4 · CAS Zone 2 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-09-12 · DOI: 10.1109/TKDE.2025.3609415
Qi Xiong;Kai Tang;Minbo Ma;Ji Zhang;Jie Xu;Tianrui Li
Long-term time series forecasting (LTSF) is a critical task across diverse domains. Despite significant advancements in LTSF research, we identify a performance bottleneck in existing LTSF methods caused by the inadequate modeling of Temporal Dependencies within the Target (TDT). To address this issue, we propose a novel and generic temporal modeling framework, Temporal Dependency Alignment (TDAlign), which equips existing LTSF methods with TDT learning capabilities. TDAlign introduces two key innovations: 1) a loss function that aligns the change values between adjacent time steps in the predictions with those in the target, ensuring consistency with variation patterns, and 2) an adaptive loss balancing strategy that seamlessly integrates the new loss function with existing LTSF methods without introducing additional learnable parameters. As a plug-and-play framework, TDAlign enhances existing methods with minimal computational overhead, featuring only linear time complexity and constant space complexity relative to the prediction length. Extensive experiments on six strong LTSF baselines across seven real-world datasets demonstrate the effectiveness and flexibility of TDAlign. On average, TDAlign reduces baseline prediction errors by 1.47% to 9.19% and change value errors by 4.57% to 15.78%, highlighting its substantial performance improvements.
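The first of TDAlign's two innovations, aligning change values between adjacent time steps, can be sketched as a first-difference loss added to a base MSE. The fixed weight `lam` below stands in for the paper's adaptive balancing strategy:

```python
import numpy as np

def change_value_loss(pred, target):
    """Mean absolute error between the first differences (change values
    between adjacent time steps) of the prediction and the target."""
    return np.abs(np.diff(pred) - np.diff(target)).mean()

def combined_loss(pred, target, lam=0.5):
    """Base MSE plus the change-value alignment term. TDAlign balances the
    two terms adaptively without extra learnable parameters; a fixed lam
    is used here purely for illustration."""
    mse = ((pred - target) ** 2).mean()
    return mse + lam * change_value_loss(pred, target)
```

Both terms are linear in the prediction length, consistent with the complexity claim above.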
{"title":"Modeling Temporal Dependencies Within the Target for Long-Term Time Series Forecasting","authors":"Qi Xiong;Kai Tang;Minbo Ma;Ji Zhang;Jie Xu;Tianrui Li","doi":"10.1109/TKDE.2025.3609415","DOIUrl":"https://doi.org/10.1109/TKDE.2025.3609415","url":null,"abstract":"Long-term time series forecasting (LTSF) is a critical task across diverse domains. Despite significant advancements in LTSF research, we identify a performance bottleneck in existing LTSF methods caused by the inadequate modeling of Temporal Dependencies within the Target (TDT). To address this issue, we propose a novel and generic temporal modeling framework, Temporal Dependency Alignment (TDAlign), that equips existing LTSF methods with TDT learning capabilities. TDAlign introduces two key innovations: 1) a loss function that aligns the change values between adjacent time steps in the predictions with those in the target, ensuring consistency with variation patterns, and 2) an adaptive loss balancing strategy that seamlessly integrates the new loss function with existing LTSF methods without introducing additional learnable parameters. As a plug-and-play framework, TDAlign enhances existing methods with minimal computational overhead, featuring only linear time complexity and constant space complexity relative to the prediction length. Extensive experiments on six strong LTSF baselines across seven real-world datasets demonstrate the effectiveness and flexibility of TDAlign. 
On average, TDAlign reduces baseline prediction errors by <bold>1.47%</b> to <bold>9.19%</b> and change value errors by <bold>4.57%</b> to <bold>15.78%</b>, highlighting its substantial performance improvements.","PeriodicalId":13496,"journal":{"name":"IEEE Transactions on Knowledge and Data Engineering","volume":"37 12","pages":"7300-7314"},"PeriodicalIF":10.4,"publicationDate":"2025-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145455930","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Similarity and Dissimilarity Guided Co-Association Matrix Construction for Ensemble Clustering
IF 10.4 · CAS Zone 2 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-09-12 · DOI: 10.1109/TKDE.2025.3608721
Xu Zhang;Yuheng Jia;Mofei Song;Ran Wang
Ensemble clustering aggregates multiple weak clusterings to achieve a more accurate and robust consensus result. The Co-Association matrix (CA matrix) based method is the mainstream ensemble clustering approach: it constructs similarity relationships between sample pairs according to the weak clustering partitions to generate the final clustering result. However, existing methods neglect that the quality of a cluster is related to its size, i.e., a smaller cluster tends to have higher accuracy. Moreover, they do not consider the valuable dissimilarity information in the base clusterings, which can reflect the varying importance of sample pairs that are completely disconnected. To this end, we propose the Similarity and Dissimilarity Guided Co-association matrix (SDGCA) to achieve ensemble clustering. First, we introduce normalized ensemble entropy to estimate the quality of each cluster and construct a similarity matrix based on this estimation. Then, we employ a random walk to explore the high-order proximity of the base clusterings and construct a dissimilarity matrix. Finally, the adversarial relationship between the similarity matrix and the dissimilarity matrix is utilized to construct a promoted CA matrix for ensemble clustering. We compared our method with 13 state-of-the-art methods across 12 datasets, and the results demonstrate the superior clustering ability and robustness of the proposed approach.
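The plain co-association matrix that this line of work builds on can be computed as below; this sketch deliberately omits the paper's entropy-based quality weighting and random-walk dissimilarity matrix:

```python
import numpy as np

def co_association(partitions):
    """Build the CA matrix from base clusterings: entry (i, j) is the
    fraction of partitions in which samples i and j share a cluster.
    Each partition is a label array of length n."""
    partitions = np.asarray(partitions)  # shape: (m, n) for m base clusterings
    m, n = partitions.shape
    ca = np.zeros((n, n))
    for labels in partitions:
        # pairwise "same cluster" indicator for this partition
        ca += (labels[:, None] == labels[None, :]).astype(float)
    return ca / m
```

The resulting matrix can be fed to any similarity-based consensus step (e.g. hierarchical clustering); SDGCA instead refines it with cluster-quality and dissimilarity signals before consensus.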
{"title":"Similarity and Dissimilarity Guided Co-Association Matrix Construction for Ensemble Clustering","authors":"Xu Zhang;Yuheng Jia;Mofei Song;Ran Wang","doi":"10.1109/TKDE.2025.3608721","DOIUrl":"https://doi.org/10.1109/TKDE.2025.3608721","url":null,"abstract":"Ensemble clustering aggregates multiple weak clusterings to achieve a more accurate and robust consensus result. The Co-Association matrix (CA matrix) based method is the mainstream ensemble clustering approach that constructs the similarity relationships between sample pairs according the weak clustering partitions to generate the final clustering result. However, the existing methods neglect that the quality of cluster is related to its size, i.e., a cluster with smaller size tends to higher accuracy. Moreover, they also do not consider the valuable dissimilarity information in the base clusterings which can reflect the varying importance of sample pairs that are completely disconnected. To this end, we propose the Similarity and Dissimilarity Guided Co-association matrix (SDGCA) to achieve ensemble clustering. First, we introduce normalized ensemble entropy to estimate the quality of each cluster, and construct a similarity matrix based on this estimation. Then, we employ the random walk to explore high-order proximity of base clusterings to construct a dissimilarity matrix. Finally, the adversarial relationship between the similarity matrix and the dissimilarity matrix is utilized to construct a promoted CA matrix for ensemble clustering. 
We compared our method with 13 state-of-the-art methods across 12 datasets, and the results demonstrated the superior clustering ability and robustness of the proposed approach.","PeriodicalId":13496,"journal":{"name":"IEEE Transactions on Knowledge and Data Engineering","volume":"37 11","pages":"6694-6707"},"PeriodicalIF":10.4,"publicationDate":"2025-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145242590","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Next-Generation Database Interfaces: A Survey of LLM-Based Text-to-SQL
IF 10.4 · CAS Zone 2 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-09-12 · DOI: 10.1109/TKDE.2025.3609486
Zijin Hong;Zheng Yuan;Qinggang Zhang;Hao Chen;Junnan Dong;Feiran Huang;Xiao Huang
Generating accurate SQL from users’ natural language questions (text-to-SQL) remains a long-standing challenge due to the complexities involved in user question understanding, database schema comprehension, and SQL generation. Traditional text-to-SQL systems, which combine human engineering and deep neural networks, have made significant progress. Subsequently, pre-trained language models (PLMs) have been developed for text-to-SQL tasks, achieving promising results. However, as modern databases and user questions grow more complex, PLMs with a limited parameter size often produce incorrect SQL. This necessitates more sophisticated and tailored optimization methods, which in turn restricts the application of PLM-based systems. Recently, large language models (LLMs) have shown significant capabilities in natural language understanding as model scale increases. Thus, integrating LLM-based solutions can bring unique opportunities, improvements, and solutions to text-to-SQL research. In this survey, we provide a comprehensive review of existing LLM-based text-to-SQL studies. Specifically, we offer a brief overview of the technical challenges and evolutionary process of text-to-SQL. Next, we introduce the datasets and metrics designed to evaluate text-to-SQL systems. Subsequently, we present a systematic analysis of recent advances in LLM-based text-to-SQL. Finally, we summarize the field, discuss the remaining challenges, and suggest directions for future research.
{"title":"Next-Generation Database Interfaces: A Survey of LLM-Based Text-to-SQL","authors":"Zijin Hong;Zheng Yuan;Qinggang Zhang;Hao Chen;Junnan Dong;Feiran Huang;Xiao Huang","doi":"10.1109/TKDE.2025.3609486","DOIUrl":"https://doi.org/10.1109/TKDE.2025.3609486","url":null,"abstract":"Generating accurate SQL from users’ natural language questions (text-to-SQL) remains a long-standing challenge due to the complexities involved in user question understanding, database schema comprehension, and SQL generation. Traditional text-to-SQL systems, which combine human engineering and deep neural networks, have made significant progress. Subsequently, pre-trained language models (PLMs) have been developed for text-to-SQL tasks, achieving promising results. However, as modern databases and user questions grow more complex, PLMs with a limited parameter size often produce incorrect SQL. This necessitates more sophisticated and tailored optimization methods, which restrict the application of PLM-based systems. Recently, large language models (LLMs) have shown significant capabilities in natural language understanding as model scale increases. Thus, integrating LLM-based solutions can bring unique opportunities, improvements, and solutions to text-to-SQL research. In this survey, we provide a comprehensive review of existing LLM-based text-to-SQL studies. Specifically, we offer a brief overview of the technical challenges and evolutionary process of text-to-SQL. Next, we introduce the datasets and metrics designed to evaluate text-to-SQL systems. Subsequently, we present a systematic analysis of recent advances in LLM-based text-to-SQL. 
Finally, we make a summary and discuss the remaining challenges in this field and suggest expectations for future research directions.","PeriodicalId":13496,"journal":{"name":"IEEE Transactions on Knowledge and Data Engineering","volume":"37 12","pages":"7328-7345"},"PeriodicalIF":10.4,"publicationDate":"2025-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145455948","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Toward Effective and Transferable Detection for Multi-Modal Fake News in the Social Media Stream
IF 10.4 · CAS Zone 2 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-09-11 · DOI: 10.1109/TKDE.2025.3609045
Jingyi Xie;Jiawei Liu;Zheng-jun Zha
The rapid proliferation of multimedia fake news on social media has raised significant concerns in recent years. Existing studies on fake news detection predominantly adopt an instance-based paradigm, where the detector evaluates a single post to determine its veracity. Despite notable advancements achieved in this domain, we argue that the instance-based approach is misaligned with real-world deployment scenarios. In practice, detectors typically operate on servers that process incoming posts in temporal order, striving to assess their authenticity promptly. Instance-based detectors lack awareness of temporal information and of the contextual relationships between surrounding posts, and therefore fail to capture long-range dependencies from the timeline. To bridge this gap, we introduce a more practical stream-based multi-modal fake news detection paradigm, which assumes that social media posts arrive continuously over time and allows the utilization of previously seen posts to aid in the classification of incoming ones. To enable effective and transferable fake news detection under this novel paradigm, we propose maintaining historical knowledge as a collection of incremental high-level forgery patterns. Based on this principle, we design a novel framework called Incremental Forgery Pattern Learning and Clues Refinement (IPLCR). IPLCR incrementally learns high-level forgery patterns as the stream evolves, leveraging this knowledge to improve the detection of newly arrived posts. At the core of IPLCR is the Incremental Forgery Pattern Bank (IPB), which dynamically summarizes historical posts into a set of latent forgery patterns. IPB is designed to continuously incorporate timely knowledge and actively discard obsolete information, even during inference. When a new post arrives, IPLCR retrieves the most relevant forgery pattern knowledge from IPB and refines the clues for fake news detection.
The refined clues are subsequently incorporated into IPB to enrich its knowledge base. Extensive experiments validate IPLCR’s effectiveness as a robust stream-based detector. Moreover, IPLCR addresses several critical issues relevant to industrial applications, including seamless context transfer and efficient model upgrading, making it a practical solution for real-world deployment.
{"title":"Toward Effective and Transferable Detection for Multi-Modal Fake News in the Social Media Stream","authors":"Jingyi Xie;Jiawei Liu;Zheng-jun Zha","doi":"10.1109/TKDE.2025.3609045","DOIUrl":"https://doi.org/10.1109/TKDE.2025.3609045","url":null,"abstract":"The rapid proliferation of multimedia fake news on social media has raised significant concerns in recent years. Existing studies on fake news detection predominantly adopt an instance-based paradigm, where the detector evaluates a single post to determine its veracity. Despite notable advancements achieved in this domain, we argue that the instance-based approach is misaligned with real-world deployment scenarios. In practice, detectors typically operate on servers that process incoming posts in temporal order, striving to assess their authenticity promptly. Instance-based detectors lack awareness of temporal information and contextual relationships between surrounding posts, therefore fail to capture long-range dependencies from the timeline. To bridge this gap, we introduce a more practical stream-based multi-modal fake news detection paradigm, which assumes that social media posts arrive continuously over time and allows the utilization of previously seen posts to aid in the classification of incoming ones. To enable effective and transferable fake news detection under this novel paradigm, we propose maintaining historical knowledge as a collection of incremental high-level forgery patterns. Based on this principle, we design a novel framework called Incremental Forgery Pattern Learning and Clues Refinement (IPLCR). IPLCR incrementally learns high-level forgery patterns as the stream evolves, leveraging this knowledge to improve the detection of newly arrived posts. At the core of IPLCR is the Incremental Forgery Pattern Bank (IPB), which dynamically summarizes historical posts into a set of latent forgery patterns. 
IPB is designed to continuously incorporate timely knowledge and actively discard obsolete information, even during inference. When a new post arrives, IPLCR retrieves the most relevant forgery pattern knowledge from IPB and refines the clues for fake news detection. The refined clues are subsequently incorporated into IPB to enrich its knowledge base. Extensive experiments validate IPLCR’s effectiveness as a robust stream-based detector. Moreover, IPLCR addresses several critical issues relevant to industrial applications, including seamless context transfer and efficient model upgrading, making it a practical solution for real-world deployment.","PeriodicalId":13496,"journal":{"name":"IEEE Transactions on Knowledge and Data Engineering","volume":"37 11","pages":"6723-6737"},"PeriodicalIF":10.4,"publicationDate":"2025-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145242610","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Flexible Keyword-Aware Top-$k$ Route Search
IF 10.4 · CAS Zone 2 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-09-11 · DOI: 10.1109/TKDE.2025.3609302
Ziqiang Yu;Xiaohui Yu;Yueting Chen;Wei Liu;Anbang Song;Bolong Zheng
With the rise of Large Language Models (LLMs), tourists increasingly use them for route planning by entering keywords for attractions, instead of relying on traditional manual map services. LLMs provide generally reasonable suggestions but often fail to generate optimal plans that account for detailed user requirements, given the vast number of potential POIs and the possible routes formed by POI combinations within a real-world road network. In this case, a route-planning API could serve as an external tool, accepting a sequence of keywords and returning the top-$k$ best routes tailored to user requests. To address this need, this paper introduces the Keyword-Aware Top-$k$ Routes (KATR) query, which provides more flexible and comprehensive semantics for route planning and caters to various user preferences, including flexible POI visiting order, a flexible travel-distance budget, and personalized POI ratings. Subsequently, we propose an explore-and-bound paradigm that processes KATR queries efficiently by eliminating redundant candidates based on estimated score bounds from the global to the local level. Extensive experiments demonstrate our approach’s superior performance over existing methods across different scenarios.
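The explore-and-bound idea, pruning candidates whose optimistic score bound cannot beat the current $k$-th best, follows the generic top-$k$ branch-and-bound skeleton sketched below; `upper_bound` and `exact_score` are hypothetical callables standing in for the paper's global-to-local score bounds:

```python
import heapq

def top_k_routes(candidates, k, upper_bound, exact_score):
    """Maintain a size-k min-heap of exact scores; a candidate whose
    optimistic upper bound cannot beat the current k-th best score is
    pruned without computing its exact score."""
    heap = []  # min-heap of (score, route) pairs for the current top-k
    for route in candidates:
        if len(heap) == k and upper_bound(route) <= heap[0][0]:
            continue  # bound cannot beat the k-th best: prune
        score = exact_score(route)
        if len(heap) < k:
            heapq.heappush(heap, (score, route))
        elif score > heap[0][0]:
            heapq.heapreplace(heap, (score, route))
    return sorted(heap, reverse=True)
```

The tighter the upper bound, the more exact-score evaluations are skipped; with a trivial (infinite) bound this degenerates to exhaustive top-$k$ selection.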
Flexible Keyword-Aware Top-$k$ Route Search. Ziqiang Yu;Xiaohui Yu;Yueting Chen;Wei Liu;Anbang Song;Bolong Zheng. IEEE Transactions on Knowledge and Data Engineering, vol. 37, no. 12, pp. 7184-7198. Pub Date: 2025-09-11. DOI: 10.1109/TKDE.2025.3609302
Citations: 0
Uncertain Priors for Graphical Causal Models: A Multi-Objective Optimization Perspective
IF 10.4, CAS Tier 2 (Computer Science), Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. Pub Date: 2025-09-11. DOI: 10.1109/TKDE.2025.3608723
Zidong Wang;Xiaoguang Gao;Qingfu Zhang
Learning graphical causal models from observational data can effectively elucidate the underlying causal mechanism behind the variables. In the context of limited datasets, modelers often incorporate prior knowledge, assumed to be correct, as a penalty in single-objective optimization. However, this approach struggles to accommodate complex and uncertain priors effectively. This paper introduces UpCM, which tackles the issue from a multi-objective optimization perspective. Instead of focusing exclusively on the DAG as the optimization goal, UpCM methodically evaluates the effect of uncertain priors on specific structures, merging data-driven and knowledge-driven objectives. Utilizing the MOEA/D framework, it achieves a balanced trade-off between these objectives. Furthermore, since uncertain priors may introduce erroneous constraints, resulting in PDAGs lacking consistent extensions, the minimal non-consistent extension is explored. This extension, which separately incorporates positive and negative constraints, aims to approximate the true causality of the PDAGs. Experimental results demonstrate that UpCM achieves significant structural accuracy improvements compared to baseline methods. It reduces the SHD by 7.94%, 13.23%, and 12.8% relative to PC_stable, GES, and MAHC, respectively, when incorporating uncertain priors. In downstream inference tasks, UpCM outperforms domain-expert knowledge graphs, owing to its ability to learn explainable causal relationships that balance data-driven evidence with prior knowledge.
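The multi-objective view — keeping candidate graphs that trade off data fit against prior agreement, rather than collapsing both into one penalized score — can be illustrated by a generic Pareto-dominance filter. This is a minimal sketch of the underlying idea only, not the paper's MOEA/D-based procedure:

```python
def dominates(a, b):
    """a dominates b if a is no worse in every objective and strictly
    better in at least one (higher is better in both objectives here)."""
    return (all(x >= y for x, y in zip(a, b))
            and any(x > y for x, y in zip(a, b)))

def pareto_front(points):
    """Keep the candidates no other candidate dominates.

    Each point is a (data_fit_score, prior_agreement_score) pair for one
    candidate graph; the front contains every non-dominated trade-off.
    """
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

A candidate that is strictly worse on both objectives than some other candidate is dropped; everything else survives as a distinct trade-off between data evidence and prior knowledge.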
IEEE Transactions on Knowledge and Data Engineering, vol. 37, no. 12, pp. 7426-7439. DOI: 10.1109/TKDE.2025.3608723
Citations: 0
SandwichSketch: A More Accurate Sketch for Frequent Object Mining in Data Streams
IF 10.4, CAS Tier 2 (Computer Science), Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. Pub Date: 2025-09-09. DOI: 10.1109/TKDE.2025.3607691
Zhuochen Fan;Ruixin Wang;Zihan Jiang;Ruwen Zhang;Tong Yang;Sha Wang;Yuhan Wu;Ruijie Miao;Kaicheng Yang;Bin Cui
Frequent object mining has gained considerable interest in the research community and can be split into frequent item mining and frequent set mining, depending on the type of object. While existing sketch-based algorithms have made significant progress in addressing these two tasks concurrently, they also have notable limitations: they either support only software platforms with low throughput, or compromise accuracy for faster processing speed and better hardware compatibility. In this paper, we make a substantial stride towards supporting frequent object mining by designing SandwichSketch, which draws inspiration from sandwich making and proposes two techniques, double fidelity enhancement and hierarchical hot locking, to guarantee high fidelity on both tasks. We implement SandwichSketch on three platforms (CPU, Redis, and FPGA) and show that it enhances accuracy by $38.4\times$ and $5\times$ for the two tasks on three real-world datasets, respectively. Additionally, it supports a distributed measurement scenario with less than a 0.01% decrease in Average Relative Error (ARE) when the number of nodes increases from 1 to 16.
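As background on how sketch-based frequency estimation works, a textbook Count-Min sketch is shown below: hashed counters whose row-wise minimum gives a one-sided (over-)estimate of an item's frequency. SandwichSketch's actual structure (double fidelity enhancement, hierarchical hot locking) is different and not reproduced here:

```python
import hashlib

class CountMinSketch:
    """Textbook Count-Min sketch: depth rows of width counters."""

    def __init__(self, width=1024, depth=4):
        self.width, self.depth = width, depth
        self.table = [[0] * width for _ in range(depth)]

    def _index(self, item, row):
        # One independent-looking hash per row, derived by salting
        # the item with the row number.
        h = hashlib.blake2b(f"{row}:{item}".encode(), digest_size=8)
        return int.from_bytes(h.digest(), "big") % self.width

    def add(self, item, count=1):
        for row in range(self.depth):
            self.table[row][self._index(item, row)] += count

    def query(self, item):
        # Minimum over rows: never below the true frequency, and close
        # to it unless the item collides in every row.
        return min(self.table[row][self._index(item, row)]
                   for row in range(self.depth))
```

The estimate is always at least the true count, and at most the true count plus whatever colliding items added to the same cells.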
IEEE Transactions on Knowledge and Data Engineering, vol. 37, no. 11, pp. 6636-6650. DOI: 10.1109/TKDE.2025.3607691
Citations: 0
KnobCF: Uncertainty-Aware Knob Tuning
IF 10.4, CAS Tier 2 (Computer Science), Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. Pub Date: 2025-09-09. DOI: 10.1109/TKDE.2025.3608030
Yu Yan;Junfang Huang;Hongzhi Wang;Jian Geng;Kaixin Zhang;Tao Yu
Knob tuning aims to optimize database performance by searching for the most effective knob configuration under a given workload. Existing works suffer from two significant problems. First, knob tuning incurs many useless evaluations, even with diverse search methods, because knobs differ in their sensitivity under a given workload. Second, a single evaluation of a knob configuration may over- or underestimate it because of query performance uncertainty. To solve these problems, we propose a query uncertainty-aware knob classifier, called KnobCF, to enhance knob tuning. Our method has three contributions: (1) We propose uncertainty-aware configuration estimation to improve the tuning process. (2) We design a few-shot uncertainty estimator that requires no extra data collection, ensuring high efficiency in practical tasks. (3) We provide a flexible framework that can be integrated into existing knob tuners and DBMSs without modification. Our experiments on four open-source benchmarks demonstrate that our method effectively reduces useless evaluations and improves tuning results. Especially on TPCC, our method achieves competitive tuning results with only 60% to 70% of the time consumed by full workload evaluations.
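The idea of using an uncertainty-aware estimate to skip hopeless configurations can be sketched as follows. The estimator interface, the latency objective, and the margin rule are illustrative assumptions for this sketch, not KnobCF's actual classifier:

```python
def tune(configs, predict, benchmark, margin=1.0):
    """Evaluate only configs whose optimistic estimate could still win.

    predict(cfg)   -> (mean_latency_estimate, std); lower latency is better.
    benchmark(cfg) -> measured latency (the expensive full-workload run).
    A config is skipped when even mean - margin*std cannot beat the
    best latency measured so far.
    """
    best_cfg, best_lat = None, float("inf")
    for cfg in configs:
        mean, std = predict(cfg)
        if mean - margin * std >= best_lat:
            continue  # clearly hopeless: save the evaluation
        lat = benchmark(cfg)
        if lat < best_lat:
            best_cfg, best_lat = cfg, lat
    return best_cfg, best_lat
```

A config predicted to be far slower than the incumbent, with low uncertainty, never reaches the benchmark; a promising or highly uncertain config still gets a real evaluation.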
IEEE Transactions on Knowledge and Data Engineering, vol. 37, no. 12, pp. 7240-7254. DOI: 10.1109/TKDE.2025.3608030
Citations: 0