
Latest publications in IEEE Transactions on Knowledge and Data Engineering

Tensor Multi-Rank Constraint Guided Anchor-Wise Adaptive Alignment for Multi-View Clustering
IF 10.4 | CAS Tier 2 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-01-20 | DOI: 10.1109/TKDE.2026.3656199
Jun Wang;Miaomiao Li;Zhenglai Li;Hao Yu;Suyuan Liu;Dayu Hu;Chang Tang;Xinwang Liu
Anchor graph learning has become a widely used technique for significantly reducing the computational complexity of existing multi-view clustering methods. However, most existing approaches select anchors independently for each view and then generate the consensus graph by directly fusing all anchor graphs. This process overlooks the correspondence between anchor sets across different views, i.e., the column-order correspondence of the anchor graphs. To address this limitation, we propose a novel anchor-based tensor multi-rank constraint multi-view clustering method (TMC). Specifically, TMC captures the high-order structural information of the original data by constructing an anchor graph tensor and enforcing a multi-rank constraint to induce a block-diagonal structure. Additionally, to enhance anchor consistency across all views, we stack the anchor graph of each view into an anchor tensor and impose a low-rank constraint on it. In this way, the block-diagonal structure of each anchor graph maintains an approximate alignment between anchors. Furthermore, we provide a theoretical proof that the generated anchor graphs inherently exhibit a block-diagonal structure. Extensive experimental results on six multi-view datasets demonstrate that TMC outperforms existing state-of-the-art methods, highlighting its effectiveness in multi-view clustering tasks.
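As background for readers unfamiliar with the technique the abstract builds on: an anchor graph links each of n samples to a small set of m ≪ n anchors, so each view is summarized by an n × m bipartite graph whose columns must correspond across views before fusion. The sketch below is generic illustrative background only, not the proposed TMC algorithm; the k-nearest-anchor weighting and Gaussian kernel are assumptions of this example.

```python
import numpy as np

def anchor_graph(X, anchors, k=3):
    """Build a generic n x m anchor graph: connect each sample to its
    k nearest anchors, with row-normalized Gaussian-kernel weights.
    (Illustrative background only -- not the paper's TMC method.)"""
    # pairwise squared Euclidean distances, shape (n, m)
    d2 = ((X[:, None, :] - anchors[None, :, :]) ** 2).sum(-1)
    Z = np.zeros_like(d2)
    for i, row in enumerate(d2):
        nn = np.argsort(row)[:k]                         # k nearest anchors
        w = np.exp(-row[nn] / (row[nn].mean() + 1e-12))  # kernel weights
        Z[i, nn] = w / w.sum()                           # rows sum to 1
    return Z

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))                 # 100 samples, 5 features
A = X[rng.choice(100, size=10, replace=False)]  # pick 10 anchors
Z = anchor_graph(X, A)                          # shape (100, 10)
```

Fusing such per-view Z matrices is exactly where, as the abstract notes, the column-order correspondence between the anchor sets of different views matters.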
Volume 38, Issue 3, pp. 2015-2027.
Citations: 0
Light Shapley: Improving the Scalability of Equitable Data Utility Valuation
IF 10.4 | CAS Tier 2 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-01-19 | DOI: 10.1109/TKDE.2026.3651564
Zhiwei Li;Cheng Wang
Collaborative allocation effectively integrates multi-party data and improves the quality of collaborative data-driven decisions. Equitable data utility valuation facilitates accurately quantified contributions and stimulates collaborative engagement, which forms the foundation of collaborative allocation. The Shapley value is the dominant allocation scheme, making it the first choice for data utility valuation. However, calculating the exact Shapley value requires exponentially many utility function evaluations and factorially many marginal contribution calculations, which limits scalability for large datasets. Mainstream methods use various approximation techniques, including Monte Carlo sampling, lightweight model replacement, and stratified computation, to reduce computational costs. Nonetheless, these methods lack guaranteed theoretical bounds on approximation error when reducing computational cost. Finding the optimal trade-off between computational cost and approximation error is essential for practical data valuation. In this paper, we propose a stratified framework named Light Shapley for calculating Shapley values by incorporating quantization-aware training. For scenarios involving more players, we propose a cost-first method that achieves significant computational cost reductions while keeping the error within acceptable ranges. For scenarios with fewer players, we propose an error-first method that reduces the computational cost to less than half of the exact calculation while maintaining accuracy. Theoretical analysis and experimental results provide compelling evidence that Light Shapley balances computational cost and approximation error, enabling efficient and effective data utility valuation.
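To make the cost contrast in the abstract concrete, here is the textbook permutation view of the Shapley value next to its Monte Carlo approximation, the sampling baseline the paper contrasts with. This is a generic sketch with a toy utility function, not the Light Shapley algorithm itself.

```python
import itertools
import random

def exact_shapley(players, utility):
    """Exact Shapley values by averaging each player's marginal
    contribution over all |N|! orderings -- the factorial cost
    the abstract refers to."""
    phi = {p: 0.0 for p in players}
    perms = list(itertools.permutations(players))
    for perm in perms:
        coalition = []
        for p in perm:
            before = utility(frozenset(coalition))
            coalition.append(p)
            phi[p] += utility(frozenset(coalition)) - before
    return {p: v / len(perms) for p, v in phi.items()}

def mc_shapley(players, utility, samples=2000, seed=0):
    """Monte Carlo approximation: sample random orderings instead of
    enumerating all of them (the generic sampling baseline)."""
    rng = random.Random(seed)
    phi = {p: 0.0 for p in players}
    for _ in range(samples):
        perm = list(players)
        rng.shuffle(perm)
        coalition = []
        for p in perm:
            before = utility(frozenset(coalition))
            coalition.append(p)
            phi[p] += utility(frozenset(coalition)) - before
    return {p: v / samples for p, v in phi.items()}

# Toy utility: a coalition's value is the square of its size.
u = lambda S: len(S) ** 2
players = list("abcd")
exact = exact_shapley(players, u)  # symmetric game: each gets 16/4 = 4.0
```

Both variants satisfy the efficiency property (the values sum to u(N) = 16), because each permutation's marginal contributions telescope; the Monte Carlo version trades exactness for far fewer utility evaluations.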
Volume 38, Issue 3, pp. 1826-1842.
Citations: 0
Unraveling Spatio-Temporal Foundation Models via the Pipeline Lens: A Comprehensive Review
IF 10.4 | CAS Tier 2 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-01-12 | DOI: 10.1109/TKDE.2026.3651536
Yuchen Fang;Hao Miao;Yuxuan Liang;Liwei Deng;Yue Cui;Ximu Zeng;Yuyang Xia;Yan Zhao;Torben Bach Pedersen;Christian S. Jensen;Xiaofang Zhou;Kai Zheng
Spatio-temporal data proliferates in numerous real-world domains, such as transportation, weather, and energy. Spatio-temporal deep learning models aim to exploit useful patterns in such data to support tasks like prediction, imputation, and anomaly detection. However, previous one-to-one deep learning models designed for specific tasks typically require separate training for each use case, leading to increased computational and storage costs. To address this issue, one-to-many spatio-temporal foundation models have emerged, offering a unified framework capable of solving multiple spatio-temporal tasks. These foundation models achieve remarkable success by learning general knowledge from spatio-temporal data or by transferring the general capabilities of pre-trained language models. While previous surveys have explored spatio-temporal data and methodologies separately, they have lacked a comprehensive examination of how foundation models are designed, selected, pre-trained, and adapted. As a result, the overall pipeline for spatio-temporal foundation models remains unclear. To bridge this gap, we provide an up-to-date review of spatio-temporal foundation models from the pipeline perspective. The pipeline begins with an introduction to different types of spatio-temporal data, followed by details of data preprocessing and embedding techniques. It then presents a novel data-property taxonomy that divides existing methods according to data sources and dependencies, supporting efficient and effective model design and selection for researchers. On this basis, we further illustrate the training objectives of primitive models, as well as the adaptation techniques of transferred models. Overall, our survey provides a clear and structured pipeline for understanding the connections between the core elements of spatio-temporal foundation models while guiding researchers to get started quickly.
Additionally, we introduce emerging opportunities such as multi-objective training in the field of spatio-temporal foundation models, providing valuable insights for researchers and practitioners.
Volume 38, Issue 3, pp. 2040-2063.
Citations: 0
PRISM: Link Prediction in Attributed Networks With Uncertain Modalities
IF 10.4 | CAS Tier 2 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-01-12 | DOI: 10.1109/TKDE.2026.3652729
Ruohan Yang;Muhammad Asif Ali;Huan Wang;Zhongfei Zhang;Junyang Chen;Di Wang
Link prediction for attributed graphs has garnered significant attention due to its ability to enhance predictive performance by leveraging multi-modal node attributes. However, real-world challenges such as privacy concerns, content restrictions, and attribute constraints often leave nodes with varying degrees of missing modalities in their attributes, significantly limiting the effectiveness of existing approaches. Motivated by this, we propose a model for link PRediction in attrIbuted networkS with uncertain Modalities (PRISM), which learns shared representations across various missing-modality scenarios through dual-level adversarial training. PRISM comprises four modules: a GCN extractor, an adversarial extractor, an attentive fusion module, and an adaptive aggregator. The GCN extractor leverages graph convolutional networks (GCN) to extract fundamental representations from the network topology. The adversarial extractor employs dual-level adversarial training to acquire shared representations across various multi-modal scenarios at the node level and link level, respectively. The attentive fusion module applies a multi-head attention mechanism to integrate the shared representations and the fundamental representations. The adaptive aggregator comprehensively considers both node-level and link-level representations to predict the existence of links. Experimental evaluation on real-world datasets demonstrates that PRISM significantly outperforms existing state-of-the-art link prediction methods for multi-modal attributed graphs under missing modalities, improving the Recall@50 metric (R@50) by up to 38.79%.
Volume 38, Issue 3, pp. 1919-1931.
Citations: 0
Modular Model Adaptation for Online Learning in Streaming Text Classification
IF 10.4 | CAS Tier 2 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-01-12 | DOI: 10.1109/TKDE.2026.3651427
Min-Seon Kim;Ling Liu;Hyuk-Yoon Kwon
The dynamic nature of streaming data often introduces distribution shifts that challenge typical text classification models. This paper proposes an online learning framework tailored for streaming text classification under distribution shifts. First, we decompose a neural network-based text classification model into distinct modules and analyze the varying impact of updating these modules under different types of shifts. Based on this insight, we define three novel indicators to efficiently measure the extent of distribution shifts without evaluating the entire model. These indicators enable predictive models that dynamically optimize module update strategies, balancing learning efficiency and accuracy in real time. To the best of our knowledge, this is the first approach to systematically adapt model updates according to a trade-off between efficiency and accuracy in online text classification. Extensive experiments on real-world streaming datasets demonstrate the effectiveness of our method, which consistently outperforms both static update strategies and state-of-the-art online text classification models.
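The abstract does not detail the paper's three shift indicators, but a generic, model-light indicator of the kind it alludes to can be computed from prediction scores alone. A common choice is the Population Stability Index (PSI), shown here purely as an illustration; its use as a stand-in for the paper's indicators is this example's assumption.

```python
import math
from collections import Counter

def psi(expected, actual, bins=10):
    """Population Stability Index between two score samples in [0, 1]:
    a generic distribution-shift indicator computed without evaluating
    a full model (illustrative; not the paper's proposed indicators)."""
    def hist(xs):
        # smoothed histogram proportions over equal-width bins
        c = Counter(min(int(x * bins), bins - 1) for x in xs)
        return [(c.get(b, 0) + 1e-6) / (len(xs) + bins * 1e-6)
                for b in range(bins)]
    e, a = hist(expected), hist(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

ref = [0.1, 0.2, 0.2, 0.3, 0.8, 0.9] * 50        # reference window
same = list(ref)                                  # no shift
shifted = [min(x + 0.4, 0.99) for x in ref]       # shifted window
```

Thresholding such an indicator per incoming window is one cheap way to decide when a module update is worth its cost.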
Volume 38, Issue 3, pp. 1843-1856.
Citations: 0
Groundhog: Accelerating Spatio-Temporal Data Analytics With Fine-Grained In-Storage Processing
IF 10.4 | CAS Tier 2 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-01-05 | DOI: 10.1109/TKDE.2026.3650857
Yang Guo;Tianyu Wang;Zizhan Chen;Zili Shao
With the rapid growth of mobile devices and applications, prodigious amounts of spatio-temporal data are generated constantly. To process these data for applications like traffic forecasting, existing spatio-temporal systems rely on the move-data-to-computation paradigm. However, this approach incurs significant data movement overhead between hosts and storage devices, particularly when a spatio-temporal query is executed on a non-preferred data layout or when the query inherently returns a small result. To address this issue, this work introduces Groundhog, an efficient in-storage computing technique designed specifically for spatio-temporal queries, aimed at reducing unnecessary data movement and computation. Groundhog introduces three key designs for efficient in-storage computing: (i) a self-contained, segment-based storage model, which is lightweight for in-storage computing and enables fine-grained pruning for spatio-temporal queries; (ii) a set of fine-grained techniques to optimize spatio-temporal query processing inside storage devices; and (iii) an in-storage-computing-aware query planner, which offloads spatio-temporal queries in a fine-grained manner using a cost-based approach. We implemented Groundhog on real hardware and demonstrated how to apply fine-grained techniques to accelerate various spatio-temporal queries. Extensive experiments conducted on real-world datasets demonstrate that Groundhog achieves significant performance improvements, with latency reductions of up to 81% for widely used spatio-temporal queries compared to host computing solutions.
Volume 38, Issue 3, pp. 1798-1812.
Citations: 0
Secure Multi-Character Searchable Encryption Supporting Rich Search Functionalities
IF 10.4 | CAS Tier 2 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-01-05 | DOI: 10.1109/TKDE.2025.3650082
Qing Wang;Donghui Hu;Meng Li;Yan Qiao;Guomin Yang;Mauro Conti
Wildcard Keyword Searchable Encryption (WKSE) has grown into a ubiquitous tool. It enables clients to search for desired files with wildcard expressions. Although promising, previous schemes confront three barriers: (1) an adversary can launch a correlation attack to acquire the similarity between keywords; (2) WKSE schemes exhibit false positives, which can lead to incorrect search results; and (3) existing feature extraction strategies limit the flexibility of search expressions. In this paper, we propose a Multi-Character Searchable Encryption scheme (MCSE) that overcomes these barriers. To resist correlation attacks, we design a randomize-pad model to encrypt the vector. To eradicate false positives, we apply the vector space model and complete feature extraction strategies so that a feature set uniquely identifies a keyword or expression. To enhance search flexibility, we introduce three distinct feature extraction strategies, for keyword expressions, wildcard expressions, and logical expressions, enabling effective multi-character search. These strategies enable indexes to accommodate searches over diverse expressions. Finally, we prove that MCSE is indistinguishable against chosen-feature attacks and implement MCSE on two real datasets. Compared with state-of-the-art schemes, the experimental results show that MCSE achieves good performance.
Volume 38, Issue 3, pp. 1958-1972.
Citations: 0
STORM: Exploiting Spatiotemporal Continuity for Trajectory Similarity Learning in Road Networks
IF 10.4 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2026-01-01 DOI: 10.1109/TKDE.2025.3650227
Jialiang Li;Hua Lu;Cyrus Shahabi
Trajectory similarity in road networks is pivotal for numerous applications in transportation, urban planning, and ridesharing. However, because trajectories vary in length, employing similarity metrics directly on raw trajectory data (e.g., DTW (Yi et al., 1998)) becomes impractical at scale. Therefore, current research primarily revolves around applying deep learning to embed trajectories into vector representations, i.e., embeddings, enabling the application of simpler (and indexable) similarity metrics such as Euclidean distance. Existing research either embeds trajectories independently of the downstream tasks, or tailors the embedding specifically to a designated similarity metric. While the former offers versatility and allows for easy fine-tuning to accommodate various metrics, the latter typically yields more effective results but necessitates reconfiguration for different, yet similar, metrics. Moreover, both approaches neglect the intrinsic spatiotemporal continuity in trajectory data, resulting in suboptimal trajectory modeling. Our objective is to address these modeling limitations and get the best of both worlds. Initially, we generate an embedding through pre-training, decoupled from any particular similarity metric. Subsequently, through a meticulous yet less complex fine-tuning process, we enhance the embedding to capture the nuances of a designated similarity metric. Moreover, a significant aspect of our approach lies in trajectory modeling that captures spatiotemporal continuity, consisting mainly of a trajectory-oriented road-segment embedding and a Transformer encoder enhanced by the spatiotemporal semantics inherent in road-network-constrained trajectories. Our experimental results demonstrate the superiority of our approach in approximating multiple trajectory similarity metrics over existing state-of-the-art models from both categories of approaches.
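To see why raw-trajectory metrics are costly, the DTW distance the abstract cites can be sketched as a quadratic dynamic program (a plain textbook DTW over 1-D sequences, not STORM's road-network setting): comparing two trajectories of lengths n and m costs O(nm), whereas Euclidean distance between fixed-size embeddings costs only O(d).

```python
# Minimal dynamic-time-warping distance between two 1-D sequences,
# included to illustrate the O(n*m) cost that makes direct trajectory
# comparison impractical at scale.

def dtw(a, b):
    n, m = len(a), len(b)
    INF = float("inf")
    # cost[i][j] = DTW distance between prefixes a[:i] and b[:j]
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # stretch a
                                 cost[i][j - 1],      # stretch b
                                 cost[i - 1][j - 1])  # advance both
    return cost[n][m]

print(dtw([1, 2, 3], [1, 2, 2, 3]))  # 0.0: the repeated 2 aligns at no cost
```

Embedding-based methods trade this pairwise dynamic program for a one-time encoding cost, after which similarity search can use indexable vector distances.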
J. Li, H. Lu, and C. Shahabi, "STORM: Exploiting Spatiotemporal Continuity for Trajectory Similarity Learning in Road Networks," IEEE Transactions on Knowledge and Data Engineering, vol. 38, no. 3, pp. 1986-2000, 2026, doi: 10.1109/TKDE.2025.3650227.
Citations: 0
Order-Preserving Pattern Matching: Review
IF 10.4 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-12-31 DOI: 10.1109/TKDE.2025.3649808
Yu Zhou;Feng Wang;Yongning Tang
Order-preserving pattern matching (OPPM) is a specialized area within the domain of pattern recognition and string matching. This specialized area is dedicated to identifying patterns in sequences where the intrinsic order of elements is crucially important. This comprehensive review provides an in-depth analysis of diverse order-preserving pattern matching techniques, focusing on their algorithms and methodologies. Particular attention is paid to the challenges researchers face in preserving order during pattern matching. The review also evaluates the performance and scalability of various techniques to handle large-scale datasets. By discussing the current state of OPPM research, we identify gaps, opportunities, and potential avenues for future exploration. Through this exploration, we aim to contribute valuable insights that will guide researchers and practitioners in advancing the frontiers of OPPM research, shaping the trajectory of this field in the coming years.
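A brute-force matcher makes the OPPM problem concrete (an illustrative baseline only; the algorithms surveyed in the review achieve far better bounds): a pattern occurs at a text position when the window there is order-isomorphic to it, i.e., both share the same relative-order signature. The example assumes distinct values within each window; handling ties needs extra care.

```python
# Naive order-preserving pattern matching: compare relative-order
# signatures of the pattern and each text window. Quadratic overall;
# shown only to pin down the problem definition.

def order_signature(seq):
    """Rank of each element within the sequence (assumes distinct values)."""
    order = sorted(range(len(seq)), key=lambda i: seq[i])
    sig = [0] * len(seq)
    for rank, i in enumerate(order):
        sig[i] = rank
    return tuple(sig)

def opp_matches(text, pattern):
    """All positions where text's window is order-isomorphic to pattern."""
    m, sig = len(pattern), order_signature(pattern)
    return [i for i in range(len(text) - m + 1)
            if order_signature(text[i:i + m]) == sig]

# [33, 40, 25] has the shape "mid, high, low"; that shape occurs at
# positions 0 and 3 of the text, regardless of the actual values.
print(opp_matches([10, 22, 5, 30, 45, 12, 50], [33, 40, 25]))  # [0, 3]
```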
Y. Zhou, F. Wang, and Y. Tang, "Order-Preserving Pattern Matching: Review," IEEE Transactions on Knowledge and Data Engineering, vol. 38, no. 3, pp. 1885-1904, 2026, doi: 10.1109/TKDE.2025.3649808.
Citations: 0
SLEADE: Disagreement-Based Semi-Supervised Learning for Sparsely Labeled Evolving Data Streams
IF 10.4 CAS Tier 2 (Computer Science) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-12-23 DOI: 10.1109/TKDE.2025.3647050
Heitor Murilo Gomes;Jesse Read;Maciej Grzenda;Bernhard Pfahringer;Albert Bifet
Semi-supervised learning (SSL) problems are challenging, appear in many domains, and are particularly relevant to streaming applications, where data are abundant but labels are not. The problem tackled here is classification over an evolving data stream where labels are rare and distributed randomly. We propose SLEADE (Stream LEArning by Disagreement Ensemble), a novel method that exploits disagreement-based learning and unsupervised drift detection to leverage unlabeled data during training. SLEADE uses pseudo-labeled instances to augment the training set of each member of an ensemble using a "majority trains the minority" scheme. The impact of the pseudo-labeled data is controlled by a weighting function that considers the confidence of the prediction attributed by the ensemble members. SLEADE exploits unsupervised drift detection, which allows the ensemble to respond to changes. We present several experiments using real and synthetic data to illustrate the benefits and limitations of SLEADE compared to existing algorithms.
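The "majority trains the minority" idea can be sketched as follows (a simplified illustration under assumed interfaces, not the authors' SLEADE implementation; `train_fn`, its `weight` parameter, and the confidence threshold are hypothetical): members that disagree with a sufficiently confident majority vote are trained on the majority's pseudo-label, weighted by that confidence.

```python
# Sketch of one pseudo-labeling step for an unlabeled stream instance.
# Assumptions (not from the paper): members are callables returning a
# label, train_fn(member, x, y, weight) performs an incremental update,
# and a fixed confidence threshold gates pseudo-labeling.

from collections import Counter

def pseudo_label_step(members, x, train_fn, min_confidence=0.6):
    """Returns the pseudo-label used for x, or None if the ensemble's
    majority vote is not confident enough to train on."""
    votes = [m(x) for m in members]
    label, count = Counter(votes).most_common(1)[0]
    confidence = count / len(members)
    if confidence < min_confidence:
        return None  # ensemble too uncertain: skip this instance
    for member, vote in zip(members, votes):
        if vote != label:  # the minority is trained on the majority's label
            train_fn(member, x, label, weight=confidence)
    return label
```

Weighting updates by the majority's confidence means near-unanimous votes push the minority harder than marginal ones, which mirrors the confidence-weighted impact described in the abstract.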
H. M. Gomes, J. Read, M. Grzenda, B. Pfahringer, and A. Bifet, "SLEADE: Disagreement-Based Semi-Supervised Learning for Sparsely Labeled Evolving Data Streams," IEEE Transactions on Knowledge and Data Engineering, vol. 38, no. 3, pp. 1973-1985, 2026, doi: 10.1109/TKDE.2025.3647050.
Citations: 0