
Latest Publications from ACM Transactions on Knowledge Discovery from Data

Building Shortcuts between Distant Nodes with Biaffine Mapping for Graph Convolutional Networks
IF 3.6 | CAS Tier 3, Computer Science | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-03-01 | DOI: 10.1145/3650113
Acong Zhang, Jincheng Huang, Ping Li, Kai Zhang

Multiple recent studies reveal a paradox in graph convolutional networks (GCNs): shallow architectures limit the capability of learning information from high-order neighbors, while deep architectures suffer from over-smoothing or over-squashing. To enjoy the simplicity of shallow architectures while overcoming their limited neighborhood extension, in this work we introduce a biaffine technique to improve the expressiveness of graph convolutional networks with a shallow architecture. The core design of our method is to learn, for each node, a direct dependency on its long-distance neighbors, with which one-hop message passing alone suffices to capture rich information for node representation. In addition, we propose a multi-view contrastive learning method to exploit the representations learned from long-distance dependencies. Extensive experiments on nine graph benchmark datasets suggest that the shallow biaffine graph convolutional network (BAGCN) significantly outperforms state-of-the-art GCNs (with deep or shallow architectures) on semi-supervised node classification. We further verify the effectiveness of the biaffine design in node representation learning and the consistency of its performance across different sizes of training data.
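The abstract does not spell out the biaffine mapping itself; in its generic form (as used, e.g., in biaffine dependency parsing), a biaffine scorer combines two node representations through a learned interaction matrix, with a constant 1 appended to each vector so that the linear and bias terms fold into a single bilinear product. A minimal sketch with random, hypothetical weights:

```python
import numpy as np

def biaffine_score(h_i, h_j, U):
    """Generic biaffine form: s = [h_i; 1]^T U [h_j; 1].

    h_i, h_j : (d,) node representations
    U        : (d+1, d+1) learned interaction matrix
    Appending a constant 1 folds the linear and bias terms
    into the single bilinear product.
    """
    hi = np.append(h_i, 1.0)
    hj = np.append(h_j, 1.0)
    return hi @ U @ hj

rng = np.random.default_rng(0)
d = 4
U = rng.normal(size=(d + 1, d + 1))   # hypothetical learned weights
h1, h2 = rng.normal(size=d), rng.normal(size=d)
print(biaffine_score(h1, h2, U))
```

In BAGCN, such a score between a node and a distant node would play the role of a learned "shortcut" weight; the exact parameterization used by the paper may differ.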

Citations: 0
MoMENt: Marked Point Processes with Memory-Enhanced Neural Networks for User Activity Modeling
IF 3.6 | CAS Tier 3, Computer Science | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-02-29 | DOI: 10.1145/3649504
Sherry Sahebi, Mengfan Yao, Siqian Zhao, Reza Feyzi Behnagh

Marked temporal point process models (MTPPs) aim to model event sequences and event markers (associated features) in continuous time. These models have been applied to various domains where capturing event dynamics in continuous time is beneficial, such as education systems, social networks, and recommender systems. However, current MTPPs suffer from two major limitations: an inefficient representation of event dynamics' influence on the marker distribution, and the loss of fine-grained representations of historical marker distributions during modeling. Motivated by these limitations, we propose a novel model called Marked Point Processes with Memory-Enhanced Neural Networks (MoMENt) that captures the bidirectional interrelations between markers and event dynamics while providing fine-grained marker representations. Specifically, MoMENt consists of two concurrent networks: a Recurrent Activity Updater (RAU) to capture event dynamics and a Memory-Enhanced Marker Updater (MEMU) to represent markers. The RAU and MEMU components update each other at every step to model the bidirectional influence of markers and event dynamics. To obtain a fine-grained representation of marker distributions, MEMU is equipped with external memories that model detailed marker-level features with latent component vectors. Our extensive experiments on six real-world user interaction datasets demonstrate that MoMENt accurately represents users' activity dynamics, boosting time, type, and marker predictions, as well as recommendation performance, by up to 76.5%, 65.6%, 77.2%, and 57.7%, respectively, compared to baseline approaches. Furthermore, our case studies show the effectiveness of MoMENt in providing meaningful and fine-grained interpretations of user-system relations over time, e.g., how user choices influence their future preferences in the recommendation domain.
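The mutual RAU/MEMU coupling can be schematized as two states that each read the other's previous value at every event. The actual parameterizations, gating, and external memories are not reproduced here; all weights, dimensions, and update rules below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 8
# Hypothetical weight matrices; the paper's RAU/MEMU design is richer.
W_ae, W_am = rng.normal(scale=0.1, size=(d, d)), rng.normal(scale=0.1, size=(d, d))
W_me, W_ma = rng.normal(scale=0.1, size=(d, d)), rng.normal(scale=0.1, size=(d, d))

def step(activity, marker, event_feat):
    """One mutual update: each state reads the other's previous value,
    mirroring the bidirectional RAU <-> MEMU coupling described above."""
    new_activity = np.tanh(W_ae @ event_feat + W_am @ marker)
    new_marker = np.tanh(W_me @ event_feat + W_ma @ activity)
    return new_activity, new_marker

activity, marker = np.zeros(d), np.zeros(d)
for event_feat in rng.normal(size=(5, d)):   # a toy sequence of 5 events
    activity, marker = step(activity, marker, event_feat)
print(activity[:3], marker[:3])
```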

Citations: 0
DP-GCN: Node Classification by Connectivity and Local Topology Structure on Real-World Network
IF 3.6 | CAS Tier 3, Computer Science | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-02-28 | DOI: 10.1145/3649460
Zhe Chen, Aixin Sun

Node classification predicts the class label of a node by analyzing its properties and interactions in a network. We note that many existing solutions for graph-based node classification consider only node connectivity, not the node's local topology structure. However, nodes residing in different parts of a real-world network may share similar local topology structures. For example, local topology structures in a payment network may reveal sellers' business roles (e.g., supplier or retailer). To model both connectivity and local topology structure for better node classification performance, we present DP-GCN, a dual-path graph convolution network. DP-GCN consists of three main modules: (i) a C-GCN module to capture the connectivity relationships between nodes, (ii) a T-GCN module to capture the topology structure similarity among nodes, and (iii) a multi-head self-attention module to align both properties. We evaluate DP-GCN on seven benchmark datasets against diverse baselines to demonstrate its effectiveness. We also provide a case study of running DP-GCN on three large-scale payment networks from PayPal, a leading payment service provider, for risky seller detection. Experimental results show DP-GCN's effectiveness and practicability in large-scale settings. PayPal's internal testing also shows DP-GCN's effectiveness in defending against real risks from transaction networks.
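The dual-path idea can be sketched as two standard GCN propagations over different adjacencies, one from observed edges (the C-GCN path) and one from a topology-similarity graph (the T-GCN path), here fused by simple averaging in place of the paper's multi-head self-attention. All graphs and weights below are synthetic:

```python
import numpy as np

def gcn_layer(A, X, W):
    """One GCN propagation: ReLU(D^-1/2 (A+I) D^-1/2 X W)."""
    A = A + np.eye(A.shape[0])                # add self-loops
    deg = A.sum(1)
    A_hat = A / np.sqrt(np.outer(deg, deg))   # symmetric normalization
    return np.maximum(A_hat @ X @ W, 0.0)     # ReLU

rng = np.random.default_rng(2)
n, f, h = 6, 5, 4
X = rng.normal(size=(n, f))

def random_sym_graph(p):
    A = (rng.random((n, n)) < p).astype(float)
    A = np.triu(A, 1)
    return A + A.T

A_conn = random_sym_graph(0.3)   # observed edges (C-GCN path)
A_topo = random_sym_graph(0.3)   # topology-similarity graph (T-GCN path)

Wc, Wt = rng.normal(size=(f, h)), rng.normal(size=(f, h))
# Simple averaging stands in for the multi-head self-attention alignment.
Z = 0.5 * (gcn_layer(A_conn, X, Wc) + gcn_layer(A_topo, X, Wt))
print(Z.shape)
```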

Citations: 0
Harnessing the Power of LLMs in Practice: A Survey on ChatGPT and Beyond
IF 3.6 | CAS Tier 3, Computer Science | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-02-28 | DOI: 10.1145/3649506
Jingfeng Yang, Hongye Jin, Ruixiang Tang, Xiaotian Han, Qizhang Feng, Haoming Jiang, Shaochen Zhong, Bing Yin, Xia Hu

This paper presents a comprehensive and practical guide for practitioners and end-users working with Large Language Models (LLMs) in their downstream natural language processing (NLP) tasks. We provide discussions and insights into the usage of LLMs from the perspectives of models, data, and downstream tasks. First, we offer an introduction and brief summary of current language models. Then, we discuss the influence of pre-training data, training data, and test data. Most importantly, we provide a detailed discussion of the use and non-use cases of large language models for various natural language processing tasks, such as knowledge-intensive tasks, traditional natural language understanding tasks, generation tasks, emergent abilities, and considerations for specific tasks. We present various use cases and non-use cases to illustrate the practical applications and limitations of LLMs in real-world scenarios. We also try to understand the importance of data and the specific challenges associated with each NLP task. Furthermore, we explore the impact of spurious biases on LLMs and delve into other essential considerations, such as efficiency, cost, and latency, to ensure a comprehensive understanding of deploying LLMs in practice. This comprehensive guide aims to provide researchers and practitioners with valuable insights and best practices for working with LLMs, thereby enabling the successful implementation of these models in a wide range of NLP tasks. A curated and regularly updated list of practical guide resources for LLMs can be found at https://github.com/Mooler0410/LLMsPracticalGuide. An editable and regularly updated LLM evolutionary tree can be found at llmtree.ai.

Citations: 0
A Fully Test-Time Training Framework for Semi-Supervised Node Classification on Out-of-Distribution Graphs
IF 3.6 | CAS Tier 3, Computer Science | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-02-26 | DOI: 10.1145/3649507
Jiaxin Zhang, Yiqi Wang, Xihong Yang, En Zhu

Graph neural networks (GNNs) have shown great potential in representation learning for various graph tasks. However, the distribution shift between the training and test sets poses a challenge to the effectiveness of GNNs. To address this challenge, we propose HomoTTT, a fully test-time training (FTTT) framework for GNNs that enhances the model's generalization capability on node classification tasks. Specifically, HomoTTT designs a homophily-based, parameter-free graph contrastive learning task with adaptive augmentation to guide the model's adaptation during test-time training, allowing the model to adapt to the specific target data. In the inference stage, HomoTTT integrates the original GNN model and the adapted model after TTT using a homophily-based model selection method, which prevents the potential performance degradation caused by unconstrained model adaptation. Extensive experimental results on six benchmark datasets demonstrate the effectiveness of our proposed framework. Additionally, an exploratory study further validates the rationality of the homophily-based graph contrastive learning task with adaptive augmentation and the homophily-based model selection designed in HomoTTT.
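Homophily-based model selection can be illustrated with edge homophily, the fraction of edges whose endpoints receive the same predicted label. Whether HomoTTT uses exactly this score is not stated in the abstract, so treat the following as a hedged sketch on a toy path graph with two hypothetical prediction sets:

```python
import numpy as np

def edge_homophily(edges, labels):
    """Fraction of edges whose endpoints share the same predicted label."""
    src, dst = edges
    return float(np.mean(labels[src] == labels[dst]))

# Toy graph: a 6-node path 0-1-2-3-4-5.
edges = (np.array([0, 1, 2, 3, 4]), np.array([1, 2, 3, 4, 5]))
pred_original = np.array([0, 0, 1, 1, 1, 0])   # original GNN's predictions
pred_adapted  = np.array([0, 0, 0, 1, 1, 1])   # predictions after test-time adaptation

scores = {"original": edge_homophily(edges, pred_original),
          "adapted": edge_homophily(edges, pred_adapted)}
chosen = max(scores, key=scores.get)           # keep the more homophilous model
print(scores, "->", chosen)
```

On this toy example the adapted model's predictions agree on 4 of 5 edges versus 3 of 5 for the original, so the selection rule keeps the adapted model.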

Citations: 0
Fairness-Aware Graph Neural Networks: A Survey
IF 3.6 | CAS Tier 3, Computer Science | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-02-24 | DOI: 10.1145/3649142
April Chen, Ryan A. Rossi, Namyong Park, Puja Trivedi, Yu Wang, Tong Yu, Sungchul Kim, Franck Dernoncourt, Nesreen K. Ahmed

Graph Neural Networks (GNNs) have become increasingly important due to their representational power and state-of-the-art predictive performance on many fundamental learning tasks. Despite this success, GNNs suffer from fairness issues that arise from the underlying graph data and the fundamental aggregation mechanism at the heart of the large class of GNN models. In this article, we examine and categorize techniques for improving the fairness of GNNs, according to whether they target the pre-processing, in-processing (during training), or post-processing phase. Furthermore, we discuss how such techniques can be combined whenever appropriate, and highlight their advantages and the intuition behind them. We also introduce an intuitive taxonomy of fairness evaluation metrics, including graph-level, neighborhood-level, embedding-level, and prediction-level fairness metrics. In addition, graph datasets useful for benchmarking the fairness of GNN models are summarized succinctly. Finally, we highlight key open problems and challenges that remain to be addressed.
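As one concrete instance of a prediction-level fairness metric of the kind such a taxonomy covers, the statistical parity difference compares positive-prediction rates between two demographic groups. This is a standard metric rather than one defined in the abstract, and the data below are synthetic:

```python
import numpy as np

def statistical_parity_difference(y_pred, group):
    """Prediction-level fairness: difference in positive-prediction rates
    between demographic groups 1 and 0 (0.0 means parity)."""
    rate0 = y_pred[group == 0].mean()
    rate1 = y_pred[group == 1].mean()
    return float(rate1 - rate0)

y_pred = np.array([1, 0, 1, 1, 0, 0, 1, 0])   # binary predictions
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])   # sensitive attribute
print(statistical_parity_difference(y_pred, group))
```

Here group 0 receives positives at rate 0.75 and group 1 at rate 0.25, giving a difference of -0.5; values far from zero flag a disparity.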

Citations: 0
BapFL: You can Backdoor Personalized Federated Learning
IF 3.6 | CAS Tier 3, Computer Science | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-02-23 | DOI: 10.1145/3649316
Tiandi Ye, Cen Chen, Yinggui Wang, Xiang Li, Ming Gao

In federated learning (FL), malicious clients can manipulate the predictions of the trained model through backdoor attacks, posing a significant threat to the security of FL systems. Existing research primarily focuses on backdoor attacks and defenses within the generic federated learning scenario, where all clients collaborate to train a single global model. A recent study by Qin et al. [24] marks the initial exploration of backdoor attacks within the personalized federated learning (pFL) scenario, where each client constructs a personalized model based on its local data. Notably, that study demonstrates that pFL methods with parameter decoupling can significantly enhance robustness against backdoor attacks. In this paper, however, we reveal that pFL methods with parameter decoupling are still vulnerable to backdoor attacks. Their resistance is attributed to the heterogeneous classifiers between malicious clients and their benign counterparts. We analyze two direct causes of the heterogeneous classifiers: (1) data heterogeneity inherently exists among clients, and (2) poisoning by malicious clients further exacerbates this data heterogeneity. To address these issues, we propose a two-pronged attack method, BapFL, which comprises two simple yet effective strategies: (1) poisoning only the feature encoder while keeping the classifier fixed, and (2) diversifying the classifier through noise injection to simulate the classifiers of the benign clients. Extensive experiments on three benchmark datasets under varying conditions demonstrate the effectiveness of our proposed attack. Additionally, we evaluate six widely used defense methods and find that BapFL still poses a significant threat even in the presence of the best of them, Multi-Krum. We hope to inspire further research on attack and defense strategies in pFL scenarios. The code is available at: https://github.com/BapFL/code.
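The two strategies can be sketched on a toy linear model: a gradient step that updates only the encoder while the classifier stays frozen, and noise injection that diversifies the (frozen) classifier. The model, trigger input, target, and all dimensions below are hypothetical stand-ins for the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(3)
d_in, d_hid, n_cls = 5, 4, 3
W_enc = rng.normal(scale=0.1, size=(d_in, d_hid))   # feature encoder: updated by the attacker
W_cls = rng.normal(scale=0.1, size=(d_hid, n_cls))  # classifier: kept fixed (strategy 1)

def loss(W_enc, x, target):
    # Linear encoder + fixed classifier; squared error against the backdoor target.
    return float(np.sum((x @ W_enc @ W_cls - target) ** 2))

x = rng.normal(size=d_in)                  # a (hypothetical) trigger-stamped input
target = np.array([1.0, 0.0, 0.0])         # attacker-chosen target class scores

# Strategy 1: one numerical-gradient step on the encoder only; W_cls is untouched.
eps, lr = 1e-6, 0.05
grad = np.zeros_like(W_enc)
for i in range(d_in):
    for j in range(d_hid):
        W_pert = W_enc.copy()
        W_pert[i, j] += eps
        grad[i, j] = (loss(W_pert, x, target) - loss(W_enc, x, target)) / eps
W_enc_new = W_enc - lr * grad

# Strategy 2: diversify the (frozen) classifier with small noise to mimic
# the heterogeneity of benign clients' classifiers.
W_cls_noisy = W_cls + rng.normal(scale=0.01, size=W_cls.shape)

print(loss(W_enc_new, x, target) < loss(W_enc, x, target))
```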

{"title":"BapFL : You can Backdoor Personalized Federated Learning","authors":"Tiandi Ye, Cen Chen, Yinggui Wang, Xiang Li, Ming Gao","doi":"10.1145/3649316","DOIUrl":"https://doi.org/10.1145/3649316","url":null,"abstract":"<p>In federated learning (FL), malicious clients could manipulate the predictions of the trained model through backdoor attacks, posing a significant threat to the security of FL systems. Existing research primarily focuses on backdoor attacks and defenses within the generic federated learning scenario, where all clients collaborate to train a single global model. A recent study conducted by Qin et al. [24] marks the initial exploration of backdoor attacks within the personalized federated learning (pFL) scenario, where each client constructs a personalized model based on its local data. Notably, the study demonstrates that pFL methods with <i>parameter decoupling</i> can significantly enhance robustness against backdoor attacks. However, in this paper, we whistleblow that pFL methods with parameter decoupling are still vulnerable to backdoor attacks. The resistance of pFL methods with parameter decoupling is attributed to the heterogeneous classifiers between malicious clients and benign counterparts. We analyze two direct causes of the heterogeneous classifiers: (1) data heterogeneity inherently exists among clients and (2) poisoning by malicious clients further exacerbates the data heterogeneity. To address these issues, we propose a two-pronged attack method, BapFL , which comprises two simple yet effective strategies: (1) poisoning only the feature encoder while keeping the classifier fixed and (2) diversifying the classifier through noise introduction to simulate that of the benign clients. Extensive experiments on three benchmark datasets under varying conditions demonstrate the effectiveness of our proposed attack. 
Additionally, we evaluate the effectiveness of six widely used defense methods and find that BapFL still poses a significant threat even in the presence of the best defense, Multi-Krum. We hope to inspire further research on attack and defense strategies in pFL scenarios. The code is available at: https://github.com/BapFL/code.</p>","PeriodicalId":49249,"journal":{"name":"ACM Transactions on Knowledge Discovery from Data","volume":"126 1","pages":""},"PeriodicalIF":3.6,"publicationDate":"2024-02-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139953732","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
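The two attack strategies can be pictured on a toy linear model split into a feature encoder and a classifier head. Everything below — the model shape, loss, learning rate, and noise scale — is an illustrative assumption, not the paper's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(encoder, classifier, X, y, lr=0.1, poison=False, noise=0.5):
    """One simulated client step on a toy linear model where
    logits = (X @ encoder) @ classifier.
    A malicious client (poison=True) applies the two BapFL strategies:
      (1) it updates only the feature encoder, never the classifier;
      (2) it computes gradients against a noise-perturbed classifier to
          mimic the heterogeneous classifiers of benign clients.
    """
    if poison:
        W = classifier + rng.normal(0.0, noise, classifier.shape)  # strategy (2)
    else:
        W = classifier
    H = X @ encoder                          # encoded features
    p = 1.0 / (1.0 + np.exp(-(H @ W)))       # sigmoid probabilities (binary task)
    err = (p - y) / len(X)                   # dL/dlogits for binary cross-entropy
    new_encoder = encoder - lr * (X.T @ (err @ W.T))
    if poison:                               # strategy (1): classifier stays fixed
        return new_encoder, classifier
    return new_encoder, classifier - lr * (H.T @ err)
```

A benign client would call `local_update(..., poison=False)` and train both parts; the malicious client only ever moves the encoder, which is what keeps its classifier indistinguishable at aggregation time.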
Citations: 0
Citation Forecasting with Multi-Context Attention-Aided Dependency Modeling
IF 3.6, Computer Science (CAS Tier 3), Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS, Pub Date: 2024-02-23, DOI: 10.1145/3649140
Taoran Ji, Nathan Self, Kaiqun Fu, Zhiqian Chen, Naren Ramakrishnan, Chang-Tien Lu

Forecasting citations of scientific patents and publications is a crucial task for understanding the evolution and development of technological domains and for foresight into emerging technologies. By construing citations as a time series, the task can be cast into the domain of temporal point processes. Most existing work on forecasting with temporal point processes, both conventional and neural network-based, performs only single-step forecasting. In citation forecasting, however, the more salient goal is n-step forecasting: predicting the arrival of the next n citations. In this paper, we propose Dynamic Multi-Context Attention Networks (DMA-Nets), a deep learning sequence-to-sequence (Seq2Seq) model with a novel hierarchical dynamic attention mechanism for long-term citation forecasting. Extensive experiments on two real-world datasets demonstrate that the proposed model learns better representations of conditional dependencies over historical sequences than state-of-the-art counterparts and thus achieves significant performance gains for citation prediction.
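The n-step decoding loop is the core idea: predict the gap to the next citation, append the implied arrival time, and condition the next prediction on the extended history. The mean-gap model below is a toy stand-in (an assumption) for the trained DMA-Nets decoder:

```python
import numpy as np

def forecast_n_steps(history, n, predict_gap):
    """Autoregressive n-step forecasting over a citation event sequence:
    predict the gap until the next citation, append the new arrival
    time, and feed the extended history back into the predictor."""
    times = list(history)
    predicted = []
    for _ in range(n):
        gap = predict_gap(times)
        times.append(times[-1] + gap)
        predicted.append(times[-1])
    return predicted

def mean_gap_model(times):
    """Toy stand-in for the trained model: the mean of the last three
    inter-arrival gaps (or 1.0 if the history is too short)."""
    gaps = np.diff(times[-4:])
    return float(gaps.mean()) if gaps.size else 1.0
```

With a history of evenly spaced citations `[0.0, 1.0, 2.0, 3.0]`, three decoding steps yield `[4.0, 5.0, 6.0]`; a learned predictor would replace `mean_gap_model` while the loop stays the same.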

Citations: 0
ProtoMGAE: Prototype-aware Masked Graph Auto-Encoder for Graph Representation Learning
IF 3.6, Computer Science (CAS Tier 3), Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS, Pub Date: 2024-02-20, DOI: 10.1145/3649143
Yimei Zheng, Caiyan Jia

Graph self-supervised representation learning has gained considerable attention and demonstrated remarkable efficacy in extracting meaningful representations from graphs, particularly in the absence of labeled data. Two representative methods in this domain are graph auto-encoding and graph contrastive learning. However, the former methods primarily focus on global structures, potentially overlooking some fine-grained information during reconstruction. The latter methods emphasize node similarity across correlated views in the embedding space, potentially neglecting the inherent global graph information in the original input space. Moreover, handling incomplete graphs in real-world scenarios, where original features are unavailable for certain nodes, poses challenges for both types of methods. To alleviate these limitations, we integrate masked graph auto-encoding and prototype-aware graph contrastive learning into a unified model to learn node representations in graphs. In our method, we begin by masking a portion of node features and utilize a specific decoding strategy to reconstruct the masked information. This process facilitates the recovery of graphs from a global or macro level and enables handling incomplete graphs easily. Moreover, we treat the masked graph and the original one as a pair of contrasting views, enforcing the alignment and uniformity between their corresponding node representations at a local or micro level. Lastly, to capture cluster structures from a meso level and learn more discriminative representations, we introduce a prototype-aware clustering consistency loss that is jointly optimized with the above two complementary objectives. 
Extensive experiments conducted on several datasets demonstrate that the proposed method achieves significantly better or competitive performance on downstream tasks, especially for graph clustering, compared with the state-of-the-art methods, showcasing its superiority in enhancing graph representation learning.
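The masking step and the two view-level objectives can be sketched with a linear encoder standing in for the paper's GNN; all names and shapes here are toy assumptions, and the prototype-aware clustering term is omitted:

```python
import numpy as np

rng = np.random.default_rng(1)

def masked_view(X, mask_rate=0.5):
    """Build the second view by zeroing a random subset of node feature
    rows; the boolean mask marks which rows the decoder must recover."""
    mask = rng.random(X.shape[0]) < mask_rate
    X_m = X.copy()
    X_m[mask] = 0.0
    return X_m, mask

def objectives(X, X_m, mask, W_enc, W_dec):
    """Two of the three complementary objectives, on a linear toy
    encoder/decoder: reconstruction error on the masked rows (macro
    level) and an alignment term pulling each node's two-view
    embeddings together (micro level)."""
    Z, Z_m = X @ W_enc, X_m @ W_enc
    recon = float(np.mean(((Z_m @ W_dec) - X)[mask] ** 2)) if mask.any() else 0.0
    align = float(np.mean((Z - Z_m) ** 2))
    return recon, align
```

Training would minimize `recon + align` (plus the clustering consistency loss) jointly; with nothing masked, both terms vanish, which is a quick sanity check on the implementation.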

Citations: 0
Dynamic Environment Responsive Online Meta-Learning with Fairness Awareness
IF 3.6, Computer Science (CAS Tier 3), Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS, Pub Date: 2024-02-20, DOI: 10.1145/3648684
Chen Zhao, Feng Mi, Xintao Wu, Kai Jiang, Latifur Khan, Feng Chen

The fairness-aware online learning framework has emerged as a potent tool within the context of continuous lifelong learning. In this scenario, the learner’s objective is to progressively acquire new tasks as they arrive over time, while also guaranteeing statistical parity among various protected sub-populations, such as race and gender, when it comes to the newly introduced tasks. A significant limitation of current approaches lies in their heavy reliance on the i.i.d (independent and identically distributed) assumption concerning data, leading to a static regret analysis of the framework. Nevertheless, it’s crucial to note that achieving low static regret does not necessarily translate to strong performance in dynamic environments characterized by tasks sampled from diverse distributions. In this paper, to tackle the fairness-aware online learning challenge in evolving settings, we introduce a unique regret measure, FairSAR, by incorporating long-term fairness constraints into a strongly adapted loss regret framework. Moreover, to determine an optimal model parameter at each time step, we introduce an innovative adaptive fairness-aware online meta-learning algorithm, referred to as FairSAOML. This algorithm possesses the ability to adjust to dynamic environments by effectively managing bias control and model accuracy. The problem is framed as a bi-level convex-concave optimization, considering both the model’s primal and dual parameters, which pertain to its accuracy and fairness attributes, respectively. Theoretical analysis yields sub-linear upper bounds for both loss regret and the cumulative violation of fairness constraints. Our experimental evaluation on various real-world datasets in dynamic environments demonstrates that our proposed FairSAOML algorithm consistently outperforms alternative approaches rooted in the most advanced prior online learning methods.
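The interplay of the primal update (model accuracy) and the dual update (bias control) can be sketched in a few lines; the scalar toy problem and step sizes below are illustrative assumptions, not FairSAOML itself:

```python
def primal_dual_round(theta, lam, loss_grad, fair_fn, eta=0.05, eta_dual=0.05):
    """One round of the primal-dual pattern behind fairness-constrained
    online learning: a descent step on loss + lam * violation governs
    accuracy (primal), then projected ascent on lam against the
    observed violation governs bias control (dual)."""
    violation, v_grad = fair_fn(theta)
    theta = theta - eta * (loss_grad(theta) + lam * v_grad)
    lam = max(0.0, lam + eta_dual * violation)   # dual variable stays >= 0
    return theta, lam

# Toy demo: minimize (theta - 2)^2 subject to theta <= 0.5.
theta, lam = 0.0, 0.0
for _ in range(2000):
    theta, lam = primal_dual_round(
        theta, lam,
        loss_grad=lambda t: 2.0 * (t - 2.0),
        fair_fn=lambda t: (t - 0.5, 1.0),   # constraint violation and its gradient
    )
```

In the demo the iterates settle near the constraint boundary (θ ≈ 0.5) with a positive multiplier, rather than at the unconstrained optimum θ = 2 — the scalar analogue of trading a little accuracy for a fairness guarantee.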

Citations: 0
Journal: ACM Transactions on Knowledge Discovery from Data