
Latest Publications: ACM Transactions on Knowledge Discovery from Data

Congestion-aware Spatio-Temporal Graph Convolutional Network Based A* Search Algorithm for Fastest Route Search
IF 3.6 Tier 3 Computer Science Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2024-04-11 DOI: 10.1145/3657640
Hongjie Sui, Huan Yan, Tianyi Zheng, Wenzhen Huang, Yunlin Zhuang, Yong Li

The fastest route search, which finds the path with the shortest travel time when a user issues a query, has become one of the most important services in many map applications. To enhance the travel experience, route search must be both accurate and real-time. However, traffic conditions change dynamically; in particular, frequent traffic congestion can greatly increase travel time, which makes this goal challenging. To address it, we present a congestion-aware spatio-temporal graph convolutional network based A* search algorithm for the fastest route search task. We first identify a sequence of consecutive congested traffic conditions as a traffic congestion event. Then, we propose a spatio-temporal graph convolutional network that jointly models congestion events and changing travel times to capture their complex spatio-temporal correlations, predicting the future travel time of each road segment as the basis for route planning. Further, we design a path-aided neural network for effective origin-destination (OD) shortest travel time estimation by encoding the complex relationships between OD pairs and their corresponding fastest paths. Finally, the cost function in the A* algorithm is set by fusing the outputs of the two components and is used to guide the route search. Experimental results on two real-world datasets show the superior performance of the proposed method.
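
As a rough illustration of how a learned cost function can drive A*, the minimal sketch below shows the search loop only: the hypothetical `predict_travel_time(u, v)` stands in for the spatio-temporal GCN's per-segment prediction, and `estimate_od_time(n, goal)` for the path-aided network's OD travel-time estimate used as the heuristic; neither function nor the graph format comes from the paper itself.

```python
import heapq
import itertools

def a_star_fastest_route(graph, origin, goal, predict_travel_time, estimate_od_time):
    """A* search whose edge costs and heuristic both come from learned models.

    graph: dict mapping node -> iterable of neighbor nodes
    predict_travel_time(u, v): predicted travel time of road segment (u, v)
    estimate_od_time(n, goal): estimated shortest travel time from n to goal
    """
    counter = itertools.count()  # tie-breaker so the heap never compares nodes
    open_heap = [(estimate_od_time(origin, goal), next(counter), 0.0, origin, [origin])]
    best_g = {origin: 0.0}
    while open_heap:
        _, _, g, node, path = heapq.heappop(open_heap)
        if node == goal:
            return path, g                    # fastest path and its predicted travel time
        if g > best_g.get(node, float("inf")):
            continue                          # stale heap entry
        for nbr in graph[node]:
            g_new = g + predict_travel_time(node, nbr)       # learned segment cost
            if g_new < best_g.get(nbr, float("inf")):
                best_g[nbr] = g_new
                f_new = g_new + estimate_od_time(nbr, goal)  # learned OD heuristic
                heapq.heappush(open_heap, (f_new, next(counter), g_new, nbr, path + [nbr]))
    return None, float("inf")
```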

Citations: 0
FETILDA: An Evaluation Framework for Effective Representations of Long Financial Documents
IF 3.6 Tier 3 Computer Science Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2024-04-10 DOI: 10.1145/3657299
Bolun (Namir) Xia, Vipula Rawte, Aparna Gupta, Mohammed Zaki

In the financial sphere, there is a wealth of accumulated unstructured financial data, such as the textual disclosure documents that companies submit on a regular basis to regulatory agencies such as the Securities and Exchange Commission (SEC). These documents are typically very long and tend to contain valuable soft information about a company's performance that is not present in quantitative predictors. It is therefore of great interest to learn predictive models from these long textual documents, especially for forecasting numerical key performance indicators (KPIs). In recent years, there has been great progress in natural language processing via pre-trained language models (LMs) learned from large corpora of textual data. This prompts the important question of whether they can be used effectively to produce representations for long documents, and how we can evaluate the effectiveness of the representations produced by various LMs. Our work focuses on answering this critical question, namely evaluating the efficacy of various LMs in extracting useful soft information from long textual documents for prediction tasks. In this paper, we propose and implement a deep learning evaluation framework that utilizes a sequential chunking approach combined with an attention mechanism. We perform an extensive set of experiments on a collection of 10-K reports submitted annually by US banks, and on another dataset of reports submitted by US companies, in order to thoroughly investigate the performance of different types of language models. Overall, our framework using LMs outperforms strong baseline methods for textual modeling as well as for numerical regression. Our work provides better insights into how pre-trained domain-specific and fine-tuned long-input LMs can improve the quality of representations of long textual documents and, therefore, help improve predictive analyses.
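
A minimal PyTorch sketch of the general recipe described here: sequentially chunk a long document, obtain one embedding per chunk from a pre-trained LM, attention-pool the chunk embeddings, and regress the numerical KPI. The chunk length, pooling form, and regressor are assumptions rather than the authors' exact architecture, and `chunk_embeddings` is presumed to come from any pre-trained encoder (e.g., per-chunk [CLS] vectors).

```python
import torch
import torch.nn as nn

class ChunkAttentionRegressor(nn.Module):
    """Attention-pool chunk embeddings of a long document and predict a numerical KPI."""
    def __init__(self, hidden_dim=768):
        super().__init__()
        self.attn_score = nn.Linear(hidden_dim, 1)   # scores each chunk embedding
        self.regressor = nn.Linear(hidden_dim, 1)    # KPI prediction head

    def forward(self, chunk_embeddings):
        # chunk_embeddings: (num_chunks, hidden_dim)
        scores = self.attn_score(chunk_embeddings)            # (num_chunks, 1)
        weights = torch.softmax(scores, dim=0)                # attention over chunks
        doc_vector = (weights * chunk_embeddings).sum(dim=0)  # pooled document vector
        return self.regressor(doc_vector)

def split_into_chunks(token_ids, chunk_len=512):
    """Sequentially split a long token-id list into fixed-length chunks."""
    return [token_ids[i:i + chunk_len] for i in range(0, len(token_ids), chunk_len)]
```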

Citations: 0
Towards Few-Label Vertical Federated Learning
IF 3.6 Tier 3 Computer Science Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2024-04-09 DOI: 10.1145/3656344
Lei Zhang, Lele Fu, Chen Liu, Zhao Yang, Jinghua Yang, Zibin Zheng, Chuan Chen

Federated Learning (FL) provides a novel paradigm for privacy-preserving machine learning, enabling multiple clients to collaborate on model training without sharing private data. To handle multi-source heterogeneous data, vertical federated learning (VFL) has been extensively investigated. However, in the context of VFL, the label information tends to be kept in one authoritative client and is very limited. This poses two challenges for model training in the VFL scenario: on the one hand, a small number of labels cannot guarantee training a well-performing VFL model with informative network parameters, resulting in unclear boundaries for classification decisions; on the other hand, the large amount of unlabeled data is dominant and should not be discounted, and it is worthwhile to focus on how to leverage it to improve representation modeling capabilities. To address these two challenges, we first introduce a supervised contrastive loss to enhance intra-class aggregation and inter-class estrangement, which deeply explores the label information and improves the effectiveness of downstream classification tasks. Second, for unlabeled data, we introduce a pseudo-label-guided consistency mechanism that encourages classification results to be coherent across clients; this allows the representations learned by local networks to absorb knowledge from other clients and alleviates the disagreement between different clients on classification tasks. We conduct extensive experiments on four commonly used datasets, and the results demonstrate that our method is superior to state-of-the-art methods, especially in the low-label-rate scenario, where the improvement becomes more significant.
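
A minimal sketch of a supervised contrastive loss of the kind referred to above, which pulls same-class representations together and pushes different classes apart; the temperature, normalization, and anchor handling are assumptions, not necessarily the paper's formulation.

```python
import torch
import torch.nn.functional as F

def supervised_contrastive_loss(features, labels, temperature=0.1):
    """Supervised contrastive loss: samples sharing a label are treated as positives.

    features: (batch, dim) representations; labels: (batch,) class ids.
    """
    z = F.normalize(features, dim=1)
    sim = z @ z.t() / temperature                              # pairwise similarities
    mask_self = torch.eye(len(labels), dtype=torch.bool, device=z.device)
    sim.masked_fill_(mask_self, float("-inf"))                 # exclude self-pairs
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~mask_self
    pos_counts = pos_mask.sum(dim=1).clamp(min=1)
    loss = -(log_prob * pos_mask.float()).sum(dim=1) / pos_counts
    valid = pos_mask.any(dim=1)                                # anchors with >= 1 positive
    return loss[valid].mean()
```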

Citations: 0
Computing Random Forest-distances in the presence of missing data
IF 3.6 Tier 3 Computer Science Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2024-04-08 DOI: 10.1145/3656345
Manuele Bicego, Ferdinando Cicalese

In this paper, we study the problem of computing Random Forest-distances in the presence of missing data. We present a general framework that avoids pre-imputation and uses the information contained in the input points in an agnostic way. We centre our investigation on RatioRF, an RF-based distance recently introduced in the context of clustering and shown to outperform most known RF-based distance measures. We also show that the same framework can be applied to several other state-of-the-art RF-based measures and provide their extensions to the missing-data case. We provide significant empirical evidence of the effectiveness of the proposed framework, with extensive experiments using RatioRF on 15 datasets. Finally, we also compare our method favorably with many alternative distances from the literature that can be computed with missing values.
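
The paper's RatioRF extension is not reproduced here; as a rough illustration of the family, the sketch below computes the classic Random Forest proximity-based distance (one minus the expected fraction of trees in which two points share a leaf) and handles a missing feature at a split by agnostically sending half the probability mass down each child. This is one simple way to use the available information without pre-imputation, not necessarily the authors' strategy; the forest is assumed to be a fitted scikit-learn ensemble trained on complete data.

```python
import numpy as np

def leaf_distribution(tree, x):
    """Probability of reaching each leaf when a missing feature follows both children."""
    t = tree.tree_                            # scikit-learn decision-tree internals
    probs = np.zeros(t.node_count)
    stack = [(0, 1.0)]                        # (node_id, probability mass)
    while stack:
        node, p = stack.pop()
        if t.children_left[node] == -1:       # leaf node
            probs[node] += p
            continue
        f, thr = t.feature[node], t.threshold[node]
        if np.isnan(x[f]):                    # missing value: split the mass equally
            stack.append((t.children_left[node], p / 2.0))
            stack.append((t.children_right[node], p / 2.0))
        elif x[f] <= thr:
            stack.append((t.children_left[node], p))
        else:
            stack.append((t.children_right[node], p))
    return probs

def rf_distance(forest, x_a, x_b):
    """1 - expected fraction of trees in which x_a and x_b land in the same leaf."""
    same_leaf = [leaf_distribution(est, x_a) @ leaf_distribution(est, x_b)
                 for est in forest.estimators_]
    return 1.0 - float(np.mean(same_leaf))
```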

Citations: 0
Enhancing Unsupervised Outlier Model Selection: A Study on IREOS Algorithms
IF 3.6 Tier 3 Computer Science Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2024-04-05 DOI: 10.1145/3653719
Philipp Schlieper, Hermann Luft, Kai Klede, Christoph Strohmeyer, Bjoern Eskofier, Dario Zanca

Outlier detection stands as a critical cornerstone in the field of data mining, with a wide range of applications spanning from fraud detection to network security. However, real-world scenarios often lack labeled data for training, necessitating unsupervised outlier detection methods. This study centers on Unsupervised Outlier Model Selection (UOMS), with a specific focus on the family of Internal, Relative Evaluation of Outlier Solutions (IREOS) algorithms. IREOS measures outlier-candidate separability by evaluating multiple maximum-margin classifiers and, while effective, is constrained by its high computational demands. We investigate the impact of several different separation methods in UOMS in terms of ranking quality and runtime. Surprisingly, our findings indicate that different separability measures have minimal impact on IREOS' effectiveness. However, using linear separation methods within IREOS significantly reduces its computation time. These insights hold significance for real-world applications where efficient outlier detection is critical. As part of this work, we provide the code for the IREOS algorithm and our separability techniques.
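
A minimal sketch of the kind of separability scoring IREOS builds on: each outlier candidate is treated as a one-point positive class, a maximum-margin classifier is fitted against all remaining points, and the candidate's signed margin serves as its separability score. Using a linear kernel mirrors the runtime observation above, but the hyperparameters and scoring details are assumptions rather than the study's exact procedure.

```python
import numpy as np
from sklearn.svm import SVC

def separability_scores(X, candidate_indices, C=100.0, kernel="linear"):
    """Score each outlier candidate by how well a max-margin classifier isolates it."""
    scores = []
    for idx in candidate_indices:
        y = np.zeros(len(X), dtype=int)
        y[idx] = 1                                             # one candidate vs. the rest
        clf = SVC(C=C, kernel=kernel, class_weight="balanced").fit(X, y)
        # signed margin of the candidate itself: larger means more separable
        scores.append(float(clf.decision_function(X[idx:idx + 1])[0]))
    return np.array(scores)
```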

Citations: 0
Dual Homogeneity Hypergraph Motifs with Cross-view Contrastive Learning for Multiple Social Recommendations
IF 3.6 Tier 3 Computer Science Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2024-03-26 DOI: 10.1145/3653976
Jiadi Han, Yufei Tang, Qian Tao, Yuhan Xia, LiMing Zhang

Social relations are often used as auxiliary information to address data sparsity and cold-start issues in social recommendations. In the real world, social relations among users are complex and diverse. Widely used graph neural networks (GNNs) can only model pairwise node relationships and are not conducive to exploring higher-order connectivity, whereas hypergraphs provide a natural way to model high-order relations between nodes. However, recent studies show that social recommendations still face the following challenges: 1) a majority of social recommendations ignore the impact of multifaceted social relationships on user preferences; 2) item homogeneity is often neglected; it refers to the observation that items with similar static attributes have similar attractiveness when exposed to users, indicating hidden links between items; and 3) directly combining the representations learned from different independent views cannot fully exploit the potential connections between the views. To address these challenges, in this paper we propose a novel method, DH-HGCN++, for multiple social recommendations. Specifically, dual homogeneity (i.e., social homogeneity and item homogeneity) is introduced to mine the impact of diverse social relations on user preferences and to enrich item representations. Hypergraph convolution networks with motifs are further exploited to model the high-order relations between nodes. Finally, cross-view contrastive learning is proposed as an auxiliary task to jointly optimize DH-HGCN++. Real-world datasets are used to validate the effectiveness of the proposed model, where we use sentiment analysis to extract comment relations and employ the k-means clustering algorithm to construct the item-item correlation graph. Experiment results demonstrate that our proposed method consistently outperforms state-of-the-art baselines on Top-N recommendations.
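
A minimal sketch of one concrete step mentioned above: constructing an item-item correlation graph by clustering item feature vectors with k-means and linking items that fall in the same cluster. The number of clusters and the choice of item features are assumptions, and the paper's hypergraph construction and contrastive objectives are not shown here.

```python
import numpy as np
from sklearn.cluster import KMeans

def item_item_edges(item_features, n_clusters=50, random_state=0):
    """Link items assigned to the same k-means cluster (hidden 'homogeneity' edges)."""
    labels = KMeans(n_clusters=n_clusters, n_init=10,
                    random_state=random_state).fit_predict(item_features)
    edges = []
    for c in np.unique(labels):
        members = np.flatnonzero(labels == c)
        # connect every pair of items that share a cluster
        for i, a in enumerate(members):
            for b in members[i + 1:]:
                edges.append((int(a), int(b)))
    return edges
```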

Citations: 0
Automatically Inspecting Thousands of Static Bug Warnings with Large Language Model: How Far Are We?
IF 3.6 Tier 3 Computer Science Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2024-03-26 DOI: 10.1145/3653718
Cheng Wen, Yuandao Cai, Bin Zhang, Jie Su, Zhiwu Xu, Dugang Liu, Shengchao Qin, Zhong Ming, Cong Tian

Static analysis tools for capturing bugs and vulnerabilities in software programs are widely employed in practice, as they have the unique advantages of high coverage and independence from the execution environment. However, existing tools for analyzing large codebases often produce a great number of false warnings alongside genuine bug reports. As a result, developers are required to manually inspect and confirm each warning, a challenging and time-consuming task that calls for automation.

This paper advocates a fast, general, and easily extensible approach called Llm4sa that automatically inspects a large volume of static warnings by harnessing (some of) the powers of Large Language Models (LLMs). Our key insight is that LLMs have advanced program-understanding capabilities, enabling them to effectively act as human experts in manually inspecting bug warnings together with their relevant code snippets. In this spirit, we propose a static analysis to effectively extract the relevant code snippets via program-dependence traversal guided by the bug warning reports themselves. Then, by formulating customized questions that are enriched with domain knowledge and representative cases to query LLMs, Llm4sa can remove a great deal of false warnings and significantly facilitate bug discovery. Our experiments demonstrate that Llm4sa is practical for automatically inspecting thousands of static warnings from Juliet benchmark programs and 11 real-world C/C++ projects, showing high precision (81.13%) and a high recall rate (94.64%) over a total of 9,547 bug warnings. Our research introduces new opportunities and methodologies for using LLMs to reduce human labor costs, improve the precision of static analyzers, and ensure software trustworthiness.
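
A minimal sketch of how a warning and its dependence-sliced snippet might be packed into a domain-knowledge-enriched question and filtered by an LLM's verdict; the prompt wording, the warning fields, and the `query_llm` callable are assumptions for illustration, not Llm4sa's actual implementation.

```python
def build_inspection_prompt(warning, code_snippet, cwe_hint, example_case):
    """Compose a customized question asking an LLM to confirm or refute a warning."""
    return (
        "You are reviewing a static-analysis warning.\n"
        f"Warning: {warning}\n"
        f"Relevant code (extracted via program-dependence slicing):\n{code_snippet}\n"
        f"Domain knowledge: {cwe_hint}\n"
        f"Representative case: {example_case}\n"
        "Question: Is this warning a true bug or a false positive? "
        "Answer 'TRUE BUG' or 'FALSE POSITIVE' and justify briefly."
    )

def inspect_warnings(warnings, query_llm):
    """Run every warning through the LLM and keep only those judged to be real bugs."""
    confirmed = []
    for w in warnings:
        prompt = build_inspection_prompt(w["message"], w["snippet"],
                                         w.get("cwe_hint", ""), w.get("example", ""))
        if "TRUE BUG" in query_llm(prompt).upper():
            confirmed.append(w)
    return confirmed
```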

Citations: 0
SA2E-AD: A Stacked Attention Autoencoder for Anomaly Detection in Multivariate Time Series
IF 3.6 Tier 3 Computer Science Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2024-03-26 DOI: 10.1145/3653677
Mengyao Li, Zhiyong Li, Zhibang Yang, Xu Zhou, Yifan Li, Ziyan Wu, Lingzhao Kong, Ke Nai

Anomaly detection for multivariate time series is an essential task in the modern industrial field. Although several methods have been developed for anomaly detection, they usually fail to effectively exploit the metrical-temporal correlation and other dependencies among multiple variables. To address this problem, we propose a stacked attention autoencoder for anomaly detection in multivariate time series (SA2E-AD), which focuses on fully utilizing the metrical and temporal relationships among multivariate time series. We design a multi-attention block that alternately contains temporal-attention and metrical-attention components in a hierarchical structure to better reconstruct normal time series, which helps distinguish anomalies from normal time series. Meanwhile, a two-stage training strategy is designed to further separate anomalies from normal data. Experiments on three publicly available datasets show that SA2E-AD outperforms the advanced baseline methods in detection performance and demonstrate the effectiveness of each part of our method.
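
A minimal PyTorch sketch of the alternating-attention idea: one self-attention pass over the time axis (temporal attention), another over the variable axis (metrical attention), and a decoder that reconstructs the input so that reconstruction error can serve as the anomaly score. The projections, dimensions, and the way the two branches are fused are assumptions rather than the SA2E-AD architecture.

```python
import torch
import torch.nn as nn

class MultiAttentionBlock(nn.Module):
    """Self-attention over the time axis, then over the variable (metric) axis."""
    def __init__(self, n_vars, n_steps, d_model=64, n_heads=4):
        super().__init__()
        self.temporal_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.metrical_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.var_proj = nn.Linear(n_vars, d_model)     # embed each time step
        self.time_proj = nn.Linear(n_steps, d_model)   # embed each variable's series
        self.decoder = nn.Linear(2 * d_model, n_vars)  # reconstruct per time step

    def forward(self, x):
        # x: (batch, n_steps, n_vars)
        t = self.var_proj(x)                           # (batch, n_steps, d_model)
        t, _ = self.temporal_attn(t, t, t)             # attention across time steps
        m = self.time_proj(x.transpose(1, 2))          # (batch, n_vars, d_model)
        m, _ = self.metrical_attn(m, m, m)             # attention across variables
        m_summary = m.mean(dim=1, keepdim=True).expand(-1, x.size(1), -1)
        recon = self.decoder(torch.cat([t, m_summary], dim=-1))
        return recon                                   # anomaly score = ||x - recon||
```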

Citations: 0
Hierarchical Convolutional Neural Network with Knowledge Complementation for Long-Tailed Classification
IF 3.6 Tier 3 Computer Science Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2024-03-22 DOI: 10.1145/3653717
Hong Zhao, Zhengyu Li, Wenwei He, Yan Zhao

Existing methods based on transfer learning leverage auxiliary information to help tail-class generalization and improve the performance of the tail classes. However, they cannot fully exploit the relationships between auxiliary information and tail classes and may bring irrelevant knowledge to the tail classes. To solve this problem, we propose a hierarchical CNN with knowledge complementation, which regards hierarchical relationships as auxiliary information and transfers relevant knowledge to tail classes. First, we integrate semantic and clustering relationships as hierarchical knowledge into the CNN to guide feature learning. Then, we design a complementary strategy to jointly exploit the two types of knowledge, where semantic knowledge acts as a prior dependence and clustering knowledge reduces the negative information caused by excessive semantic dependence (i.e., semantic gaps). In this way, the CNN facilitates the utilization of the two complementary hierarchical relationships and transfers useful knowledge to tail data to improve long-tailed classification accuracy. Experimental results on public benchmarks show that the proposed model outperforms existing methods. In particular, our model improves accuracy by 3.46% over the second-best method on the long-tailed tieredImageNet dataset.
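
A minimal sketch of one way hierarchical auxiliary labels can complement the fine-grained objective: the standard cross-entropy is augmented with coarse-level terms derived from a semantic parent map and from a clustering-based grouping, with child probabilities summed into their coarse groups. The weighting scheme and the probability aggregation are assumptions, not the paper's exact design.

```python
import torch
import torch.nn.functional as F

def hierarchical_loss(logits, fine_labels, semantic_parent, cluster_parent,
                      alpha=0.5, beta=0.5):
    """Fine-grained CE complemented by semantic- and cluster-level CE terms.

    logits: (batch, n_fine); *_parent: (n_fine,) long tensors mapping each fine
    class to its coarse group under the two hierarchies.
    """
    loss = F.cross_entropy(logits, fine_labels)
    probs = F.softmax(logits, dim=1)
    for parent_map, weight in ((semantic_parent, alpha), (cluster_parent, beta)):
        n_coarse = int(parent_map.max()) + 1
        coarse_probs = torch.zeros(logits.size(0), n_coarse, device=logits.device)
        coarse_probs.index_add_(1, parent_map, probs)      # sum child probabilities
        coarse_labels = parent_map[fine_labels]
        loss = loss + weight * F.nll_loss(torch.log(coarse_probs + 1e-8), coarse_labels)
    return loss
```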

Citations: 0
Multi-Source and Multi-modal Deep Network Embedding for Cross-Network Node Classification
IF 3.6 Tier 3 Computer Science Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2024-03-20 DOI: 10.1145/3653304
Hongwei Yang, Hui He, Weizhe Zhang, Yan Wang, Lin Jing

In recent years, to address the issue of networked data sparsity in node classification tasks, cross-network node classification (CNNC) has leveraged the richer information of a source network to enhance node classification in the target network, which typically has sparser information. However, in real-world applications, labeled nodes may be collected from multiple sources with multiple modalities (e.g., text, vision, and video). Naive application of single-source and single-modal CNNC methods may result in sub-optimal solutions. To this end, in this paper we propose a model called M2CDNE (Multi-source and Multi-modal Cross-network Deep Network Embedding) for cross-network node classification. In M2CDNE, we propose a deep multi-modal network embedding approach that combines the extracted deep multi-modal features to make the node vector representations network-invariant. In addition, we apply dynamic adversarial adaptation to assess the significance of the marginal and conditional probability distributions between each source network and the target network, making node vector representations label-discriminative. Furthermore, we classify nodes in the target network through each related source classifier and aggregate the different predictions using per-network weights that reflect the discrepancy between each source network and the target network. Extensive experiments on real-world datasets demonstrate that the proposed M2CDNE significantly outperforms state-of-the-art approaches.
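
A minimal PyTorch sketch of two ingredients named above: a gradient-reversal layer of the kind commonly used for adversarial domain adaptation, and a dynamic factor that blends marginal (global) and conditional (class-wise) alignment losses. The blending rule and how `mu` is estimated are assumptions, not necessarily the paper's procedure.

```python
import torch

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; reverses (and scales) gradients on backward."""
    @staticmethod
    def forward(ctx, x, lamb):
        ctx.lamb = lamb
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lamb * grad_output, None

def dynamic_adversarial_loss(marginal_loss, conditional_loss, mu):
    """Blend global and class-conditional alignment with a dynamic weight mu in [0, 1]."""
    return (1.0 - mu) * marginal_loss + mu * conditional_loss

# usage inside a training step (hypothetical discriminator):
#   reversed_feats = GradReverse.apply(features, 1.0)
#   domain_logits  = discriminator(reversed_feats)
```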

Citations: 0