
ACM Transactions on Information Systems: Latest Publications

A Self-Distilled Learning to Rank Model for Ad-hoc Retrieval
IF 5.4 | CAS Zone 2, Computer Science | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-07-25 | DOI: 10.1145/3681784
S. Keshvari, Farzan Saeedi, Hadi Sadoghi Yazdi, F. Ensan
Learning to rank models are broadly applied in ad-hoc retrieval for scoring and sorting documents based on their relevance to textual queries. The generalizability of the trained model in the learning to rank approach, however, can affect retrieval performance, particularly when the data includes noise and outliers or is incorrectly collected or measured. In this paper, we introduce a Self-Distilled Learning to Rank (SDLR) framework for ad-hoc retrieval and analyze its performance over a range of retrieval datasets, including in the presence of feature noise. SDLR assigns a confidence weight to each training sample, aiming to reduce the impact of noisy and outlier data in the training process. The confidence weight is approximated from the feature distributions derived from the values observed for the documents labeled for a query in a listwise training sample. SDLR includes a distillation process that passes on the underlying patterns in assigning confidence weights from the teacher model to the student one. We empirically illustrate that SDLR outperforms state-of-the-art learning to rank models in ad-hoc retrieval. We thoroughly investigate SDLR's performance in different settings, including when no distillation strategy is applied, when different portions of the data are used for training the teacher and the student models, and when both teacher and student models are trained over identical data. We show that SDLR is more effective when training data is split between a teacher and a student model. We also show that SDLR's performance is robust when data features are noisy.
Citations: 0
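A minimal sketch (our construction, not the authors' code) of the confidence-weighting idea behind SDLR: within one query's listwise sample, documents whose feature vectors deviate strongly from the per-query feature distribution receive smaller weights, so outliers contribute less to training. All names are illustrative.

```python
import numpy as np

def confidence_weights(X, eps=1e-8):
    """X: (n_docs, n_features) feature matrix of the documents labeled
    for one query. Returns a weight in (0, 1] per document."""
    mu = X.mean(axis=0)
    sigma = X.std(axis=0) + eps
    z = np.abs((X - mu) / sigma)   # per-feature deviation from the query's distribution
    dist = z.mean(axis=1)          # average deviation per document
    return np.exp(-dist)           # far-from-distribution documents get small weight

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 4))
X[0] += 8.0                        # inject an outlier document
print(confidence_weights(X))       # the first weight is much smaller than the rest
```

In a full pipeline these weights would multiply the per-document terms of a listwise loss; the distillation step in the paper additionally transfers the weighting pattern from teacher to student.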
AdaGIN: Adaptive Graph Interaction Network for Click-Through Rate Prediction
IF 5.4 | CAS Zone 2, Computer Science | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-07-25 | DOI: 10.1145/3681785
Lei Sang, Honghao Li, Yiwen Zhang, Yi Zhang, Yun Yang
The goal of click-through rate (CTR) prediction in recommender systems is to make effective use of input features. However, existing CTR prediction models face three main issues. First, many models use a simplistic approach to feature combination, introducing noise and reducing accuracy. Second, they do not account for the varying importance of features in different interaction orders, which affects model performance. Third, current model architectures struggle to capture different interaction signals from various semantic spaces, leading to sub-optimal performance. To address these issues, we propose the Adaptive Graph Interaction Network (AdaGIN), comprising the Graph Neural Networks-based Feature Interaction Module (GFIM), the Multi-semantic Feature Interaction Module (MFIM), and the Negative Feedback-based Search (NFS) algorithm. GFIM explicitly aggregates information between features and assesses their importance, while MFIM captures information from different semantic spaces. NFS uses negative feedback to optimize model complexity. Experimental results show AdaGIN outperforms existing models on large-scale public benchmark datasets.
Citations: 0
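An illustrative sketch (assumptions of ours, guided only by the abstract) of the kind of aggregation GFIM describes: treat each feature field as a graph node and aggregate pairwise interaction signals with attention weights, which also expose each feature's importance.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def graph_feature_interaction(E):
    """E: (n_fields, d) feature-field embeddings for one sample.
    Returns aggregated messages per field and the attention matrix."""
    scores = E @ E.T                   # pairwise interaction scores
    np.fill_diagonal(scores, -np.inf)  # no self-loops on the feature graph
    A = softmax(scores, axis=1)        # row-normalized weights = feature importance
    return A @ E, A

rng = np.random.default_rng(1)
E = rng.normal(size=(6, 8))
H, A = graph_feature_interaction(E)
print(H.shape, A.sum(axis=1))          # (6, 8), each attention row sums to 1
```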
RevGNN: Negative Sampling Enhanced Contrastive Graph Learning for Academic Reviewer Recommendation
IF 5.4 | CAS Zone 2, Computer Science | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-07-22 | DOI: 10.1145/3679200
Weibin Liao, Yifan Zhu, Yanyan Li, Qi Zhang, Zhonghong Ou, Xuesong Li
Acquiring reviewers for academic submissions is a challenging recommendation scenario. Recent graph learning-driven models have made remarkable progress in the field of recommendation, but their performance in the academic reviewer recommendation task may suffer from a significant false negative issue. This arises from the assumption that unobserved edges represent negative samples. In fact, the mechanism of anonymous review results in inadequate exposure of interactions between reviewers and submissions, leading to a higher number of unobserved interactions compared to those caused by reviewers declining to participate. Therefore, investigating how to better comprehend the negative labeling of unobserved interactions in academic reviewer recommendations is a significant challenge. This study aims to tackle the ambiguous nature of unobserved interactions in academic reviewer recommendations. Specifically, we propose an unsupervised Pseudo Neg-Label strategy to enhance graph contrastive learning (GCL) for recommending reviewers for academic submissions, which we call RevGNN. RevGNN utilizes a two-stage encoder structure that encodes both scientific knowledge and behavior using Pseudo Neg-Label to approximate review preference. Extensive experiments on three real-world datasets demonstrate that RevGNN outperforms all baselines across four metrics. Additionally, detailed further analyses confirm the effectiveness of each component in RevGNN.
Citations: 0
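A hedged sketch of the Pseudo Neg-Label intuition as the abstract describes it (the function and ratio below are hypothetical, not the paper's algorithm): rather than treating every unobserved reviewer-submission pair as a negative, keep only the pairs the current model scores lowest as pseudo negatives for contrastive training.

```python
import numpy as np

def pseudo_negatives(scores, observed_mask, keep_ratio=0.3):
    """scores: (n_pairs,) current model scores; observed_mask: True for
    known interactions. Returns indices of unobserved pairs to use as
    pseudo negative labels (lowest-scoring first)."""
    unobserved = np.where(~observed_mask)[0]
    ranked = unobserved[np.argsort(scores[unobserved])]  # low score first
    k = max(1, int(keep_ratio * len(ranked)))
    return ranked[:k]

rng = np.random.default_rng(2)
scores = rng.random(10)
observed = rng.random(10) < 0.4
print(pseudo_negatives(scores, observed))  # confident negatives only
```

This avoids the false-negative trap the abstract highlights: high-scoring unobserved pairs (plausibly unexposed rather than declined) are simply left unlabeled.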
Dual Contrastive Learning for Cross-domain Named Entity Recognition
IF 5.4 | CAS Zone 2, Computer Science | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-07-20 | DOI: 10.1145/3678879
Jingyun Xu, Junnan Yu, Yi Cai, Tat-Seng Chua
Benefiting many information retrieval applications, named entity recognition (NER) has shown impressive progress. Recently, there has been a growing trend to decompose complex NER tasks into two subtasks, e.g., entity span detection (ESD) and entity type classification (ETC), to achieve better performance. Despite the remarkable success, from the perspective of representation, existing methods do not explicitly distinguish non-entities from entities, which may lead to entity span detection errors. Meanwhile, they do not explicitly distinguish entities with different entity types, which may lead to entity type misclassification. As such, the limited representation abilities may challenge some competitive NER methods, leading to unsatisfactory performance, especially in the low-resource setting (e.g., cross-domain NER). In light of these challenges, we propose to utilize contrastive learning to refine the original chaotic representations and learn generalized representations for cross-domain NER. In particular, this paper proposes a dual contrastive learning model (Dual-CL), which utilizes a token-level contrastive learning module and a sentence-level contrastive learning module to enhance ESD and ETC for cross-domain NER. Empirical results on 10 domain pairs under two different settings show that Dual-CL achieves better performance than the compared baselines in terms of several standard metrics. Moreover, we present detailed analyses to better understand each component's effectiveness.
Citations: 0
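A minimal sketch (our construction, not the Dual-CL code) of a token-level contrastive objective of the kind the abstract motivates: pull representations of entity tokens together and push them away from non-entity tokens, so entity spans become more separable.

```python
import numpy as np

def info_nce(anchor, positive, negatives, tau=0.1):
    """Standard InfoNCE on L2-normalized vectors; negatives: (k, d)."""
    pos = np.exp(anchor @ positive / tau)
    neg = np.exp(negatives @ anchor / tau).sum()
    return -np.log(pos / (pos + neg))

rng = np.random.default_rng(3)
norm = lambda v: v / np.linalg.norm(v, axis=-1, keepdims=True)
entity_a = norm(rng.normal(size=8))        # anchor: an entity token
entity_b = norm(rng.normal(size=8))        # positive: another entity token
non_entities = norm(rng.normal(size=(5, 8)))  # negatives: non-entity tokens
print(info_nce(entity_a, entity_b, non_entities))
```

The paper's sentence-level module would apply the same contrastive form at the sequence level; the exact pairing strategy is not specified in the abstract.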
A Knowledge Graph Embedding Model for Answering Factoid Entity Questions
IF 5.4 | CAS Zone 2, Computer Science | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-07-15 | DOI: 10.1145/3678003
Parastoo Jafarzadeh, F. Ensan, Mahdiyar Ali Akbar Alavi, Fattane Zarrinkalam
Factoid entity questions (FEQ), which seek answers in the form of a single entity from knowledge sources such as DBpedia and Wikidata, constitute a substantial portion of user queries in search engines. This paper introduces the Knowledge Graph Embedding model for Factoid Entity Question answering (KGE-FEQ). Leveraging a textual knowledge graph derived from extensive text collections, KGE-FEQ encodes textual relationships between entities. The model employs a two-step process: (1) Triple Retrieval, where relevant triples are retrieved from the textual knowledge graph based on semantic similarities to the question, and (2) Answer Selection, where a knowledge graph embedding approach is utilized for answering the question. This involves positioning the embedding for the answer entity close to the embedding of the question entity, incorporating a vector representing the question and textual relations between entities. Extensive experiments evaluate the performance of the proposed approach, comparing KGE-FEQ to state-of-the-art baselines in factoid entity question answering and to the most advanced open-domain question answering techniques applied to FEQs. The results show that KGE-FEQ outperforms existing methods across different datasets. Ablation studies highlight the effectiveness of KGE-FEQ when both the question and the textual relations between entities are considered for answering questions.
Citations: 0
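A sketch under stated assumptions (TransE-style translational scoring; the paper's exact scoring function may differ) of the answer-selection step: place the answer near the question entity plus a relation vector, then rank candidates by distance to that translated point.

```python
import numpy as np

def rank_answers(q_entity, relation, candidates):
    """q_entity, relation: (d,) embeddings; candidates: (n, d) entity
    embeddings. Returns candidate indices, best answer first."""
    target = q_entity + relation                     # translated query point
    dists = np.linalg.norm(candidates - target, axis=1)
    return np.argsort(dists)

rng = np.random.default_rng(4)
h, r = rng.normal(size=16), rng.normal(size=16)
cands = rng.normal(size=(20, 16))
cands[7] = h + r + 0.01 * rng.normal(size=16)        # plant the true answer
print(rank_answers(h, r, cands)[0])                  # -> 7
```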
Knowledge-Enhanced Conversational Recommendation via Transformer-based Sequential Modelling
IF 5.4 | CAS Zone 2, Computer Science | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-07-12 | DOI: 10.1145/3677376
Jie Zou, Aixin Sun, Cheng Long, E. Kanoulas
In Conversational Recommender Systems (CRSs), conversations usually involve a set of items and item-related entities or attributes, e.g., a director is an entity related to a movie. These items and item-related entities are often mentioned as a dialog develops, leading to potential sequential dependencies among them. However, most existing CRSs neglect these potential sequential dependencies. In this paper, we first propose a Transformer-based sequential conversational recommendation method, named TSCR, to model the sequential dependencies in the conversations to improve CRS. In TSCR, we represent conversations by items and the item-related entities, and construct user sequences to discover user preferences by considering both the mentioned items and item-related entities. Based on the constructed sequences, we deploy a Cloze task to predict the recommended items along a sequence. Meanwhile, in certain domains, knowledge graphs formed by the items and their related entities are readily available, providing various kinds of associations among them. Given that TSCR does not benefit from such knowledge graphs, we then propose a knowledge graph enhanced version of TSCR, called TSCRKG. Specifically, we leverage the knowledge graph to initialize our model TSCRKG offline, and augment the user sequence of conversations (i.e., the sequence of mentioned items and item-related entities in the conversation) with multi-hop paths in the knowledge graph. Experimental results demonstrate that our TSCR model significantly outperforms state-of-the-art baselines, and the enhanced version TSCRKG further improves recommendation performance on top of TSCR.
Citations: 0
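An illustrative sketch of the Cloze training signal the abstract describes (tokenization, masking rate, and sampling details are assumptions of ours): mask positions in the sequence of mentioned items and item-related entities, and train the model to recover the originals.

```python
import random

MASK = "[MASK]"

def cloze_mask(sequence, mask_prob=0.2, seed=0):
    """Returns (masked sequence, {position: original token}) for one
    conversation's sequence of items and item-related entities."""
    rng = random.Random(seed)
    masked, targets = list(sequence), {}
    for i, tok in enumerate(sequence):
        if rng.random() < mask_prob:
            targets[i] = tok       # the model must predict this token
            masked[i] = MASK
    return masked, targets

seq = ["Inception", "director", "Interstellar", "sci-fi", "Dunkirk"]
print(cloze_mask(seq, mask_prob=0.4))
```

TSCRKG would additionally splice multi-hop knowledge-graph paths into such sequences before masking, per the abstract.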
On Elastic Language Models
IF 5.4 | CAS Zone 2, Computer Science | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-07-12 | DOI: 10.1145/3677375
Chen Zhang, Benyou Wang, Dawei Song
Large-scale pretrained language models have achieved compelling performance in a wide range of language understanding and information retrieval tasks. While their large scale ensures capacity, it also hinders deployment. Knowledge distillation offers an opportunity to compress a large language model into a small one, in order to reach a reasonable latency-performance tradeoff. However, for scenarios where the number of requests (e.g., queries submitted to a search engine) is highly variable, the static tradeoff attained by the compressed language model might not always fit. Once a model is assigned a static tradeoff, it could be inadequate in that the latency is too high when the number of requests is large, or the performance is too low when the number of requests is small. To this end, we propose an elastic language model (ElasticLM) that elastically adjusts the tradeoff according to the request stream. The basic idea is to introduce a compute elasticity to the compressed language model, so that the tradeoff can vary on the fly along a scalable and controllable compute. Specifically, we impose an elastic structure to equip ElasticLM with compute elasticity and design an elastic optimization method to learn ElasticLM under compute elasticity. To serve ElasticLM, we apply an elastic schedule. Considering the specificity of information retrieval, we adapt ElasticLM to dense retrieval and reranking, and present an ElasticDenser and an ElasticRanker, respectively. Offline evaluation is conducted on the language understanding benchmark GLUE and several information retrieval tasks including Natural Questions, TriviaQA, and MS MARCO. The results show that ElasticLM, along with ElasticDenser and ElasticRanker, performs correctly and competitively compared with an array of static baselines. Furthermore, an online simulation with concurrency is also carried out. The results demonstrate that ElasticLM can provide elastic tradeoffs with respect to varying request streams.
Citations: 0
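A toy sketch of what an elastic schedule could look like (entirely our construction; the paper's scheduler and its parameters are not specified in the abstract): choose how much compute to spend per request, here the number of transformer layers to run, from the current request backlog, trading accuracy for latency when load spikes.

```python
def elastic_depth(queue_len, max_layers=12, min_layers=2, budget=64):
    """Run fewer layers as the backlog approaches the latency budget.
    All parameter names and values here are hypothetical."""
    if queue_len <= 0:
        return max_layers                      # idle: spend full compute
    scale = max(0.0, 1.0 - queue_len / budget)  # 1 when idle, 0 when saturated
    return max(min_layers, round(min_layers + scale * (max_layers - min_layers)))

for q in (0, 8, 32, 64, 128):
    print(f"queue={q:3d} -> run {elastic_depth(q)} layers")
```

The design choice this illustrates is the abstract's core point: the latency-performance tradeoff is picked per request stream at serving time rather than fixed once at compression time.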
Graph Augmentation Empowered Contrastive Learning for Recommendation
IF 5.4 | CAS Zone 2, Computer Science | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-07-12 | DOI: 10.1145/3677377
Lixiang Xu, Yusheng Liu, Tong Xu, Enhong Chen, Y. Tang
The application of contrastive learning (CL) to collaborative filtering (CF) in recommender systems has achieved remarkable success. CL-based recommendation models mainly focus on creating multiple augmented views by employing different graph augmentation methods and utilizing these views for self-supervised learning. However, current CL methods for recommender systems usually struggle to fully address the problem of noisy data. To address this problem, we propose the Graph Augmentation Empowered Contrastive Learning (GAECL) framework for recommendation, which uses graph augmentation based on topological and semantic dual adaptation and global co-modeling via structural optimization to co-create contrasting views for better augmentation of the CF paradigm. Specifically, we strictly filter out unimportant topologies by reconstructing the adjacency matrix and mask unimportant attributes in nodes according to the PageRank centrality principle to generate an augmented view that filters out noisy data. Additionally, GAECL achieves global collaborative modeling through structural optimization and generates another augmented view based on the PageRank centrality principle. This helps to filter noisy data while preserving the original semantics for more effective data augmentation. Extensive experiments on five datasets demonstrate the superior performance of our model over various recommendation models.
Citations: 0
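A hedged sketch of the PageRank-guided augmentation described in the abstract (the masking threshold and damping factor are illustrative choices of ours): compute node centrality on the interaction graph, then mask attributes of low-centrality nodes so the augmented view drops likely-noisy signal.

```python
import numpy as np

def pagerank(A, d=0.85, iters=50):
    """A: (n, n) adjacency matrix; returns PageRank centrality scores."""
    n = A.shape[0]
    deg = A.sum(axis=1, keepdims=True)
    # row-normalize; rows with zero out-degree contribute nothing
    P = np.divide(A, deg, out=np.zeros_like(A), where=deg > 0)
    r = np.full(n, 1.0 / n)
    for _ in range(iters):
        r = (1 - d) / n + d * (P.T @ r)
    return r

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.ones((4, 3))                   # node attribute matrix
r = pagerank(A)
X[r < np.median(r)] = 0.0             # mask attributes of peripheral nodes
print(r.round(3), X.sum(axis=1))      # central nodes keep their attributes
```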
ReCRec: Reasoning the Causes of Implicit Feedback for Debiased Recommendation
IF 5.4 | CAS Zone 2, Computer Science | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-07-08 | DOI: 10.1145/3672275
Siyi Lin, Sheng Zhou, Jiawei Chen, Yan Feng, Qihao Shi, Chun Chen, Ying Li, Can Wang
Implicit feedback (e.g., user clicks) is widely used in building recommender systems (RS). However, the inherent, notorious exposure bias significantly affects recommendation performance. Exposure bias refers to the phenomenon that implicit feedback is influenced by user exposure and does not precisely reflect user preference. Current methods for addressing exposure bias primarily reduce confidence in unclicked data, employ exposure models, or leverage propensity scores. Regrettably, these approaches often lead to biased estimations or elevated model variance, yielding sub-optimal results. To overcome these limitations, we propose a new method, ReCRec, that Reasons the Causes behind implicit feedback for debiased Recommendation. ReCRec identifies three scenarios behind unclicked data, i.e., unexposed, disliked, or a combination of both. A reasoning module is employed to infer the category to which each instance pertains. Consequently, the model is capable of extracting reliable positive and negative signals from unclicked data, thereby facilitating more accurate learning of user preferences. We also conduct thorough theoretical analyses to demonstrate the debiased nature and low variance of ReCRec. Extensive experiments on both semi-synthetic and real-world datasets validate its superiority over state-of-the-art methods.
Citations: 0
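A minimal sketch (our simplification, not ReCRec's reasoning module) of the causal split the abstract describes: for an unclicked user-item pair, divide probability mass between "never exposed" and "exposed but disliked", given an exposure prior and a preference estimate, using the standard click model P(click) = P(exposure) x P(prefer).

```python
def reason_unclicked(p_exposure, p_prefer):
    """Posterior over the two causes of a non-click, conditioning on click=0.
    Inputs are assumed probabilities, e.g., from exposure/preference models."""
    p_unexposed = 1.0 - p_exposure
    p_dislike = p_exposure * (1.0 - p_prefer)
    z = p_unexposed + p_dislike            # P(no click)
    return p_unexposed / z, p_dislike / z

print(reason_unclicked(p_exposure=0.2, p_prefer=0.9))  # mostly "unexposed"
print(reason_unclicked(p_exposure=0.9, p_prefer=0.1))  # mostly "dislike"
```

This is what lets unclicked data yield reliable negatives: only pairs with high posterior "dislike" mass are treated as negative signal.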
TriMLP: A Foundational MLP-like Architecture for Sequential Recommendation
IF 5.6 | CAS Zone 2, Computer Science | Q1 Business, Management and Accounting | Pub Date: 2024-06-10 | DOI: 10.1145/3670995
Yiheng Jiang, Yuanbo Xu, Yongjian Yang, Funing Yang, Pengyang Wang, Chaozhuo Li, Fuzhen Zhuang, Hui Xiong
In this work, we present TriMLP as a foundational MLP-like architecture for sequential recommendation, simultaneously achieving computational efficiency and promising performance. First, we empirically study the incompatibility between existing purely MLP-based models and sequential recommendation: the inherent fully-connective structure endows historical user-item interactions (referred to as tokens) with unrestricted communication and overlooks the essential chronological order in sequences. Then, we propose the MLP-based Triangular Mixer to establish ordered contact among tokens and develop the primary sequential modeling capability under the standard auto-regressive training fashion. It contains (i) a global mixing layer that drops the lower-triangle neurons in the MLP to block the anti-chronological connections from future tokens and (ii) a local mixing layer that further disables specific upper-triangle neurons to split the sequence into multiple independent sessions. The mixer serially alternates these two layers to support fine-grained preference modeling, where the global layer focuses on the long-range dependency in the whole sequence and the local layer captures short-term patterns within sessions. Experimental results on 12 datasets of different scales from 4 benchmarks show that TriMLP consistently attains a favorable accuracy/efficiency trade-off over all validated datasets, where the average performance boost against several state-of-the-art baselines reaches up to 14.88%, and the maximum reduction of inference time reaches 23.73%. These intriguing properties render TriMLP a strong contender to the well-established RNN-, CNN- and Transformer-based sequential recommenders. Code is available at https://github.com/jiangyiheng1/TriMLP.
Citations: 0
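A sketch of the triangular masking idea (ours; the paper's exact masking orientation and layer composition may differ): zero one triangle of the token-mixing weight matrix so that position i only aggregates tokens at positions <= i, preserving chronological order in an otherwise fully-connected MLP mixer.

```python
import numpy as np

def triangular_mix(X, W):
    """X: (seq_len, d) token embeddings; W: (seq_len, seq_len) learnable
    token-mixing weights. Masking makes the mixing causal."""
    causal_W = np.tril(W)      # drop connections coming from future tokens
    return causal_W @ X

rng = np.random.default_rng(5)
X = rng.normal(size=(6, 4))
W = rng.normal(size=(6, 6))
Y = triangular_mix(X, W)

# Perturbing only the last (future) token leaves all earlier outputs unchanged.
X2 = X.copy()
X2[-1] += 100.0
assert np.allclose(triangular_mix(X2, W)[:-1], Y[:-1])
print(Y.shape)
```

The local mixing layer would apply a further block-diagonal-style mask on top of this to confine mixing within sessions.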