
Latest publications from ACM Transactions on Information Systems

Less is More: Removing Redundancy of Graph Convolutional Networks for Recommendation
IF 5.6 | CAS Tier 2 (Computer Science) | Q1 Business, Management and Accounting | Pub Date: 2023-11-20 | DOI: 10.1145/3632751
Shaowen Peng, Kazunari Sugiyama, Tsunenori Mine

While Graph Convolutional Networks (GCNs) have shown great potential in recommender systems and collaborative filtering (CF), they suffer from expensive computational complexity and poor scalability. On top of that, recent works mostly combine GCNs with other advanced algorithms, which further sacrifices model efficiency and scalability. In this work, we unveil the redundancy of existing GCN-based methods in three aspects: (1) Feature redundancy. By reviewing GCNs from a spectral perspective, we show that most spectral graph features are noisy for recommendation, and that stacking graph convolution layers can suppress but cannot completely remove the noisy features, a finding we largely summarize from our previous work; (2) Structure redundancy. By providing a deep insight into how user/item representations are generated, we show that what makes them distinctive lies in the spectral graph features, and that the core idea of GCNs (i.e., neighborhood aggregation) is not what makes GCNs effective; and (3) Distribution redundancy. Following the observations from (1), we further show that the number of required spectral features is closely related to the spectral distribution: important information tends to be concentrated in more (fewer) spectral features on a flatter (sharper) distribution. To concentrate important information in as few features as possible, we sharpen the spectral distribution by increasing node similarity without changing the original data, thereby reducing the computational cost. To remove these three kinds of redundancy, we propose a Simplified Graph Denoising Encoder (SGDE) that exploits only the top-K singular vectors without explicit neighborhood aggregation, which significantly reduces the complexity of GCN-based methods. We further propose a scalable contrastive learning framework to alleviate data sparsity and to boost model robustness and generalization, leading to significant improvements. Extensive experiments on three real-world datasets show that our proposed SGDE not only achieves state-of-the-art performance but also shows higher scalability and efficiency than our previously proposed GDE as well as traditional and GCN-based CF methods.
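As a rough, self-contained illustration of the spectral idea described above (not the authors' SGDE), the sketch below ranks items using only the top-K singular vectors of a degree-normalized user-item interaction matrix; the toy matrix, the value of K, and the symmetric normalization are assumptions made for the example.

```python
import numpy as np
from scipy.sparse.linalg import svds

# Toy binary user-item interaction matrix (5 users x 6 items).
R = np.array([
    [1, 0, 1, 0, 0, 1],
    [1, 1, 0, 0, 0, 0],
    [0, 1, 1, 1, 0, 0],
    [0, 0, 0, 1, 1, 0],
    [0, 0, 1, 0, 1, 1],
], dtype=float)

# Symmetric degree normalization, as commonly used in spectral collaborative filtering.
R_norm = R / np.sqrt(R.sum(axis=1, keepdims=True)) / np.sqrt(R.sum(axis=0, keepdims=True))

# Keep only the top-K singular vectors: the "important" spectral features.
K = 3
U, s, Vt = svds(R_norm, k=K)          # no explicit neighborhood aggregation needed
scores = (U * s) @ Vt                 # low-rank reconstruction used for ranking

# Recommend the highest-scoring unseen item for user 0.
candidate_scores = np.where(R[0] > 0, -np.inf, scores[0])
print("recommended item for user 0:", int(candidate_scores.argmax()))
```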

Citations: 0
Contextualizing and Expanding Conversational Queries without Supervision
IF 5.6 | CAS Tier 2 (Computer Science) | Q1 Business, Management and Accounting | Pub Date: 2023-11-17 | DOI: 10.1145/3632622
Antonios Minas Krasakis, Andrew Yates, Evangelos Kanoulas

Most conversational passage retrieval systems try to resolve conversational dependencies by using an intermediate query resolution step. To do so, they synthesize conversational data or assume the availability of large-scale question rewriting datasets. To relax those conditions, we propose a zero-shot unified resolution-retrieval approach that (i) contextualizes and (ii) expands query embeddings using the conversation history, without fine-tuning on conversational data. Contextualization biases the last user question embeddings towards the conversation. Query expansion is used in two ways: (i) abstractive expansion generates embeddings based on the current question and previous history, whereas (ii) extractive expansion tries to identify history term embeddings based on attention weights from the retriever. Our experiments demonstrate the effectiveness of both contextualization and unified expansion in improving conversational retrieval. Contextualization does so mostly by resolving anaphoras to the conversation and bringing their embeddings closer to the important resolution terms that were omitted. By adding embeddings to the query, expansion targets phenomena of ellipsis more explicitly, and our analysis verifies its effectiveness in identifying and adding important resolutions to the query. By combining contextualization and expansion, we find that our zero-shot unified resolution-retrieval methods are competitive and can even outperform supervised methods.
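A minimal sketch of the two operations on toy embeddings is given below; the stand-in encoder, the mixing weight alpha, and the number of expansion terms are illustrative assumptions rather than the paper's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 8

def embed(tokens):
    # Stand-in for a dense retriever's token encoder.
    return np.stack([rng.normal(size=dim) for _ in tokens])

history_terms = ["neural", "rankers", "training", "data"]
last_question = ["how", "are", "they", "evaluated"]

H = embed(history_terms)               # history term embeddings
q = embed(last_question).mean(axis=0)  # last-question embedding

# (i) Contextualization: bias the question embedding towards the conversation.
alpha = 0.3                            # assumed mixing weight
q_ctx = (1 - alpha) * q + alpha * H.mean(axis=0)

# (ii) Extractive expansion: add the history term embeddings most similar to
#      the contextualized query (a stand-in for the retriever's attention weights).
sims = H @ q_ctx / (np.linalg.norm(H, axis=1) * np.linalg.norm(q_ctx))
top = sims.argsort()[-2:]              # keep the two highest-scoring history terms
q_expanded = q_ctx + H[top].sum(axis=0)

print("expansion terms:", [history_terms[i] for i in top])
print("expanded query norm:", round(float(np.linalg.norm(q_expanded)), 3))
```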

Citations: 0
Exploring Dense Retrieval for Dialogue Response Selection
CAS Tier 2 (Computer Science) | Q1 Business, Management and Accounting | Pub Date: 2023-11-14 | DOI: 10.1145/3632750
Tian Lan, Deng Cai, Yan Wang, Yixuan Su, Heyan Huang, Xian-Ling Mao
Recent progress in deep learning has continuously improved the accuracy of dialogue response selection. However, in real-world scenarios, the high computation cost forces existing dialogue response selection models to rank only a small number of candidates recalled by a coarse-grained model, precluding many high-quality candidates. To overcome this problem, we present a novel and efficient response selection model and a set of tailor-designed learning strategies to train it effectively. The proposed model consists of a dense retrieval module and an interaction layer, and can directly select the proper response from a large corpus. We conduct re-rank and full-rank evaluations on widely used benchmarks to evaluate our proposed model. Extensive experimental results demonstrate that our proposed model notably outperforms the state-of-the-art baselines on both re-rank and full-rank evaluations. Moreover, human evaluation results show that the response quality can be improved further by enlarging the candidate pool with nonparallel corpora. In addition, we release high-quality benchmarks that are carefully annotated for more accurate dialogue response selection evaluation. All source code, datasets, model parameters, and other related resources have been made publicly available.
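The sketch below illustrates the basic dense-retrieval step the abstract refers to: scoring every candidate response in a pre-computed embedding index by inner product with the context embedding, instead of re-ranking a small recalled set. The encoders are stubbed out with random vectors and the index size is an assumption for the example.

```python
import numpy as np

rng = np.random.default_rng(1)
dim = 16
num_candidates = 10000

# Stand-ins for a trained context encoder output and a pre-computed response index.
context_vec = rng.normal(size=dim)
response_index = rng.normal(size=(num_candidates, dim))

# Dense retrieval: exhaustive inner-product search over the whole corpus,
# instead of re-ranking a small set recalled by a coarse-grained model.
scores = response_index @ context_vec
top_k = np.argsort(-scores)[:5]
print("top response ids:", top_k.tolist())
print("scores:", np.round(scores[top_k], 3).tolist())
```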
Citations: 11
(Un)likelihood Training for Interpretable Embedding
CAS Tier 2 (Computer Science) | Q1 Business, Management and Accounting | Pub Date: 2023-11-13 | DOI: 10.1145/3632752
Jiaxin Wu, Chong-Wah Ngo, Wing-Kwong Chan, Zhijian Hou
Cross-modal representation learning has become a new normal for bridging the semantic gap between text and visual data. Learning modality-agnostic representations in a continuous latent space, however, is often treated as a black-box data-driven training process. It is well known that the effectiveness of representation learning depends heavily on the quality and scale of training data. For video representation learning, having a complete set of labels that annotate the full spectrum of video content for training is highly difficult, if not impossible. These issues, black-box training and dataset bias, make representation learning practically challenging to deploy for video understanding due to unexplainable and unpredictable results. In this paper, we propose two novel training objectives, likelihood and unlikelihood functions, to unroll the semantics behind embeddings while addressing the label sparsity problem in training. The likelihood training aims to interpret semantics of embeddings beyond the training labels, while the unlikelihood training leverages prior knowledge for regularization to ensure semantically coherent interpretation. With both training objectives, a new encoder-decoder network, which learns interpretable cross-modal representations, is proposed for ad-hoc video search. Extensive experiments on the TRECVid and MSR-VTT datasets show that the proposed network outperforms several state-of-the-art retrieval models with a statistically significant performance margin.
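One plausible way to write per-concept likelihood and unlikelihood terms over sigmoid scores is sketched below; the concept masks and the exact form of the objective are assumptions for illustration, not the paper's definition.

```python
import numpy as np

def interpretability_loss(logits, positive_mask, excluded_mask, eps=1e-8):
    """Toy likelihood/unlikelihood objective over per-concept logits.

    positive_mask: concepts the embedding should express (likelihood term).
    excluded_mask: concepts ruled out by prior knowledge (unlikelihood term).
    """
    p = 1.0 / (1.0 + np.exp(-logits))                          # per-concept probabilities
    likelihood = -np.log(p[positive_mask] + eps).sum()          # push these scores up
    unlikelihood = -np.log(1.0 - p[excluded_mask] + eps).sum()  # push these scores down
    return likelihood + unlikelihood

concept_logits = np.array([2.0, -1.0, 0.5, 3.0])
loss = interpretability_loss(
    concept_logits,
    positive_mask=np.array([True, False, False, False]),
    excluded_mask=np.array([False, False, False, True]),
)
print("likelihood + unlikelihood loss:", round(float(loss), 4))
```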
Citations: 0
Cross-domain Recommendation via Dual Adversarial Adaptation
CAS Tier 2 (Computer Science) | Q1 Business, Management and Accounting | Pub Date: 2023-11-11 | DOI: 10.1145/3632524
Hongzu Su, Jingjing Li, Zhekai Du, Lei Zhu, Ke Lu, Heng Tao Shen
Data scarcity is a perpetual challenge for recommendation systems, and researchers have proposed a variety of cross-domain recommendation methods to alleviate the problem of data scarcity in target domains. However, in many real-world cross-domain recommendation systems, the source domain and the target domain are sampled from different data distributions, which obstructs cross-domain knowledge transfer. In this paper, we propose to explicitly align the data distributions of the source domain and the target domain to alleviate imbalanced sample distribution and thus address the data scarcity issue in the target domain. Technically, our proposed approach builds a dual adversarial adaptation (DAA) framework to adversarially train the target model together with a pre-trained source model. Two domain discriminators play the two-player minimax game with the target model and guide the target model to learn reliable domain-invariant features that can be transferred across domains. At the same time, the target model is calibrated to learn domain-specific information of the target domain. In addition, we formulate our approach as a plug-and-play module to boost existing recommendation systems. We apply the proposed method to address the issues of insufficient data and imbalanced sample distribution in real-world Click-Through Rate (CTR)/Conversion Rate (CVR) predictions on two large-scale industrial datasets. We evaluate the proposed method in scenarios with and without overlapping users/items, and extensive experiments verify that the proposed method is able to significantly improve the prediction performance on the target domain. For instance, our method boosts PLE with a performance improvement of 15.4% in terms of Area Under Curve (AUC) compared with single-domain PLE on our private game dataset. In addition, our method is able to surpass single-domain MMoE by 6.85% on the public datasets. Code: https://github.com/TL-UESTC/DAA.
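The adversarial ingredient can be illustrated with a standard gradient-reversal domain discriminator, sketched below in PyTorch; the layer sizes and the single discriminator are simplifications of the dual-discriminator DAA setup described in the abstract, not the authors' architecture.

```python
import torch
from torch import nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; negated, scaled gradient in the backward pass."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

encoder = nn.Sequential(nn.Linear(32, 16), nn.ReLU())    # shared feature encoder
domain_discriminator = nn.Linear(16, 1)                   # predicts source vs. target

x = torch.randn(8, 32)                                    # a mixed batch of user/item features
domain_labels = torch.randint(0, 2, (8, 1)).float()       # 0 = source domain, 1 = target domain

features = encoder(x)
domain_logits = domain_discriminator(GradReverse.apply(features, 1.0))

# Minimizing this loss trains the discriminator, while the reversed gradient
# pushes the encoder towards domain-invariant features.
loss = nn.functional.binary_cross_entropy_with_logits(domain_logits, domain_labels)
loss.backward()
print("domain loss:", loss.item())
```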
Citations: 0
Retrieval for Extremely Long Queries and Documents with RPRS: a Highly Efficient and Effective Transformer-based Re-Ranker
CAS Tier 2 (Computer Science) | Q1 Business, Management and Accounting | Pub Date: 2023-11-11 | DOI: 10.1145/3631938
Arian Askari, Suzan Verberne, Amin Abolghasemi, Wessel Kraaij, Gabriella Pasi
Retrieval with extremely long queries and documents is a well-known and challenging task in information retrieval and is commonly known as Query-by-Document (QBD) retrieval. Specifically designed Transformer models that can handle long input sequences have not shown high effectiveness in QBD tasks in previous work. We propose a Re-Ranker based on the novel Proportional Relevance Score (RPRS) to compute the relevance score between a query and the top-k candidate documents. Our extensive evaluation shows that RPRS obtains significantly better results than the state-of-the-art models on five different datasets. Furthermore, RPRS is highly efficient, since all documents can be pre-processed, embedded, and indexed before query time, which gives our re-ranker a complexity of O(N), where N is the total number of sentences in the query and candidate documents. In addition, our method addresses the problem of low-resource training in QBD retrieval tasks: it does not need large amounts of training data and has only three parameters with a limited range that can be optimized with a grid search, even if only a small amount of labeled data is available. Our detailed analysis shows that RPRS benefits from covering the full length of candidate documents and queries.
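The sketch below shows a generic sentence-level query-by-document scorer over pre-computed sentence embeddings, which is the general setting the abstract describes; the max-then-mean aggregation is an assumption made for the example and differs from the actual proportional relevance score.

```python
import numpy as np

rng = np.random.default_rng(2)
dim = 8

def embed_sentences(n):
    # Stand-in for sentence embeddings that would be pre-computed and indexed
    # before query time (so scoring stays linear in the number of sentences).
    v = rng.normal(size=(n, dim))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

query_sents = embed_sentences(12)  # a very long query document, sentence by sentence
candidates = {f"doc{i}": embed_sentences(int(rng.integers(20, 60))) for i in range(5)}

def sentence_level_score(query, doc):
    # For every query sentence, take its best-matching document sentence,
    # then average; the real RPRS uses a different, proportional weighting.
    sims = query @ doc.T
    return float(sims.max(axis=1).mean())

ranking = sorted(candidates,
                 key=lambda d: sentence_level_score(query_sents, candidates[d]),
                 reverse=True)
print("re-ranked candidates:", ranking)
```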
Citations: 0
Stopping Methods for Technology Assisted Reviews based on Point Processes
CAS Tier 2 (Computer Science) | Q1 Business, Management and Accounting | Pub Date: 2023-11-11 | DOI: 10.1145/3631990
Mark Stevenson, Reem Bin-Hezam
Technology Assisted Review (TAR), which aims to reduce the effort required to screen collections of documents for relevance, is used to develop systematic reviews of medical evidence and to identify documents that must be disclosed in response to legal proceedings. Stopping methods are algorithms that determine when to stop screening documents during the TAR process, helping to ensure that workload is minimised while still achieving a high level of recall. This paper proposes a novel stopping method based on point processes, which are statistical models that can be used to represent the occurrence of random events. The approach uses rate functions to model the occurrence of relevant documents in the ranking and compares four candidate rate functions, including one (hyperbolic) that has not previously been used for this purpose. Evaluation is carried out using standard datasets (CLEF e-Health, TREC Total Recall, TREC Legal), and this work is the first to explore stopping-method robustness by reporting performance on a range of rankings of varying effectiveness. Results show that the proposed method achieves the desired level of recall without requiring an excessive number of documents to be examined in the majority of cases, and it also compares well against multiple alternative approaches.
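A simplified stopping rule in this spirit is sketched below: a rate function is fitted to the relevance judgments observed so far (here by least squares on the cumulative count, a simplification of point-process likelihood fitting), the expected total number of relevant documents is extrapolated, and screening stops once the target recall appears to be reached. The judged prefix, collection size, and hyperbolic-style rate function are assumptions for the example.

```python
import numpy as np
from scipy.optimize import curve_fit

# Relevance judgments for the screened prefix of the ranking (1 = relevant).
judged = np.array([1, 1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 1,
                   0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0], dtype=float)
collection_size = 200      # total number of ranked documents (assumed)
target_recall = 0.9        # desired level of recall (assumed)

def expected_relevant(x, a, k):
    # Integral of a hyperbolic-style rate function a / (1 + k*x): the expected
    # number of relevant documents among the first x ranked documents.
    return (a / k) * np.log1p(k * x)

rank = np.arange(1, len(judged) + 1, dtype=float)
cum_relevant = np.cumsum(judged)
(a, k), _ = curve_fit(expected_relevant, rank, cum_relevant,
                      p0=(1.0, 0.1), bounds=(1e-6, np.inf))

estimated_total = expected_relevant(collection_size, a, k)
found = cum_relevant[-1]
print(f"found {found:.0f} relevant, estimated total {estimated_total:.1f}")
print("stop screening:", found >= target_recall * estimated_total)
```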
Citations: 0
Contrastive Multi-View Interest Learning for Cross-Domain Sequential Recommendation
CAS Tier 2 (Computer Science) | Q1 Business, Management and Accounting | Pub Date: 2023-11-09 | DOI: 10.1145/3632402
Tianzi Zang, Yanmin Zhu, Ruohan Zhang, Chunyang Wang, Ke Wang, Jiadi Yu
Cross-domain recommendation (CDR), which leverages information collected from other domains, has been empirically demonstrated to effectively alleviate the data sparsity and cold-start problems encountered in traditional recommendation systems. However, current CDR methods, including those considering time information, do not jointly model the general and current interests within and across domains, which is pivotal for accurately predicting users' future interactions. In this paper, we propose a Contrastive learning enhanced Multi-View interest learning model (CMVCDR) for cross-domain sequential recommendation. Specifically, we design a static view and a sequential view to model users' general interests and current interests, respectively. We divide a user's general interest representation into a domain-invariant part and a domain-specific part. A cross-domain contrastive learning objective is introduced to impose constraints for optimizing these representations. In the sequential view, we first devise an attention mechanism guided by users' domain-invariant interest representations to distill cross-domain knowledge pertaining to domain-invariant factors while reducing noise from irrelevant factors. We further design a domain-specific interest-guided temporal information aggregation mechanism to generate users' current interest representations. Extensive experiments demonstrate the effectiveness of our proposed model compared with state-of-the-art methods.
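The cross-domain contrastive objective can be illustrated with a standard InfoNCE-style loss that pulls together the domain-invariant representations of the same user from two domains and pushes apart those of other users; the sketch below uses toy vectors and an assumed temperature, not the paper's exact formulation.

```python
import numpy as np

def info_nce(anchor, positive, negatives, temperature=0.1):
    # Pull the anchor towards its positive (same user, other domain) and away
    # from negatives (other users), using cosine similarities.
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    logits = np.array([cos(anchor, positive)] + [cos(anchor, n) for n in negatives])
    logits /= temperature
    log_probs = logits - np.log(np.exp(logits).sum())
    return -log_probs[0]

rng = np.random.default_rng(3)
dim = 16
user_a_domain1 = rng.normal(size=dim)                          # domain-invariant part, domain 1
user_a_domain2 = user_a_domain1 + 0.1 * rng.normal(size=dim)   # same user seen in domain 2
other_users = [rng.normal(size=dim) for _ in range(5)]          # negatives from the batch

print("cross-domain contrastive loss:", info_nce(user_a_domain1, user_a_domain2, other_users))
```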
Citations: 0
Personalized and Diversified: Ranking Search Results in an Integrated Way
CAS Tier 2 (Computer Science) | Q1 Business, Management and Accounting | Pub Date: 2023-11-09 | DOI: 10.1145/3631989
Shuting Wang, Zhicheng Dou, Jiongnan Liu, Qiannan Zhu, Ji-Rong Wen
Ambiguity in queries is a common problem in information retrieval. There are currently two solutions: search result personalization and diversification. The former aims to tailor results for different users based on their preferences, but its limitations are redundant results and incomplete capture of user intents. The goal of the latter is to return results that cover as many aspects related to the query as possible. It improves diversity yet sacrifices personalization and cannot return the exact results the user wants. Intuitively, these two solutions can complement each other and bring more satisfactory reranking results. In this paper, we propose a novel framework, namely PnD, to integrate personalization and diversification reasonably. We employ the degree of re-finding to determine the weight of personalization dynamically. Moreover, to improve the diversity and relevance of reranked results simultaneously, we design a reset RNN structure (RRNN) with a “reset gate” to measure the influence of the newly selected document on novelty. Besides, we devise a “subtopic learning layer” to learn virtual subtopics, which can yield fine-grained representations of queries, documents, and user profiles. Experimental results illustrate that our model can significantly outperform existing search result personalization and diversification methods.
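A greatly simplified integration of the two signals is sketched below: documents are selected greedily with a score that combines relevance, a personalization term weighted by the degree of re-finding, and an MMR-style novelty term standing in for the learned reset-gate mechanism; all weights and embeddings are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)
dim = 8
docs = rng.normal(size=(20, dim))          # candidate document embeddings
relevance = rng.uniform(size=20)           # base query-document relevance
personal = rng.uniform(size=20)            # agreement with the user profile
refinding_degree = 0.7                     # how strongly the user re-finds old results

w_personal = refinding_degree              # more re-finding -> trust personalization more
w_novelty = 0.5                            # assumed weight of the diversity term

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

selected = []
for _ in range(5):
    best, best_score = None, -np.inf
    for i in range(len(docs)):
        if i in selected:
            continue
        # Novelty w.r.t. already selected documents (MMR-style stand-in for
        # the learned reset-gate novelty signal in the paper).
        novelty = 1.0 if not selected else 1.0 - max(cosine(docs[i], docs[j]) for j in selected)
        score = relevance[i] + w_personal * personal[i] + w_novelty * novelty
        if score > best_score:
            best, best_score = i, score
    selected.append(best)

print("re-ranked top-5 documents:", selected)
```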
Citations: 0
Decoupled Progressive Distillation for Sequential Prediction with Interaction Dynamics
CAS Tier 2 (Computer Science) | Q1 Business, Management and Accounting | Pub Date: 2023-11-09 | DOI: 10.1145/3632403
Kaixi Hu, Lin Li, Qing Xie, Jianquan Liu, Xiaohui Tao, Guandong Xu
Sequential prediction has great value for resource allocation due to its capability of analyzing intents for the next prediction. A fundamental challenge arises from real-world interaction dynamics, where similar sequences involving multiple intents may exhibit different next items. More importantly, the sheer volume of candidate items in sequential prediction may amplify such dynamics, making it hard for deep networks to capture comprehensive intents. This paper presents a sequential prediction framework with Decoupled Progressive Distillation (DePoD), drawing on the progressive nature of human cognition. We redefine target and non-target item distillation according to their different effects in the decoupled formulation. This is achieved through two aspects: (1) Regarding how to learn, our target item distillation with progressive difficulty increases the contribution of low-confidence samples in the later training phase while keeping high-confidence samples in the earlier phase. The non-target item distillation starts from a small subset of non-target items whose size increases according to item frequency. (2) Regarding whom to learn from, a difference evaluator is utilized to progressively select an expert that provides informative knowledge among items from the cohort of peers. Extensive experiments on four public datasets show that DePoD outperforms state-of-the-art methods in terms of accuracy-based metrics.
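The progressive weighting idea for target-item distillation can be sketched as a simple schedule in which high-confidence samples dominate early in training and low-confidence samples gain weight later; the linear interpolation below is an assumed form, not the paper's exact scheme.

```python
import numpy as np

def progressive_weights(teacher_confidence, progress):
    """Per-sample weights for target-item distillation: early in training
    (progress close to 0) high-confidence samples dominate; later
    (progress close to 1) low-confidence samples contribute more."""
    return (1 - progress) * teacher_confidence + progress * (1 - teacher_confidence)

teacher_confidence = np.array([0.95, 0.80, 0.40, 0.10])  # teacher prob. of the target item
per_sample_loss = np.array([0.2, 0.5, 1.3, 2.1])         # per-sample distillation losses

for progress in (0.0, 0.5, 1.0):
    w = progressive_weights(teacher_confidence, progress)
    weighted = float((w * per_sample_loss).mean())
    print(f"progress={progress:.1f}  weighted distillation loss={weighted:.3f}")
```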
Citations: 0