
Latest publications in ACM Transactions on Information Systems

SPContrastNet: A Self-Paced Contrastive Learning Model for Few-Shot Text Classification
IF 5.6 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-03-20 | DOI: 10.1145/3652600
Junfan Chen, Richong Zhang, Xiaohan Jiang, Chunming Hu

Meta-learning has recently advanced few-shot text classification, which identifies target classes based on information transferred from source classes through a series of small tasks, or episodes. Existing works that construct their meta-learner on Prototypical Networks need improvement in learning discriminative text representations between similar classes, whose confusion may lead to conflicts in label prediction. The overfitting caused by having only a few training instances also needs to be adequately addressed. In addition, efficient episode-sampling procedures that could enhance few-shot training should be utilized. To address these problems, we first present a contrastive learning framework that learns discriminative text representations via supervised contrastive learning while mitigating overfitting via unsupervised contrastive regularization, and we then build an efficient self-paced episode-sampling approach on top of it to include more difficult episodes as training progresses. Empirical results on 8 few-shot text classification datasets show that our model outperforms the current state-of-the-art models. Extensive experimental analysis demonstrates that our supervised contrastive representation learning and unsupervised contrastive regularization techniques improve the performance of few-shot text classification. The episode-sampling analysis reveals that our self-paced sampling strategy improves training efficiency.
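The self-paced idea above, admitting harder episodes into training only as it progresses, can be sketched as follows. The linear pacing schedule, the difficulty scores, and the function name are illustrative assumptions, not the paper's actual procedure:

```python
import random

def self_paced_sample(episodes, progress, rng):
    """Pick a training episode, admitting harder ones as training progresses.

    `episodes` is a list of (episode_id, difficulty) pairs with difficulty
    in [0, 1]; `progress` in [0, 1] is the fraction of training completed.
    The linear pacing schedule below is an assumption for illustration.
    """
    threshold = 0.2 + 0.8 * progress  # early on, only easy episodes qualify
    eligible = [e for e, d in episodes if d <= threshold]
    return rng.choice(eligible)

episodes = [(f"ep{i}", i / 9) for i in range(10)]  # difficulty grows with i
rng = random.Random(0)
early = self_paced_sample(episodes, progress=0.0, rng=rng)  # easy episodes only
late = self_paced_sample(episodes, progress=1.0, rng=rng)   # any episode
```

Early in training only the two easiest episodes are eligible; by the end, all ten are.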

Citations: 0
Distributional Fairness-aware Recommendation
IF 5.6 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-03-18 | DOI: 10.1145/3652854
Hao Yang, Xian Wu, Zhaopeng Qiu, Yefeng Zheng, Xu Chen

Fairness has gradually been recognized as a significant problem in the recommendation domain. Previous models usually achieve fairness by reducing the average performance gap between different user groups. However, the average performance may not sufficiently represent all the characteristics of the performances within a user group. Thus, equivalent average performance does not necessarily mean the recommender model is fair; for example, the variances of the performances can still differ. To alleviate this problem, in this paper we define a novel type of fairness that requires the performance distributions across different user groups to be similar. We prove that with the same performance distribution, the numerical characteristics of the group performance, including the expectation, variance, and any higher-order moment, are also the same. To achieve distributional fairness, we propose a generative adversarial training framework. Specifically, we regard the recommender model as the generator, which computes the performance for each user in different groups, and we deploy a discriminator to judge which group a performance is drawn from. By iteratively optimizing the generator and the discriminator, we can theoretically prove that the optimal generator (the recommender model) indeed leads to equivalent performance distributions. To smooth the adversarial training process, we propose a novel dual curriculum learning strategy for optimal scheduling of training samples. Additionally, we tailor our framework to better suit top-N recommendation tasks by incorporating softened ranking metrics as measures of performance discrepancies. We conduct extensive experiments on real-world datasets to demonstrate the effectiveness of our model.
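A minimal numerical illustration of the motivating observation (equal average performance does not imply equal performance distributions), using made-up per-user scores chosen only to make the point:

```python
import statistics

# Hypothetical per-user recommendation performance for two user groups.
group_a = [0.50, 0.50, 0.50, 0.50]
group_b = [0.25, 0.75, 0.25, 0.75]

# Average-gap fairness sees no difference between the groups...
mean_gap = abs(statistics.mean(group_a) - statistics.mean(group_b))

# ...yet a higher moment (the variance) clearly differs, so the
# performance distributions are not the same.
var_gap = abs(statistics.pvariance(group_a) - statistics.pvariance(group_b))
```

This is exactly the case the paper's distributional criterion is designed to catch: matching the whole distribution forces every moment to match, not just the mean.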

Citations: 0
Discrete Federated Multi-behavior Recommendation for Privacy-Preserving Heterogeneous One-Class Collaborative Filtering
IF 5.6 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-03-18 | DOI: 10.1145/3652853
Enyue Yang, Weike Pan, Qiang Yang, Zhong Ming

Recently, federated recommendation has become a research hotspot, mainly because of users' growing awareness of data privacy. In heterogeneous one-class collaborative filtering (HOCCF), a recent and important recommendation problem, each user may be involved with two different types of implicit feedback, i.e., examinations and purchases. So far, privacy-preserving HOCCF has received relatively little attention. Existing federated recommendation works often overlook the fact that some privacy-sensitive behaviors, such as purchases, must be collected to meet basic business imperatives in e-commerce, for example. Hence, the user privacy constraints can and should be relaxed when deploying a recommendation system in real scenarios. In this paper, we study the federated multi-behavior recommendation problem under the assumption that purchase behaviors can be collected. Moreover, two additional challenges need to be addressed when deploying federated recommendation: the low storage capacity of users' devices, which cannot hold all the item vectors, and the low computational power available for users to participate in federated learning. To release the potential of privacy-preserving HOCCF, we propose a novel framework, named discrete federated multi-behavior recommendation (DFMR), which allows the collection of business-necessary behaviors (i.e., purchases) by the server. To reduce the storage overhead, we use discrete hashing techniques, which compress the parameters down to 1.56% of their real-valued size. To further improve computational efficiency, we design a memorization strategy in the cache-updating module to accelerate the training process. Extensive experiments on four public datasets show the superiority of our DFMR in terms of both accuracy and efficiency.
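The quoted 1.56% figure is consistent with replacing real-valued parameters by 1-bit hash codes when the real-valued baseline is stored in 64-bit precision (1/64 = 1.5625%). This back-of-the-envelope check is ours, not the paper's, and the 64-bit assumption is exactly that, an assumption:

```python
# Storage ratio of 1-bit binarized hash codes vs. real-valued parameters,
# assuming the real-valued baseline uses double precision (an assumption).
BITS_REAL = 64  # bits per real-valued parameter
BITS_CODE = 1   # bits per discrete hash-code parameter

ratio_pct = 100 * BITS_CODE / BITS_REAL
print(f"{ratio_pct:.2f}%")  # prints "1.56%", matching the abstract's figure
```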

Citations: 0
DHyper: A Recurrent Dual Hypergraph Neural Network for Event Prediction in Temporal Knowledge Graphs
IF 5.6 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-03-18 | DOI: 10.1145/3653015
Xing Tang, Ling Chen, Hongyu Shi, Dandan Lyu

Event prediction is a vital and challenging task in temporal knowledge graphs (TKGs), which play crucial roles in various applications. Recently, many graph neural network-based approaches have been proposed to model the graph structure information in TKGs. However, these approaches construct graphs based only on quadruplets and model only the pairwise correlations between entities, and thus fail to capture the high-order correlations among entities. To this end, we propose DHyper, a recurrent Dual Hypergraph neural network for event prediction in TKGs, which simultaneously models the influences of the high-order correlations both among entities and among relations. Specifically, a dual hypergraph learning module is proposed to discover the high-order correlations among entities and among relations in a parameterized way. A dual hypergraph message passing network is introduced to perform information aggregation and representation fusion on the entity hypergraph and the relation hypergraph. Extensive experiments on six real-world datasets demonstrate that DHyper achieves state-of-the-art performance, outperforming the best baseline by an average of 13.09%, 4.26%, 17.60%, and 18.03% in MRR, Hits@1, Hits@3, and Hits@10, respectively.
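The two-step aggregation on a hypergraph (nodes to hyperedges, then hyperedges back to nodes) can be sketched with scalar features and mean pooling. The paper's actual operator is parameterized and learned, so this is only a structural illustration of why hyperedges capture high-order correlations:

```python
def hypergraph_message_pass(feats, hyperedges):
    """One round of node -> hyperedge -> node mean aggregation.

    A hyperedge connects any number of nodes, so a single edge can encode
    a high-order correlation that pairwise graph edges cannot express.
    """
    # Node -> hyperedge: average the features of each edge's member nodes.
    edge_feats = {e: sum(feats[n] for n in ns) / len(ns)
                  for e, ns in hyperedges.items()}
    # Hyperedge -> node: average over the edges incident to each node.
    out = {}
    for n in feats:
        incident = [f for e, f in edge_feats.items() if n in hyperedges[e]]
        out[n] = sum(incident) / len(incident) if incident else feats[n]
    return out

feats = {"a": 1.0, "b": 3.0, "c": 5.0, "d": 7.0}
hyperedges = {"e1": {"a", "b", "c"},  # one edge jointly relating three nodes
              "e2": {"c", "d"}}
updated = hypergraph_message_pass(feats, hyperedges)
```

Node "c" sits on both hyperedges, so its update mixes information from all four nodes in one round.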

Citations: 0
Diversifying Sequential Recommendation with Retrospective and Prospective Transformers
IF 5.6 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-03-17 | DOI: 10.1145/3653016
Chaoyu Shi, Pengjie Ren, Dongjie Fu, Xin Xin, Shansong Yang, Fei Cai, Zhaochun Ren, Zhumin Chen

Previous studies on sequential recommendation (SR) have predominantly concentrated on optimizing recommendation accuracy. However, there remains a significant gap in enhancing recommendation diversity, particularly for short interaction sequences. The limited availability of interaction information in short sequences hampers the recommender's ability to comprehensively model users' intents, consequently affecting both the diversity and accuracy of recommendation. In light of this challenge, we propose reTrospective and pRospective Transformers for dIversified sEquential Recommendation (TRIER). TRIER addresses the issue of insufficient information in short interaction sequences by first retrospectively learning to predict users' potential historical interactions, thereby introducing additional information and expanding short interaction sequences, and then capturing users' potential intents from multiple augmented sequences. Finally, TRIER learns to generate diverse recommendation lists by covering as many potential intents as possible.

To evaluate the effectiveness of TRIER, we conduct extensive experiments on three benchmark datasets. The experimental results demonstrate that TRIER significantly outperforms state-of-the-art methods, exhibiting diversity improvement of up to 11.36% in terms of intra-list distance (ILD@5) on the Steam dataset, 3.43% ILD@5 on the Yelp dataset and 3.77% in terms of category coverage (CC@5) on the Beauty dataset. As for accuracy, on the Yelp dataset, we observe notable improvement of 7.62% and 8.63% in HR@5 and NDCG@5, respectively. Moreover, we found that TRIER reveals more significant accuracy and diversity improvement for short interaction sequences.
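The ILD@k metric cited above scores a list's diversity as the mean pairwise dissimilarity of its top-k items. A common instantiation (an assumption here; the paper may use a different distance) takes dissimilarity as 1 minus the Jaccard similarity of the items' category sets:

```python
from itertools import combinations

def ild_at_k(ranked_items, item_cats, k=5):
    """Intra-list distance: mean pairwise dissimilarity among the top-k items,
    with dissimilarity = 1 - Jaccard similarity of category sets."""
    top = ranked_items[:k]
    pairs = list(combinations(top, 2))

    def dissim(i, j):
        a, b = item_cats[i], item_cats[j]
        return 1 - len(a & b) / len(a | b)

    return sum(dissim(i, j) for i, j in pairs) / len(pairs)

# Hypothetical items: two share a category, one does not.
cats = {"x": {"rpg"}, "y": {"rpg"}, "z": {"fps"}}
score = ild_at_k(["x", "y", "z"], cats, k=3)  # pairwise dissims: 0, 1, 1
```

A higher ILD@k means the top of the list spreads across more distinct categories, which is the quantity TRIER improves by up to 11.36%.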

Citations: 0
Multi-grained Document Modeling for Search Result Diversification
IF 5.6 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-03-15 | DOI: 10.1145/3652852
Zhirui Deng, Zhicheng Dou, Zhan Su, Ji-Rong Wen

Search result diversification plays a crucial role in improving users’ search experience by providing users with documents covering more subtopics. Previous studies have made great progress in leveraging inter-document interactions to measure the similarity among documents. However, different parts of the document may embody different subtopics and existing models ignore the subtle similarities and differences of content within each document. In this paper, we propose a hierarchical attention framework to combine intra-document interactions with inter-document interactions in a complementary manner in order to conduct multi-grained document modeling. Specifically, we separate the document into passages to model the document content from multi-grained perspectives. Then, we design stacked interaction blocks to conduct inter-document and intra-document interactions. Moreover, to measure the subtopic coverage of each document more accurately, we propose a passage-aware document-subtopic interaction to perform fine-grained document-subtopic interaction. Experimental results demonstrate that our model achieves state-of-the-art performance compared with existing methods.
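The passage-level view described above starts from segmenting each document. A minimal stand-in for that step, using fixed-length segmentation as an illustrative assumption (the paper does not specify this scheme here), looks like:

```python
def split_into_passages(sentences, passage_len=3):
    """Segment a document (a list of sentences) into fixed-length passages,
    so that each passage can carry its own subtopic representation."""
    return [sentences[i:i + passage_len]
            for i in range(0, len(sentences), passage_len)]

doc = [f"sentence {i}" for i in range(7)]
passages = split_into_passages(doc)  # three passages; the last is a remainder
```

Each passage then participates in the intra-document and document-subtopic interactions independently, which is what lets different parts of one document cover different subtopics.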

Citations: 0
Cooking with Conversation: Enhancing User Engagement and Learning with a Knowledge-Enhancing Assistant
IF 5.6 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-03-15 | DOI: 10.1145/3649500
Alexander Frummet, Alessandro Speggiorin, David Elsweiler, Anton Leuski, Jeff Dalton

We present two empirical studies to investigate users’ expectations and behaviours when using digital assistants, such as Alexa and Google Home, in a kitchen context: First, a survey (N=200) queries participants on their expectations for the kinds of information that such systems should be able to provide. While consensus exists on expecting information about cooking steps and processes, younger participants who enjoy cooking express a higher likelihood of expecting details on food history or the science of cooking. In a follow-up Wizard-of-Oz study (N = 48), users were guided through the steps of a recipe either by an active wizard that alerted participants to information it could provide or a passive wizard who only answered questions that were provided by the user. The active policy led to almost double the number of conversational utterances and 1.5 times more knowledge-related user questions compared to the passive policy. Also, it resulted in 1.7 times more knowledge communicated than the passive policy. We discuss the findings in the context of related work and reveal implications for the design and use of such assistants for cooking and other purposes such as DIY and craft tasks, as well as the lessons we learned for evaluating such systems.

Citations: 0
Collaborative Sequential Recommendations via Multi-View GNN-Transformers
IF 5.6 CAS Tier 2, Computer Science Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date : 2024-03-15 DOI: 10.1145/3649436
Tianze Luo, Yong Liu, Sinno Jialin Pan

Sequential recommendation systems aim to exploit users’ sequential behavior patterns to capture their interaction intentions and improve recommendation accuracy. Existing sequential recommendation methods mainly focus on modeling the items’ chronological relationships in each individual user behavior sequence, which may not be effective in making accurate and robust recommendations. On one hand, the performance of existing sequential recommendation methods is usually sensitive to the length of a user’s behavior sequence (i.e., the list of a user’s historically interacted items). On the other hand, besides the context information in each individual user behavior sequence, the collaborative information among different users’ behavior sequences is also crucial to make accurate recommendations. However, this kind of information is usually ignored by existing sequential recommendation methods. In this work, we propose a new sequential recommendation framework, which encodes the context information in each individual user behavior sequence as well as the collaborative information among the behavior sequences of different users, through building a local dependency graph for each item. We conduct extensive experiments to compare the proposed model with state-of-the-art sequential recommendation methods on five benchmark datasets. The experimental results demonstrate that the proposed model is able to achieve better recommendation performance than existing methods, by incorporating collaborative information.
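The paper does not spell out here how the per-item local dependency graph is built; a minimal sketch of one plausible construction, assuming an item's neighbours are the items that co-occur within a small window around it in any user's behaviour sequence (the function name and `window` parameter are illustrative, not from the paper):

```python
from collections import defaultdict

def build_local_dependency_graphs(sequences, window=1):
    """For each item, collect neighbours that occur within `window`
    positions of it in ANY user's behaviour sequence, so every item's
    graph mixes in-sequence context with collaborative signals from
    other users' sequences."""
    graph = defaultdict(set)
    for seq in sequences:
        for i, item in enumerate(seq):
            for j in range(max(0, i - window), min(len(seq), i + window + 1)):
                if j != i:
                    graph[item].add(seq[j])
    return {item: sorted(nbrs) for item, nbrs in graph.items()}

# Item "b" appears in two users' sequences; its graph gathers
# neighbours from both, which no single sequence provides alone.
user_sequences = [["a", "b", "c"], ["d", "b", "e"]]
print(build_local_dependency_graphs(user_sequences)["b"])  # ['a', 'c', 'd', 'e']
```

Pooling neighbours across users is what lets a short behaviour sequence borrow evidence from other sequences, which is the collaborative information the abstract argues existing methods ignore.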

Citations: 0
Cross-Model Comparative Loss for Enhancing Neuronal Utility in Language Understanding
IF 5.6 CAS Tier 2, Computer Science Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date : 2024-03-15 DOI: 10.1145/3652599
Yunchang Zhu, Liang Pang, Kangxi Wu, Yanyan Lan, Huawei Shen, Xueqi Cheng

Current natural language understanding (NLU) models have been continuously scaling up, both in terms of model size and input context, introducing more hidden and input neurons. While this generally improves performance on average, the extra neurons do not yield a consistent improvement for all instances. This is because some hidden neurons are redundant, and the noise mixed in input neurons tends to distract the model. Previous work mainly focuses on extrinsically reducing low-utility neurons by additional post- or pre-processing, such as network pruning and context selection, to avoid this problem. Beyond that, can we make the model reduce redundant parameters and suppress input noise by intrinsically enhancing the utility of each neuron? If a model can efficiently utilize neurons, no matter which neurons are ablated (disabled), the ablated submodel should perform no better than the original full model. Based on such a comparison principle between models, we propose a cross-model comparative loss for a broad range of tasks. Comparative loss is essentially a ranking loss on top of the task-specific losses of the full and ablated models, with the expectation that the task-specific loss of the full model is minimal. We demonstrate the universal effectiveness of comparative loss through extensive experiments on 14 datasets from 3 distinct NLU tasks based on 5 widely used pretrained language models and find it particularly superior for models with few parameters or long input.
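The comparative loss described above — a ranking loss on top of the task-specific losses of full and ablated models, with the full model's loss expected to be minimal — can be sketched as a hinge penalty. This is one natural reading of the abstract, not the paper's exact formulation, and the `margin` parameter is an assumption:

```python
def comparative_loss(full_loss, ablated_losses, margin=0.0):
    """Ranking loss over task-specific losses: the full model is expected
    to have the minimal loss, so any ablated submodel whose task loss is
    lower than the full model's incurs a hinge penalty."""
    return sum(max(0.0, full_loss - loss + margin) for loss in ablated_losses)

# Full model already beats every ablated submodel: no penalty.
print(comparative_loss(0.3, [0.5, 0.6]))            # 0.0

# An ablated submodel outperforms the full model: penalty accrues.
print(round(comparative_loss(0.7, [0.5, 0.6]), 2))  # 0.3
```

Minimising this term alongside the task loss pushes the full model to make real use of every neuron: if disabling some neurons ever helps, the ranking term penalises it.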

Citations: 0
ELAKT: Enhancing Locality for Attentive Knowledge Tracing
IF 5.6 CAS Tier 2, Computer Science Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date : 2024-03-14 DOI: 10.1145/3652601
Yanjun Pu, Fang Liu, Rongye Shi, Haitao Yuan, Ruibo Chen, Tianhao Peng, WenJun Wu

Knowledge tracing models based on deep learning can achieve impressive predictive performance by leveraging attention mechanisms. However, there still exist two challenges in attentive knowledge tracing: First, the mechanism of classical models of attentive knowledge tracing demonstrates relatively low attention when processing exercise sequences with shifting knowledge concepts, making it difficult to capture the comprehensive state of knowledge across sequences. Second, classical models do not consider stochastic behaviors, which negatively affects models of attentive knowledge tracing in terms of capturing anomalous knowledge states. This paper proposes a model of attentive knowledge tracing, called Enhancing Locality for Attentive Knowledge Tracing (ELAKT), that is a variant of the deep knowledge tracing model. The proposed model leverages the encoder module of the transformer to aggregate knowledge embedding generated by both exercises and responses over all timesteps. In addition, it uses causal convolutions to aggregate and smooth the states of local knowledge. The ELAKT model uses the states of comprehensive knowledge concepts to introduce a prediction correction module to forecast the future responses of students to deal with noise caused by stochastic behaviors. The results of experiments demonstrated that the ELAKT model consistently outperforms state-of-the-art baseline knowledge tracing models.
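The causal convolution ELAKT uses to aggregate and smooth local knowledge states can be illustrated independently of the full model; a minimal sketch, assuming a plain left-zero-padded 1-D convolution (the kernel values and trace data are illustrative):

```python
def causal_conv1d(states, kernel):
    """1-D causal convolution: the output at timestep t depends only on
    states[0..t], enforced by left-padding with zeros."""
    k = len(kernel)
    padded = [0.0] * (k - 1) + list(states)
    return [
        sum(kernel[j] * padded[t + j] for j in range(k))
        for t in range(len(states))
    ]

# Smooth a noisy per-timestep knowledge-state trace with a small
# averaging kernel; no future timestep leaks into any output position.
trace = [0.2, 0.8, 0.4, 0.6]
smoothed = causal_conv1d(trace, kernel=[1 / 3, 1 / 3, 1 / 3])
print([round(s, 2) for s in smoothed])  # [0.07, 0.33, 0.47, 0.6]
```

The left-only padding is what makes the operation usable for tracing: a student's estimated knowledge state at time t is smoothed using past responses only, never future ones.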

Citations: 0
Journal: ACM Transactions on Information Systems