Enhancing Graph Contrastive Learning with Reliable and Informative Augmentation for Recommendation

Bowen Zheng, Junjie Zhang, Hongyu Lu, Yu Chen, Ming Chen, Wayne Xin Zhao, Ji-Rong Wen
{"title":"利用可靠且信息丰富的增强功能加强图表对比学习,以进行推荐","authors":"Bowen Zheng, Junjie Zhang, Hongyu Lu, Yu Chen, Ming Chen, Wayne Xin Zhao, Ji-Rong Wen","doi":"arxiv-2409.05633","DOIUrl":null,"url":null,"abstract":"Graph neural network (GNN) has been a powerful approach in collaborative\nfiltering (CF) due to its ability to model high-order user-item relationships.\nRecently, to alleviate the data sparsity and enhance representation learning,\nmany efforts have been conducted to integrate contrastive learning (CL) with\nGNNs. Despite the promising improvements, the contrastive view generation based\non structure and representation perturbations in existing methods potentially\ndisrupts the collaborative information in contrastive views, resulting in\nlimited effectiveness of positive alignment. To overcome this issue, we propose\nCoGCL, a novel framework that aims to enhance graph contrastive learning by\nconstructing contrastive views with stronger collaborative information via\ndiscrete codes. The core idea is to map users and items into discrete codes\nrich in collaborative information for reliable and informative contrastive view\ngeneration. To this end, we initially introduce a multi-level vector quantizer\nin an end-to-end manner to quantize user and item representations into discrete\ncodes. Based on these discrete codes, we enhance the collaborative information\nof contrastive views by considering neighborhood structure and semantic\nrelevance respectively. For neighborhood structure, we propose virtual neighbor\naugmentation by treating discrete codes as virtual neighbors, which expands an\nobserved user-item interaction into multiple edges involving discrete codes.\nRegarding semantic relevance, we identify similar users/items based on shared\ndiscrete codes and interaction targets to generate the semantically relevant\nview. Through these strategies, we construct contrastive views with stronger\ncollaborative information and develop a triple-view graph contrastive learning\napproach. Extensive experiments on four public datasets demonstrate the\neffectiveness of our proposed approach.","PeriodicalId":501281,"journal":{"name":"arXiv - CS - Information Retrieval","volume":"55 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-09-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Enhancing Graph Contrastive Learning with Reliable and Informative Augmentation for Recommendation\",\"authors\":\"Bowen Zheng, Junjie Zhang, Hongyu Lu, Yu Chen, Ming Chen, Wayne Xin Zhao, Ji-Rong Wen\",\"doi\":\"arxiv-2409.05633\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Graph neural network (GNN) has been a powerful approach in collaborative\\nfiltering (CF) due to its ability to model high-order user-item relationships.\\nRecently, to alleviate the data sparsity and enhance representation learning,\\nmany efforts have been conducted to integrate contrastive learning (CL) with\\nGNNs. Despite the promising improvements, the contrastive view generation based\\non structure and representation perturbations in existing methods potentially\\ndisrupts the collaborative information in contrastive views, resulting in\\nlimited effectiveness of positive alignment. To overcome this issue, we propose\\nCoGCL, a novel framework that aims to enhance graph contrastive learning by\\nconstructing contrastive views with stronger collaborative information via\\ndiscrete codes. 
The core idea is to map users and items into discrete codes\\nrich in collaborative information for reliable and informative contrastive view\\ngeneration. To this end, we initially introduce a multi-level vector quantizer\\nin an end-to-end manner to quantize user and item representations into discrete\\ncodes. Based on these discrete codes, we enhance the collaborative information\\nof contrastive views by considering neighborhood structure and semantic\\nrelevance respectively. For neighborhood structure, we propose virtual neighbor\\naugmentation by treating discrete codes as virtual neighbors, which expands an\\nobserved user-item interaction into multiple edges involving discrete codes.\\nRegarding semantic relevance, we identify similar users/items based on shared\\ndiscrete codes and interaction targets to generate the semantically relevant\\nview. Through these strategies, we construct contrastive views with stronger\\ncollaborative information and develop a triple-view graph contrastive learning\\napproach. Extensive experiments on four public datasets demonstrate the\\neffectiveness of our proposed approach.\",\"PeriodicalId\":501281,\"journal\":{\"name\":\"arXiv - CS - Information Retrieval\",\"volume\":\"55 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-09-09\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - CS - Information Retrieval\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2409.05633\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Information Retrieval","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.05633","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract

Graph neural networks (GNNs) have become a powerful approach in collaborative filtering (CF) due to their ability to model high-order user-item relationships. Recently, to alleviate data sparsity and enhance representation learning, many efforts have been made to integrate contrastive learning (CL) with GNNs. Despite promising improvements, the contrastive view generation in existing methods, based on structure and representation perturbations, can disrupt the collaborative information in the contrastive views, limiting the effectiveness of positive alignment. To overcome this issue, we propose CoGCL, a novel framework that enhances graph contrastive learning by constructing contrastive views with stronger collaborative information via discrete codes. The core idea is to map users and items into discrete codes that are rich in collaborative information, enabling reliable and informative contrastive view generation. To this end, we first introduce a multi-level vector quantizer, trained end-to-end, to quantize user and item representations into discrete codes. Based on these discrete codes, we enhance the collaborative information of the contrastive views by considering neighborhood structure and semantic relevance, respectively. For neighborhood structure, we propose virtual neighbor augmentation, which treats discrete codes as virtual neighbors and thereby expands each observed user-item interaction into multiple edges involving discrete codes. Regarding semantic relevance, we identify similar users/items based on shared discrete codes and interaction targets to generate a semantically relevant view. Through these strategies, we construct contrastive views with stronger collaborative information and develop a triple-view graph contrastive learning approach. Extensive experiments on four public datasets demonstrate the effectiveness of the proposed approach.
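The abstract describes the multi-level vector quantizer only at a high level. Below is a minimal PyTorch sketch of one common reading, a residual (multi-level) quantizer trained end-to-end with a straight-through gradient estimator; the class name, level and codebook sizes, and the omission of commitment losses are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class MultiLevelVectorQuantizer(nn.Module):
    """Quantize an embedding into L discrete codes, one per level.

    Each level quantizes the residual left by the previous level
    (residual quantization); gradients reach the encoder through
    the straight-through estimator.
    """

    def __init__(self, num_levels: int, codebook_size: int, dim: int):
        super().__init__()
        self.codebooks = nn.ModuleList(
            nn.Embedding(codebook_size, dim) for _ in range(num_levels)
        )

    def forward(self, z: torch.Tensor):
        residual = z
        quantized = torch.zeros_like(z)
        codes = []
        for codebook in self.codebooks:
            # Nearest codebook entry for the current residual.
            dists = torch.cdist(residual, codebook.weight)  # (B, K)
            idx = dists.argmin(dim=-1)                      # (B,)
            selected = codebook(idx)
            codes.append(idx)
            quantized = quantized + selected
            residual = residual - selected.detach()
        # Straight-through: forward pass uses the quantized vector,
        # backward pass sends gradients to z unchanged.
        quantized = z + (quantized - z).detach()
        return quantized, torch.stack(codes, dim=-1)  # codes: (B, L)
```

In a full setup one would typically also add codebook and commitment loss terms (as in VQ-VAE) so the codebooks track the encoder outputs during end-to-end training.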
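Given such codes, the virtual neighbor augmentation can be pictured as edge expansion on the interaction graph: each observed user-item edge is kept, and in addition the user is linked to the item's codes and the item to the user's codes, with each code acting as a virtual neighbor node. The node-indexing convention below (code nodes appended after user and item nodes, one offset per quantizer level) is a hypothetical choice for this sketch.

```python
def virtual_neighbor_augment(edges, user_codes, item_codes,
                             num_users, num_items, codebook_size):
    """Expand each observed user-item edge with edges to code nodes.

    edges:      list of (user_id, item_id) interactions
    user_codes: dict user_id -> list of code indices (one per level)
    item_codes: dict item_id -> list of code indices
    Code nodes are appended after user and item nodes; a per-level
    offset keeps codes from different levels distinct.
    """
    base = num_users + num_items
    aug_edges = []
    for u, i in edges:
        item_node = num_users + i
        aug_edges.append((u, item_node))  # keep the original edge
        # The user connects to the item's codes (its virtual neighbors) ...
        for level, c in enumerate(item_codes[i]):
            aug_edges.append((u, base + level * codebook_size + c))
        # ... and the item connects to the user's codes.
        for level, c in enumerate(user_codes[u]):
            aug_edges.append((item_node, base + level * codebook_size + c))
    return aug_edges
```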
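The exact form of the triple-view objective is not spelled out in the abstract; a plausible instantiation is pairwise InfoNCE between an anchor view and the two augmented views (the structure-augmented view and the semantically relevant view), sketched below. The temperature value and the choice to pair the augmented views only against the anchor are assumptions.

```python
import torch
import torch.nn.functional as F

def info_nce(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.2):
    """Standard InfoNCE: matching rows of z1 and z2 are positives,
    all other rows in the batch serve as negatives."""
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / temperature          # (B, B) similarity matrix
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, labels)

def triple_view_loss(z_anchor, z_struct, z_sem, temperature: float = 0.2):
    """One way to combine three views: align the anchor view with the
    structure-augmented and semantically relevant views pairwise."""
    return (info_nce(z_anchor, z_struct, temperature)
            + info_nce(z_anchor, z_sem, temperature))
```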
Latest articles in this journal:
Decoding Style: Efficient Fine-Tuning of LLMs for Image-Guided Outfit Recommendation with Preference
Retrieve, Annotate, Evaluate, Repeat: Leveraging Multimodal LLMs for Large-Scale Product Retrieval Evaluation
Active Reconfigurable Intelligent Surface Empowered Synthetic Aperture Radar Imaging
FLARE: Fusing Language Models and Collaborative Architectures for Recommender Enhancement
Basket-Enhanced Heterogenous Hypergraph for Price-Sensitive Next Basket Recommendation