Enhancing Graph Contrastive Learning with Reliable and Informative Augmentation for Recommendation

Bowen Zheng, Junjie Zhang, Hongyu Lu, Yu Chen, Ming Chen, Wayne Xin Zhao, Ji-Rong Wen

arXiv:2409.05633 · arXiv - CS - Information Retrieval · 2024-09-09
Graph neural networks (GNNs) have been a powerful approach in collaborative filtering (CF) due to their ability to model high-order user-item relationships. Recently, to alleviate data sparsity and enhance representation learning, many efforts have been made to integrate contrastive learning (CL) with GNNs. Despite the promising improvements, the contrastive view generation based on structure and representation perturbations in existing methods can disrupt the collaborative information in the contrastive views, limiting the effectiveness of positive alignment.

To overcome this issue, we propose CoGCL, a novel framework that enhances graph contrastive learning by constructing contrastive views with stronger collaborative information via discrete codes. The core idea is to map users and items into discrete codes rich in collaborative information, enabling reliable and informative contrastive view generation. To this end, we introduce a multi-level vector quantizer, trained in an end-to-end manner, to quantize user and item representations into discrete codes.
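The abstract leaves the quantizer's internals unspecified; the sketch below shows one plausible reading, a residual-style multi-level quantizer in which each level encodes what the previous levels left unexplained. The names, shapes, and the residual scheme itself are illustrative assumptions, not the paper's definition.

```python
import torch

def multi_level_quantize(z, codebooks):
    """Map a continuous representation to one discrete code per level.

    Hypothetical residual-style scheme: at each level, pick the nearest
    codebook entry, then quantize the remaining residual at the next level.

    z:         (d,) user or item representation
    codebooks: list of L tensors, each of shape (K, d)
    returns:   list of L integer code indices
    """
    codes, residual = [], z
    for codebook in codebooks:
        # nearest codebook entry to the current residual
        dists = torch.cdist(residual.unsqueeze(0), codebook).squeeze(0)
        idx = int(torch.argmin(dists))
        codes.append(idx)
        residual = residual - codebook[idx]
    return codes

# toy usage: 3 levels, 256 codes per level, 64-dim embeddings
torch.manual_seed(0)
codebooks = [torch.randn(256, 64) for _ in range(3)]
print(multi_level_quantize(torch.randn(64), codebooks))
```

Training this end-to-end, as the abstract describes, would additionally require gradients to pass through the discrete selection (e.g., via a straight-through estimator); that machinery is omitted here.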
Based on these discrete codes, we enhance the collaborative information of contrastive views from two perspectives: neighborhood structure and semantic relevance. For neighborhood structure, we propose virtual neighbor augmentation, which treats discrete codes as virtual neighbors and expands each observed user-item interaction into multiple edges involving discrete codes.
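To make the expansion concrete, here is a minimal sketch of how an interaction edge might be expanded with code nodes. Connecting the user to the item's codes and the item to the user's codes is an assumed edge scheme; the paper's exact graph construction may differ.

```python
def expand_with_virtual_neighbors(interactions, user_codes, item_codes):
    """Expand each observed (user, item) edge with code-based edges.

    Hypothetical augmentation: besides the original edge, connect the
    user to the item's discrete codes and the item to the user's codes,
    so the codes act as shared virtual neighbors.
    """
    edges = []
    for u, i in interactions:
        edges.append(("user", u, "item", i))          # original edge
        for c in item_codes[i]:
            edges.append(("user", u, "code", c))      # user -> item's codes
        for c in user_codes[u]:
            edges.append(("item", i, "code", c))      # item -> user's codes
    return edges

# toy usage: one interaction, two code levels per node
user_codes = {0: [5, 12]}
item_codes = {7: [3, 9]}
print(expand_with_virtual_neighbors([(0, 7)], user_codes, item_codes))
```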
Regarding semantic relevance, we identify similar users/items based on shared discrete codes and shared interaction targets, and use them to generate the semantically relevant view.
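One simple way to realize the shared-code criterion is shown below; the overlap threshold `min_shared` and the pairwise grouping are illustrative choices (the paper additionally uses shared interaction targets, which this sketch omits).

```python
def find_semantic_neighbors(codes, min_shared=2):
    """Group users (or items) that share at least `min_shared` discrete codes."""
    neighbors = {u: set() for u in codes}
    users = list(codes)
    for a_idx, a in enumerate(users):
        for b in users[a_idx + 1:]:
            shared = set(codes[a]) & set(codes[b])
            if len(shared) >= min_shared:
                neighbors[a].add(b)
                neighbors[b].add(a)
    return neighbors

# toy usage: users 0 and 1 share codes 5 and 12
codes = {0: [5, 12, 40], 1: [5, 12, 77], 2: [9, 30, 41]}
print(find_semantic_neighbors(codes))  # {0: {1}, 1: {0}, 2: set()}
```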
Through these strategies, we construct contrastive views with stronger collaborative information and develop a triple-view graph contrastive learning approach. Extensive experiments on four public datasets demonstrate the effectiveness of the proposed approach.
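As a rough illustration of what a triple-view contrastive objective could look like, the sketch below sums pairwise in-batch InfoNCE losses over the three views. The pairing, weighting, and temperature are assumptions; the paper's exact objective may differ.

```python
import torch
import torch.nn.functional as F

def info_nce(a, b, temperature=0.2):
    """In-batch InfoNCE: align a[k] with b[k], contrast against the rest."""
    a = F.normalize(a, dim=-1)
    b = F.normalize(b, dim=-1)
    logits = a @ b.t() / temperature   # (B, B) similarity matrix
    labels = torch.arange(a.size(0))   # positives sit on the diagonal
    return F.cross_entropy(logits, labels)

def triple_view_loss(z_main, z_struct, z_sem):
    """Sum pairwise contrastive losses over three views.

    z_main:   embeddings from the original interaction graph
    z_struct: embeddings from the virtual-neighbor-augmented view
    z_sem:    embeddings from the semantically relevant view
    """
    return (info_nce(z_main, z_struct)
            + info_nce(z_main, z_sem)
            + info_nce(z_struct, z_sem))

# toy usage: batch of 8 users, 64-dim embeddings per view
B, d = 8, 64
loss = triple_view_loss(torch.randn(B, d), torch.randn(B, d), torch.randn(B, d))
print(float(loss))
```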