Xian Mo, Zihang Zhao, Xiaoru He, Hang Qi, Hao Liu

Neurocomputing, Volume 614, Article 128781. DOI: 10.1016/j.neucom.2024.128781. Published 2024-11-06.
Intelligible graph contrastive learning with attention-aware for recommendation
Recommender systems are an important tool for information retrieval and can help alleviate information overload. Recently, contrastive learning has shown remarkable performance in recommendation, using data augmentation to address highly sparse interaction data. This paper proposes Intelligible Graph Contrastive Learning with attention-aware (IntGCL) for recommendation. Specifically, IntGCL first introduces a novel attention-aware matrix into graph convolutional networks (GCNs) to capture the importance of user-item relations; the matrix is constructed by a random walk with a restart strategy and enhances the intelligibility of the model. The attention-aware matrix is then used to guide an attention-aware graph-generative model and a graph-denoising model, which automatically generate two trainable contrastive views for data augmentation, denoising the graph and further improving intelligibility. Comprehensive experiments on four real-world datasets show that IntGCL outperforms multiple state-of-the-art methods. Our datasets and source code are available at https://github.com/restarthxr/InpGCL.
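The abstract does not give implementation details, but it describes the attention-aware importance matrix as being built by a random walk with a restart strategy on the user-item graph. A minimal NumPy sketch of generic random walk with restart (the function name, parameters, and convergence scheme here are illustrative assumptions, not taken from the paper) could look like:

```python
import numpy as np

def random_walk_with_restart(A, restart=0.15, tol=1e-8, max_iter=100):
    """Compute stationary importance scores on a graph via random walk
    with restart. A is an (n, n) adjacency matrix; for a user-item
    bipartite graph, rows/columns index users followed by items."""
    # Column-normalise the adjacency into a transition matrix,
    # guarding against division by zero for isolated nodes.
    deg = A.sum(axis=0).astype(float)
    deg[deg == 0] = 1.0
    P = A / deg
    n = A.shape[0]
    # One restart distribution per seed node: column j restarts at node j,
    # so column j of the result scores all nodes relative to seed j.
    R = np.eye(n)
    S = np.eye(n)
    for _ in range(max_iter):
        S_new = (1 - restart) * (P @ S) + restart * R
        if np.abs(S_new - S).max() < tol:
            S = S_new
            break
        S = S_new
    return S  # S[i, j]: importance of node i from the perspective of seed j

# Toy bipartite graph: 2 users (nodes 0-1) and 2 items (nodes 2-3).
A = np.array([[0, 0, 1, 1],
              [0, 0, 1, 0],
              [1, 1, 0, 0],
              [1, 0, 0, 0]], dtype=float)
S = random_walk_with_restart(A)
```

Each column of `S` remains a probability distribution (the update is a convex combination of two column-stochastic matrices), so the scores in a column can be read directly as relative importances with respect to that seed node.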
Journal overview:
Neurocomputing publishes articles describing recent fundamental contributions in the field of neurocomputing. Its essential topics cover neurocomputing theory, practice, and applications.