Improving graph-based recommendation with unraveled graph learning

IF 2.8 | CAS Zone 3 (Computer Science) | Q2 (Computer Science, Artificial Intelligence) | Data Mining and Knowledge Discovery | Pub Date: 2024-06-02 | DOI: 10.1007/s10618-024-01038-7
Chih-Chieh Chang, Diing-Ruey Tzeng, Chia-Hsun Lu, Ming-Yi Chang, Chih-Ya Shen
{"title":"Improving graph-based recommendation with unraveled graph learning","authors":"Chih-Chieh Chang, Diing-Ruey Tzeng, Chia-Hsun Lu, Ming-Yi Chang, Chih-Ya Shen","doi":"10.1007/s10618-024-01038-7","DOIUrl":null,"url":null,"abstract":"<p>Graph Collaborative Filtering (GraphCF) has emerged as a promising approach in recommendation systems, leveraging the inferential power of Graph Neural Networks. Furthermore, the integration of contrastive learning has enhanced the performance of GraphCF methods. Recent research has shifted from graph augmentation to noise perturbation in contrastive learning, leading to significant performance improvements. However, we contend that the primary factor in performance enhancement is not graph augmentation or noise perturbation, but rather the <i>balance of the embedding from each layer in the output embedding</i>. To substantiate our claim, we conducted preliminary experiments with multiple state-of-the-art GraphCF methods. Based on our observations and insights, we propose a novel approach named <i>Unraveled Graph Contrastive Learning (UGCL)</i>, which includes a new propagation scheme to further enhance performance. To the best of our knowledge, this is the first approach that specifically addresses the balance factor in the output embedding for performance improvement. We have carried out extensive experiments on multiple large-scale benchmark datasets to evaluate the effectiveness of our proposed approach. The results indicate that UGCL significantly outperforms all other state-of-the-art baseline models, also showing superior performance in terms of fairness and debiasing capabilities compared to other baselines.</p>","PeriodicalId":55183,"journal":{"name":"Data Mining and Knowledge Discovery","volume":"30 1","pages":""},"PeriodicalIF":2.8000,"publicationDate":"2024-06-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Data Mining and Knowledge Discovery","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1007/s10618-024-01038-7","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

Graph Collaborative Filtering (GraphCF) has emerged as a promising approach in recommendation systems, leveraging the inferential power of Graph Neural Networks. Furthermore, the integration of contrastive learning has enhanced the performance of GraphCF methods. Recent research has shifted from graph augmentation to noise perturbation in contrastive learning, leading to significant performance improvements. However, we contend that the primary factor in performance enhancement is not graph augmentation or noise perturbation, but rather the balance of the embedding from each layer in the output embedding. To substantiate our claim, we conducted preliminary experiments with multiple state-of-the-art GraphCF methods. Based on our observations and insights, we propose a novel approach named Unraveled Graph Contrastive Learning (UGCL), which includes a new propagation scheme to further enhance performance. To the best of our knowledge, this is the first approach that specifically addresses the balance factor in the output embedding for performance improvement. We have carried out extensive experiments on multiple large-scale benchmark datasets to evaluate the effectiveness of our proposed approach. The results indicate that UGCL significantly outperforms all state-of-the-art baselines and also shows superior fairness and debiasing capabilities.
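The sketch below is a minimal, illustrative example (in NumPy, not the authors' released code) of the layer-balance idea the abstract centers on: in LightGCN-style graph collaborative filtering, the output embedding is a weighted sum of the embeddings produced at each propagation layer, and the weights alpha_k determine how much each layer contributes. The function name, toy graph, and weight values are assumptions made for illustration only; UGCL's actual propagation scheme is described in the paper.

```python
# Illustrative sketch of per-layer embedding balance in LightGCN-style
# graph collaborative filtering. Not the authors' UGCL implementation;
# all names and values here are assumptions for demonstration.
import numpy as np

def propagate_and_combine(adj_norm, emb_0, layer_weights):
    """Propagate initial embeddings over a normalized user-item adjacency
    matrix and combine the per-layer embeddings with the given weights.

    adj_norm      : (n, n) symmetrically normalized adjacency (users + items)
    emb_0         : (n, d) initial (layer-0) embeddings
    layer_weights : K+1 weights alpha_0..alpha_K for the output combination
    """
    embs = [emb_0]
    for _ in range(len(layer_weights) - 1):
        # One LightGCN-style propagation step: neighborhood averaging only,
        # with no feature transformation or nonlinearity.
        embs.append(adj_norm @ embs[-1])
    # Output embedding = sum_k alpha_k * E^(k); changing layer_weights
    # changes the balance of each layer's contribution.
    return sum(w * e for w, e in zip(layer_weights, embs))

# Example: uniform weights (LightGCN's default 1/(K+1)) versus a skewed
# balance that emphasizes higher-order neighborhoods.
rng = np.random.default_rng(0)
n, d, K = 6, 4, 3
A = rng.random((n, n)); A = (A + A.T) / 2          # toy symmetric graph
deg = A.sum(axis=1)
adj_norm = A / np.sqrt(np.outer(deg, deg))         # D^{-1/2} A D^{-1/2}
E0 = rng.normal(size=(n, d))

uniform = propagate_and_combine(adj_norm, E0, [1 / (K + 1)] * (K + 1))
skewed = propagate_and_combine(adj_norm, E0, [0.1, 0.2, 0.3, 0.4])
```

Read in these terms, the abstract's claim is that adjusting this per-layer balance, rather than graph augmentation or noise perturbation as such, accounts for much of the gain reported by recent contrastive GraphCF methods.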


Source Journal

Data Mining and Knowledge Discovery (Engineering & Technology; Computer Science: Artificial Intelligence)
CiteScore: 10.40
Self-citation rate: 4.20%
Articles published: 68
Review time: 10 months
Journal Description: Advances in data gathering, storage, and distribution have created a need for computational tools and techniques to aid in data analysis. Data Mining and Knowledge Discovery in Databases (KDD) is a rapidly growing area of research and application that builds on techniques and theories from many fields, including statistics, databases, pattern recognition and learning, data visualization, uncertainty modelling, data warehousing and OLAP, optimization, and high performance computing.
Latest Articles in This Journal

FRUITS: feature extraction using iterated sums for time series classification
Bounding the family-wise error rate in local causal discovery using Rademacher averages
Evaluating the disclosure risk of anonymized documents via a machine learning-based re-identification attack
Efficient learning with projected histograms
Opinion dynamics in social networks incorporating higher-order interactions