Anti-FakeU: Defending Shilling Attacks on Graph Neural Network based Recommender Model

X. You, Chi-Pan Li, Daizong Ding, Mi Zhang, Fuli Feng, Xudong Pan, Min Yang
DOI: 10.1145/3543507.3583289
Published in: Proceedings of the ACM Web Conference 2023
Publication date: 2023-04-30

Abstract

Graph neural network (GNN) based recommendation models have been observed to be more vulnerable to carefully designed malicious records injected into the system, i.e., shilling attacks, which manipulate the recommendations shown to ordinary users and thereby impair user trust. In this paper, we conduct the first systematic study of the vulnerability of GNN-based recommendation models to shilling attacks. With the aid of theoretical analysis, we attribute the root cause of this vulnerability to the neighborhood aggregation mechanism, which allows the negative impact of an attack to propagate rapidly through the system. To restore the robustness of a GNN-based recommendation model, the key lies in detecting malicious records in the system and preventing the propagation of misinformation. To this end, we construct a user-user graph to capture the patterns of malicious behavior and design a novel GNN-based detector to identify fake users. Furthermore, we develop a data augmentation strategy and a joint learning paradigm to train the recommender model and the proposed detector together. Extensive experiments on benchmark datasets validate the enhanced robustness of the proposed method in resisting various types of shilling attacks and identifying fake users, e.g., our proposed method fully mitigates the impact of popularity attacks on target items up to , and improves the accuracy of detecting fake users on the Gowalla dataset by .
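The abstract attributes the vulnerability to neighborhood aggregation: an item's representation is built from the embeddings of its interacting users, so injected fake users directly shift the target item's representation, and further aggregation rounds spread that shift. The toy sketch below illustrates this mechanism with a LightGCN-style mean aggregation on a synthetic bipartite graph; the embeddings, graph, and attack size are illustrative assumptions, not the paper's actual model or data.

```python
import numpy as np

rng = np.random.default_rng(0)

def aggregate(item_emb, user_emb, interactions):
    """One round of neighborhood aggregation: each item's new embedding
    is the mean of the embeddings of the users who interacted with it."""
    new_item = np.zeros_like(item_emb)
    for i in range(item_emb.shape[0]):
        users = [u for (u, it) in interactions if it == i]
        if users:
            new_item[i] = user_emb[users].mean(axis=0)
    return new_item

n_users, n_items, dim = 20, 5, 8
user_emb = rng.normal(size=(n_users, dim))
item_emb = rng.normal(size=(n_items, dim))
# Each legitimate user interacts with one random item.
interactions = [(u, int(rng.integers(n_items))) for u in range(n_users)]

clean = aggregate(item_emb, user_emb, interactions)

# Shilling attack (hypothetical): inject 5 fake users, all interacting
# with target item 0, with embeddings pushed in a common direction.
fake = np.ones((5, dim)) * 3.0
user_emb_atk = np.vstack([user_emb, fake])
interactions_atk = interactions + [(n_users + k, 0) for k in range(5)]

attacked = aggregate(item_emb, user_emb_atk, interactions_atk)

# The target item's representation shifts while non-target items are
# untouched; a second aggregation round would pass this shift on to the
# users connected to the target item, i.e., the propagation the paper
# identifies as the root cause.
shift = np.linalg.norm(attacked - clean, axis=1)
print(shift)
```

Only item 0's embedding moves after one round; in a multi-layer GNN the shifted item embedding feeds back into user embeddings, which is why the damage propagates rapidly.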
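The proposed defense builds a user-user graph to expose malicious behavior patterns. The paper's exact construction is not given in this abstract, so the sketch below uses one common proxy: connecting users whose interacted item sets have high Jaccard similarity. Fake users generated from a shared attack template tend to have near-identical profiles and therefore form a dense cluster in such a graph; the user names, item sets, and the 0.5 threshold are all hypothetical.

```python
from itertools import combinations

def user_user_edges(user_items, threshold=0.5):
    """Connect two users when the Jaccard similarity of their interacted
    item sets meets `threshold` (a hypothetical cutoff)."""
    edges = []
    for u, v in combinations(user_items, 2):
        a, b = user_items[u], user_items[v]
        sim = len(a & b) / len(a | b)
        if sim >= threshold:
            edges.append((u, v, sim))
    return edges

user_items = {
    "alice": {1, 2, 3},
    "bob":   {2, 4, 5},
    # Fake users from one attack template: near-identical profiles
    # targeting items 0 and 7.
    "fake1": {0, 7, 8, 9},
    "fake2": {0, 7, 8, 9},
    "fake3": {0, 7, 9},
}

edges = user_user_edges(user_items)
print(edges)  # only fake-fake pairs exceed the similarity threshold
```

On this graph, a GNN-based detector can aggregate over user-user edges so that suspiciously similar users reinforce each other's "fake" signal, while organic users, whose overlaps are sparse, remain largely disconnected.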