LLM-Powered Text Simulation Attack Against ID-Free Recommender Systems

Zongwei Wang, Min Gao, Junliang Yu, Xinyi Gao, Quoc Viet Hung Nguyen, Shazia Sadiq, Hongzhi Yin
{"title":"LLM-Powered Text Simulation Attack Against ID-Free Recommender Systems","authors":"Zongwei Wang, Min Gao, Junliang Yu, Xinyi Gao, Quoc Viet Hung Nguyen, Shazia Sadiq, Hongzhi Yin","doi":"arxiv-2409.11690","DOIUrl":null,"url":null,"abstract":"The ID-free recommendation paradigm has been proposed to address the\nlimitation that traditional recommender systems struggle to model cold-start\nusers or items with new IDs. Despite its effectiveness, this study uncovers\nthat ID-free recommender systems are vulnerable to the proposed Text Simulation\nattack (TextSimu) which aims to promote specific target items. As a novel type\nof text poisoning attack, TextSimu exploits large language models (LLM) to\nalter the textual information of target items by simulating the characteristics\nof popular items. It operates effectively in both black-box and white-box\nsettings, utilizing two key components: a unified popularity extraction module,\nwhich captures the essential characteristics of popular items, and an N-persona\nconsistency simulation strategy, which creates multiple personas to\ncollaboratively synthesize refined promotional textual descriptions for target\nitems by simulating the popular items. To withstand TextSimu-like attacks, we\nfurther explore the detection approach for identifying LLM-generated\npromotional text. Extensive experiments conducted on three datasets demonstrate\nthat TextSimu poses a more significant threat than existing poisoning attacks,\nwhile our defense method can detect malicious text of target items generated by\nTextSimu. By identifying the vulnerability, we aim to advance the development\nof more robust ID-free recommender systems.","PeriodicalId":501281,"journal":{"name":"arXiv - CS - Information Retrieval","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Information Retrieval","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.11690","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

The ID-free recommendation paradigm has been proposed to address the limitation that traditional recommender systems struggle to model cold-start users or items with new IDs. Despite its effectiveness, this study uncovers that ID-free recommender systems are vulnerable to the proposed Text Simulation attack (TextSimu), which aims to promote specific target items. As a novel type of text poisoning attack, TextSimu exploits large language models (LLMs) to alter the textual information of target items by simulating the characteristics of popular items. It operates effectively in both black-box and white-box settings, utilizing two key components: a unified popularity extraction module, which captures the essential characteristics of popular items, and an N-persona consistency simulation strategy, which creates multiple personas that collaboratively synthesize refined promotional textual descriptions for target items by simulating the popular items. To withstand TextSimu-like attacks, we further explore a detection approach for identifying LLM-generated promotional text. Extensive experiments conducted on three datasets demonstrate that TextSimu poses a more significant threat than existing poisoning attacks, while our defense method can detect the malicious text of target items generated by TextSimu. By identifying this vulnerability, we aim to advance the development of more robust ID-free recommender systems.
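The abstract describes TextSimu only at the component level, so the paper's actual prompts and modules are not reproduced here. The sketch below is a hypothetical illustration of that general shape: one LLM call summarizes what popular items' descriptions have in common (the "popularity extraction" idea), and several simulated personas then rewrite a target item's description, keeping only the claims they agree on (the "N-persona consistency" idea). The `LLMFn` callable, function names, and prompts are all assumptions made for illustration, not the authors' implementation.

```python
# Hypothetical sketch of a TextSimu-style text-simulation pipeline (not the authors' code).
# The LLM is injected as a plain prompt -> text callable so any chat-completion client fits.

from typing import Callable, List

LLMFn = Callable[[str], str]  # maps a prompt to generated text


def extract_popularity_cues(llm: LLMFn, popular_descriptions: List[str]) -> str:
    """Loosely mirrors the idea of a unified popularity extraction step:
    ask the LLM to summarize what the popular items' texts have in common."""
    joined = "\n---\n".join(popular_descriptions)
    prompt = (
        "Below are descriptions of several popular items:\n"
        f"{joined}\n"
        "Summarize the stylistic and content characteristics that make them appealing."
    )
    return llm(prompt)


def n_persona_rewrite(llm: LLMFn, target_description: str, cues: str, n_personas: int = 3) -> str:
    """Loosely mirrors the N-persona consistency idea: several simulated personas each
    draft a promotional rewrite, then a merge pass keeps only what the drafts agree on."""
    drafts = []
    for i in range(n_personas):
        prompt = (
            f"You are persona #{i + 1}, an expert product copywriter.\n"
            f"Characteristics of popular items:\n{cues}\n"
            "Rewrite the following item description so it reads like a popular item, "
            f"without changing what the item actually is:\n{target_description}"
        )
        drafts.append(llm(prompt))
    merge_prompt = (
        "Merge these drafts into one description, keeping only statements that "
        "appear consistently across all of them:\n" + "\n===\n".join(drafts)
    )
    return llm(merge_prompt)
```

Plugging any chat-completion client in as `llm` completes the pipeline: extract cues from a handful of popular items' descriptions, then rewrite the target item's description against those cues. Because ID-free recommenders rank items largely from their text, this kind of description edit is enough to shift a target item's exposure without touching interaction data.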
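The abstract also mentions a defense that detects LLM-generated promotional text, but gives no implementation details. One way to make that framing concrete is to treat detection as binary text classification; the toy baseline below uses a TF-IDF plus logistic-regression pipeline from scikit-learn on made-up examples and is not the paper's detector.

```python
# Toy illustration of framing the defense as binary text classification.
# This is NOT the paper's detector; the texts and labels below are made up.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: 1 = suspected LLM-generated promotional text, 0 = benign.
texts = [
    "Experience unmatched quality loved by thousands of satisfied customers worldwide.",
    "A best-in-class essential that effortlessly elevates your everyday routine.",
    "Stainless steel water bottle, 500 ml, keeps drinks cold for 12 hours.",
    "USB-C charging cable, 1 m, braided nylon, includes 12-month warranty.",
]
labels = [1, 1, 0, 0]

detector = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
detector.fit(texts, labels)

# Score a new item description; a higher probability means more promotional/LLM-styled.
new_text = ["An irresistible must-have adored by countless happy users everywhere."]
print(detector.predict_proba(new_text)[0][1])
```

A practical detector would need labeled data at scale and stronger text encoders; the point is only that the defense reduces to separating simulated promotional text from organic item descriptions.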
Latest articles from this journal
Decoding Style: Efficient Fine-Tuning of LLMs for Image-Guided Outfit Recommendation with Preference
Retrieve, Annotate, Evaluate, Repeat: Leveraging Multimodal LLMs for Large-Scale Product Retrieval Evaluation
Active Reconfigurable Intelligent Surface Empowered Synthetic Aperture Radar Imaging
FLARE: Fusing Language Models and Collaborative Architectures for Recommender Enhancement
Basket-Enhanced Heterogenous Hypergraph for Price-Sensitive Next Basket Recommendation