Explanation framework for industrial recommendation systems based on the generative adversarial network with embedding constraints

Binchuan Qi, Wei Gong, Li Li
{"title":"Explanation framework for industrial recommendation systems based on the generative adversarial network with embedding constraints","authors":"Binchuan Qi,&nbsp;Wei Gong,&nbsp;Li Li","doi":"10.1007/s43684-025-00092-2","DOIUrl":null,"url":null,"abstract":"<div><p>The explainability of recommendation systems refers to the ability to explain the logic that guides the system’s decision to endorse or exclude an item. In industrial-grade recommendation systems, the high complexity of features, the presence of embedding layers, the existence of adversarial samples and the requirements for explanation accuracy and efficiency pose significant challenges to current explanation methods. This paper proposes a novel framework AdvLIME (Adversarial Local Interpretable Model-agnostic Explanation) that leverages Generative Adversarial Networks (GANs) with Embedding Constraints to enhance explainability. This method utilizes adversarial samples as references to explain recommendation decisions, generating these samples in accordance with realistic distributions and ensuring they meet the structural constraints of the embedding module. AdvLIME requires no modifications to the existing model architecture and needs only a single training session for global explanation, making it ideal for industrial applications. This work contributes two significant advancements. First, it develops a model-independent global explanation method via adversarial generation. Second, it introduces a model discrimination method to guarantee that the generated samples adhere to the embedding constraints. We evaluate the AdvLIME framework on the Behavior Sequence Transformer (BST) model using the MovieLens 20 M dataset. The experimental results show that AdvLIME outperforms traditional methods such as LIME and DLIME, reducing the approximation error of real samples by 50% and demonstrating improved stability and accuracy.</p></div>","PeriodicalId":71187,"journal":{"name":"自主智能系统(英文)","volume":"5 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2025-03-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s43684-025-00092-2.pdf","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"自主智能系统(英文)","FirstCategoryId":"1093","ListUrlMain":"https://link.springer.com/article/10.1007/s43684-025-00092-2","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

The explainability of recommendation systems refers to the ability to explain the logic that guides the system’s decision to endorse or exclude an item. In industrial-grade recommendation systems, the high complexity of features, the presence of embedding layers, the existence of adversarial samples, and the requirements for explanation accuracy and efficiency pose significant challenges to current explanation methods. This paper proposes a novel framework, AdvLIME (Adversarial Local Interpretable Model-agnostic Explanation), that leverages Generative Adversarial Networks (GANs) with embedding constraints to enhance explainability. The method uses adversarial samples as references to explain recommendation decisions, generating these samples in accordance with realistic distributions and ensuring they satisfy the structural constraints of the embedding module. AdvLIME requires no modifications to the existing model architecture and needs only a single training session for global explanation, making it well suited to industrial applications. This work contributes two significant advancements. First, it develops a model-independent global explanation method via adversarial generation. Second, it introduces a model discrimination method to guarantee that the generated samples adhere to the embedding constraints. We evaluate the AdvLIME framework on the Behavior Sequence Transformer (BST) model using the MovieLens 20M dataset. The experimental results show that AdvLIME outperforms traditional methods such as LIME and DLIME, reducing the approximation error on real samples by 50% and demonstrating improved stability and accuracy.
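The abstract outlines a two-stage pipeline: train a GAN whose discriminator keeps generated neighbor samples realistic and consistent with the embedding module, then use those samples as references for a LIME-style local surrogate around the decision being explained. Below is a minimal, self-contained sketch of that idea in PyTorch. The recommender, dimensions, kernel, and training schedule are toy stand-ins; this is not the paper's BST/AdvLIME implementation, only an illustration of the adversarial-neighborhood concept under those assumptions.

```python
# Hypothetical sketch of the AdvLIME idea: GAN-generated neighbors in
# embedding space + a weighted linear surrogate for local attribution.
import torch
import torch.nn as nn

torch.manual_seed(0)
EMB_DIM, NOISE_DIM, N_SAMPLES = 16, 8, 256

# Black-box recommender to be explained: embedding vector -> relevance score.
rec_model = nn.Sequential(nn.Linear(EMB_DIM, 32), nn.ReLU(), nn.Linear(32, 1))

# Generator: anchor embedding + noise -> nearby sample in embedding space.
gen = nn.Sequential(nn.Linear(EMB_DIM + NOISE_DIM, 32), nn.ReLU(),
                    nn.Linear(32, EMB_DIM))
# Discriminator: pushes generated samples toward the distribution of real
# embeddings, i.e. toward satisfying the embedding module's constraints.
disc = nn.Sequential(nn.Linear(EMB_DIM, 32), nn.ReLU(),
                     nn.Linear(32, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(gen.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-3)
bce = nn.BCELoss()

real_embs = torch.randn(1024, EMB_DIM)  # stand-in for real item embeddings
anchor = real_embs[0]                   # the decision we want to explain

for step in range(200):                 # single adversarial training session
    real = real_embs[torch.randint(0, 1024, (N_SAMPLES,))]
    noise = torch.randn(N_SAMPLES, NOISE_DIM)
    fake = gen(torch.cat([anchor.expand(N_SAMPLES, -1), noise], dim=1))

    # Discriminator step: real vs. generated embeddings.
    opt_d.zero_grad()
    d_loss = (bce(disc(real), torch.ones(N_SAMPLES, 1)) +
              bce(disc(fake.detach()), torch.zeros(N_SAMPLES, 1)))
    d_loss.backward()
    opt_d.step()

    # Generator step: fool the discriminator while staying near the anchor.
    opt_g.zero_grad()
    g_loss = (bce(disc(fake), torch.ones(N_SAMPLES, 1)) +
              0.1 * (fake - anchor).pow(2).mean())
    g_loss.backward()
    opt_g.step()

# LIME-style step: fit a proximity-weighted ridge surrogate on the
# generated neighbors; its coefficients act as local attributions.
with torch.no_grad():
    noise = torch.randn(N_SAMPLES, NOISE_DIM)
    nbrs = gen(torch.cat([anchor.expand(N_SAMPLES, -1), noise], dim=1))
    scores = rec_model(nbrs)
    w = torch.exp(-(nbrs - anchor).pow(2).sum(1) / 2.0)  # proximity kernel
    X = torch.cat([nbrs, torch.ones(N_SAMPLES, 1)], dim=1)
    Xw = X * w.unsqueeze(1)
    coef = torch.linalg.lstsq(Xw.T @ X + 1e-3 * torch.eye(EMB_DIM + 1),
                              Xw.T @ scores).solution
print("attributions per embedding dimension:", coef[:-1].squeeze())
```

In the actual framework the surrogate would presumably be fit over interpretable features rather than raw embedding dimensions; the sketch only shows how adversarially generated, constraint-respecting neighbors can replace LIME's random perturbations.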

Latest articles in this journal

- Explanation framework for industrial recommendation systems based on the generative adversarial network with embedding constraints
- Adaptive control of bilateral teleoperation systems under denial-of-service attacks
- Efficient and accurate road crack detection technology based on YOLOv8-ES
- A cooperative jamming decision-making method based on multi-agent reinforcement learning
- Enhanced bearing RUL prediction based on dynamic temporal attention and mixed MLP