Learning-based counterfactual explanations for recommendation

IF 7.3 | Zone 2, Computer Science | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Science China Information Sciences | Pub Date: 2024-07-25 | DOI: 10.1007/s11432-023-3974-2
Jingxuan Wen, Huafeng Liu, Liping Jing, Jian Yu
{"title":"基于学习的推荐反事实解释","authors":"Jingxuan Wen, Huafeng Liu, Liping Jing, Jian Yu","doi":"10.1007/s11432-023-3974-2","DOIUrl":null,"url":null,"abstract":"<p>Counterfactual explanations provide explanations by exploring the changes in effect caused by changes in cause. They have attracted significant attention in recommender system research to explore the impact of changes in certain properties on the recommendation mechanism. Among several counterfactual recommendation methods, item-based counterfactual explanation methods have attracted considerable attention because of their flexibility. The core idea of item-based counterfactual explanation methods is to find a minimal subset of interacted items (i.e., short length) such that the recommended item would topple out of the top-<i>K</i> recommendation list once these items have been removed from user interactions (i.e., good quality). Usually, explanations are generated by ranking the precomputed importance of items, which fails to characterize the true importance of interacted items due to separation from the explanation generation. Additionally, the final explanations are generated according to a certain search strategy given the precomputed importance. This indicates that the quality and length of counterfactual explanations are deterministic; therefore, they cannot be balanced once the search strategy is fixed. To overcome these obstacles, this study proposes learning-based counterfactual explanations for recommendation (LCER) to provide counterfactual explanations based on personalized recommendations by jointly modeling the factual and counterfactual preference. To achieve consistency between the computation of importance and generation of counterfactual explanations, the proposed LCER endows an optimizable importance for each interacted item, which is supervised by the goal of counterfactual explanations to guarantee its credibility. Because of the model’s flexibility, the trade-off between quality and length can be customized by setting different proportions. The experimental results on four real-world datasets demonstrate the effectiveness of the proposed LCER over several state-of-the-art baselines, both quantitatively and qualitatively.</p>","PeriodicalId":21618,"journal":{"name":"Science China Information Sciences","volume":"84 1","pages":""},"PeriodicalIF":7.3000,"publicationDate":"2024-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Learning-based counterfactual explanations for recommendation\",\"authors\":\"Jingxuan Wen, Huafeng Liu, Liping Jing, Jian Yu\",\"doi\":\"10.1007/s11432-023-3974-2\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>Counterfactual explanations provide explanations by exploring the changes in effect caused by changes in cause. They have attracted significant attention in recommender system research to explore the impact of changes in certain properties on the recommendation mechanism. Among several counterfactual recommendation methods, item-based counterfactual explanation methods have attracted considerable attention because of their flexibility. The core idea of item-based counterfactual explanation methods is to find a minimal subset of interacted items (i.e., short length) such that the recommended item would topple out of the top-<i>K</i> recommendation list once these items have been removed from user interactions (i.e., good quality). 
Usually, explanations are generated by ranking the precomputed importance of items, which fails to characterize the true importance of interacted items due to separation from the explanation generation. Additionally, the final explanations are generated according to a certain search strategy given the precomputed importance. This indicates that the quality and length of counterfactual explanations are deterministic; therefore, they cannot be balanced once the search strategy is fixed. To overcome these obstacles, this study proposes learning-based counterfactual explanations for recommendation (LCER) to provide counterfactual explanations based on personalized recommendations by jointly modeling the factual and counterfactual preference. To achieve consistency between the computation of importance and generation of counterfactual explanations, the proposed LCER endows an optimizable importance for each interacted item, which is supervised by the goal of counterfactual explanations to guarantee its credibility. Because of the model’s flexibility, the trade-off between quality and length can be customized by setting different proportions. The experimental results on four real-world datasets demonstrate the effectiveness of the proposed LCER over several state-of-the-art baselines, both quantitatively and qualitatively.</p>\",\"PeriodicalId\":21618,\"journal\":{\"name\":\"Science China Information Sciences\",\"volume\":\"84 1\",\"pages\":\"\"},\"PeriodicalIF\":7.3000,\"publicationDate\":\"2024-07-25\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Science China Information Sciences\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://doi.org/10.1007/s11432-023-3974-2\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, INFORMATION SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Science China Information Sciences","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1007/s11432-023-3974-2","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Citations: 0

Abstract

Counterfactual explanations provide explanations by exploring the changes in effect caused by changes in cause. They have attracted significant attention in recommender system research as a way to explore how changes in certain properties affect the recommendation mechanism. Among several counterfactual recommendation methods, item-based counterfactual explanation methods have attracted considerable attention because of their flexibility. The core idea of item-based counterfactual explanation methods is to find a minimal subset of interacted items (i.e., short length) such that the recommended item would topple out of the top-K recommendation list once these items have been removed from user interactions (i.e., good quality). Usually, explanations are generated by ranking the precomputed importance of items, which fails to characterize the true importance of interacted items because the importance computation is separated from explanation generation. Additionally, the final explanations are generated according to a certain search strategy given the precomputed importance. This indicates that the quality and length of counterfactual explanations are deterministic; therefore, they cannot be balanced once the search strategy is fixed. To overcome these obstacles, this study proposes learning-based counterfactual explanations for recommendation (LCER) to provide counterfactual explanations based on personalized recommendations by jointly modeling factual and counterfactual preferences. To achieve consistency between the computation of importance and the generation of counterfactual explanations, the proposed LCER endows each interacted item with an optimizable importance, which is supervised by the counterfactual explanation objective to guarantee its credibility. Because of the model's flexibility, the trade-off between quality and length can be customized by setting different proportions. The experimental results on four real-world datasets demonstrate the effectiveness of the proposed LCER over several state-of-the-art baselines, both quantitatively and qualitatively.
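
The central mechanism described in the abstract (a learnable importance weight for each interacted item, trained against the counterfactual objective, with an explicit trade-off between explanation quality and explanation length) can be illustrated with a minimal sketch. This is not the authors' implementation: it assumes a toy item-based scorer in which the user profile is the average of interacted-item embeddings, and the names `scores`, `explain`, and `lambda_len` are illustrative only.

```python
# Minimal sketch of learning-based item counterfactual explanation (assumed setup,
# not the LCER code): learn a soft keep/remove mask over interacted items so that
# the target recommendation falls out of the top-K list when masked items are removed.
import torch

torch.manual_seed(0)

n_items, dim, K = 100, 16, 5
item_emb = torch.randn(n_items, dim)          # stand-in for pretrained item embeddings
history = torch.tensor([3, 17, 42, 58, 71])   # the user's interacted items
target = 9                                    # the recommended item to be explained


def scores(mask):
    """Score all items from the softly masked interaction history."""
    keep = torch.sigmoid(mask)                 # keep-probability per interacted item
    user = (keep[:, None] * item_emb[history]).sum(0) / keep.sum().clamp(min=1e-6)
    return item_emb @ user                     # one score per catalogue item


def explain(lambda_len=0.05, steps=300, lr=0.1):
    # One learnable importance per interacted item, supervised directly by the
    # counterfactual goal: push the target out of the top-K while removing few items.
    mask = torch.nn.Parameter(torch.full((len(history),), 2.0))  # start near "keep everything"
    opt = torch.optim.Adam([mask], lr=lr)
    for _ in range(steps):
        s = scores(mask)
        kth = torch.topk(s, K).values[-1]            # K-th best score under the current mask
        quality = torch.relu(s[target] - kth + 0.1)  # hinge: target should fall below rank K
        length = (1.0 - torch.sigmoid(mask)).sum()   # soft count of removed items
        loss = quality + lambda_len * length         # the proportion tunes quality vs. length
        opt.zero_grad()
        loss.backward()
        opt.step()
    removed = history[torch.sigmoid(mask) < 0.5]     # hard explanation: items to remove
    return removed.tolist()


print("counterfactual explanation (remove these items):", explain())
```

Here `lambda_len` plays the role of the proportion mentioned in the abstract: a larger value favors shorter explanations (fewer removed items), while a smaller value favors explanations that more reliably push the target item out of the top-K list.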

Source journal
Science China Information Sciences (COMPUTER SCIENCE, INFORMATION SYSTEMS)
CiteScore: 12.60
Self-citation rate: 5.70%
Publication volume: 224
Review time: 8.3 months
Journal introduction: Science China Information Sciences is a dedicated journal that showcases high-quality, original research across various domains of information sciences. It encompasses Computer Science & Technologies, Control Science & Engineering, Information & Communication Engineering, Microelectronics & Solid-State Electronics, and Quantum Information, providing a platform for the dissemination of significant contributions in these fields.
Latest articles in this journal
Weighted sum power maximization for STAR-RIS-aided SWIPT systems with nonlinear energy harvesting
TSCompiler: efficient compilation framework for dynamic-shape models
NeurDB: an AI-powered autonomous data system
State and parameter identification of linearized water wave equation via adjoint method
An STP look at logical blocking of finite state machines: formulation, detection, and search