Explaining the model and feature dependencies by decomposition of the Shapley value

IF 6.7 · CAS Tier 1 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Decision Support Systems · Pub Date: 2024-04-27 · DOI: 10.1016/j.dss.2024.114234
Joran Michiels, Johan Suykens, Maarten De Vos
{"title":"Explaining the model and feature dependencies by decomposition of the Shapley value","authors":"Joran Michiels ,&nbsp;Johan Suykens ,&nbsp;Maarten De Vos","doi":"10.1016/j.dss.2024.114234","DOIUrl":null,"url":null,"abstract":"<div><p>Shapley values have become one of the go-to methods to explain complex models to end-users. They provide a model agnostic post-hoc explanation with foundations in game theory: what is the worth of a player (in machine learning, a feature value) in the objective function (the output of the complex machine learning model). One downside is that they always require outputs of the model when some features are missing. These are usually computed by taking the expectation over the missing features. This however introduces a non-trivial choice: do we condition on the unknown features or not? In this paper we examine this question and claim that they represent two different explanations which are valid for different end-users: one that explains the model and one that explains the model combined with the feature dependencies in the data. We propose a new algorithmic approach to combine both explanations, removing the burden of choice and enhancing the explanatory power of Shapley values, and show that it achieves intuitive results on simple problems. We apply our method to two real-world datasets and discuss the explanations. Finally, we demonstrate how our method is either equivalent or superior to state-to-of-art Shapley value implementations while simultaneously allowing for increased insight into the model-data structure.</p></div>","PeriodicalId":55181,"journal":{"name":"Decision Support Systems","volume":"182 ","pages":"Article 114234"},"PeriodicalIF":6.7000,"publicationDate":"2024-04-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Decision Support Systems","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0167923624000678","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

Shapley values have become one of the go-to methods to explain complex models to end-users. They provide a model-agnostic post-hoc explanation with foundations in game theory: what is the worth of a player (in machine learning, a feature value) to the objective function (the output of the complex machine learning model)? One downside is that they always require outputs of the model when some features are missing. These are usually computed by taking the expectation over the missing features. This, however, introduces a non-trivial choice: do we condition on the unknown features or not? In this paper we examine this question and claim that the two options represent two different explanations, each valid for different end-users: one that explains the model and one that explains the model combined with the feature dependencies in the data. We propose a new algorithmic approach that combines both explanations, removing the burden of choice and enhancing the explanatory power of Shapley values, and show that it achieves intuitive results on simple problems. We apply our method to two real-world datasets and discuss the explanations. Finally, we demonstrate how our method is either equivalent or superior to state-of-the-art Shapley value implementations while simultaneously allowing for increased insight into the model-data structure.
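The choice the abstract highlights, whether or not to condition on the unknown features when marginalizing them out, can be made concrete in code. Below is a minimal Python sketch (our own illustration, not the paper's algorithm): an interventional value function that breaks feature dependencies by drawing the missing features from a background dataset, contrasted with a crude kernel-weighted approximation of the conditional expectation. The model `f`, the background matrix `X`, the `bandwidth` parameter, and all function names are hypothetical.

```python
# Sketch: two value functions v(S) for Shapley-value explanations.
# Assumes f maps an (n, d) array of rows to an (n,) array of outputs.
import itertools
import math
import numpy as np

def value_interventional(f, x, S, X):
    """v(S) that ignores feature dependencies: features outside S are
    taken from background rows independently of x ('do not condition')."""
    idx = sorted(S)
    Z = X.copy()
    Z[:, idx] = x[idx]            # fix the known features to x
    return f(Z).mean()            # plain average over background rows

def value_conditional(f, x, S, X, bandwidth=0.5):
    """v(S) that respects dependencies: weight background rows by how
    close they are to x on the known features S (a crude kernel
    approximation of the conditional expectation E[f | x_S])."""
    idx = sorted(S)
    if not idx:
        return f(X).mean()
    dist = np.linalg.norm(X[:, idx] - x[idx], axis=1)
    w = np.exp(-(dist / bandwidth) ** 2)
    w /= w.sum()
    Z = X.copy()
    Z[:, idx] = x[idx]
    return float(w @ f(Z))        # dependency-weighted average

def shapley_values(f, x, X, value):
    """Exact Shapley values by coalition enumeration (small d only)."""
    d = len(x)
    phi = np.zeros(d)
    for i in range(d):
        others = [j for j in range(d) if j != i]
        for r in range(len(others) + 1):
            for S in itertools.combinations(others, r):
                w = math.factorial(r) * math.factorial(d - r - 1) / math.factorial(d)
                phi[i] += w * (value(f, x, set(S) | {i}, X)
                               - value(f, x, set(S), X))
    return phi

# Hypothetical usage: for an additive model on (near-)independent
# features the two explanations roughly coincide; with correlated
# features they diverge, which is the choice the paper examines.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
f = lambda Z: Z[:, 0] + 2.0 * Z[:, 1]
x = np.array([1.0, 1.0])
print(shapley_values(f, x, X, value_interventional))
print(shapley_values(f, x, X, value_conditional))
```

The enumeration is exponential in the number of features, so this sketch is only for intuition; practical implementations approximate the sum by sampling coalitions.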

Source journal
Decision Support Systems (Engineering & Technology; Computer Science: Artificial Intelligence)
CiteScore: 14.70
Self-citation rate: 6.70%
Articles published: 119
Review time: 13 months
Journal description: The common thread of articles published in Decision Support Systems is their relevance to theoretical and technical issues in the support of enhanced decision making. The areas addressed may include foundations, functionality, interfaces, implementation, impacts, and evaluation of decision support systems (DSSs).
Latest articles in this journal
A comparative analysis of the effect of initiative risk statement versus passive risk disclosure on the financing performance of Kickstarter campaigns
DeepSecure: A computational design science approach for interpretable threat hunting in cybersecurity decision making
Editorial Board
Effects of visual-preview and information-sidedness features on website persuasiveness
The evolution of organizations and stakeholders for metaverse ecosystems: Editorial for the special issue on metaverse part 1