Towards consistency of rule-based explainer and black box model — Fusion of rule induction and XAI-based feature importance

Knowledge-Based Systems (IF 7.6; CAS Tier 1, Computer Science; JCR Q1, Computer Science, Artificial Intelligence). Pub Date: 2025-02-06. DOI: 10.1016/j.knosys.2025.113092
Michał Kozielski , Marek Sikora , Łukasz Wawrowski
Citations: 0

Abstract

Rule-based models offer a human-understandable representation, i.e., they are interpretable. For this reason, they are used to explain the decisions of non-interpretable complex models, referred to as black box models. Generating such explanations involves approximating a black box model with a rule-based model. To date, however, it has not been investigated whether the rule-based model makes decisions in the same way as the black box model it approximates. Making decisions in the same way is understood in this work as the consistency of decisions and the consistency of the most important attributes used for decision making. This study proposes a novel approach ensuring that the rule-based surrogate model mimics the performance of the black box model. The proposed solution performs an explanation fusion that involves rule generation and takes into account the feature importance determined by selected XAI methods for the black box model being explained. The method can produce both global and local rule-based explanations. The quality of the proposed solution was verified by extensive analysis on 30 tabular benchmark datasets representing classification problems. Evaluation included a comparison with the reference method and an illustrative case study. In addition, the paper discusses possible pathways for applying the rule-based approach in XAI and how rule-based explanations, including the proposed method, meet the user perspective and requirements for both content and presentation. The software created and a detailed report containing the full experimental results are available on the GitHub repository (https://github.com/ruleminer/FI-rules4XAI).
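The two consistency notions the abstract names (consistency of decisions and consistency of the most important attributes) can be illustrated with a minimal sketch. This is not the paper's algorithm: it stands in a shallow decision tree for the rule-based surrogate, permutation importance for the selected XAI method, and synthetic data for the benchmark datasets.

```python
# Sketch: approximate a black box with an interpretable surrogate and measure
# (1) decision consistency (fidelity) and (2) top-feature agreement.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=8, n_informative=4,
                           random_state=0)

# Black box model to be explained.
black_box = RandomForestClassifier(random_state=0).fit(X, y)

# XAI-based feature importance for the black box (permutation importance
# here stands in for the XAI methods selected in the paper).
imp = permutation_importance(black_box, X, y, n_repeats=5, random_state=0)
bb_top = set(np.argsort(imp.importances_mean)[-4:])

# Surrogate trained on the black box's predictions; each root-to-leaf path
# of the shallow tree can be read as a decision rule.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Decision consistency: fraction of identical decisions on the data.
fidelity = float(np.mean(surrogate.predict(X) == black_box.predict(X)))

# Attribute consistency: overlap between the two models' top-4 features.
sur_top = set(np.argsort(surrogate.feature_importances_)[-4:])
overlap = len(bb_top & sur_top) / 4

print(f"fidelity={fidelity:.2f}, top-feature overlap={overlap:.2f}")
```

A plain surrogate optimizes only fidelity; the paper's contribution is to also steer rule induction toward the black box's important features, which this sketch only measures after the fact.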


Source journal: Knowledge-Based Systems (Engineering & Technology; Computer Science: Artificial Intelligence)
CiteScore: 14.80
Self-citation rate: 12.50%
Articles per year: 1245
Review time: 7.8 months
Journal description: Knowledge-Based Systems, an international and interdisciplinary journal in artificial intelligence, publishes original, innovative, and creative research results in the field. It focuses on knowledge-based and other artificial intelligence techniques-based systems. The journal aims to support human prediction and decision-making through data science and computation techniques, provide a balanced coverage of theory and practical study, and encourage the development and implementation of knowledge-based intelligence models, methods, systems, and software tools. Applications in business, government, education, engineering, and healthcare are emphasized.