Hierarchical Fuzzy Model-Agnostic Explanation: Framework, Algorithms, and Interface for XAI

IEEE Transactions on Fuzzy Systems · IF 11.9 · Q1 (Computer Science, Artificial Intelligence) · CAS Tier 1 (Computer Science) · Pub Date: 2024-10-23 · DOI: 10.1109/TFUZZ.2024.3485212
Faliang Yin;Hak-Keung Lam;David Watson
IEEE Transactions on Fuzzy Systems, vol. 33, no. 2, pp. 549-558. Article URL: https://ieeexplore.ieee.org/document/10731553/ · Citations: 0

Abstract

Artificial intelligence (AI) has made remarkable achievements in a wide range of fields, yet its closed-box nature limits its application in many critical areas. Against this drawback, explainable AI (XAI) has emerged as a focal point of current research. Recently, fuzzy logic systems (FLSs) have attracted increasing attention in XAI because of their linguistic representation, which humans can understand naturally. However, these works are limited by relying solely on inherent rule-based structures for explanation. Motivated by further exploring the potential of FLSs to overcome the challenges of XAI in terms of comprehensibility, scalability, and transferability, in this work we propose fuzzy model-agnostic explanation (FMAE) as a post-hoc paradigm to explain the behavior of closed-box models. The innovations and contributions of this work are threefold: a unified framework offering four levels of explanation, the associated algorithms that present the hidden knowledge behind the closed-box model in human-understandable form at different levels of granularity, and an interface that delivers explanations to users. First, we introduce the hierarchical FMAE framework to organize explanations into four levels: sample, local, domain, and universe. Second, learning and explaining algorithms are developed to systematically construct FLSs that model the behavior of closed-box models at the four levels, where downscaling is performed by simplification to facilitate explanations with concise rules, and upscaling is performed by aggregation to integrate explanations at a higher level. Third, the proposed explanation interface unifies two typical forms of expression in XAI through fuzzy rules: the semantic inference explanation, revealing the decision mechanism of the closed-box model, and the feature salience explanation, reflecting the attribution and interaction of input features. Simulated user experiments are designed on comprehensive explanatory metrics. Compared with mainstream methods, the results show outstanding explanation performance on real-world datasets for both regression and classification tasks.
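The post-hoc, rule-based surrogate idea the abstract describes can be illustrated with a minimal sketch. This is not the paper's FMAE algorithm; it is a generic Wang-Mendel-style fuzzy surrogate, where the toy `closed_box` model, the triangular membership functions, and all names are illustrative assumptions. It learns concise IF-THEN rules from sampled black-box predictions and reports the rules fired for a query, i.e., a semantic inference explanation in miniature:

```python
import numpy as np

# Toy closed-box model to be explained (assumption: any callable f(X) -> y works).
def closed_box(X):
    return 0.7 * X[:, 0] ** 2 + 0.3 * X[:, 1]

TERMS = ["low", "medium", "high"]
centers = np.array([0.0, 0.5, 1.0])  # linguistic term centers on [0, 1]

def tri_mf(x, centers):
    """Triangular membership degrees of scalar x for each linguistic term."""
    m = np.zeros(len(centers))
    for i, c in enumerate(centers):
        left = centers[i - 1] if i > 0 else c - 1.0
        right = centers[i + 1] if i < len(centers) - 1 else c + 1.0
        if left < x <= c:
            m[i] = (x - left) / (c - left)
        elif c < x < right:
            m[i] = (right - x) / (right - c)
    return m

# Sample the closed box, then learn rules Wang-Mendel style: each sample
# votes for the antecedent formed by its max-membership term per feature.
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(500, 2))
y = closed_box(X)

rules = {}  # antecedent (term indices per feature) -> observed outputs
for xi, yi in zip(X, y):
    ante = tuple(int(np.argmax(tri_mf(v, centers))) for v in xi)
    rules.setdefault(ante, []).append(yi)
rulebase = {a: float(np.mean(ys)) for a, ys in rules.items()}

def explain(x, top_k=2):
    """Print the strongest fired rules and return the surrogate prediction."""
    fired, num, den = [], 0.0, 0.0
    for ante, cons in rulebase.items():
        w = float(np.prod([tri_mf(v, centers)[t] for v, t in zip(x, ante)]))
        if w > 0:
            fired.append((w, ante, cons))
            num += w * cons
            den += w
    fired.sort(reverse=True)
    for w, ante, cons in fired[:top_k]:
        conds = " AND ".join(f"x{j+1} is {TERMS[t]}" for j, t in enumerate(ante))
        print(f"IF {conds} THEN y is about {cons:.2f}  (strength {w:.2f})")
    return num / den if den > 0 else float(np.mean(y))

pred = explain(np.array([0.9, 0.2]))
```

The surrogate's weighted rule output approximates the closed box while each fired rule stays human-readable; FMAE's simplification (downscaling) and aggregation (upscaling) operate on rule bases of this general kind across the four explanation levels.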
Source journal: IEEE Transactions on Fuzzy Systems (Engineering: Electrical & Electronic)
CiteScore: 20.50
Self-citation rate: 13.40%
Articles per year: 517
Review time: 3.0 months
Journal description: The IEEE Transactions on Fuzzy Systems is a scholarly journal that focuses on the theory, design, and application of fuzzy systems. It aims to publish high-quality technical papers that contribute significant technical knowledge and exploratory developments in the field of fuzzy systems. The journal particularly emphasizes engineering systems and scientific applications. In addition to research articles, the Transactions also includes a letters section featuring current information, comments, and rebuttals related to published papers.
Latest articles from this journal:
iFuzz-Meta: An Interpretable Fuzzy Learning Framework Bridging Top-Down and Bottom-Up Knowledge Integration
Distributed Formation Control for Second-Order Nonlinear Multiagent Systems Using Predictor-Based Accelerated Fuzzy Learning
Synchronization Control of Uncertain Fractional-Order Nonlinear Multi-Agent Systems Via Fuzzy Regularization Reinforcement Learning
Convergence Conditions for Sigmoid-Based Fuzzy General Gray Cognitive Maps: A Theoretical Study
Non-monotonic Causal Discovery with Kolmogorov-Arnold Fuzzy Cognitive Maps