Explain the World—Using Causality to Facilitate Better Rules for Fuzzy Systems

IF 11.9 · CAS Region 1 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · IEEE Transactions on Fuzzy Systems · Pub Date: 2024-09-10 · DOI: 10.1109/TFUZZ.2024.3457962
Te Zhang;Christian Wagner;Jonathan M. Garibaldi
IEEE Transactions on Fuzzy Systems, vol. 32, no. 12, pp. 6671–6683.
Citations: 0

Abstract

The rules of a rule-based system provide explanations for its behavior by revealing the relationships between the variables captured. However, ideally, we would have AI systems that go beyond explainable AI (XAI), that is, systems which not only explain their behavior, but also communicate their “insights” with respect to the real world. This requires rules to capture causal relationships between variables. In this article, we argue that systems whose rules reflect causal relationships between variables represent an important class of fuzzy rule-based systems with unique benefits. Specifically, such systems benefit from improved performance and robustness, facilitate global explainability, and thus cater to a core ambition for AI: the ability to communicate important relationships among a system's real-world variables to the human users of AI. We establish two causal-rule-focused approaches to designing fuzzy systems, and show the distinctions in their respective application scenarios for the explanations of the rules obtained by these two methods. The results show that rules which reflect causal relationships are more suitable for XAI than rules which “only” reflect correlations, while also confirming that they offer robustness to overfitting, in turn supporting strong performance.
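To make the notion of human-readable IF-THEN rules concrete, the sketch below shows a minimal, generic fuzzy rule-based system. This is purely illustrative and not the architecture or the causal-rule methods proposed in the paper; the variable names, membership functions, and rule consequents are invented for the example.

```python
# Minimal generic fuzzy rule-based system (illustration only; NOT the
# method from this paper). Each rule is an IF-THEN statement over fuzzy
# sets, which is what makes the input-output relationship readable.

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def infer(temperature):
    """Two illustrative rules relating temperature to fan speed.

    Rule 1: IF temperature IS low  THEN fan_speed IS slow (~20)
    Rule 2: IF temperature IS high THEN fan_speed IS fast (~80)
    """
    low = tri(temperature, 0, 10, 25)    # firing strength of Rule 1
    high = tri(temperature, 15, 30, 40)  # firing strength of Rule 2
    weights = [low, high]
    outputs = [20.0, 80.0]               # crisp consequents (Sugeno-style)
    total = sum(weights)
    if total == 0:
        return None  # input outside the support of every rule
    # Weighted-average defuzzification over the fired rules
    return sum(w * o for w, o in zip(weights, outputs)) / total

print(infer(10))  # fully "low"  -> 20.0
print(infer(30))  # fully "high" -> 80.0
print(infer(20))  # both rules fire equally -> 50.0
```

Whether such rules merely encode correlations in the training data or genuine causal relationships between the variables is exactly the distinction the paper examines.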
Source Journal

IEEE Transactions on Fuzzy Systems (Engineering Technology – Engineering: Electrical & Electronic)
CiteScore: 20.50
Self-citation rate: 13.40%
Articles per year: 517
Review time: 3.0 months
Journal Introduction: The IEEE Transactions on Fuzzy Systems is a scholarly journal that focuses on the theory, design, and application of fuzzy systems. It aims to publish high-quality technical papers that contribute significant technical knowledge and exploratory developments in the field of fuzzy systems. The journal particularly emphasizes engineering systems and scientific applications. In addition to research articles, the Transactions also includes a letters section featuring current information, comments, and rebuttals related to published papers.
Latest Articles in This Journal

Non-monotonic causal discovery with Kolmogorov-Arnold Fuzzy Cognitive Maps
PRFCM: Poisson-Specific Residual-Driven Fuzzy $C$-Means Clustering for Image Segmentation
Target-Oriented Autonomous Fuzzy Model Adaptation in Multimodal Transfer
Trend-Aware-Based Type-2 Vector Fuzzy Neural Network for Nonlinear System Identification
Knowledge Calibration Fusion and Label Space Graph Regularization-Based Multicenter Fuzzy Systems