{"title":"Hierarchical Fuzzy Model-Agnostic Explanation: Framework, Algorithms, and Interface for XAI","authors":"Faliang Yin;Hak-Keung Lam;David Watson","doi":"10.1109/TFUZZ.2024.3485212","DOIUrl":null,"url":null,"abstract":"Artificial intelligence (AI) has made remarkable achievements in extensive fields, whereas its closed boxnature limited applications in many critical areas. Against this drawback, explainable AI (XAI), has emerged as a focal point of current research. Recently, fuzzy logic systems (FLSs) attract increasing attention in XAI because of their linguistic representation, which can be naturally understood by humans. However, the focus of these works is limited by simply relying on inherent rule-based structures for explanation. Motivated by further exploring, the potential of FLS to overcome the challenges of XAI in terms of comprehensibility, scalability, and transferability, in this work, we propose fuzzy model-agnostic explanation (FMAE) as a post-hoc paradigm to explain the behavior of closed boxmodels. The innovations and contributions of this work provide a unified framework offering four levels of explanation, develop the associated algorithms to present the hidden knowledge behind the closed boxmodel in human-understandable form at different levels of granularity, and create the interface to deliver explanations to users. First, we introduce the hierarchical FMAE framework to formulate explanations into four levels including sample, local, domain, and universe. Second, the learning and explaining algorithms are developed to systematically construct FLS to model the behavior of closed boxmodels in the four levels where downscaling is performed by simplification to facilitate explanations with concise rules and upscaling is performed by the aggregation to integrate explanations at a higher level. 
Third, the proposed explanation interface unifies two typical forms of expression in XAI by fuzzy rules: the semantic inference explanation revealing the decision mechanism of the closed boxmodel and the feature salience explanation reflecting the attribution and interaction of input features. Simulated user experiments are designed on the comprehensive explanatory metrics. Compared with mainstream methods, the result shows outstanding explanation performance on real-world datasets for both regression and classification tasks.","PeriodicalId":13212,"journal":{"name":"IEEE Transactions on Fuzzy Systems","volume":"33 2","pages":"549-558"},"PeriodicalIF":11.9000,"publicationDate":"2024-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Fuzzy Systems","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10731553/","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0
Abstract
Artificial intelligence (AI) has made remarkable achievements in extensive fields, but its closed-box nature limits its application in many critical areas. Against this drawback, explainable AI (XAI) has emerged as a focal point of current research. Recently, fuzzy logic systems (FLSs) have attracted increasing attention in XAI because of their linguistic representation, which can be naturally understood by humans. However, these works are limited by relying solely on inherent rule-based structures for explanation. Motivated by further exploring the potential of FLSs to overcome the challenges of XAI in terms of comprehensibility, scalability, and transferability, in this work we propose fuzzy model-agnostic explanation (FMAE) as a post-hoc paradigm to explain the behavior of closed-box models. The innovations and contributions of this work provide a unified framework offering four levels of explanation, develop the associated algorithms to present the hidden knowledge behind the closed-box model in human-understandable form at different levels of granularity, and create the interface to deliver explanations to users. First, we introduce the hierarchical FMAE framework to formulate explanations into four levels: sample, local, domain, and universe. Second, learning and explaining algorithms are developed to systematically construct FLSs that model the behavior of closed-box models across the four levels, where downscaling is performed by simplification to facilitate explanations with concise rules, and upscaling is performed by aggregation to integrate explanations at a higher level. Third, the proposed explanation interface unifies two typical forms of expression in XAI through fuzzy rules: the semantic inference explanation revealing the decision mechanism of the closed-box model, and the feature salience explanation reflecting the attribution and interaction of input features.
Simulated user experiments are designed around comprehensive explanatory metrics. Compared with mainstream methods, the results show outstanding explanation performance on real-world datasets for both regression and classification tasks.
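The core idea the abstract describes, using interpretable fuzzy rules as a post-hoc, model-agnostic surrogate for a closed-box model, can be illustrated with a minimal sketch. This is not the authors' FMAE algorithm; it is a toy local-level example under assumed triangular term partitions and a Wang-Mendel-style rule extraction, where `closed_box`, the Low/Medium/High partitions, and the sampling radius are all hypothetical choices made for illustration.

```python
import random

# Hypothetical closed-box model to be explained (stands in for any ML model).
def closed_box(x1, x2):
    return 2.0 * x1 - 1.0 * x2 + 0.5 * x1 * x2

def tri(x, a, b, c):
    """Triangular membership function on [a, c] peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Assumed linguistic partition (Low / Medium / High) shared by both features.
TERMS = {"Low": (-1.5, -1.0, 0.0), "Medium": (-1.0, 0.0, 1.0), "High": (0.0, 1.0, 1.5)}

def fire(rule, x1, x2):
    """Rule firing strength: product t-norm over the two antecedents."""
    t1, t2 = rule
    return tri(x1, *TERMS[t1]) * tri(x2, *TERMS[t2])

def build_local_rules(x1, x2, radius=0.3, n=200, seed=0):
    """Extract fuzzy rules from closed-box outputs sampled near the query point."""
    rng = random.Random(seed)
    rules = {}  # (term1, term2) -> (weighted output sum, weight sum)
    for _ in range(n):
        s1 = x1 + rng.uniform(-radius, radius)
        s2 = x2 + rng.uniform(-radius, radius)
        y = closed_box(s1, s2)
        # Assign each sample to its best-matching antecedent pair.
        w, key = max(((fire((t1, t2), s1, s2), (t1, t2))
                      for t1 in TERMS for t2 in TERMS), key=lambda p: p[0])
        if w > 0:
            sy, sw = rules.get(key, (0.0, 0.0))
            rules[key] = (sy + w * y, sw + w)
    # Consequent = firing-strength-weighted mean output (zero-order TSK style).
    return {k: sy / sw for k, (sy, sw) in rules.items()}

def explain(x1, x2):
    """Print the fired rules and return the defuzzified surrogate output."""
    rules = build_local_rules(x1, x2)
    num = den = 0.0
    for (t1, t2), c in sorted(rules.items()):
        w = fire((t1, t2), x1, x2)
        if w > 1e-9:
            print(f"IF x1 is {t1} AND x2 is {t2} THEN y is about {c:+.2f} (strength {w:.2f})")
        num += w * c
        den += w
    return num / den if den else 0.0

surrogate = explain(0.5, -0.5)
true_out = closed_box(0.5, -0.5)
print(f"surrogate {surrogate:+.3f} vs closed-box {true_out:+.3f}")
```

The printed IF-THEN rules play the role of a semantic inference explanation, while the defuzzified weighted mean approximates the closed-box output near the query point; the paper's framework additionally handles simplification (downscaling) and aggregation (upscaling) across its four explanation levels, which this sketch omits.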
Journal description:
The IEEE Transactions on Fuzzy Systems is a scholarly journal that focuses on the theory, design, and application of fuzzy systems. It aims to publish high-quality technical papers that contribute significant technical knowledge and exploratory developments in the field of fuzzy systems. The journal particularly emphasizes engineering systems and scientific applications. In addition to research articles, the Transactions also includes a letters section featuring current information, comments, and rebuttals related to published papers.