{"title":"A multiorder feature tracking and explanation strategy for explainable deep learning","authors":"Lin Zheng, Yixuan Lin","doi":"10.1515/jisys-2022-0212","DOIUrl":null,"url":null,"abstract":"Abstract A good AI algorithm can make accurate predictions and provide reasonable explanations for the field in which it is applied. However, the application of deep models makes the black box problem, i.e., the lack of interpretability of a model, more prominent. In particular, when there are multiple features in an application domain and complex interactions between these features, it is difficult for a deep model to intuitively explain its prediction results. Moreover, in practical applications, multiorder feature interactions are ubiquitous. To break the interpretation limitations of deep models, we argue that a multiorder linearly separable deep model can be divided into different orders to explain its prediction results. Inspired by the interpretability advantage of tree models, we design a feature representation mechanism that can consistently represent the features of both trees and deep models. Based on the consistent representation, we propose a multiorder feature-tracking strategy to provide a prediction-oriented multiorder explanation for a linearly separable deep model. In experiments, we have empirically verified the effectiveness of our approach in two binary classification application scenarios: education and marketing. Experimental results show that our model can intuitively represent complex relationships between features through diversified multiorder explanations.","PeriodicalId":46139,"journal":{"name":"Journal of Intelligent Systems","volume":"51 1","pages":""},"PeriodicalIF":2.1000,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Intelligent Systems","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1515/jisys-2022-0212","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Abstract
A good AI algorithm makes accurate predictions and provides reasonable explanations for the field in which it is applied. However, the adoption of deep models makes the black-box problem, i.e., a model's lack of interpretability, more prominent. In particular, when an application domain has many features with complex interactions among them, it is difficult for a deep model to explain its predictions intuitively. Moreover, multiorder feature interactions are ubiquitous in practical applications. To overcome the interpretability limitations of deep models, we argue that a multiorder linearly separable deep model can be decomposed by order to explain its prediction results. Inspired by the interpretability advantage of tree models, we design a feature representation mechanism that represents the features of both trees and deep models consistently. Building on this consistent representation, we propose a multiorder feature-tracking strategy that provides a prediction-oriented multiorder explanation for a linearly separable deep model. We empirically verify the effectiveness of our approach in two binary classification scenarios: education and marketing. Experimental results show that our model can intuitively represent complex relationships between features through diversified multiorder explanations.
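The abstract does not give the model's exact form, but the core idea of an order-wise explanation can be illustrated with a small sketch. The following minimal Python example assumes a binary classifier whose logit decomposes additively into an order-0 (bias), order-1 (linear), and order-2 (pairwise interaction) term, in the spirit of a factorization-machine-style "linearly separable" architecture; all function and weight names are hypothetical and are not the authors' implementation.

    import numpy as np

    def predict_with_order_contributions(x, w0, w1, W2):
        """Score a binary classifier whose logit decomposes additively by
        feature-interaction order, so each order's share of the prediction
        can be reported as an explanation.

        x  : (d,) feature vector
        w0 : scalar bias (order-0 term)
        w1 : (d,) first-order weights
        W2 : (d, d) symmetric second-order interaction weights
        """
        first_order = float(w1 @ x)  # sum_i w1[i] * x[i]
        # sum over i < j of W2[i, j] * x[i] * x[j], using the symmetry of W2
        second_order = 0.5 * float(x @ W2 @ x - np.sum(np.diag(W2) * x * x))
        logit = w0 + first_order + second_order
        prob = 1.0 / (1.0 + np.exp(-logit))
        contributions = {"order_0": w0,
                         "order_1": first_order,
                         "order_2": second_order}
        return prob, contributions

    # Toy usage: three features with randomly drawn weights.
    rng = np.random.default_rng(0)
    x = np.array([1.0, 0.0, 1.0])
    w0, w1 = -0.2, rng.normal(size=3)
    W2 = rng.normal(size=(3, 3))
    W2 = (W2 + W2.T) / 2  # symmetrize the interaction matrix
    prob, contrib = predict_with_order_contributions(x, w0, w1, W2)
    print(prob, contrib)

Under this additive assumption, the returned per-order contributions sum exactly to the logit, which is what lets a prediction be explained order by order rather than as an opaque whole.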
About the Journal
The Journal of Intelligent Systems aims to publish research and review papers, as well as brief communications, at an interdisciplinary level, with the field of intelligent systems as the focal point. This field includes artificial intelligence; models and computational theories of human cognition, perception, and motivation; brain models; and artificial neural networks and neural computing. The journal covers contributions from the social, human, and computer sciences to the analysis and application of information technology.