Towards Symbolic XAI -- Explanation Through Human Understandable Logical Relationships Between Features

Thomas Schnake, Farnoush Rezaei Jafari, Jonas Lederer, Ping Xiong, Shinichi Nakajima, Stefan Gugler, Grégoire Montavon, Klaus-Robert Müller

arXiv:2408.17198, published 2024-08-30 (arXiv - CS - Artificial Intelligence)
Abstract
Explainable Artificial Intelligence (XAI) plays a crucial role in fostering transparency and trust in AI systems. Traditional XAI approaches typically offer a single level of abstraction for explanations, often in the form of heatmaps highlighting individual or multiple input features. However, we ask whether the abstract reasoning or problem-solving strategies of a model may also be relevant, as these align more closely with how humans approach solutions to problems. We propose a framework, called Symbolic XAI, that attributes relevance to symbolic queries expressing logical relationships between input features, thereby capturing the abstract reasoning behind a model's predictions. The methodology is built upon a simple yet general multi-order decomposition of model predictions. This decomposition can be specified using higher-order propagation-based relevance methods, such as GNN-LRP, or perturbation-based explanation methods commonly used in XAI. The effectiveness of our framework is demonstrated in the domains of natural language processing (NLP), vision, and quantum chemistry (QC), where abstract symbolic domain knowledge is abundant and of significant interest to users. The Symbolic XAI framework provides an understanding of the model's decision-making process that is both flexible for customization by the user and human-readable through logical formulas.
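
To make the idea of attributing relevance to a logical query more concrete, the following minimal Python sketch scores feature subsets with a masking-based value function and scores a conjunctive ("AND") query by its second-order interaction term. The function names (masked_value, atom_relevance, and_relevance), the zero-baseline masking scheme, and the interaction score are illustrative assumptions made here; they are not the paper's multi-order decomposition or its GNN-LRP instantiation.

```python
import numpy as np

def masked_value(f, x, baseline, S):
    """Value function v(S): model output when only the features in S keep
    their input values and all remaining features are set to the baseline.
    (Assumed masking scheme for illustration, not the paper's formulation.)"""
    z = baseline.copy()
    idx = list(S)
    z[idx] = x[idx]
    return f(z)

def atom_relevance(f, x, baseline, S):
    """Relevance of the atomic statement 'the features in S are present',
    scored as the gain over the fully masked baseline."""
    return masked_value(f, x, baseline, S) - masked_value(f, x, baseline, set())

def and_relevance(f, x, baseline, A, B):
    """Relevance of the conjunctive query 'A AND B', scored as the
    second-order interaction that neither A nor B accounts for alone."""
    A, B = set(A), set(B)
    return (masked_value(f, x, baseline, A | B)
            - masked_value(f, x, baseline, A)
            - masked_value(f, x, baseline, B)
            + masked_value(f, x, baseline, set()))

# Toy usage: a model with an explicit pairwise interaction term.
f = lambda z: 2.0 * z[0] + 1.0 * z[1] + 3.0 * z[0] * z[1]
x = np.array([1.0, 1.0, 0.5])
baseline = np.zeros_like(x)

print(atom_relevance(f, x, baseline, {0}))      # 2.0: feature 0 on its own
print(and_relevance(f, x, baseline, {0}, {1}))  # 3.0: the '0 AND 1' interaction
```

For this toy model, the atomic query on feature 0 receives relevance 2.0 (its linear coefficient), while the query "0 AND 1" receives 3.0, the coefficient of the interaction term that neither feature explains alone; a perturbation-based instantiation of Symbolic XAI would assign relevance to such logical queries in an analogous, though more general, multi-order fashion.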