Axiomatic Characterisations of Sample-based Explainers
Authors: Leila Amgouda, Martin C. Cooper, Salim Debbaoui
DOI: https://doi.org/arxiv-2408.04903
Published: 2024-08-09, arXiv - CS - Artificial Intelligence
Citations: 0
Abstract
Explaining the decisions of black-box classifiers is both important and computationally challenging. In this paper, we scrutinize explainers that generate feature-based explanations from samples or datasets. We start by presenting a set of desirable properties that explainers would ideally satisfy, delve into their relationships, and highlight incompatibilities among some of them. We identify the entire family of explainers that satisfy two key properties which are compatible with all the others; its instances provide sufficient reasons, called weak abductive explanations. We then unravel its various subfamilies that satisfy subsets of the compatible properties. Indeed, we fully characterize all the explainers that satisfy any subset of compatible properties. In particular, we introduce the first (broad family of) explainers that guarantee both the existence of explanations and their global consistency. We discuss some of its instances, including the irrefutable explainer and the surrogate explainer, whose explanations can be found in polynomial time.