Maximilian Förster, Philipp Hühn, Mathias Klier, Kilian Kluge
{"title":"User-centric explainable AI: design and evaluation of an approach to generate coherent counterfactual explanations for structured data","authors":"Maximilian Förster, Philipp Hühn, Mathias Klier, Kilian Kluge","doi":"10.1080/12460125.2022.2119707","DOIUrl":null,"url":null,"abstract":"ABSTRACT Many Artificial Intelligence (AI) systems are black boxes, which hinders their deployment. Explainable AI (XAI) approaches which automatically generate counterfactual explanations aim to assist users in scrutinising AI decisions. One property of explanations crucial for their acceptance by users is their coherence. Users perceive counterfactual explanations as coherent if they present a realistic/typical counterfactual scenario that is suitable to explain the factual situation. We design an optimisation-based approach to generate coherent counterfactual explanations applicable to structured data. We demonstrate its applicability and rigorously evaluate its efficacy through functionally grounded and human-grounded evaluation. Results suggest that our approach indeed produces counterfactual explanations that are perceived as coherent by users. More specifically, they are perceived as more realistic, typical, and feasible than state-of-the-art explanations.","PeriodicalId":45565,"journal":{"name":"Journal of Decision Systems","volume":" ","pages":""},"PeriodicalIF":2.8000,"publicationDate":"2022-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Decision Systems","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1080/12460125.2022.2119707","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"OPERATIONS RESEARCH & MANAGEMENT SCIENCE","Score":null,"Total":0}
Citations: 2
Abstract
Many Artificial Intelligence (AI) systems are black boxes, which hinders their deployment. Explainable AI (XAI) approaches that automatically generate counterfactual explanations aim to assist users in scrutinising AI decisions. One property of explanations crucial for their acceptance by users is their coherence. Users perceive counterfactual explanations as coherent if they present a realistic, typical counterfactual scenario that is suitable to explain the factual situation. We design an optimisation-based approach to generate coherent counterfactual explanations applicable to structured data. We demonstrate its applicability and rigorously evaluate its efficacy through functionally grounded and human-grounded evaluation. Results suggest that our approach indeed produces counterfactual explanations that users perceive as coherent. More specifically, they are perceived as more realistic, typical, and feasible than state-of-the-art explanations.
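To illustrate the general idea of optimisation-based counterfactual generation that the abstract refers to, the sketch below finds a counterfactual for a toy logistic "black box" by gradient search: it pushes the prediction across the decision boundary while a proximity penalty keeps the counterfactual close to the factual instance. This is a minimal generic illustration, not the paper's actual method; the classifier weights, the objective, and all parameter names are assumptions, and the coherence criteria the paper proposes are not modelled here.

```python
import numpy as np

# Toy black-box classifier: a logistic model over two structured features.
# (Illustrative stand-in; the paper's model and objective are not public here.)
w = np.array([1.5, -2.0])
b = -0.5

def predict_proba(x):
    """Probability of the positive class under the toy model."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

def counterfactual(x, target=0.5, lam=0.1, lr=0.05, steps=500):
    """Gradient search for a counterfactual: raise the prediction past
    `target` while a quadratic penalty (weight `lam`) keeps z near x."""
    z = x.copy()
    for _ in range(steps):
        p = predict_proba(z)
        if p > target:          # decision flipped: stop
            break
        grad_pred = p * (1 - p) * w      # gradient of the logistic output
        grad_prox = 2 * lam * (z - x)    # gradient of the proximity penalty
        z = z + lr * (grad_pred - grad_prox)
    return z

x = np.array([-1.0, 0.5])   # factual instance, predicted negative
cf = counterfactual(x)
print(predict_proba(x) < 0.5, predict_proba(cf) > 0.5)  # → True True
```

Real counterfactual methods add further terms to this objective, e.g. sparsity (change few features) and plausibility with respect to the data distribution, which is where the coherence property studied in the paper comes in.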