{"title":"CIP-ES:解释替代物的因果输入扰动","authors":"Sebastian Steindl, Martin Surner","doi":"10.1145/3590003.3590107","DOIUrl":null,"url":null,"abstract":"With current advances in Machine Learning and its growing use in high-impact scenarios, the demand for interpretable and explainable models becomes crucial. Causality research tries to go beyond statistical correlations by focusing on causal relationships, which is fundamental for Interpretable and Explainable Artificial Intelligence. In this paper, we perturb the input for explanation surrogates based on causal graphs. We present an approach to combine surrogate-based explanations with causal knowledge. We apply the perturbed data to the Local Interpretable Model-agnostic Explanations (LIME) approach to showcase how causal graphs improve explanations of surrogate models. We thus integrate features from both domains by adding a causal component to local explanations. The proposed approach enables explanations that suit the expectations of the user by having the user define an appropriate causal graph. Accordingly, these expectations are true to the user. We demonstrate the suitability of our method using real world data.","PeriodicalId":340225,"journal":{"name":"Proceedings of the 2023 2nd Asia Conference on Algorithms, Computing and Machine Learning","volume":"37 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-03-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"CIP-ES: Causal Input Perturbation for Explanation Surrogates\",\"authors\":\"Sebastian Steindl, Martin Surner\",\"doi\":\"10.1145/3590003.3590107\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"With current advances in Machine Learning and its growing use in high-impact scenarios, the demand for interpretable and explainable models becomes crucial. Causality research tries to go beyond statistical correlations by focusing on causal relationships, which is fundamental for Interpretable and Explainable Artificial Intelligence. In this paper, we perturb the input for explanation surrogates based on causal graphs. We present an approach to combine surrogate-based explanations with causal knowledge. We apply the perturbed data to the Local Interpretable Model-agnostic Explanations (LIME) approach to showcase how causal graphs improve explanations of surrogate models. We thus integrate features from both domains by adding a causal component to local explanations. The proposed approach enables explanations that suit the expectations of the user by having the user define an appropriate causal graph. Accordingly, these expectations are true to the user. 
We demonstrate the suitability of our method using real world data.\",\"PeriodicalId\":340225,\"journal\":{\"name\":\"Proceedings of the 2023 2nd Asia Conference on Algorithms, Computing and Machine Learning\",\"volume\":\"37 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-03-17\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 2023 2nd Asia Conference on Algorithms, Computing and Machine Learning\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3590003.3590107\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 2023 2nd Asia Conference on Algorithms, Computing and Machine Learning","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3590003.3590107","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
CIP-ES: Causal Input Perturbation for Explanation Surrogates
With recent advances in Machine Learning and its growing use in high-impact scenarios, the demand for interpretable and explainable models has become crucial. Causality research aims to go beyond statistical correlations by focusing on causal relationships, which is fundamental for Interpretable and Explainable Artificial Intelligence. In this paper, we perturb the input for explanation surrogates based on causal graphs, presenting an approach that combines surrogate-based explanations with causal knowledge. We apply the perturbed data to the Local Interpretable Model-agnostic Explanations (LIME) approach to show how causal graphs improve the explanations produced by surrogate models, thereby integrating the two domains by adding a causal component to local explanations. The proposed approach lets the user define an appropriate causal graph, so that the resulting explanations match that user's expectations. We demonstrate the suitability of our method on real-world data.
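The abstract only outlines the pipeline at a high level. The following is a minimal sketch of the general idea, not the authors' implementation: instead of perturbing each feature independently (as standard LIME does), perturbations are propagated along a user-defined causal graph before the local surrogate is fitted. The graph, the structural equation X1 = 2*X0 + 1, and all function names here are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Hypothetical causal graph: parent index -> list of (child index, mechanism).
# The single edge X0 -> X1 and its linear mechanism are assumptions for this toy.
CAUSAL_CHILDREN = {
    0: [(1, lambda x0: 2.0 * x0 + 1.0)],
}

def causal_perturb(x, n_samples, scale, rng):
    # Draw LIME-style Gaussian perturbations around the instance x, ...
    X = x + rng.normal(0.0, scale, size=(n_samples, x.shape[0]))
    # ... then overwrite each causal child so it stays consistent with its
    # perturbed parent instead of varying independently.
    for parent, children in CAUSAL_CHILDREN.items():
        for child, mechanism in children:
            X[:, child] = mechanism(X[:, parent])
    return X

def explain_locally(predict_fn, x, n_samples=1000, scale=0.5,
                    kernel_width=0.75, seed=0):
    # LIME recipe: query the black box on perturbed samples and fit a
    # proximity-weighted linear surrogate; its coefficients are the explanation.
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    X = causal_perturb(x, n_samples, scale, rng)
    y = predict_fn(X)
    weights = np.exp(-((X - x) ** 2).sum(axis=1) / kernel_width ** 2)
    surrogate = Ridge(alpha=1.0).fit(X, y, sample_weight=weights)
    return surrogate.coef_

if __name__ == "__main__":
    # Toy black box whose output depends on X1, a causal child of X0.
    black_box = lambda X: 3.0 * X[:, 1] - X[:, 2]
    print(explain_locally(black_box, [1.0, 3.0, 0.5]))
```

Note the effect of the causal component in this sketch: because X1 is regenerated from X0 in every perturbed sample, the surrogate attributes part of X1's influence to X0, so the explanation reflects effects flowing through the graph rather than treating the two features as independent.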