Carl O. Retzlaff , Alessa Angerschmid , Anna Saranti , David Schneeberger , Richard Röttger , Heimo Müller , Andreas Holzinger
{"title":"事后解释与事前解释:数据科学家的 xAI 设计指南","authors":"Carl O. Retzlaff , Alessa Angerschmid , Anna Saranti , David Schneeberger , Richard Röttger , Heimo Müller , Andreas Holzinger","doi":"10.1016/j.cogsys.2024.101243","DOIUrl":null,"url":null,"abstract":"<div><p>The growing field of explainable Artificial Intelligence (xAI) has given rise to a multitude of techniques and methodologies, yet this expansion has created a growing gap between existing xAI approaches and their practical application. This poses a considerable obstacle for data scientists striving to identify the optimal xAI technique for their needs. To address this problem, our study presents a customized decision support framework to aid data scientists in choosing a suitable xAI approach for their use-case. Drawing from a literature survey and insights from interviews with five experienced data scientists, we introduce a decision tree based on the trade-offs inherent in various xAI approaches, guiding the selection between six commonly used xAI tools. Our work critically examines six prevalent ante-hoc and post-hoc xAI methods, assessing their applicability in real-world contexts through expert interviews. The aim is to equip data scientists and policymakers with the capacity to select xAI methods that not only demystify the decision-making process, but also enrich user understanding and interpretation, ultimately advancing the application of xAI in practical settings.</p></div>","PeriodicalId":55242,"journal":{"name":"Cognitive Systems Research","volume":"86 ","pages":"Article 101243"},"PeriodicalIF":2.1000,"publicationDate":"2024-05-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Post-hoc vs ante-hoc explanations: xAI design guidelines for data scientists\",\"authors\":\"Carl O. 
Retzlaff , Alessa Angerschmid , Anna Saranti , David Schneeberger , Richard Röttger , Heimo Müller , Andreas Holzinger\",\"doi\":\"10.1016/j.cogsys.2024.101243\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>The growing field of explainable Artificial Intelligence (xAI) has given rise to a multitude of techniques and methodologies, yet this expansion has created a growing gap between existing xAI approaches and their practical application. This poses a considerable obstacle for data scientists striving to identify the optimal xAI technique for their needs. To address this problem, our study presents a customized decision support framework to aid data scientists in choosing a suitable xAI approach for their use-case. Drawing from a literature survey and insights from interviews with five experienced data scientists, we introduce a decision tree based on the trade-offs inherent in various xAI approaches, guiding the selection between six commonly used xAI tools. Our work critically examines six prevalent ante-hoc and post-hoc xAI methods, assessing their applicability in real-world contexts through expert interviews. 
The aim is to equip data scientists and policymakers with the capacity to select xAI methods that not only demystify the decision-making process, but also enrich user understanding and interpretation, ultimately advancing the application of xAI in practical settings.</p></div>\",\"PeriodicalId\":55242,\"journal\":{\"name\":\"Cognitive Systems Research\",\"volume\":\"86 \",\"pages\":\"Article 101243\"},\"PeriodicalIF\":2.1000,\"publicationDate\":\"2024-05-06\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Cognitive Systems Research\",\"FirstCategoryId\":\"102\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S1389041724000378\",\"RegionNum\":3,\"RegionCategory\":\"心理学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Cognitive Systems Research","FirstCategoryId":"102","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1389041724000378","RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Post-hoc vs ante-hoc explanations: xAI design guidelines for data scientists
The growing field of explainable Artificial Intelligence (xAI) has given rise to a multitude of techniques and methodologies, yet this expansion has created a growing gap between existing xAI approaches and their practical application. This poses a considerable obstacle for data scientists striving to identify the optimal xAI technique for their needs. To address this problem, our study presents a customized decision support framework to aid data scientists in choosing a suitable xAI approach for their use-case. Drawing from a literature survey and insights from interviews with five experienced data scientists, we introduce a decision tree based on the trade-offs inherent in various xAI approaches, guiding the selection between six commonly used xAI tools. Our work critically examines six prevalent ante-hoc and post-hoc xAI methods, assessing their applicability in real-world contexts through expert interviews. The aim is to equip data scientists and policymakers with the capacity to select xAI methods that not only demystify the decision-making process, but also enrich user understanding and interpretation, ultimately advancing the application of xAI in practical settings.
Journal introduction:
Cognitive Systems Research is dedicated to the study of human-level cognition. As such, it welcomes papers which advance the understanding, design and applications of cognitive and intelligent systems, both natural and artificial.
The journal brings together a broad community studying cognition in its many facets in vivo and in silico, across the developmental spectrum, focusing on individual capacities or on entire architectures. It aims to foster debate and integrate ideas, concepts, constructs, theories, models and techniques from across different disciplines and different perspectives on human-level cognition. The scope of interest includes the study of cognitive capacities and architectures - both brain-inspired and non-brain-inspired - and the application of cognitive systems to real-world problems as far as it offers insights relevant for the understanding of cognition.
Cognitive Systems Research therefore welcomes mature and cutting-edge research approaching cognition from a systems-oriented perspective, both theoretical and empirically-informed, in the form of original manuscripts, short communications, opinion articles, systematic reviews, and topical survey articles from the fields of Cognitive Science (including Philosophy of Cognitive Science), Artificial Intelligence/Computer Science, Cognitive Robotics, Developmental Science, Psychology, and Neuroscience and Neuromorphic Engineering. Empirical studies will be considered if they are supplemented by theoretical analyses and contributions to theory development and/or computational modelling studies.