Post-hoc vs ante-hoc explanations: xAI design guidelines for data scientists

Cognitive Systems Research · IF 2.1 · CAS Tier 3 (Psychology) · JCR Q3 (Computer Science, Artificial Intelligence) · Pub Date: 2024-05-06 · DOI: 10.1016/j.cogsys.2024.101243
Carl O. Retzlaff, Alessa Angerschmid, Anna Saranti, David Schneeberger, Richard Röttger, Heimo Müller, Andreas Holzinger
Citations: 0

Abstract


The growing field of explainable Artificial Intelligence (xAI) has given rise to a multitude of techniques and methodologies, yet this expansion has created a growing gap between existing xAI approaches and their practical application. This poses a considerable obstacle for data scientists striving to identify the optimal xAI technique for their needs. To address this problem, our study presents a customized decision support framework to aid data scientists in choosing a suitable xAI approach for their use-case. Drawing from a literature survey and insights from interviews with five experienced data scientists, we introduce a decision tree based on the trade-offs inherent in various xAI approaches, guiding the selection between six commonly used xAI tools. Our work critically examines six prevalent ante-hoc and post-hoc xAI methods, assessing their applicability in real-world contexts through expert interviews. The aim is to equip data scientists and policymakers with the capacity to select xAI methods that not only demystify the decision-making process, but also enrich user understanding and interpretation, ultimately advancing the application of xAI in practical settings.
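The ante-hoc/post-hoc distinction at the heart of the paper can be made concrete with a small sketch. The following example uses scikit-learn rather than any of the six tools the authors evaluate (an assumption for illustration only): an ante-hoc explanation comes from a model that is interpretable by construction, while a post-hoc explanation is computed after the fact for an opaque model.

```python
# Illustrative sketch (not the paper's own tooling): ante-hoc vs
# post-hoc explanations, using scikit-learn on the iris dataset.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)

# Ante-hoc: the model itself is the explanation. A shallow decision
# tree is interpretable by design; its fitted split rules can be
# printed directly as a human-readable explanation.
glass_box = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
rules = export_text(glass_box)

# Post-hoc: a black-box model is explained after training by an
# external method, here permutation feature importance, which
# measures the score drop when each feature is shuffled.
black_box = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
result = permutation_importance(black_box, X, y, n_repeats=5, random_state=0)
importances = result.importances_mean  # one value per input feature
```

The trade-off the paper's decision tree navigates is visible even here: the ante-hoc model is faithful but constrained in capacity, while the post-hoc importances apply to any estimator but only approximate its behaviour.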

Source journal: Cognitive Systems Research (Engineering & Technology / Computer Science: Artificial Intelligence)
CiteScore: 9.40 · Self-citation rate: 5.10% · Articles per year: 40 · Review time: >12 weeks
Journal overview: Cognitive Systems Research is dedicated to the study of human-level cognition. As such, it welcomes papers which advance the understanding, design and applications of cognitive and intelligent systems, both natural and artificial. The journal brings together a broad community studying cognition in its many facets in vivo and in silico, across the developmental spectrum, focusing on individual capacities or on entire architectures. It aims to foster debate and integrate ideas, concepts, constructs, theories, models and techniques from across different disciplines and different perspectives on human-level cognition. The scope of interest includes the study of cognitive capacities and architectures - both brain-inspired and non-brain-inspired - and the application of cognitive systems to real-world problems as far as it offers insights relevant for the understanding of cognition. Cognitive Systems Research therefore welcomes mature and cutting-edge research approaching cognition from a systems-oriented perspective, both theoretical and empirically-informed, in the form of original manuscripts, short communications, opinion articles, systematic reviews, and topical survey articles from the fields of Cognitive Science (including Philosophy of Cognitive Science), Artificial Intelligence/Computer Science, Cognitive Robotics, Developmental Science, Psychology, and Neuroscience and Neuromorphic Engineering. Empirical studies will be considered if they are supplemented by theoretical analyses and contributions to theory development and/or computational modelling studies.
Latest articles from this journal:
A mathematical formulation of learner cognition for personalised learning experiences
Identification of the emotional component of inner pronunciation: EEG-ERP study
Towards emotion-aware intelligent agents by utilizing knowledge graphs of experiences
Exploring the impact of virtual reality flight simulations on EEG neural patterns and task performance