Measuring the Quality of Explanations: The System Causability Scale (SCS): Comparing Human and Machine Explanations.

Kunstliche Intelligenz · IF 2.8 · Q2 (Computer Science, Artificial Intelligence) · Pub Date: 2020-01-01 · Epub Date: 2020-01-21 · DOI: 10.1007/s13218-020-00636-z
Andreas Holzinger, André Carrington, Heimo Müller
{"title":"测量解释的质量:系统因果性量表(SCS):比较人类和机器的解释。","authors":"Andreas Holzinger,&nbsp;André Carrington,&nbsp;Heimo Müller","doi":"10.1007/s13218-020-00636-z","DOIUrl":null,"url":null,"abstract":"<p><p>Recent success in Artificial Intelligence (AI) and Machine Learning (ML) allow problem solving automatically without any human intervention. Autonomous approaches can be very convenient. However, in certain domains, e.g., in the medical domain, it is necessary to enable a domain expert to understand, <i>why</i> an algorithm came up with a certain result. Consequently, the field of Explainable AI (xAI) rapidly gained interest worldwide in various domains, particularly in medicine. Explainable AI studies transparency and traceability of opaque AI/ML and there are already a huge variety of methods. For example with layer-wise relevance propagation relevant parts of inputs to, and representations in, a neural network which caused a result, can be highlighted. This is a first important step to ensure that end users, e.g., medical professionals, assume responsibility for decision making with AI/ML and of interest to professionals and regulators. Interactive ML adds the component of human expertise to AI/ML processes by enabling them to re-enact and retrace AI/ML results, e.g. let them check it for plausibility. This requires new human-AI interfaces for explainable AI. In order to build effective and efficient interactive human-AI interfaces we have to deal with the question of <i>how to evaluate the quality of explanations</i> given by an explainable AI system. In this paper we introduce our System Causability Scale to measure the quality of explanations. It is based on our notion of Causability (Holzinger et al. in Wiley Interdiscip Rev Data Min Knowl Discov 9(4), 2019) combined with concepts adapted from a widely-accepted usability scale.</p>","PeriodicalId":45413,"journal":{"name":"Kunstliche Intelligenz","volume":"34 2","pages":"193-198"},"PeriodicalIF":2.8000,"publicationDate":"2020-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1007/s13218-020-00636-z","citationCount":"218","resultStr":"{\"title\":\"Measuring the Quality of Explanations: The System Causability Scale (SCS): Comparing Human and Machine Explanations.\",\"authors\":\"Andreas Holzinger,&nbsp;André Carrington,&nbsp;Heimo Müller\",\"doi\":\"10.1007/s13218-020-00636-z\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>Recent success in Artificial Intelligence (AI) and Machine Learning (ML) allow problem solving automatically without any human intervention. Autonomous approaches can be very convenient. However, in certain domains, e.g., in the medical domain, it is necessary to enable a domain expert to understand, <i>why</i> an algorithm came up with a certain result. Consequently, the field of Explainable AI (xAI) rapidly gained interest worldwide in various domains, particularly in medicine. Explainable AI studies transparency and traceability of opaque AI/ML and there are already a huge variety of methods. For example with layer-wise relevance propagation relevant parts of inputs to, and representations in, a neural network which caused a result, can be highlighted. This is a first important step to ensure that end users, e.g., medical professionals, assume responsibility for decision making with AI/ML and of interest to professionals and regulators. 
Interactive ML adds the component of human expertise to AI/ML processes by enabling them to re-enact and retrace AI/ML results, e.g. let them check it for plausibility. This requires new human-AI interfaces for explainable AI. In order to build effective and efficient interactive human-AI interfaces we have to deal with the question of <i>how to evaluate the quality of explanations</i> given by an explainable AI system. In this paper we introduce our System Causability Scale to measure the quality of explanations. It is based on our notion of Causability (Holzinger et al. in Wiley Interdiscip Rev Data Min Knowl Discov 9(4), 2019) combined with concepts adapted from a widely-accepted usability scale.</p>\",\"PeriodicalId\":45413,\"journal\":{\"name\":\"Kunstliche Intelligenz\",\"volume\":\"34 2\",\"pages\":\"193-198\"},\"PeriodicalIF\":2.8000,\"publicationDate\":\"2020-01-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://sci-hub-pdf.com/10.1007/s13218-020-00636-z\",\"citationCount\":\"218\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Kunstliche Intelligenz\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1007/s13218-020-00636-z\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"2020/1/21 0:00:00\",\"PubModel\":\"Epub\",\"JCR\":\"Q2\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Kunstliche Intelligenz","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1007/s13218-020-00636-z","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2020/1/21 0:00:00","PubModel":"Epub","JCR":"Q2","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 218

Abstract

Recent successes in Artificial Intelligence (AI) and Machine Learning (ML) allow problems to be solved automatically, without any human intervention. Autonomous approaches can be very convenient. However, in certain domains, e.g., the medical domain, it is necessary to enable a domain expert to understand why an algorithm came up with a certain result. Consequently, the field of Explainable AI (xAI) has rapidly gained interest worldwide in various domains, particularly in medicine. Explainable AI studies the transparency and traceability of opaque AI/ML, and a huge variety of methods already exists. For example, with layer-wise relevance propagation, the parts of the inputs to, and representations in, a neural network that caused a result can be highlighted. This is a first important step towards ensuring that end users, e.g., medical professionals, assume responsibility for decision making with AI/ML, and it is of interest to professionals and regulators. Interactive ML adds the component of human expertise to AI/ML processes by enabling experts to re-enact and retrace AI/ML results, e.g., to check them for plausibility. This requires new human-AI interfaces for explainable AI. In order to build effective and efficient interactive human-AI interfaces, we have to deal with the question of how to evaluate the quality of explanations given by an explainable AI system. In this paper we introduce our System Causability Scale (SCS) to measure the quality of explanations. It is based on our notion of causability (Holzinger et al., Wiley Interdiscip Rev Data Min Knowl Discov 9(4), 2019) combined with concepts adapted from a widely accepted usability scale.
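For readers who want a concrete picture of how such a questionnaire yields a single number, the following is a minimal sketch in Python. It assumes an SCS-style instrument with ten items rated on a five-point Likert scale (mirroring the System Usability Scale it adapts) and a simple normalization of the rating sum to the range [0, 1]. The item texts and the helper function `scs_score` are illustrative paraphrases and assumptions, not the exact published wording or official scoring code; consult the paper for the authoritative items and procedure.

```python
# Minimal sketch: scoring a ten-item, five-point Likert questionnaire in the
# style of the System Causability Scale (SCS). The item wording below is a
# paraphrased placeholder, and the normalization (sum of ratings divided by
# the maximum attainable sum) is an assumed SUS-like convention.

from typing import Sequence

SCS_ITEMS = [  # hypothetical paraphrases, not the published SCS items
    "The explanation included all relevant factors.",
    "I understood the explanation within the context of my work.",
    "I could change the level of detail on demand.",
    "I did not need support to understand the explanation.",
    "The explanation helped me understand the causality involved.",
    "I was able to use the explanation together with my domain knowledge.",
    "The explanation was consistent with what I already know.",
    "The explanation was given in a timely and efficient manner.",
    "I found no inconsistencies between the explanation and my knowledge.",
    "I think most people would learn to understand the explanation quickly.",
]

def scs_score(ratings: Sequence[int], max_rating: int = 5) -> float:
    """Return a normalized score in [0, 1] from per-item Likert ratings."""
    if len(ratings) != len(SCS_ITEMS):
        raise ValueError(f"expected {len(SCS_ITEMS)} ratings, got {len(ratings)}")
    if any(r < 1 or r > max_rating for r in ratings):
        raise ValueError(f"ratings must lie in 1..{max_rating}")
    return sum(ratings) / (max_rating * len(ratings))

if __name__ == "__main__":
    example = [5, 4, 3, 4, 5, 4, 4, 3, 4, 5]  # hypothetical answers from one expert
    print(f"SCS = {scs_score(example):.2f}")  # prints SCS = 0.82
```

With the hypothetical ratings in the example, the score works out to 41/50 = 0.82; under this reading, higher values indicate explanations that better support the expert's causal understanding.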

Source journal: Kunstliche Intelligenz (Computer Science, Artificial Intelligence)
CiteScore: 8.60
Self-citation rate: 3.40%
Articles published: 32
Journal description: Artificial Intelligence has successfully established itself as a scientific discipline in research and education and has become an integral part of Computer Science with an interdisciplinary character. AI deals both with the development of information processing systems that deliver "intelligent" services and with the modeling of human cognitive skills with the help of information processing systems. Research, development and applications in the field of AI pursue the general goal of creating processes for taking in and processing information that more closely resemble human problem-solving behavior, and of subsequently using those processes to derive methods that enhance and qualitatively improve conventional information processing systems. KI – Künstliche Intelligenz is the official journal of the division for artificial intelligence within the "Gesellschaft für Informatik e.V." (GI) – the German Informatics Society – with contributions from the entire field of artificial intelligence. The journal presents fundamentals and tools, their use and adaptation for scientific purposes, and applications that are implemented using AI methods, and thus provides readers with the latest developments in and well-founded background information on all relevant aspects of artificial intelligence. A highly reputed team of editors from both academia and industry ensures the scientific quality of the articles. The journal provides all members of the AI community with quick access to current topics in the field, while also promoting vital interdisciplinary interchange; it also serves as a medium of communication between the members of the division and the parent society. The journal is published in English. Content published in this journal is peer reviewed (double blind).
Latest articles from this journal
In Search of Basement Indicators from Street View Imagery Data: An Investigation of Data Sources and Analysis Strategies.
Some Thoughts on AI Stimulated by Michael Wooldridge's Book "The Road to Conscious Machines. The Story of AI".
A Framework for Learning Event Sequences and Explaining Detected Anomalies in a Smart Home Environment.
Generating Explanations for Conceptual Validation of Graph Neural Networks: An Investigation of Symbolic Predicates Learned on Relevance-Ranked Sub-Graphs.
News.