What Information is Required for Explainable AI? : A Provenance-based Research Agenda and Future Challenges

Fariha Tasmin Jaigirdar, C. Rudolph, G. Oliver, David Watts, Chris Bain
{"title":"What Information is Required for Explainable AI? : A Provenance-based Research Agenda and Future Challenges","authors":"Fariha Tasmin Jaigirdar, C. Rudolph, G. Oliver, David Watts, Chris Bain","doi":"10.1109/CIC50333.2020.00030","DOIUrl":null,"url":null,"abstract":"Deriving explanations of an Artificial Intelligence-based system's decision making is becoming increasingly essential to address requirements that meet quality standards and operate in a transparent, comprehensive, understandable, and explainable manner. Furthermore, more security issues as well as concerns from human perspectives emerge in describing the explainability properties of AI. A full system view is required to enable humans to properly estimate risks when dealing with such systems. This paper introduces open issues in this research area to present the overall picture of explainability and the required information needed for the explanation to make a decision-oriented AI system transparent to humans. It illustrates the potential contribution of proper provenance data to AI-based systems by describing a provenance graph-based design. This paper proposes a six-Ws framework to demonstrate how a security-aware provenance graph-based design can build the basis for providing end-users with sufficient meta-information on AI-based decision systems. An example scenario is then presented that highlights the required information for better explainability both from human and security-aware aspects. Finally, associated challenges are discussed to provoke further research and commentary.","PeriodicalId":265435,"journal":{"name":"2020 IEEE 6th International Conference on Collaboration and Internet Computing (CIC)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 IEEE 6th International Conference on Collaboration and Internet Computing (CIC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CIC50333.2020.00030","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 3

Abstract

Deriving explanations of an Artificial Intelligence-based system's decision making is becoming increasingly essential to meet quality standards and to operate in a transparent, comprehensive, understandable, and explainable manner. Furthermore, additional security issues, as well as concerns from human perspectives, emerge when describing the explainability properties of AI. A full system view is required to enable humans to properly estimate risks when dealing with such systems. This paper introduces open issues in this research area to present the overall picture of explainability and the information required for an explanation that makes a decision-oriented AI system transparent to humans. It illustrates the potential contribution of proper provenance data to AI-based systems by describing a provenance graph-based design. The paper proposes a six-Ws framework to demonstrate how a security-aware, provenance graph-based design can form the basis for providing end-users with sufficient meta-information on AI-based decision systems. An example scenario is then presented that highlights the information required for better explainability from both human and security-aware perspectives. Finally, associated challenges are discussed to provoke further research and commentary.
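The abstract only summarises the six-Ws framework and the provenance graph-based design; the paper itself gives the details. As a rough illustration of the general idea, the sketch below shows how each step of an AI decision pipeline could be recorded as a provenance node annotated with who/what/when/where/why/how metadata, and how an explanation could be assembled by walking the graph backwards from a decision. All class names, field names, and example values here are hypothetical assumptions for illustration, not the authors' actual schema or implementation.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Dict, List


@dataclass
class SixWs:
    """Hypothetical six-Ws annotation attached to one provenance node."""
    who: str    # agent responsible (person, organisation, or service)
    what: str   # action performed or artefact produced at this step
    when: str   # timestamp of the activity
    where: str  # system or location where it happened
    why: str    # purpose or justification for the step
    how: str    # method, model, or tool used


@dataclass
class ProvenanceNode:
    """One step in the decision pipeline, annotated for explainability."""
    node_id: str
    annotation: SixWs
    inputs: List[str] = field(default_factory=list)  # ids of nodes this step used


@dataclass
class ProvenanceGraph:
    """Minimal provenance graph: nodes plus 'used' edges given by node inputs."""
    nodes: Dict[str, ProvenanceNode] = field(default_factory=dict)

    def add(self, node: ProvenanceNode) -> None:
        self.nodes[node.node_id] = node

    def explain(self, node_id: str) -> List[SixWs]:
        """Walk backwards from a decision node and collect the six-Ws
        annotations of every step that contributed to it."""
        seen, stack, trail = set(), [node_id], []
        while stack:
            current = stack.pop()
            if current in seen or current not in self.nodes:
                continue
            seen.add(current)
            node = self.nodes[current]
            trail.append(node.annotation)
            stack.extend(node.inputs)
        return trail


# Toy example: a two-step decision pipeline (all values illustrative).
graph = ProvenanceGraph()
graph.add(ProvenanceNode(
    "sensor-reading",
    SixWs(who="ward sensor #12", what="heart-rate sample",
          when=datetime.now(timezone.utc).isoformat(),
          where="hospital ward network", why="continuous monitoring",
          how="bedside IoT sensor"),
))
graph.add(ProvenanceNode(
    "risk-score",
    SixWs(who="ML inference service", what="patient risk score",
          when=datetime.now(timezone.utc).isoformat(),
          where="hospital data centre", why="support clinician decision",
          how="gradient-boosted model v2.3"),
    inputs=["sensor-reading"],
))
for ws in graph.explain("risk-score"):
    print(ws)
```

In this toy model, the explanation presented to an end-user is simply the chain of six-Ws records behind a decision; a security-aware version, as the paper argues, would additionally need integrity protection and security metadata on those records so that the user can judge whether the provenance itself can be trusted.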