What Information is Required for Explainable AI?: A Provenance-based Research Agenda and Future Challenges
Fariha Tasmin Jaigirdar, C. Rudolph, G. Oliver, David Watts, Chris Bain
2020 IEEE 6th International Conference on Collaboration and Internet Computing (CIC), December 2020
DOI: 10.1109/CIC50333.2020.00030
Citations: 3
Abstract
Deriving explanations of an Artificial Intelligence-based system's decision making is becoming increasingly essential to meet quality standards and to operate in a transparent, comprehensive, understandable, and explainable manner. Describing the explainability properties of AI also raises security issues and concerns from human perspectives. A full system view is required for humans to properly estimate the risks of dealing with such systems. This paper introduces open issues in this research area to present the overall picture of explainability and the information an explanation needs in order to make a decision-oriented AI system transparent to humans. It illustrates the potential contribution of proper provenance data to AI-based systems by describing a provenance graph-based design, and proposes a six-Ws framework to demonstrate how a security-aware, provenance graph-based design can form the basis for providing end-users with sufficient meta-information about AI-based decision systems. An example scenario then highlights the information required for better explainability from both human and security-aware perspectives. Finally, associated challenges are discussed to provoke further research and commentary.
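The abstract does not spell out the paper's concrete design, but the idea of attaching six-Ws meta-information (who, what, when, where, why, how) to nodes of a provenance graph can be sketched with a small, hypothetical data model. The class names, fields, and the example record below are illustrative assumptions rather than the authors' implementation; a security-aware design would likely build on a standard such as W3C PROV and add cryptographic attestations.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class SixWs:
    """Hypothetical six-Ws annotation for one provenance step (illustrative only)."""
    who: str    # agent responsible for the step (person, organisation, or service)
    what: str   # activity performed or data produced
    when: str   # timestamp of the activity
    where: str  # system or location where the step ran
    why: str    # purpose or justification for the step
    how: str    # method, model, or parameters used

@dataclass
class ProvenanceNode:
    """One node in a simple provenance graph, annotated with six-Ws metadata."""
    node_id: str
    six_ws: SixWs
    inputs: list = field(default_factory=list)   # ids of nodes this step consumed
    signature: Optional[str] = None              # placeholder for a security attestation

# Illustrative record for a single model-inference step in an AI decision pipeline.
inference_step = ProvenanceNode(
    node_id="step-42",
    six_ws=SixWs(
        who="diagnostic-service v1.3 (operated by Hospital X)",
        what="risk score produced for patient record P-007",
        when=datetime.now(timezone.utc).isoformat(),
        where="on-premise inference cluster",
        why="clinical decision support requested by attending physician",
        how="gradient-boosted model m-2020-09, decision threshold 0.7",
    ),
    inputs=["step-41"],   # the preprocessing step that supplied the features
    signature=None,       # a security-aware design would attach a verifiable attestation here
)

print(inference_step.six_ws.who, "->", inference_step.six_ws.what)
```

An end-user-facing explanation could then be assembled by walking the graph backwards from a decision node and rendering each node's six-Ws fields, giving both the human-oriented context (who and why) and the security-relevant detail (where and how) that the abstract argues are needed.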