Should AI Systems in Nuclear Facilities Explain Decisions the Way Humans Do? An Interview Study

Hazel M. Taylor, C. Jay, B. Lennox, A. Cangelosi, Louise Dennis

2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), published 2022-08-29. DOI: 10.1109/RO-MAN53752.2022.9900852
Citations: 2
Abstract
There is growing interest in the use of robotics and AI in the nuclear industry; however, it is important to ensure these systems are ethically grounded, trustworthy and safe. An emerging technique for addressing these concerns is explainability. In this paper we present the results of an interview study with nuclear industry experts exploring the use of explainable intelligent systems within the field. We interviewed 16 participants with varying backgrounds of expertise and presented two potential use cases for evaluation: a navigation scenario and a task scheduling scenario. Through an inductive thematic analysis we identified the aspects of a deployment that experts want explainable systems to communicate, and we outline how these map onto the folk conceptual theory of explanation, a framework describing how people explain behaviours. We established that, during decision making in nuclear deployments, an intelligent system should explain its reasons for an action, its expectations of itself, changes in the environment that impact decision making, probabilities and the elements within them, safety implications and mitigation strategies, and robot health and component failures. We determine that these factors could be explained with cause, reason, and enabling factor explanations.