Should AI Systems in Nuclear Facilities Explain Decisions the Way Humans Do? An Interview Study

Hazel M. Taylor, C. Jay, B. Lennox, A. Cangelosi, Louise Dennis
{"title":"Should AI Systems in Nuclear Facilities Explain Decisions the Way Humans Do? An Interview Study","authors":"Hazel M. Taylor, C. Jay, B. Lennox, A. Cangelosi, Louise Dennis","doi":"10.1109/RO-MAN53752.2022.9900852","DOIUrl":null,"url":null,"abstract":"There is a growing interest in the use of robotics and AI in the nuclear industry, however it is important to ensure these systems are ethically grounded, trustworthy and safe. An emerging technique to address these concerns is the use of explainability. In this paper we present the results of an interview study with nuclear industry experts to explore the use of explainable intelligent systems within the field. We interviewed 16 participants with varying backgrounds of expertise, and presented two potential use cases for evaluation; a navigation scenario and a task scheduling scenario. Through an inductive thematic analysis we identified the aspects of a deployment that experts want to know from explainable systems and we outline how these associate with the folk conceptual theory of explanation, a framework in which people explain behaviours. We established that an intelligent system should explain its reasons for an action, its expectations of itself, changes in the environment that impact decision making, probabilities and the elements within them, safety implications and mitigation strategies, robot health and component failures during decision making in nuclear deployments. We determine that these factors could be explained with cause, reason, and enabling factor explanations.","PeriodicalId":250997,"journal":{"name":"2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"88 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/RO-MAN53752.2022.9900852","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 2

Abstract

There is growing interest in the use of robotics and AI in the nuclear industry; however, it is important to ensure these systems are ethically grounded, trustworthy, and safe. An emerging technique for addressing these concerns is explainability. In this paper we present the results of an interview study with nuclear industry experts exploring the use of explainable intelligent systems within the field. We interviewed 16 participants with varying backgrounds of expertise and presented two potential use cases for evaluation: a navigation scenario and a task scheduling scenario. Through an inductive thematic analysis we identified the aspects of a deployment that experts want explainable systems to communicate, and we outline how these map onto the folk conceptual theory of explanation, a framework describing how people explain behaviour. We established that, during decision making in nuclear deployments, an intelligent system should explain its reasons for an action, its expectations of itself, changes in the environment that impact decision making, probabilities and the elements within them, safety implications and mitigation strategies, robot health, and component failures. We conclude that these factors could be explained with cause, reason, and enabling factor explanations.
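As a concrete illustration of the taxonomy the abstract describes, the sketch below shows one way a robot's decision record might be tagged with the three explanation types from the folk conceptual theory. This is not from the paper: every class name, field, and example string is an assumption made for illustration only.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List


class ExplanationType(Enum):
    """The three explanation modes from the folk conceptual theory of explanation."""
    CAUSE = "cause"                      # a mechanistic cause of an outcome
    REASON = "reason"                    # the agent's deliberate rationale
    ENABLING_FACTOR = "enabling_factor"  # a condition that made the action possible


@dataclass
class Explanation:
    type: ExplanationType
    text: str


@dataclass
class DecisionRecord:
    """One decision taken during a nuclear deployment, covering the aspects
    interviewees said an explainable system should report (hypothetical schema)."""
    action: str
    reasons: List[Explanation] = field(default_factory=list)              # why this action
    environment_changes: List[Explanation] = field(default_factory=list)  # changes affecting the decision
    safety_implications: List[Explanation] = field(default_factory=list)  # risks and mitigations
    robot_health: List[Explanation] = field(default_factory=list)         # component status and failures


# Hypothetical usage: a navigation decision during a survey of a facility.
record = DecisionRecord(action="reroute around corridor B")
record.reasons.append(Explanation(
    ExplanationType.REASON,
    "Chose the longer route to keep the accumulated radiation dose below the mission limit."))
record.environment_changes.append(Explanation(
    ExplanationType.CAUSE,
    "An elevated gamma reading at corridor B triggered the reroute."))
record.robot_health.append(Explanation(
    ExplanationType.ENABLING_FACTOR,
    "Battery at 80% made the longer route feasible."))
```

Under this (assumed) scheme, each facet of a decision carries its own explanation type, so a user query such as "why did you reroute?" could be answered with a reason explanation, a cause explanation, or both, matching the distinction the study draws.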