Autonomous Justification for Enabling Explainable Decision Support in Human-Robot Teaming

Matthew B. Luebbers, Aaquib Tabrez, K. Ruvane, Bradley Hayes
{"title":"Autonomous Justification for Enabling Explainable Decision Support in Human-Robot Teaming","authors":"Matthew B. Luebbers, Aaquib Tabrez, K. Ruvane, Bradley Hayes","doi":"10.15607/RSS.2023.XIX.002","DOIUrl":null,"url":null,"abstract":"—Justification is an important facet of policy expla- nation, a process for describing the behavior of an autonomous system. In human-robot collaboration, an autonomous agent can attempt to justify distinctly important decisions by offering explanations as to why those decisions are right or reasonable, leveraging a snapshot of its internal reasoning to do so. Without sufficient insight into a robot’s decision-making process, it becomes challenging for users to trust or comply with those important decisions, especially when they are viewed as confusing or contrary to the user’s expectations (e.g., when decisions change as new information is introduced to the agent’s decision-making process). In this work we characterize the benefits of justification within the context of decision-support during human- robot teaming (i.e., agents giving recommendations to human teammates). We introduce a formal framework using value of information theory to strategically time justifications during periods of misaligned expectations for greater effect. We also characterize four different types of counterfactual justification derived from established explainable AI literature and evaluate them against each other in a human-subjects study involving a collaborative, partially observable search task. Based on our findings, we present takeaways on the effective use of different types of justifications in human-robot teaming scenarios, to improve user compliance and decision-making by strategically influencing human teammate thinking patterns. Finally, we present an augmented reality system incorporating these findings into a real-world decision-support system for human-robot teaming.","PeriodicalId":248720,"journal":{"name":"Robotics: Science and Systems XIX","volume":"48 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Robotics: Science and Systems XIX","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.15607/RSS.2023.XIX.002","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Justification is an important facet of policy explanation, a process for describing the behavior of an autonomous system. In human-robot collaboration, an autonomous agent can attempt to justify distinctly important decisions by offering explanations as to why those decisions are right or reasonable, leveraging a snapshot of its internal reasoning to do so. Without sufficient insight into a robot's decision-making process, it becomes challenging for users to trust or comply with those important decisions, especially when they are viewed as confusing or contrary to the user's expectations (e.g., when decisions change as new information is introduced to the agent's decision-making process). In this work we characterize the benefits of justification within the context of decision support during human-robot teaming (i.e., agents giving recommendations to human teammates). We introduce a formal framework using value of information theory to strategically time justifications during periods of misaligned expectations for greater effect. We also characterize four different types of counterfactual justification derived from established explainable AI literature and evaluate them against each other in a human-subjects study involving a collaborative, partially observable search task. Based on our findings, we present takeaways on the effective use of different types of justifications in human-robot teaming scenarios, to improve user compliance and decision-making by strategically influencing human teammate thinking patterns. Finally, we present an augmented reality system incorporating these findings into a real-world decision-support system for human-robot teaming.
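The abstract does not give implementation details, but the core idea of timing justifications with value of information theory can be illustrated with a minimal sketch: the agent offers a justification only when the teammate's expected choice is misaligned with its own recommendation and the expected utility gained by persuading them outweighs the cost of communicating. Everything below (the `Decision` class, `compliance_gain`, the threshold) is an illustrative assumption for exposition, not the paper's actual framework.

```python
# Toy sketch (not the paper's implementation): a value-of-information style
# trigger for deciding when a justification is worth giving.
# All names and the simple persuasion model are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class Decision:
    name: str
    # Expected task utility of each candidate action under the agent's model.
    expected_utility: dict


def value_of_justification(decision: Decision,
                           human_choice: str,
                           agent_choice: str,
                           compliance_gain: float = 0.9) -> float:
    """Expected utility gained if a justification shifts the teammate from
    their currently preferred action toward the agent's recommendation.

    compliance_gain: assumed probability that the justification persuades
    the teammate (a free parameter in this sketch).
    """
    u = decision.expected_utility
    # Utility if the human keeps acting on their own (misaligned) expectation.
    baseline = u[human_choice]
    # Utility if the justification succeeds, weighted by its assumed success rate.
    persuaded = compliance_gain * u[agent_choice] + (1 - compliance_gain) * baseline
    return persuaded - baseline


def should_justify(decision: Decision,
                   human_choice: str,
                   agent_choice: str,
                   communication_cost: float = 0.05) -> bool:
    # Only interrupt the teammate when expectations are misaligned and the
    # expected benefit of explaining outweighs the cost of communicating.
    if human_choice == agent_choice:
        return False
    return value_of_justification(decision, human_choice, agent_choice) > communication_cost


if __name__ == "__main__":
    search = Decision(
        name="next_region_to_search",
        expected_utility={"region_A": 0.35, "region_B": 0.80},
    )
    # The teammate expects region_A to be searched next; the agent recommends region_B,
    # so the expected gain from justifying exceeds the communication cost.
    print(should_justify(search, human_choice="region_A", agent_choice="region_B"))  # True
```

In this toy model, justification is suppressed when the teammate already agrees or when the utility gap between the two actions is too small to justify the interruption, which captures the abstract's notion of strategically timing justifications to periods of misaligned expectations.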