A Taxonomy of Explanation Types and Need Indicators in Human–Agent Collaborations

International Journal of Social Robotics · Impact Factor: 3.8 · CAS Tier 2 (Computer Science) · JCR Q2 (Robotics) · Pub Date: 2024-06-05 · DOI: 10.1007/s12369-024-01148-8
Lennart Wachowiak, Andrew Coles, Gerard Canal, Oya Celiktutan
{"title":"A Taxonomy of Explanation Types and Need Indicators in Human–Agent Collaborations","authors":"Lennart Wachowiak, Andrew Coles, Gerard Canal, Oya Celiktutan","doi":"10.1007/s12369-024-01148-8","DOIUrl":null,"url":null,"abstract":"<p>In recent years, explanations have become a pressing matter in AI research. This development was caused by the increased use of black-box models and a realization of the importance of trustworthy AI. In particular, explanations are necessary for human–agent interactions to ensure that the user can trust the agent and that collaborations are effective. Human–agent interactions are complex social scenarios involving a user, an autonomous agent, and an environment or task with its own distinct properties. Thus, such interactions require a wide variety of explanations, which are not covered by the methods of a single AI discipline, such as computer vision or natural language processing. In this paper, we map out what types of explanations are important for human–agent interactions, surveying the field via a scoping review. In addition to the typical introspective explanation tackled by explainability researchers, we look at assistive explanations, aiming to support the user with their task. Secondly, we survey what causes the need for an explanation in the first place. We identify a variety of human–agent interaction-specific causes and categorize them by whether they are centered on the agent’s behavior, the user’s mental state, or an external entity. Our overview aims to guide robotics practitioners in designing agents with more comprehensive explanation-related capacities, considering different explanation types and the concrete times when explanations should be given.</p>","PeriodicalId":14361,"journal":{"name":"International Journal of Social Robotics","volume":null,"pages":null},"PeriodicalIF":3.8000,"publicationDate":"2024-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Journal of Social Robotics","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1007/s12369-024-01148-8","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"ROBOTICS","Score":null,"Total":0}
Citations: 0

Abstract

In recent years, explanations have become a pressing matter in AI research. This development was caused by the increased use of black-box models and a realization of the importance of trustworthy AI. In particular, explanations are necessary for human–agent interactions to ensure that the user can trust the agent and that collaborations are effective. Human–agent interactions are complex social scenarios involving a user, an autonomous agent, and an environment or task with its own distinct properties. Thus, such interactions require a wide variety of explanations, which are not covered by the methods of a single AI discipline, such as computer vision or natural language processing. In this paper, we map out what types of explanations are important for human–agent interactions, surveying the field via a scoping review. In addition to the typical introspective explanation tackled by explainability researchers, we look at assistive explanations, aiming to support the user with their task. Secondly, we survey what causes the need for an explanation in the first place. We identify a variety of human–agent interaction-specific causes and categorize them by whether they are centered on the agent’s behavior, the user’s mental state, or an external entity. Our overview aims to guide robotics practitioners in designing agents with more comprehensive explanation-related capacities, considering different explanation types and the concrete times when explanations should be given.
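The two axes the abstract describes lend themselves to a simple data model: the type of explanation (introspective vs. assistive) and the locus of the need indicator (the agent's behavior, the user's mental state, or an external entity). The following Python sketch is purely illustrative and not from the paper; all names, such as `ExplanationType`, `NeedIndicatorLocus`, and `ExplanationEvent`, are hypothetical labels for how a practitioner might encode the taxonomy.

```python
from dataclasses import dataclass
from enum import Enum, auto


class ExplanationType(Enum):
    """Explanation types surveyed in the paper (names are illustrative)."""
    INTROSPECTIVE = auto()  # the agent explains its own reasoning or behavior
    ASSISTIVE = auto()      # the agent explains something to support the user's task


class NeedIndicatorLocus(Enum):
    """Where the trigger for an explanation originates, per the paper's categories."""
    AGENT_BEHAVIOR = auto()     # e.g., the agent fails or acts unexpectedly
    USER_MENTAL_STATE = auto()  # e.g., the user appears confused
    EXTERNAL_ENTITY = auto()    # e.g., a change in the environment or task


@dataclass
class ExplanationEvent:
    """A hypothetical record pairing a detected need with a chosen explanation."""
    trigger: NeedIndicatorLocus
    explanation: ExplanationType
    content: str


# Example: the user looks confused after the robot re-plans, so the agent
# gives an introspective explanation of why it changed course.
event = ExplanationEvent(
    trigger=NeedIndicatorLocus.USER_MENTAL_STATE,
    explanation=ExplanationType.INTROSPECTIVE,
    content="I took the longer corridor because the short one is blocked.",
)
print(event)
```

Such a structure would let an agent log when and why explanations were given, which is the kind of design question the survey is meant to inform.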


Source Journal: International Journal of Social Robotics
CiteScore: 9.80 · Self-citation rate: 8.50% · Articles published: 95
Journal Description

Social Robotics is the study of robots that are able to interact and communicate among themselves, with humans, and with the environment, within the social and cultural structures attached to their roles. The journal covers a broad spectrum of topics related to the latest technologies, new research results, and developments in the area of social robotics at all levels, from advances in core enabling technologies to system integration, aesthetic design, applications, and social implications. It provides a platform for like-minded researchers to present their findings and latest developments in social robotics, covering relevant advances in engineering, computing, the arts, and the social sciences. The journal publishes original, peer-reviewed articles and contributions by leading researchers and developers on innovative ideas and concepts, new discoveries and improvements, and novel applications, spanning fundamental advances in the core technologies that form the backbone of social robotics, distinguished development projects in the area, and seminal works in aesthetic design, ethics and philosophy, and studies of the social impact and influence of social robotics.
Latest Articles in This Journal

Time-to-Collision Based Social Force Model for Intelligent Agents on Shared Public Spaces
Investigation of Joint Action in Go/No-Go Tasks: Development of a Human-Like Eye Robot and Verification of Action Space
How Non-experts Kinesthetically Teach a Robot over Multiple Sessions: Diversity in Teaching Styles and Effects on Performance
The Child Factor in Child–Robot Interaction: Discovering the Impact of Developmental Stage and Individual Characteristics
Is the Robot Spying on me? A Study on Perceived Privacy in Telepresence Scenarios in a Care Setting with Mobile and Humanoid Robots