Human-in-the-loop error detection in an object organization task with a social robot

H. Frijns, Matthias Hirschmanner, Barbara Sienkiewicz, Peter Hönig, B. Indurkhya, Markus Vincze
Journal: Frontiers in Robotics and AI
DOI: 10.3389/frobt.2024.1356827
Published: 2024-04-16 (Journal Article)

Abstract

In human-robot collaboration, failures are bound to occur. A thorough understanding of potential errors is necessary so that robotic system designers can develop systems that remedy failure cases. In this work, we study failures that occur when participants interact with a working system, focusing especially on errors in a robotic system's knowledge base of which the system itself is not aware. A human interaction partner can be part of the error detection process if they are given insight into the robot's knowledge and decision-making process. We investigate different communication modalities and the design of shared task representations in a joint human-robot object organization task. We conducted a user study (N = 31) in which participants showed a Pepper robot how to organize objects, and the robot communicated the learned object configuration back to the participants by means of speech, visualization, or a combination of speech and visualization. The multimodal, combined condition was preferred by 23 participants, while seven preferred the visualization alone. Based on the interviews, the errors that occurred, and the object configurations the participants generated, we conclude that participants tend to test the system's limitations by making the task more complex, which provokes errors. This trial-and-error behavior has a productive purpose and demonstrates that failures arise from the combination of robot capabilities, the user's understanding and actions, and interaction in the environment. Moreover, it demonstrates that failure can play a productive role in establishing better user mental models of the technology.