Do Humans Trust Robots That Violate Moral Trust?

Zahra Rezaei Khavas, Monish Reddy Kotturu, S. Reza Ahmadzadeh, Paul Robinette
{"title":"Do Humans Trust Robots that Violate moral trust?","authors":"Zahra Rezaei Khavas, Monish Reddy Kotturu, S.Reza Ahmadzadeh, Paul Robinette","doi":"10.1145/3651992","DOIUrl":null,"url":null,"abstract":"The increasing use of robots in social applications requires further research on human-robot trust. The research on human-robot trust needs to go beyond the conventional definition that mainly focuses on how human-robot relations are influenced by robot performance. The emerging field of social robotics considers optimizing a robot’s personality a critical factor in user perceptions of experienced human-robot interaction (HRI). Researchers have developed trust scales that account for different dimensions of trust in HRI. These trust scales consider one performance aspect (i.e., the trust in an agent’s competence to perform a given task and their proficiency in executing the task accurately) and one moral aspect (i.e., trust in an agent’s honesty in fulfilling their stated commitments or promises) for human-robot trust. The question that arises here is to what extent do these trust aspects affect human trust in a robot? The main goal of this study is to investigate whether a robot’s undesirable behavior due to the performance trust violation would affect human trust differently than another similar undesirable behavior due to a moral trust violation. We designed and implemented an online human-robot collaborative search task that allows distinguishing between performance and moral trust violations by a robot. We ran these experiments on Prolific and recruited 100 participants for this study. Our results showed that a moral trust violation by a robot affects human trust more severely than a performance trust violation with the same magnitude and consequences.","PeriodicalId":504644,"journal":{"name":"ACM Transactions on Human-Robot Interaction","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-03-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"ACM Transactions on Human-Robot Interaction","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3651992","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 1

Abstract

The increasing use of robots in social applications requires further research on human-robot trust. Research on human-robot trust needs to go beyond the conventional definition, which focuses mainly on how human-robot relations are influenced by robot performance. The emerging field of social robotics considers the optimization of a robot's personality a critical factor in users' perceptions of human-robot interaction (HRI). Researchers have developed trust scales that account for different dimensions of trust in HRI. These scales consider one performance aspect of human-robot trust (i.e., trust in an agent's competence to perform a given task and its proficiency in executing the task accurately) and one moral aspect (i.e., trust in an agent's honesty in fulfilling its stated commitments or promises). The question that arises is: to what extent does each of these aspects affect human trust in a robot? The main goal of this study is to investigate whether a robot's undesirable behavior due to a performance trust violation affects human trust differently from a similar undesirable behavior due to a moral trust violation. We designed and implemented an online human-robot collaborative search task that allows us to distinguish between performance and moral trust violations by a robot. We ran the experiments on Prolific and recruited 100 participants for this study. Our results showed that a moral trust violation by a robot damages human trust more severely than a performance trust violation of the same magnitude and with the same consequences.
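As an illustration of the between-condition comparison the abstract reports, the sketch below simulates post-violation trust ratings for the two violation types and computes Welch's t statistic between them. This is a minimal, hypothetical sketch: the group sizes, means, standard deviation, and the 7-point trust scale are assumptions for illustration, not the study's data or analysis code.

```python
# Hypothetical sketch of a two-condition trust comparison; all numbers are
# illustrative assumptions, not data from the study.
import random
import statistics

random.seed(42)

def simulate_trust_ratings(n, mean, sd, lo=1.0, hi=7.0):
    """Draw n post-violation trust ratings, clamped to assumed scale bounds."""
    return [min(hi, max(lo, random.gauss(mean, sd))) for _ in range(n)]

# Assumed group means: moral violations hypothesized to damage trust more.
performance_group = simulate_trust_ratings(50, mean=4.2, sd=1.0)
moral_group = simulate_trust_ratings(50, mean=3.1, sd=1.0)

def welch_t(a, b):
    """Welch's t statistic for two independent samples (unequal variances)."""
    mean_a, mean_b = statistics.mean(a), statistics.mean(b)
    var_a, var_b = statistics.variance(a), statistics.variance(b)
    return (mean_a - mean_b) / ((var_a / len(a) + var_b / len(b)) ** 0.5)

print(f"performance-violation mean trust: {statistics.mean(performance_group):.2f}")
print(f"moral-violation mean trust:       {statistics.mean(moral_group):.2f}")
print(f"Welch's t:                        {welch_t(performance_group, moral_group):.2f}")
```

Welch's t is used rather than Student's t because there is no reason to assume equal variances across the two violation conditions; a real analysis of such ratings would of course work from the collected responses rather than simulated draws.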