When is it right for a robot to be wrong? Children trust a robot over a human in a selective trust task

Computers in Human Behavior · Published 2024-04-05 · DOI: 10.1016/j.chb.2024.108229 · Impact Factor 9.0 · CAS Tier 1 (Psychology) · JCR Q1 (Psychology, Experimental) · Full text: https://www.sciencedirect.com/science/article/pii/S0747563224000979
Rebecca Stower , Arvid Kappas , Kristyn Sommer
Citations: 0

Abstract

Little is known about how children perceive, trust, and learn from social robots compared to humans. The goal of this study was to compare a robot and a human agent in a selective trust task across different combinations of reliability (both reliable, only the human reliable, or only the robot reliable). A total of 111 children, aged 3 to 6 years, participated in an online study in which they viewed videos of a human and a robot labelling both familiar and novel objects. We found that, although children preferred to endorse a novel object label from the agent who had previously labelled familiar objects correctly, when both the human and the robot were reliable they were biased more towards the robot. Their social evaluations also tended much more strongly towards a general robot preference. Children’s conceptualisations of the agents making a mistake also differed: an unreliable human was judged to be doing things on purpose, but an unreliable robot was not. These findings suggest that children’s perceptions of a robot’s reliability are separate from their evaluation of its desirability as a social interaction partner and from its perceived agency. Further, they indicate that a robot making a mistake does not necessarily reduce children’s desire to interact with it as a social agent.


Source journal: Computers in Human Behavior
CiteScore: 19.10
Self-citation rate: 4.00%
Publication volume: 381
Review time: 40 days
Journal overview: Computers in Human Behavior is a scholarly journal that explores the psychological aspects of computer use. It publishes original theoretical work, research reports, literature reviews, and software and book reviews. The journal examines both the use of computers in psychology, psychiatry, and related fields, and the psychological impact of computer use on individuals, groups, and society. Articles address topics such as professional practice, training, research, human development, learning, cognition, personality, and social interaction. Its focus is on human interaction with computers, treating the computer as a medium through which human behaviours are shaped and expressed. Professionals interested in the psychological aspects of computer use will find the journal valuable, even with limited knowledge of computers.
Latest articles in this journal

The negative consequences of networking through social network services: A social comparison perspective
Can online behaviors be linked to mental health? Active versus passive social network usage on depression via envy and self-esteem
Self-regulation deficiencies and perceived problematic online pornography use among young Chinese women: The role of self-acceptance
Flow in ChatGPT-based logic learning and its influences on logic and self-efficacy in English argumentative writing
Navigating online perils: Socioeconomic status, online activity lifestyles, and online fraud targeting and victimization of old adults in China