Choosing between human and algorithmic advisors: The role of responsibility sharing

Lior Gazit, Ofer Arazy, Uri Hertz
Journal: Computers in Human Behavior: Artificial Humans, Volume 1, Issue 2, Article 100009
Publication date: 2023-08-01
DOI: 10.1016/j.chbah.2023.100009
URL: https://www.sciencedirect.com/science/article/pii/S2949882123000099

Abstract

Algorithms are increasingly employed to provide highly accurate advice and recommendations across domains, yet in many cases people tend to prefer human advisors. Studies to date have focused mainly on the advisor’s perceived competence and the outcome of the advice as determinants of advice takers’ willingness to accept advice from human and algorithmic advisors and to arbitrate between them. Here we examine the role of another factor that is not directly related to the outcome: the advice taker’s ability to psychologically offload responsibility for the decision’s potential consequences. Building on studies showing differences in responsibility attribution between human and algorithmic advisors, we hypothesize that, controlling for the effects of the advisor’s competence, the advisor’s perceived responsibility is an important factor affecting advice takers’ choice between human and algorithmic advisors. In an experiment in two domains, Medical and Financial (N = 806), participants were asked to rate advisors’ perceived responsibility and choose between a human and an algorithmic advisor. Our results show that human advisors were perceived as more responsible than algorithmic advisors and, most importantly, that the perception of the advisor’s responsibility affected the preference for a human advisor over an algorithmic counterpart. Furthermore, we found that an experimental manipulation that impeded advice takers’ ability to offload responsibility affected the extent to which human, but not algorithmic, advisors were perceived as responsible. Together, our findings highlight the role of responsibility sharing in influencing algorithm aversion.
