Understanding perception of algorithmic decisions: Fairness, trust, and emotion in response to algorithmic management

Impact Factor: 6.5 · CAS Tier 1 (Sociology) · JCR Q1 (Social Sciences, Interdisciplinary) · Big Data & Society · Published: 2018-01-01 · DOI: 10.1177/2053951718756684
Min Kyung Lee
Citations: 496

Abstract

Algorithms increasingly make managerial decisions that people used to make. Perceptions of algorithms, regardless of the algorithms' actual performance, can significantly influence their adoption, yet we do not fully understand how people perceive decisions made by algorithms as compared with decisions made by humans. To explore perceptions of algorithmic management, we conducted an online experiment using four managerial decisions that required either mechanical or human skills. We manipulated the decision-maker (algorithmic or human), and measured perceived fairness, trust, and emotional response. With the mechanical tasks, algorithmic and human-made decisions were perceived as equally fair and trustworthy and evoked similar emotions; however, human managers' fairness and trustworthiness were attributed to the manager's authority, whereas algorithms' fairness and trustworthiness were attributed to their perceived efficiency and objectivity. Human decisions evoked some positive emotion due to the possibility of social recognition, whereas algorithmic decisions generated a more mixed response – algorithms were seen as helpful tools but also possible tracking mechanisms. With the human tasks, algorithmic decisions were perceived as less fair and trustworthy and evoked more negative emotion than human decisions. Algorithms' perceived lack of intuition and subjective judgment capabilities contributed to the lower fairness and trustworthiness judgments. Positive emotion from human decisions was attributed to social recognition, while negative emotion from algorithmic decisions was attributed to the dehumanizing experience of being evaluated by machines. This work reveals people's lay concepts of algorithmic versus human decisions in a management context and suggests that task characteristics matter in understanding people's experiences with algorithmic technologies.
Source journal
Big Data & Society
CiteScore: 10.90
Self-citation rate: 10.60%
Articles per year: 59
Review time: 11 weeks
About the journal: Big Data & Society (BD&S) is an open access, peer-reviewed scholarly journal that publishes interdisciplinary work principally in the social sciences, humanities, and computing, and their intersections with the arts and natural sciences. The journal focuses on the implications of Big Data for societies and aims to connect debates about Big Data practices and their effects on sectors such as academia, social life, industry, business, and government. BD&S considers Big Data as an emerging field of practices, not solely defined by but generative of unique data qualities such as high volume, granularity, data linking, and mining. The journal pays attention to digital content generated both online and offline, encompassing social media, search engines, closed networks (e.g., commercial or government transactions), and open networks such as digital archives, open government, and crowdsourced data. Rather than providing a fixed definition of Big Data, BD&S encourages interdisciplinary inquiries, debates, and studies on topics and themes related to Big Data practices. BD&S seeks contributions that analyze Big Data practices, involve empirical engagements and experiments with innovative methods, and reflect on the consequences of these practices for the representation, realization, and governance of societies. As a digital-only journal, BD&S's platform can accommodate multimedia formats such as complex images, dynamic visualizations, videos, and audio content. The journal publishes peer-reviewed research articles, colloquia, bookcasts, think pieces, state-of-the-art methods, and work by early career researchers.