Too much light blinds: The transparency-resistance paradox in algorithmic management

Computers in Human Behavior, Vol. 161, Article 108403 · IF 9.0 · JCR Q1 (Psychology, Experimental) · Published: 2024-08-10 · DOI: 10.1016/j.chb.2024.108403
Peng Hu , Yu Zeng , Dong Wang , Han Teng
Citation count: 0

Abstract


Gig platforms increasingly harness AI algorithms to manage workers, offering notable efficiency and scalability benefits. However, the rise of worker resistance, such as manipulating algorithms with fake data, poses challenges to these benefits. These algorithms are often perceived as ''black boxes'', leading to issues around transparency. This research thus explores the impact of algorithmic transparency on worker resistance. Using a longitudinal design, we uncovered a paradox: Initially, greater transparency correlates with enhanced fairness perception and reduced resistance. However, beyond a certain threshold, further transparency starts to backfire, leading to decreased fairness perception and amplified resistance. This paradox challenges the prevailing notion that more transparency always leads to positive outcomes. Moreover, we examined the role of human managers, showing that their empathetic support and caring can mitigate worker resistance when transparency fails to foster fairness. This highlights the power of human touch in the algorithm-driven workplace. Overall, these insights suggest a hybrid management model, wherein the cold efficiency of algorithmic managers is complemented by the warm empathy of human managers, offering a blueprint for more productive and harmonious human-machine interactions in the gig economy.
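The threshold pattern the abstract describes is consistent with a curvilinear (inverted-U) specification, in which fairness perception rises with transparency up to a turning point and then declines. The paper's actual model and data are not reproduced here; the snippet below is a minimal, hypothetical sketch of how such a quadratic relationship could be fit and its turning point located, with all variable names and numbers invented for illustration.

```python
import numpy as np

# Hypothetical data: transparency scores (0-10) and mean fairness-perception
# ratings. These values are invented for illustration and follow an
# inverted-U shape, as the paradox described in the abstract would predict.
transparency = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10], dtype=float)
fairness = np.array([2.0, 3.1, 4.0, 4.8, 5.3, 5.5, 5.4, 5.0, 4.4, 3.6, 2.7])

# Fit fairness = b2*T^2 + b1*T + b0; an inverted U requires b2 < 0.
b2, b1, b0 = np.polyfit(transparency, fairness, deg=2)

# Turning point (the threshold beyond which added transparency backfires):
# the vertex of the parabola, T* = -b1 / (2 * b2).
turning_point = -b1 / (2 * b2)

print(f"quadratic term b2 = {b2:.3f} (negative implies inverted U)")
print(f"estimated turning point: transparency ≈ {turning_point:.2f}")
```

In practice such a specification would be tested with a significance test on the quadratic term and the turning point estimated with confidence intervals; the sketch only shows the shape of the argument.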

Source journal: Computers in Human Behavior
CiteScore: 19.10
Self-citation rate: 4.00%
Annual article count: 381
Review time: 40 days
About the journal: Computers in Human Behavior is a scholarly journal that explores the psychological aspects of computer use. It covers original theoretical works, research reports, literature reviews, and software and book reviews. The journal examines both the use of computers in psychology, psychiatry, and related fields, and the psychological impact of computer use on individuals, groups, and society. Articles discuss topics such as professional practice, training, research, human development, learning, cognition, personality, and social interactions. It focuses on human interactions with computers, considering the computer as a medium through which human behaviors are shaped and expressed. Professionals interested in the psychological aspects of computer use will find this journal valuable, even with limited knowledge of computers.