Prejudiced against the Machine? Implicit Associations and the Transience of Algorithm Aversion

IF: 7.0 | Region 2 (Management) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | MIS Quarterly | Pub Date: 2023-12-01 | DOI: 10.25300/misq/2022/17961
Ofir Turel and Shivam Kalhan
{"title":"对机器有偏见?隐式关联与算法厌恶的短暂性","authors":"Ofir Turel and Shivam Kalhan","doi":"10.25300/misq/2022/17961","DOIUrl":null,"url":null,"abstract":"<style>#html-body [data-pb-style=TE8QKQW]{justify-content:flex-start;display:flex;flex-direction:column;background-position:left top;background-size:cover;background-repeat:no-repeat;background-attachment:scroll}</style>Algorithm aversion is an important and persistent issue that prevents harvesting the benefits of advancements in artificial intelligence. The literature thus far has provided explanations that primarily focus on conscious reflective processes. Here, we supplement this view by taking an unconscious perspective that can be highly informative. Building on theories of implicit prejudice, in a preregistered study, we suggest that people develop an implicit bias (i.e., prejudice) against artificial intelligence (AI) systems, as a different and threatening “species,” the behavior of which is unknown. Like in other contexts of prejudice, we expected people to be guided by this implicit bias but try to override it. This leads to some willingness to rely on algorithmic advice (appreciation), which is reduced as a function of people’s implicit prejudice against the machine. Next, building on the somatic marker hypothesis and the accessibility-diagnosticity perspective, we provide an explanation as to why aversion is ephemeral. As people learn about the performance of an algorithm, they depend less on primal implicit biases when deciding whether to rely on the AI’s advice. Two studies (n1 = 675, n2 = 317) that use the implicit association test consistently support this view. Two additional studies (n3 = 255, n4 = 332) rule out alternative explanations and provide stronger support for our assertions. The findings ultimately suggest that moving the needle between aversion and appreciation depends initially on one’s general unconscious bias against AI because there is insufficient information to override it. They further suggest that in later use stages, this shift depends on accessibility to diagnostic information about the AI’s performance, which reduces the weight given to unconscious prejudice.","PeriodicalId":49807,"journal":{"name":"Mis Quarterly","volume":"116 6","pages":""},"PeriodicalIF":7.0000,"publicationDate":"2023-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Prejudiced against the Machine? Implicit Associations and the Transience of Algorithm Aversion\",\"authors\":\"Ofir Turel and Shivam Kalhan\",\"doi\":\"10.25300/misq/2022/17961\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<style>#html-body [data-pb-style=TE8QKQW]{justify-content:flex-start;display:flex;flex-direction:column;background-position:left top;background-size:cover;background-repeat:no-repeat;background-attachment:scroll}</style>Algorithm aversion is an important and persistent issue that prevents harvesting the benefits of advancements in artificial intelligence. The literature thus far has provided explanations that primarily focus on conscious reflective processes. Here, we supplement this view by taking an unconscious perspective that can be highly informative. Building on theories of implicit prejudice, in a preregistered study, we suggest that people develop an implicit bias (i.e., prejudice) against artificial intelligence (AI) systems, as a different and threatening “species,” the behavior of which is unknown. 
Like in other contexts of prejudice, we expected people to be guided by this implicit bias but try to override it. This leads to some willingness to rely on algorithmic advice (appreciation), which is reduced as a function of people’s implicit prejudice against the machine. Next, building on the somatic marker hypothesis and the accessibility-diagnosticity perspective, we provide an explanation as to why aversion is ephemeral. As people learn about the performance of an algorithm, they depend less on primal implicit biases when deciding whether to rely on the AI’s advice. Two studies (n1 = 675, n2 = 317) that use the implicit association test consistently support this view. Two additional studies (n3 = 255, n4 = 332) rule out alternative explanations and provide stronger support for our assertions. The findings ultimately suggest that moving the needle between aversion and appreciation depends initially on one’s general unconscious bias against AI because there is insufficient information to override it. They further suggest that in later use stages, this shift depends on accessibility to diagnostic information about the AI’s performance, which reduces the weight given to unconscious prejudice.\",\"PeriodicalId\":49807,\"journal\":{\"name\":\"Mis Quarterly\",\"volume\":\"116 6\",\"pages\":\"\"},\"PeriodicalIF\":7.0000,\"publicationDate\":\"2023-12-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Mis Quarterly\",\"FirstCategoryId\":\"91\",\"ListUrlMain\":\"https://doi.org/10.25300/misq/2022/17961\",\"RegionNum\":2,\"RegionCategory\":\"管理学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, INFORMATION SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Mis Quarterly","FirstCategoryId":"91","ListUrlMain":"https://doi.org/10.25300/misq/2022/17961","RegionNum":2,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Citations: 0

Abstract

Algorithm aversion is an important and persistent issue that prevents harvesting the benefits of advancements in artificial intelligence. The literature thus far has provided explanations that primarily focus on conscious reflective processes. Here, we supplement this view by taking an unconscious perspective that can be highly informative. Building on theories of implicit prejudice, in a preregistered study, we suggest that people develop an implicit bias (i.e., prejudice) against artificial intelligence (AI) systems, as a different and threatening “species,” the behavior of which is unknown. Like in other contexts of prejudice, we expected people to be guided by this implicit bias but try to override it. This leads to some willingness to rely on algorithmic advice (appreciation), which is reduced as a function of people’s implicit prejudice against the machine. Next, building on the somatic marker hypothesis and the accessibility-diagnosticity perspective, we provide an explanation as to why aversion is ephemeral. As people learn about the performance of an algorithm, they depend less on primal implicit biases when deciding whether to rely on the AI’s advice. Two studies (n1 = 675, n2 = 317) that use the implicit association test consistently support this view. Two additional studies (n3 = 255, n4 = 332) rule out alternative explanations and provide stronger support for our assertions. The findings ultimately suggest that moving the needle between aversion and appreciation depends initially on one’s general unconscious bias against AI because there is insufficient information to override it. They further suggest that in later use stages, this shift depends on accessibility to diagnostic information about the AI’s performance, which reduces the weight given to unconscious prejudice.
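The studies measure implicit bias with the implicit association test (IAT). The article does not publish its scoring procedure, so the sketch below is only a rough illustration of how an IAT D score is conventionally computed (a simplified form of Greenwald, Nosek, and Banaji's improved algorithm, omitting error penalties and participant-level exclusions); the function name, sign convention, and sample latencies are assumptions for illustration, not the authors' method.

```python
# Illustrative sketch only: a simplified conventional IAT D-score computation.
# Names, sign convention, and sample data are hypothetical, not from the paper.
import statistics


def iat_d_score(ai_good_rts, ai_bad_rts):
    """Return a simplified IAT D score from two lists of response latencies (ms).

    ai_good_rts: latencies in the block where "AI" shares a response key with
                 positive attributes.
    ai_bad_rts:  latencies in the block where "AI" shares a response key with
                 negative attributes.
    With this sign convention, D > 0 means responses were faster when AI was
    paired with negative attributes, i.e., an implicit negative association
    (prejudice) against the machine.
    """
    # Drop implausibly slow trials (> 10,000 ms), as in the standard algorithm.
    good = [rt for rt in ai_good_rts if rt <= 10_000]
    bad = [rt for rt in ai_bad_rts if rt <= 10_000]

    # Pooled standard deviation across the two critical blocks.
    pooled_sd = statistics.stdev(good + bad)

    # D is the mean latency difference scaled by the pooled SD.
    return (statistics.mean(good) - statistics.mean(bad)) / pooled_sd


# Hypothetical example: faster responses in the AI + negative block yield D > 0.
d = iat_d_score(
    ai_good_rts=[820, 900, 870, 910, 860],
    ai_bad_rts=[650, 700, 720, 680, 710],
)
print(f"IAT D score: {d:.2f}")
```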
Source journal: MIS Quarterly (Engineering & Technology - Computer Science: Information Systems)
CiteScore: 13.30
Self-citation rate: 4.10%
Articles per year: 36
Review time: 6-12 weeks
Journal introduction: The editorial objective of MIS Quarterly is to enhance and communicate knowledge related to the development of IT-based services, the management of IT resources, and the use, impact, and economics of IT with managerial, organizational, and societal implications, as well as to address professional issues affecting the Information Systems (IS) field as a whole.
Latest articles in this journal:
- Engaging Users on Social Media Business Pages: The Roles of User Comments and Firm Responses
- Digitization of Transaction Terms within TCE: Strong Smart Contract as a New Mode of Transaction Governance
- Dealing with Complexity in Design Science Research: A Methodology Using Design Echelons
- Data Commoning in the Life Sciences
- Understanding the Returns from Integrated Enterprise Systems: The Impacts of Agile and Phased Implementation Strategies