Blinding to Circumvent Human Biases: Deliberate Ignorance in Humans, Institutions, and Machines

IF 10.5 · CAS Tier 1 (Psychology) · Q1, Psychology, Multidisciplinary · Perspectives on Psychological Science · Pub Date: 2024-09-01 (Epub 2023-09-05) · DOI: 10.1177/17456916231188052
Ralph Hertwig, Stefan M Herzog, Anastasia Kozyreva
Citations: 0

Abstract

Blinding to Circumvent Human Biases: Deliberate Ignorance in Humans, Institutions, and Machines.

Inequalities and injustices are thorny issues in liberal societies, manifesting in forms such as the gender-pay gap; sentencing discrepancies among Black, Hispanic, and White defendants; and unequal medical-resource distribution across ethnicities. One cause of these inequalities is implicit social bias: unconsciously formed associations between social groups and attributions such as "nurturing," "lazy," or "uneducated." One strategy to counteract implicit and explicit human biases is delegating crucial decisions, such as how to allocate benefits, resources, or opportunities, to algorithms. Algorithms, however, are not necessarily impartial and objective. Although they can detect and mitigate human biases, they can also perpetuate and even amplify existing inequalities and injustices. We explore how a philosophical thought experiment, Rawls's "veil of ignorance," and a psychological phenomenon, deliberate ignorance, can help shield individuals, institutions, and algorithms from biases. We discuss the benefits and drawbacks of methods for shielding human and artificial decision makers from potentially biasing information. We then broaden our discussion beyond the issues of bias and fairness and turn to a research agenda aimed at improving human judgment accuracy with the assistance of algorithms that conceal information that has the potential to undermine performance. Finally, we propose interdisciplinary research questions.
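The "blinding" strategy the abstract describes (withholding potentially biasing information from a decision maker, whether human or algorithmic) can be sketched in a few lines. This is a minimal illustration, not the authors' method; all field names, the list of sensitive attributes, and the toy scoring rule are hypothetical assumptions chosen for the example.

```python
# Minimal sketch of blinding: the decision process never sees attributes
# that could trigger implicit bias. All field names are hypothetical.

SENSITIVE_ATTRIBUTES = {"name", "gender", "ethnicity"}

def blind(record: dict) -> dict:
    """Return a copy of the record with potentially biasing fields removed."""
    return {k: v for k, v in record.items() if k not in SENSITIVE_ATTRIBUTES}

def score_applicant(record: dict) -> float:
    """Toy merit score computed only from task-relevant fields."""
    return 0.6 * record["test_score"] + 0.4 * record["years_experience"]

applicant = {
    "name": "A. Example",
    "gender": "female",
    "ethnicity": "Hispanic",
    "test_score": 88.0,
    "years_experience": 5,
}

blinded = blind(applicant)          # biasing cues are withheld upstream
print(score_applicant(blinded))     # 0.6*88.0 + 0.4*5 = 54.8
```

Note that, as the abstract cautions, removing explicit attributes does not guarantee fairness: remaining fields can act as proxies for the blinded ones, which is one reason the article weighs both benefits and drawbacks of such shielding.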

Source journal: Perspectives on Psychological Science (Psychology, Multidisciplinary)
CiteScore: 22.70 · Self-citation rate: 4.00% · Articles per year: 111
About the journal: Perspectives on Psychological Science publishes a diverse range of articles and reports in psychology, including broad integrative reviews, overviews of research programs, meta-analyses, theoretical statements, book reviews, and pieces on topics such as the philosophy of science, as well as opinion pieces on major issues in the field. It also features autobiographical reflections by senior members of the field, occasional humorous essays and sketches, and both invited and submitted articles. The journal's influence is illustrated by a 2009 article on correlative analyses commonly used in neuroimaging studies, which still reverberates through the field, and by a recent special issue in which prominent researchers discussed the "Next Big Questions in Psychology," shaping the discipline's future trajectory. The journal reports performance metrics; however, the Association for Psychological Science, a signatory of DORA, recommends against using journal-based metrics to assess individual scientists' contributions (for example, in hiring, promotion, or funding decisions). These metrics should therefore be used only by those interested in evaluating the journal itself.
Latest articles from this journal:
Challenges in Understanding Human-Algorithm Entanglement During Online Information Consumption
Three Challenges for AI-Assisted Decision-Making
Social Drivers and Algorithmic Mechanisms on Digital Media
Human and Algorithmic Predictions in Geopolitical Forecasting: Quantifying Uncertainty in Hard-to-Quantify Domains
Blinding to Circumvent Human Biases: Deliberate Ignorance in Humans, Institutions, and Machines