The Sins of the Parents Are to Be Laid Upon the Children: Biased Humans, Biased Data, Biased Models.

Perspectives on Psychological Science · Impact Factor: 10.5 · Q1, Psychology, Multidisciplinary (CAS Tier 1) · Pub Date: 2024-09-01 · Epub Date: 2023-07-18 · DOI: 10.1177/17456916231180099
Merrick R Osborne, Ali Omrani, Morteza Dehghani

Abstract

Technological innovations have become a key driver of societal advancements. Nowhere is this more evident than in the field of machine learning (ML), which has developed algorithmic models that shape our decisions, behaviors, and outcomes. These tools have widespread use, in part, because they can synthesize massive amounts of data to make seemingly objective recommendations. Yet, in the past few years, the ML community has been drawing attention to the need for caution when interpreting and using these models. This is because these models are created by humans, from data generated by humans, whose psychology allows for various biases that impact how the models are developed, trained, tested, and interpreted. As psychologists, we thus face a fork in the road: Down the first path, we can continue to use these models without examining and addressing these critical flaws and rely on computer scientists to try to mitigate them. Down the second path, we can turn our expertise in bias toward this growing field, collaborating with computer scientists to reduce the models' deleterious outcomes. This article serves to light the way down the second path by identifying how extant psychological research can help examine and curtail bias in ML models.

Journal
Perspectives on Psychological Science
CiteScore: 22.70
Self-citation rate: 4.00%
Articles per year: 111
About the Journal

Perspectives on Psychological Science publishes a diverse range of articles and reports in the field of psychology, including broad integrative reviews, overviews of research programs, meta-analyses, theoretical statements, book reviews, and articles on topics such as the philosophy of science, as well as opinion pieces on major issues in the field. It also features autobiographical reflections by senior members of the field, occasional humorous essays and sketches, and both invited and submitted articles. The journal's impact can be seen in the continuing influence of a 2009 article on correlative analyses commonly used in neuroimaging studies, and a recent special issue featuring prominent researchers discussing the "Next Big Questions in Psychology" is shaping the future trajectory of the discipline. Perspectives on Psychological Science provides metrics that showcase the journal's performance. However, the Association for Psychological Science, a signatory of DORA, recommends against using journal-based metrics to assess individual scientists' contributions, such as in hiring, promotion, or funding decisions; the metrics provided here should be used only by those interested in evaluating the journal itself.
Recent Articles in This Journal

Challenges in Understanding Human-Algorithm Entanglement During Online Information Consumption
Three Challenges for AI-Assisted Decision-Making
Social Drivers and Algorithmic Mechanisms on Digital Media
Human and Algorithmic Predictions in Geopolitical Forecasting: Quantifying Uncertainty in Hard-to-Quantify Domains
Blinding to Circumvent Human Biases: Deliberate Ignorance in Humans, Institutions, and Machines