{"title":"父母的罪过要加诸于子女:有偏见的人类、有偏见的数据、有偏见的模型。","authors":"Merrick R Osborne, Ali Omrani, Morteza Dehghani","doi":"10.1177/17456916231180099","DOIUrl":null,"url":null,"abstract":"<p><p>Technological innovations have become a key driver of societal advancements. Nowhere is this more evident than in the field of machine learning (ML), which has developed algorithmic models that shape our decisions, behaviors, and outcomes. These tools have widespread use, in part, because they can synthesize massive amounts of data to make seemingly objective recommendations. Yet, in the past few years, the ML community has been drawing attention to the need for caution when interpreting and using these models. This is because these models are created by humans, from data generated by humans, whose psychology allows for various biases that impact how the models are developed, trained, tested, and interpreted. As psychologists, we thus face a fork in the road: Down the first path, we can continue to use these models without examining and addressing these critical flaws and rely on computer scientists to try to mitigate them. Down the second path, we can turn our expertise in bias toward this growing field, collaborating with computer scientists to reduce the models' deleterious outcomes. This article serves to light the way down the second path by identifying how extant psychological research can help examine and curtail bias in ML models.</p>","PeriodicalId":19757,"journal":{"name":"Perspectives on Psychological Science","volume":null,"pages":null},"PeriodicalIF":10.5000,"publicationDate":"2024-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"The Sins of the Parents Are to Be Laid Upon the Children: Biased Humans, Biased Data, Biased Models.\",\"authors\":\"Merrick R Osborne, Ali Omrani, Morteza Dehghani\",\"doi\":\"10.1177/17456916231180099\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>Technological innovations have become a key driver of societal advancements. Nowhere is this more evident than in the field of machine learning (ML), which has developed algorithmic models that shape our decisions, behaviors, and outcomes. These tools have widespread use, in part, because they can synthesize massive amounts of data to make seemingly objective recommendations. Yet, in the past few years, the ML community has been drawing attention to the need for caution when interpreting and using these models. This is because these models are created by humans, from data generated by humans, whose psychology allows for various biases that impact how the models are developed, trained, tested, and interpreted. As psychologists, we thus face a fork in the road: Down the first path, we can continue to use these models without examining and addressing these critical flaws and rely on computer scientists to try to mitigate them. Down the second path, we can turn our expertise in bias toward this growing field, collaborating with computer scientists to reduce the models' deleterious outcomes. 
This article serves to light the way down the second path by identifying how extant psychological research can help examine and curtail bias in ML models.</p>\",\"PeriodicalId\":19757,\"journal\":{\"name\":\"Perspectives on Psychological Science\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":10.5000,\"publicationDate\":\"2024-09-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Perspectives on Psychological Science\",\"FirstCategoryId\":\"102\",\"ListUrlMain\":\"https://doi.org/10.1177/17456916231180099\",\"RegionNum\":1,\"RegionCategory\":\"心理学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"2023/7/18 0:00:00\",\"PubModel\":\"Epub\",\"JCR\":\"Q1\",\"JCRName\":\"PSYCHOLOGY, MULTIDISCIPLINARY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Perspectives on Psychological Science","FirstCategoryId":"102","ListUrlMain":"https://doi.org/10.1177/17456916231180099","RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2023/7/18 0:00:00","PubModel":"Epub","JCR":"Q1","JCRName":"PSYCHOLOGY, MULTIDISCIPLINARY","Score":null,"Total":0}
引用次数: 0
摘要
技术创新已成为社会进步的重要推动力。这一点在机器学习(ML)领域体现得最为明显,该领域已经开发出能够影响我们的决策、行为和结果的算法模型。这些工具之所以得到广泛应用,部分原因在于它们可以综合海量数据,提出看似客观的建议。然而,在过去几年中,ML 社区一直在提醒人们在解释和使用这些模型时需要谨慎。这是因为这些模型是由人类根据人类生成的数据创建的,而人类的心理会产生各种偏见,这些偏见会影响模型的开发、训练、测试和解释。因此,作为心理学家,我们面临着一个岔路口:在第一条道路上,我们可以继续使用这些模型,而不去检查和解决这些关键缺陷,并依靠计算机科学家来努力减少这些缺陷。在第二条道路上,我们可以将我们在偏见方面的专业知识转向这个不断发展的领域,与计算机科学家合作,减少模型的有害结果。本文通过指出现有心理学研究如何帮助检查和减少 ML 模型中的偏见,为第二条道路指明了方向。
The Sins of the Parents Are to Be Laid Upon the Children: Biased Humans, Biased Data, Biased Models.
Technological innovations have become a key driver of societal advancements. Nowhere is this more evident than in the field of machine learning (ML), which has developed algorithmic models that shape our decisions, behaviors, and outcomes. These tools have widespread use, in part, because they can synthesize massive amounts of data to make seemingly objective recommendations. Yet, in the past few years, the ML community has been drawing attention to the need for caution when interpreting and using these models. This is because these models are created by humans, from data generated by humans, whose psychology allows for various biases that impact how the models are developed, trained, tested, and interpreted. As psychologists, we thus face a fork in the road: Down the first path, we can continue to use these models without examining and addressing these critical flaws and rely on computer scientists to try to mitigate them. Down the second path, we can turn our expertise in bias toward this growing field, collaborating with computer scientists to reduce the models' deleterious outcomes. This article serves to light the way down the second path by identifying how extant psychological research can help examine and curtail bias in ML models.
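The abstract's core claim, that models trained on human-generated data inherit the biases of the humans who produced that data, can be made concrete with a small simulation. The sketch below is purely illustrative and does not come from the article: the two groups, the screening scenario, and the approval rates are invented for this example, and the "model" is just the empirical approval rate a classifier would learn from these features.

```python
# Illustrative only: a toy simulation (not from the article) showing how a model
# trained on biased human labels reproduces the labelers' bias.
import random

random.seed(0)

# Hypothetical setup: two applicant groups, A and B, with identical true
# qualification rates, but human screeners approve group B less often.
def human_label(group, qualified):
    approve_prob = 0.9 if qualified else 0.1
    if group == "B":
        approve_prob -= 0.2  # the human bias that ends up in the training data
    return 1 if random.random() < max(approve_prob, 0.0) else 0

# Generate "historical" training data from the biased human decisions.
data = []
for _ in range(10_000):
    group = random.choice(["A", "B"])
    qualified = random.random() < 0.5
    data.append((group, qualified, human_label(group, qualified)))

# A deliberately simple "model": the approval rate per (group, qualified) cell,
# i.e., what a classifier fit to these two features would learn to predict.
def model_rate(group, qualified):
    cell = [y for g, q, y in data if g == group and q == qualified]
    return sum(cell) / len(cell)

# The learned model inherits the gap even among equally qualified applicants.
print("P(approve | qualified, A):", round(model_rate("A", True), 2))
print("P(approve | qualified, B):", round(model_rate("B", True), 2))
```

Running the sketch shows a markedly lower predicted approval rate for equally qualified members of group B, which is the sense in which "the sins of the parents" (biased human judgments) are laid upon the "children" (the models trained on them).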
Journal Introduction:
Perspectives on Psychological Science publishes a diverse range of articles and reports in psychology, including broad integrative reviews, overviews of research programs, meta-analyses, theoretical statements, book reviews, articles on topics such as the philosophy of science, and opinion pieces on major issues in the field. It also features autobiographical reflections by senior members of the field, occasional humorous essays and sketches, and a section for invited and submitted articles.
The journal's impact is illustrated by a 2009 article on correlational analyses commonly used in neuroimaging studies, which still influences the field. More recently, a special issue of Perspectives in which prominent researchers discussed the "Next Big Questions in Psychology" is shaping the future trajectory of the discipline.
Perspectives on Psychological Science provides metrics that showcase the journal's performance. However, the Association for Psychological Science, the society behind the journal and a signatory of DORA, recommends against using journal-based metrics to assess individual scientists' contributions, such as in hiring, promotion, or funding decisions. The metrics provided by Perspectives on Psychological Science should therefore be used only by those interested in evaluating the journal itself.