Why DON'T We "Say Her Name"? An Intersectional Model of the Invisibility of Police Violence Against Black Women and Girls.
Pub Date: 2024-10-09, DOI: 10.1177/17456916241277554
Aerielle M Allen, Alexis Drain, Chardée A Galán, Azaadeh Goharzad, Irene Tung, Beza M Bekele
Racialized police violence is a profound form of systemic oppression affecting Black Americans, yet the narratives surrounding police brutality have disproportionately centered on Black men and boys, overshadowing the victimization of Black women and girls. In 2014, the #SayHerName campaign emerged to bring attention to the often-overlooked instances of police brutality against Black women and girls, including incidents of both nonsexual and sexual violence. Despite these efforts, mainstream discourse and psychological scholarship on police violence continue to marginalize the experiences of Black women and girls. This raises a critical question: Why DON'T we "Say Her Name"? This article employs intersectional frameworks to demonstrate how the historic and systemic factors that render Black women and girls particularly vulnerable to police violence also deny their legitimacy as victims, perpetuate their invisibility, and increase their susceptibility to state-sanctioned violence. We extend models of intersectional invisibility by arguing that ideologies related to age, in addition to racial and gender identities, contribute to their marginalization. Finally, we reflect on how psychological researchers can play a pivotal role in dismantling the invisibility of Black women and girls through scientific efforts and advocacy.
{"title":"Why DON'T We \"Say Her Name\"? An Intersectional Model of the Invisibility of Police Violence Against Black Women and Girls.","authors":"Aerielle M Allen, Alexis Drain, Chardée A Galán, Azaadeh Goharzad, Irene Tung, Beza M Bekele","doi":"10.1177/17456916241277554","DOIUrl":"https://doi.org/10.1177/17456916241277554","url":null,"abstract":"<p><p>Racialized police violence is a profound form of systemic oppression affecting Black Americans, yet the narratives surrounding police brutality have disproportionately centered on Black men and boys, overshadowing the victimization of Black women and girls. In 2014, the #SayHerName campaign emerged to bring attention to the often-overlooked instances of police brutality against Black women and girls, including incidents of both nonsexual and sexual violence. Despite these efforts, mainstream discourse and psychological scholarship on police violence continue to marginalize the experiences of Black women and girls. This raises a critical question: Why DON'T we \"Say Her Name\"? This article employs intersectional frameworks to demonstrate how the historic and systemic factors that render Black women and girls particularly vulnerable to police violence also deny their legitimacy as victims, perpetuate their invisibility, and increase their susceptibility to state-sanctioned violence. We extend models of intersectional invisibility by arguing that ideologies related to age, in addition to racial and gender identities, contribute to their marginalization. Finally, we reflect on how psychological researchers can play a pivotal role in dismantling the invisibility of Black women and girls through scientific efforts and advocacy.</p>","PeriodicalId":19757,"journal":{"name":"Perspectives on Psychological Science","volume":null,"pages":null},"PeriodicalIF":10.5,"publicationDate":"2024-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142392320","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Psychological AI: Designing Algorithms Informed by Human Psychology.
Pub Date: 2024-09-01, Epub Date: 2023-07-31, DOI: 10.1177/17456916231180597
Gerd Gigerenzer
Psychological artificial intelligence (AI) applies insights from psychology to design computer algorithms. Its core domain is decision-making under uncertainty, that is, ill-defined situations that can change in unexpected ways, rather than well-defined, stable problems such as chess and Go. Psychological theories about heuristic processes under uncertainty can provide possible insights. I provide two illustrations. The first shows how recency, the human tendency to rely on the most recent information and ignore base rates, can be built into a simple algorithm that predicts the flu substantially better than did Google Flu Trends's big-data algorithms. The second uses a result from memory research, the paradoxical effect that making numbers less precise increases recall, in the design of algorithms that predict recidivism. These case studies provide an existence proof that psychological AI can help design efficient and transparent algorithms.
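The recency rule in the first illustration lends itself to a compact algorithmic sketch. The Python snippet below is an illustrative guess at its simplest form, forecasting that next week's value equals the most recent observation; the function name and toy data are assumptions, not taken from the article.

```python
# Minimal sketch of a recency heuristic for flu prediction, assuming the rule
# "forecast that next week looks like the most recent week." Names and data are
# illustrative only and not drawn from the article.

def recency_forecast(weekly_flu_visits):
    """Predict next week's flu-related doctor visits from the latest observation alone."""
    if not weekly_flu_visits:
        raise ValueError("need at least one past observation")
    return weekly_flu_visits[-1]  # ignore all earlier weeks and any base rate

# Toy usage: weekly percentages of doctor visits that were flu related.
history = [1.2, 1.4, 1.9, 2.6]
print(recency_forecast(history))  # -> 2.6, the forecast for the coming week
```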
{"title":"Psychological AI: Designing Algorithms Informed by Human Psychology.","authors":"Gerd Gigerenzer","doi":"10.1177/17456916231180597","DOIUrl":"10.1177/17456916231180597","url":null,"abstract":"<p><p>Psychological artificial intelligence (AI) applies insights from psychology to design computer algorithms. Its core domain is decision-making under uncertainty, that is, ill-defined situations that can change in unexpected ways rather than well-defined, stable problems, such as chess and Go. Psychological theories about heuristic processes under uncertainty can provide possible insights. I provide two illustrations. The first shows how recency-the human tendency to rely on the most recent information and ignore base rates-can be built into a simple algorithm that predicts the flu substantially better than did Google Flu Trends's big-data algorithms. The second uses a result from memory research-the paradoxical effect that making numbers less precise increases recall-in the design of algorithms that predict recidivism. These case studies provide an existence proof that psychological AI can help design efficient and transparent algorithms.</p>","PeriodicalId":19757,"journal":{"name":"Perspectives on Psychological Science","volume":null,"pages":null},"PeriodicalIF":10.5,"publicationDate":"2024-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11373155/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10274200","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Challenges in Understanding Human-Algorithm Entanglement During Online Information Consumption.
Pub Date: 2024-09-01, Epub Date: 2023-07-10, DOI: 10.1177/17456916231180809
Stephan Lewandowsky, Ronald E Robertson, Renee DiResta
Most content consumed online is curated by proprietary algorithms deployed by social media platforms and search engines. In this article, we explore the interplay between these algorithms and human agency. Specifically, we consider the extent of entanglement or coupling between humans and algorithms along a continuum from implicit to explicit demand. We emphasize that the interactions people have with algorithms not only shape users' experiences in that moment but, because of the mutually shaping nature of such systems, can also have longer-term effects through modifications of the underlying social-network structure. Understanding these mutually shaping systems is challenging given that researchers presently lack access to relevant platform data. We argue that increased transparency, more data sharing, and greater protections for external researchers examining the algorithms are required to help researchers better understand the entanglement between humans and algorithms. This better understanding is essential to support the development of algorithms with greater benefits and fewer risks to the public.
{"title":"Challenges in Understanding Human-Algorithm Entanglement During Online Information Consumption.","authors":"Stephan Lewandowsky, Ronald E Robertson, Renee DiResta","doi":"10.1177/17456916231180809","DOIUrl":"10.1177/17456916231180809","url":null,"abstract":"<p><p>Most content consumed online is curated by proprietary algorithms deployed by social media platforms and search engines. In this article, we explore the interplay between these algorithms and human agency. Specifically, we consider the extent of entanglement or coupling between humans and algorithms along a continuum from implicit to explicit demand. We emphasize that the interactions people have with algorithms not only shape users' experiences in that moment but because of the mutually shaping nature of such systems can also have longer-term effects through modifications of the underlying social-network structure. Understanding these mutually shaping systems is challenging given that researchers presently lack access to relevant platform data. We argue that increased transparency, more data sharing, and greater protections for external researchers examining the algorithms are required to help researchers better understand the entanglement between humans and algorithms. This better understanding is essential to support the development of algorithms with greater benefits and fewer risks to the public.</p>","PeriodicalId":19757,"journal":{"name":"Perspectives on Psychological Science","volume":null,"pages":null},"PeriodicalIF":10.5,"publicationDate":"2024-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11373152/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9765071","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Sins of the Parents Are to Be Laid Upon the Children: Biased Humans, Biased Data, Biased Models.
Pub Date: 2024-09-01, Epub Date: 2023-07-18, DOI: 10.1177/17456916231180099
Merrick R Osborne, Ali Omrani, Morteza Dehghani
Technological innovations have become a key driver of societal advancements. Nowhere is this more evident than in the field of machine learning (ML), which has developed algorithmic models that shape our decisions, behaviors, and outcomes. These tools have widespread use, in part, because they can synthesize massive amounts of data to make seemingly objective recommendations. Yet, in the past few years, the ML community has been drawing attention to the need for caution when interpreting and using these models. This is because these models are created by humans, from data generated by humans, whose psychology allows for various biases that impact how the models are developed, trained, tested, and interpreted. As psychologists, we thus face a fork in the road: Down the first path, we can continue to use these models without examining and addressing these critical flaws and rely on computer scientists to try to mitigate them. Down the second path, we can turn our expertise in bias toward this growing field, collaborating with computer scientists to reduce the models' deleterious outcomes. This article serves to light the way down the second path by identifying how extant psychological research can help examine and curtail bias in ML models.
{"title":"The Sins of the Parents Are to Be Laid Upon the Children: Biased Humans, Biased Data, Biased Models.","authors":"Merrick R Osborne, Ali Omrani, Morteza Dehghani","doi":"10.1177/17456916231180099","DOIUrl":"10.1177/17456916231180099","url":null,"abstract":"<p><p>Technological innovations have become a key driver of societal advancements. Nowhere is this more evident than in the field of machine learning (ML), which has developed algorithmic models that shape our decisions, behaviors, and outcomes. These tools have widespread use, in part, because they can synthesize massive amounts of data to make seemingly objective recommendations. Yet, in the past few years, the ML community has been drawing attention to the need for caution when interpreting and using these models. This is because these models are created by humans, from data generated by humans, whose psychology allows for various biases that impact how the models are developed, trained, tested, and interpreted. As psychologists, we thus face a fork in the road: Down the first path, we can continue to use these models without examining and addressing these critical flaws and rely on computer scientists to try to mitigate them. Down the second path, we can turn our expertise in bias toward this growing field, collaborating with computer scientists to reduce the models' deleterious outcomes. This article serves to light the way down the second path by identifying how extant psychological research can help examine and curtail bias in ML models.</p>","PeriodicalId":19757,"journal":{"name":"Perspectives on Psychological Science","volume":null,"pages":null},"PeriodicalIF":10.5,"publicationDate":"2024-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10185871","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Three Challenges for AI-Assisted Decision-Making.
Pub Date: 2024-09-01, Epub Date: 2023-07-13, DOI: 10.1177/17456916231181102
Mark Steyvers, Aakriti Kumar
Artificial intelligence (AI) has the potential to improve human decision-making by providing decision recommendations and problem-relevant information to assist human decision-makers. However, the full realization of the potential of human-AI collaboration continues to face several challenges. First, the conditions that support complementarity (i.e., situations in which the performance of a human with AI assistance exceeds the performance of an unassisted human or the AI in isolation) must be understood. This task requires humans to be able to recognize situations in which the AI should be leveraged and to develop new AI systems that can learn to complement the human decision-maker. Second, human mental models of the AI, which contain both expectations of the AI and reliance strategies, must be accurately assessed. Third, the effects of different design choices for human-AI interaction must be understood, including both the timing of AI assistance and the amount of model information that should be presented to the human decision-maker to avoid cognitive overload and ineffective reliance strategies. In response to each of these three challenges, we present an interdisciplinary perspective based on recent empirical and theoretical findings and discuss new research directions.
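The complementarity condition defined above (human-with-AI performance exceeding both the unassisted human and the AI alone) can be stated as a one-line check. The sketch below uses invented accuracy values purely for illustration and is not an implementation from the article.

```python
# Illustrative check of complementarity: the human-AI team must beat both the
# unassisted human and the AI in isolation. The accuracy values are invented.

def is_complementary(acc_human: float, acc_ai: float, acc_team: float) -> bool:
    """Return True if team accuracy exceeds the better of the two individual accuracies."""
    return acc_team > max(acc_human, acc_ai)

print(is_complementary(acc_human=0.78, acc_ai=0.81, acc_team=0.86))  # True: complementarity holds
print(is_complementary(acc_human=0.78, acc_ai=0.81, acc_team=0.80))  # False: team is worse than the AI alone
```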
{"title":"Three Challenges for AI-Assisted Decision-Making.","authors":"Mark Steyvers, Aakriti Kumar","doi":"10.1177/17456916231181102","DOIUrl":"10.1177/17456916231181102","url":null,"abstract":"<p><p>Artificial intelligence (AI) has the potential to improve human decision-making by providing decision recommendations and problem-relevant information to assist human decision-makers. However, the full realization of the potential of human-AI collaboration continues to face several challenges. First, the conditions that support complementarity (i.e., situations in which the performance of a human with AI assistance exceeds the performance of an unassisted human or the AI in isolation) must be understood. This task requires humans to be able to recognize situations in which the AI should be leveraged and to develop new AI systems that can learn to complement the human decision-maker. Second, human mental models of the AI, which contain both expectations of the AI and reliance strategies, must be accurately assessed. Third, the effects of different design choices for human-AI interaction must be understood, including both the timing of AI assistance and the amount of model information that should be presented to the human decision-maker to avoid cognitive overload and ineffective reliance strategies. In response to each of these three challenges, we present an interdisciplinary perspective based on recent empirical and theoretical findings and discuss new research directions.</p>","PeriodicalId":19757,"journal":{"name":"Perspectives on Psychological Science","volume":null,"pages":null},"PeriodicalIF":10.5,"publicationDate":"2024-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11373149/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9770751","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Transmission Versus Truth, Imitation Versus Innovation: What Children Can Do That Large Language and Language-and-Vision Models Cannot (Yet).
Pub Date: 2024-09-01, Epub Date: 2023-10-26, DOI: 10.1177/17456916231201401
Eunice Yiu, Eliza Kosoy, Alison Gopnik
Much discussion about large language models and language-and-vision models has focused on whether these models are intelligent agents. We present an alternative perspective. First, we argue that these artificial intelligence (AI) models are cultural technologies that enhance cultural transmission and are efficient and powerful imitation engines. Second, we explore what AI models can tell us about imitation and innovation by testing whether they can be used to discover new tools and novel causal structures and contrasting their responses with those of human children. Our work serves as a first step in determining which particular representations and competences, as well as which kinds of knowledge or skill, can be derived from particular learning techniques and data. In particular, we explore which kinds of cognitive capacities can be enabled by statistical analysis of large-scale linguistic data. Critically, our findings suggest that machines may need more than large-scale language and image data to allow the kinds of innovation that a small child can produce.
{"title":"Transmission Versus Truth, Imitation Versus Innovation: What Children Can Do That Large Language and Language-and-Vision Models Cannot (Yet).","authors":"Eunice Yiu, Eliza Kosoy, Alison Gopnik","doi":"10.1177/17456916231201401","DOIUrl":"10.1177/17456916231201401","url":null,"abstract":"<p><p>Much discussion about large language models and language-and-vision models has focused on whether these models are intelligent agents. We present an alternative perspective. First, we argue that these artificial intelligence (AI) models are cultural technologies that enhance cultural transmission and are efficient and powerful imitation engines. Second, we explore what AI models can tell us about imitation and innovation by testing whether they can be used to discover new tools and novel causal structures and contrasting their responses with those of human children. Our work serves as a first step in determining which particular representations and competences, as well as which kinds of knowledge or skill, can be derived from particular learning techniques and data. In particular, we explore which kinds of cognitive capacities can be enabled by statistical analysis of large-scale linguistic data. Critically, our findings suggest that machines may need more than large-scale language and image data to allow the kinds of innovation that a small child can produce.</p>","PeriodicalId":19757,"journal":{"name":"Perspectives on Psychological Science","volume":null,"pages":null},"PeriodicalIF":10.5,"publicationDate":"2024-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11373165/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"54230419","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Inversion Problem: Why Algorithms Should Infer Mental State and Not Just Predict Behavior.
Pub Date: 2024-09-01, Epub Date: 2023-12-12, DOI: 10.1177/17456916231212138
Jon Kleinberg, Jens Ludwig, Sendhil Mullainathan, Manish Raghavan
More and more machine learning is applied to human behavior. Increasingly, these algorithms suffer from a hidden but serious problem. It arises because they often predict one thing while hoping for another. Take a recommender system: It predicts clicks but hopes to identify preferences. Or take an algorithm that automates a radiologist: It predicts in-the-moment diagnoses while hoping to identify their reflective judgments. Psychology shows us the gaps between the objectives of such prediction tasks and the goals we hope to achieve: People can click mindlessly; experts can get tired and make systematic errors. We argue such situations are ubiquitous and call them "inversion problems": The real goal requires understanding a mental state that is not directly measured in behavioral data but must instead be inverted from the behavior. Identifying and solving these problems require new tools that draw on both behavioral and computational science.
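To make the click-versus-preference gap concrete, here is a small illustrative simulation (not a model specified in the article) in which observed clicks mix a latent preference with occasional mindless clicking; an algorithm trained to predict clicks targets the click rate, whereas the stated goal is the underlying preference.

```python
# Illustrative simulation of the inversion problem (not from the article): clicks
# are generated by a latent preference plus mindless clicking, so the click rate a
# model learns to predict differs from the preference rate we actually care about.

import random

random.seed(0)

def simulate_click(prefers_item: bool, p_mindless: float = 0.3) -> bool:
    """A click occurs if the user truly prefers the item, or mindlessly with probability p_mindless."""
    return prefers_item or random.random() < p_mindless

preferences = [random.random() < 0.5 for _ in range(10_000)]  # latent mental states
clicks = [simulate_click(p) for p in preferences]

print(f"observed click rate:  {sum(clicks) / len(clicks):.2f}")            # about 0.65
print(f"true preference rate: {sum(preferences) / len(preferences):.2f}")  # about 0.50
# Predicting clicks optimizes the first number; recovering the second requires
# inverting the behavioral model (e.g., estimating p_mindless).
```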
{"title":"The Inversion Problem: Why Algorithms Should Infer Mental State and Not Just Predict Behavior.","authors":"Jon Kleinberg, Jens Ludwig, Sendhil Mullainathan, Manish Raghavan","doi":"10.1177/17456916231212138","DOIUrl":"10.1177/17456916231212138","url":null,"abstract":"<p><p>More and more machine learning is applied to human behavior. Increasingly these algorithms suffer from a hidden-but serious-problem. It arises because they often predict one thing while hoping for another. Take a recommender system: It predicts clicks but hopes to identify preferences. Or take an algorithm that automates a radiologist: It predicts in-the-moment diagnoses while hoping to identify their reflective judgments. Psychology shows us the gaps between the objectives of such prediction tasks and the goals we hope to achieve: People can click mindlessly; experts can get tired and make systematic errors. We argue such situations are ubiquitous and call them \"inversion problems\": The real goal requires understanding a mental state that is not directly measured in behavioral data but must instead be inverted from the behavior. Identifying and solving these problems require new tools that draw on both behavioral and computational science.</p>","PeriodicalId":19757,"journal":{"name":"Perspectives on Psychological Science","volume":null,"pages":null},"PeriodicalIF":10.5,"publicationDate":"2024-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138808387","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
People Think That Social Media Platforms Do (but Should Not) Amplify Divisive Content.
Pub Date: 2024-09-01, Epub Date: 2023-09-26, DOI: 10.1177/17456916231190392
Steve Rathje, Claire Robertson, William J Brady, Jay J Van Bavel
Recent studies have documented the type of content that is most likely to spread widely, or go "viral," on social media, yet little is known about people's perceptions of what goes viral or what should go viral. This is critical to understand because there is widespread debate about how to improve or regulate social media algorithms. We recruited a sample of participants that is nationally representative of the U.S. population (according to age, gender, and race/ethnicity) and surveyed them about their perceptions of social media virality (n = 511). In line with prior research, people believe that divisive content, moral outrage, negative content, high-arousal content, and misinformation are all likely to go viral online. However, they reported that this type of content should not go viral on social media. Instead, people reported that many forms of positive content, such as accurate content, nuanced content, and educational content, are not likely to go viral even though they think this content should go viral. These perceptions were shared among most participants and were only weakly related to political orientation, social media usage, and demographic variables. In sum, there is broad consensus around the type of content people think social media platforms should and should not amplify, which can help inform solutions for improving social media.
{"title":"People Think That Social Media Platforms Do (but Should Not) Amplify Divisive Content.","authors":"Steve Rathje, Claire Robertson, William J Brady, Jay J Van Bavel","doi":"10.1177/17456916231190392","DOIUrl":"10.1177/17456916231190392","url":null,"abstract":"<p><p>Recent studies have documented the type of content that is most likely to spread widely, or go \"viral,\" on social media, yet little is known about people's perceptions of what goes viral or what should go viral. This is critical to understand because there is widespread debate about how to improve or regulate social media algorithms. We recruited a sample of participants that is nationally representative of the U.S. population (according to age, gender, and race/ethnicity) and surveyed them about their perceptions of social media virality (<i>n</i> = 511). In line with prior research, people believe that divisive content, moral outrage, negative content, high-arousal content, and misinformation are all likely to go viral online. However, they reported that this type of content should not go viral on social media. Instead, people reported that many forms of positive content-such as accurate content, nuanced content, and educational content-are not likely to go viral even though they think this content should go viral. These perceptions were shared among most participants and were only weakly related to political orientation, social media usage, and demographic variables. In sum, there is broad consensus around the type of content people think social media platforms should and should not amplify, which can help inform solutions for improving social media.</p>","PeriodicalId":19757,"journal":{"name":"Perspectives on Psychological Science","volume":null,"pages":null},"PeriodicalIF":10.5,"publicationDate":"2024-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41109994","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Social Drivers and Algorithmic Mechanisms on Digital Media.
Pub Date: 2024-09-01, Epub Date: 2023-07-19, DOI: 10.1177/17456916231185057
Hannah Metzler, David Garcia
On digital media, algorithms that process data and recommend content have become ubiquitous. Their fast and barely regulated adoption has raised concerns about their role in well-being at both the individual and collective levels. Algorithmic mechanisms on digital media are powered by social drivers, creating a feedback loop that complicates research seeking to disentangle the role of algorithms from that of already existing social phenomena. Our brief overview of the current evidence on how algorithms affect well-being, misinformation, and polarization suggests that the role of algorithms in these phenomena is far from straightforward and that substantial further empirical research is needed. Existing evidence suggests that algorithms mostly reinforce existing social drivers, a finding that stresses the importance of reflecting on algorithms in the larger societal context that encompasses individualism, populist politics, and climate change. We present concrete ideas and research questions to improve algorithms on digital platforms and to investigate their role in current problems and potential solutions. Finally, we discuss how the current shift from social media to more algorithmically curated media brings both risks and opportunities if algorithms are designed for individual and societal flourishing rather than short-term profit.
{"title":"Social Drivers and Algorithmic Mechanisms on Digital Media.","authors":"Hannah Metzler, David Garcia","doi":"10.1177/17456916231185057","DOIUrl":"10.1177/17456916231185057","url":null,"abstract":"<p><p>On digital media, algorithms that process data and recommend content have become ubiquitous. Their fast and barely regulated adoption has raised concerns about their role in well-being both at the individual and collective levels. Algorithmic mechanisms on digital media are powered by social drivers, creating a feedback loop that complicates research to disentangle the role of algorithms and already existing social phenomena. Our brief overview of the current evidence on how algorithms affect well-being, misinformation, and polarization suggests that the role of algorithms in these phenomena is far from straightforward and that substantial further empirical research is needed. Existing evidence suggests that algorithms mostly reinforce existing social drivers, a finding that stresses the importance of reflecting on algorithms in the larger societal context that encompasses individualism, populist politics, and climate change. We present concrete ideas and research questions to improve algorithms on digital platforms and to investigate their role in current problems and potential solutions. Finally, we discuss how the current shift from social media to more algorithmically curated media brings both risks and opportunities if algorithms are designed for individual and societal flourishing rather than short-term profit.</p>","PeriodicalId":19757,"journal":{"name":"Perspectives on Psychological Science","volume":null,"pages":null},"PeriodicalIF":10.5,"publicationDate":"2024-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11373151/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9822531","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Normative Framework for Assessing the Information Curation Algorithms of the Internet.
Pub Date: 2024-09-01, Epub Date: 2023-11-27, DOI: 10.1177/17456916231186779
David Lazer, Briony Swire-Thompson, Christo Wilson
It is critical to understand how algorithms structure the information people see and how those algorithms support or undermine society's core values. We offer a normative framework for the assessment of the information curation algorithms that determine much of what people see on the internet. The framework presents two levels of assessment: one for individual-level effects and another for systemic effects. With regard to individual-level effects, we discuss whether (a) the information is aligned with the user's interests, (b) the information is accurate, and (c) the information is so appealing that it is difficult for a person's self-regulatory resources to ignore ("agency hacking"). At the systemic level, we discuss whether (a) there are adverse civic-level effects on a system-level variable, such as political polarization; (b) there are negative distributional or discriminatory effects; and (c) there are anticompetitive effects, with the information providing an advantage to the platform. The objective of this framework is both to inform the direction of future scholarship and to offer policymakers tools for intervention.
{"title":"A Normative Framework for Assessing the Information Curation Algorithms of the Internet.","authors":"David Lazer, Briony Swire-Thompson, Christo Wilson","doi":"10.1177/17456916231186779","DOIUrl":"10.1177/17456916231186779","url":null,"abstract":"<p><p>It is critical to understand how algorithms structure the information people see and how those algorithms support or undermine society's core values. We offer a normative framework for the assessment of the information curation algorithms that determine much of what people see on the internet. The framework presents two levels of assessment: one for individual-level effects and another for systemic effects. With regard to individual-level effects we discuss whether (a) the information is aligned with the user's interests, (b) the information is accurate, and (c) the information is so appealing that it is difficult for a person's self-regulatory resources to ignore (\"agency hacking\"). At the systemic level we discuss whether (a) there are adverse civic-level effects on a system-level variable, such as political polarization; (b) there are negative distributional or discriminatory effects; and (c) there are anticompetitive effects, with the information providing an advantage to the platform. The objective of this framework is both to inform the direction of future scholarship as well as to offer tools for intervention for policymakers.</p>","PeriodicalId":19757,"journal":{"name":"Perspectives on Psychological Science","volume":null,"pages":null},"PeriodicalIF":10.5,"publicationDate":"2024-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138445669","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}