Pub Date: 2024-11-01 | Epub Date: 2023-08-21 | DOI: 10.1177/17456916231180589
Elena Luchkina, Sandra Waxman
Human language permits us to call to mind objects, events, and ideas that we cannot witness directly, either because they are absent or because they have no physical form (e.g., people we have not met, concepts like justice). What enables language to transmit such knowledge? We propose that a referential link between words, referents, and mental representations of those referents is key. This link enables us to form, access, and modify mental representations even when the referents themselves are absent ("absent reference"). In this review we consider the developmental and evolutionary origins of absent reference, integrating previously disparate literatures on absent reference in language and gesture in very young humans and gesture in nonhuman primates. We first evaluate when and how infants acquire absent reference during the process of language acquisition. With this as a foundation, we consider the evidence for absent reference in gesture in infants and in nonhuman primates. Finally, having woven these literatures together, we highlight new lines of research that promise to sharpen our understanding of the development of reference and its role in learning about the absent and the abstract.
{"title":"Talking About the Absent and the Abstract: Referential Communication in Language and Gesture.","authors":"Elena Luchkina, Sandra Waxman","doi":"10.1177/17456916231180589","DOIUrl":"10.1177/17456916231180589","url":null,"abstract":"<p><p>Human language permits us to call to mind objects, events, and ideas that we cannot witness directly, either because they are absent or because they have no physical form (e.g., people we have not met, concepts like justice). What enables language to transmit such knowledge? We propose that a referential link between words, referents, and mental representations of those referents is key. This link enables us to form, access, and modify mental representations even when the referents themselves are absent (\"absent reference\"). In this review we consider the developmental and evolutionary origins of absent reference, integrating previously disparate literatures on absent reference in language and gesture in very young humans and gesture in nonhuman primates. We first evaluate when and how infants acquire absent reference during the process of language acquisition. With this as a foundation, we consider the evidence for absent reference in gesture in infants and in nonhuman primates. 
Finally, having woven these literatures together, we highlight new lines of research that promise to sharpen our understanding of the development of reference and its role in learning about the absent and the abstract.</p>","PeriodicalId":19757,"journal":{"name":"Perspectives on Psychological Science","volume":" ","pages":"978-992"},"PeriodicalIF":10.5,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10879458/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10032511","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-11-01 | Epub Date: 2023-08-29 | DOI: 10.1177/17456916231191774
Sandra C Matz, Emorie D Beck, Olivia E Atherton, Mike White, John F Rauthmann, Dan K Mroczek, Minhee Kim, Tim Bogg
With the rapidly growing availability of scalable psychological assessments, personality science holds great promise for the scientific study and applied use of customized behavior-change interventions. To facilitate this development, we propose a classification system that divides psychological targeting into two approaches that differ in the process by which interventions are designed: audience-to-content matching or content-to-audience matching. This system is both integrative and generative: It allows us to (a) integrate existing research on personalized interventions from different psychological subdisciplines (e.g., political, educational, organizational, consumer, and clinical and health psychology) and to (b) articulate open questions that generate promising new avenues for future research. Our objective is to infuse personality science into intervention research and encourage cross-disciplinary collaborations within and outside of psychology. To ensure the development of personality-customized interventions aligns with the broader interests of individuals (and society at large), we also address important ethical considerations for the use of psychological targeting (e.g., privacy, self-determination, and equity) and offer concrete guidelines for researchers and practitioners.
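The two-way classification proposed in the abstract can be captured as a small data structure. This is an illustrative sketch of ours, not the authors' formalization; the names `TargetingApproach` and `classify_intervention` are invented here.

```python
from enum import Enum

class TargetingApproach(Enum):
    """The proposed two-way split of psychological targeting (illustrative)."""
    AUDIENCE_TO_CONTENT = "start from a known audience, then design or select content for it"
    CONTENT_TO_AUDIENCE = "start from existing content, then find the audience it suits"

def classify_intervention(starts_from_audience: bool) -> TargetingApproach:
    """Assign an intervention design process to one side of the taxonomy."""
    if starts_from_audience:
        return TargetingApproach.AUDIENCE_TO_CONTENT
    return TargetingApproach.CONTENT_TO_AUDIENCE

# E.g., tailoring a message to an already-recruited group starts from the audience:
assert classify_intervention(True) is TargetingApproach.AUDIENCE_TO_CONTENT
# E.g., finding receptive users for an existing campaign starts from the content:
assert classify_intervention(False) is TargetingApproach.CONTENT_TO_AUDIENCE
```

The point of encoding the taxonomy this way is only that the two approaches differ in their design process, not their delivery channel, which is what the abstract emphasizes.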
{"title":"Personality Science in the Digital Age: The Promises and Challenges of Psychological Targeting for Personalized Behavior-Change Interventions at Scale.","authors":"Sandra C Matz, Emorie D Beck, Olivia E Atherton, Mike White, John F Rauthmann, Dan K Mroczek, Minhee Kim, Tim Bogg","doi":"10.1177/17456916231191774","DOIUrl":"10.1177/17456916231191774","url":null,"abstract":"<p><p>With the rapidly growing availability of scalable psychological assessments, personality science holds great promise for the scientific study and applied use of customized behavior-change interventions. To facilitate this development, we propose a classification system that divides psychological targeting into two approaches that differ in the process by which interventions are designed: audience-to-content matching or content-to-audience matching. This system is both integrative and generative: It allows us to (a) integrate existing research on personalized interventions from different psychological subdisciplines (e.g., political, educational, organizational, consumer, and clinical and health psychology) and to (b) articulate open questions that generate promising new avenues for future research. Our objective is to infuse personality science into intervention research and encourage cross-disciplinary collaborations within and outside of psychology. 
To ensure the development of personality-customized interventions aligns with the broader interests of individuals (and society at large), we also address important ethical considerations for the use of psychological targeting (e.g., privacy, self-determination, and equity) and offer concrete guidelines for researchers and practitioners.</p>","PeriodicalId":19757,"journal":{"name":"Perspectives on Psychological Science","volume":" ","pages":"1031-1056"},"PeriodicalIF":10.5,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10114215","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-11-01 | Epub Date: 2023-08-14 | DOI: 10.1177/17456916231187324
Christophe Gernigon, Ruud J R Den Hartigh, Robin R Vallacher, Paul L C van Geert
In the past decade, various recommendations have been published to enhance the methodological rigor and publication standards in psychological science. However, adhering to these recommendations may have limited impact on the reproducibility of causal effects as long as psychological phenomena continue to be viewed as decomposable into separate and additive statistical structures of causal relationships. In this article, we show that (a) psychological phenomena are patterns emerging from nondecomposable and nonisolable complex processes that obey idiosyncratic nonlinear dynamics, (b) these processual features jeopardize the chances of standard reproducibility of statistical results, and (c) these features call on researchers to reconsider what can and should be reproduced, that is, the psychological processes per se, and the signatures of their complexity and dynamics. Accordingly, we argue for a greater consideration of process causality of psychological phenomena reflected by key properties of complex dynamical systems (CDSs). This implies developing and testing formal models of psychological dynamics, which can be implemented by computer simulation. The scope of the CDS paradigm and its convergences with other paradigms are discussed regarding the reproducibility issue. Ironically, the CDS approach could account for both reproducibility and nonreproducibility of the statistical effects usually sought in mainstream psychological science.
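The claim that idiosyncratic nonlinear dynamics undermine result-level reproducibility can be illustrated with a textbook toy model. This is a sketch of ours, not the authors' model: a logistic map in its chaotic regime, where a "replication" whose initial state differs from the original by one part in a million still produces a very different trajectory, even though the generative process is identical in both runs.

```python
# Minimal sketch (not the article's model): a chaotic logistic map shows how a
# nonlinear process defeats standard result-level reproducibility while its
# dynamical signature (the update rule and parameter) remains fully reproducible.

def logistic_map(x0, r=3.9, steps=50):
    """Iterate x_{t+1} = r * x_t * (1 - x_t) from initial state x0."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

run_a = logistic_map(0.400000)
run_b = logistic_map(0.400001)  # a "replication" differing by one part in a million

# Early iterations agree closely, but the trajectories later diverge widely,
# so any statistic computed on the trajectory fails to replicate ...
max_divergence = max(abs(a - b) for a, b in zip(run_a, run_b))
# ... even though the generative process (r and the update rule) is identical.
```

On this view, the reproducible object is the process itself, not any particular trajectory statistic, which is the shift in emphasis the article argues for.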
{"title":"How the Complexity of Psychological Processes Reframes the Issue of Reproducibility in Psychological Science.","authors":"Christophe Gernigon, Ruud J R Den Hartigh, Robin R Vallacher, Paul L C van Geert","doi":"10.1177/17456916231187324","DOIUrl":"10.1177/17456916231187324","url":null,"abstract":"<p><p>In the past decade, various recommendations have been published to enhance the methodological rigor and publication standards in psychological science. However, adhering to these recommendations may have limited impact on the reproducibility of causal effects as long as psychological phenomena continue to be viewed as decomposable into separate and additive statistical structures of causal relationships. In this article, we show that (a) psychological phenomena are patterns emerging from nondecomposable and nonisolable complex processes that obey idiosyncratic nonlinear dynamics, (b) these processual features jeopardize the chances of standard reproducibility of statistical results, and (c) these features call on researchers to reconsider what can and should be reproduced, that is, the psychological processes per se, and the signatures of their complexity and dynamics. Accordingly, we argue for a greater consideration of <i>process causality</i> of psychological phenomena reflected by key properties of complex dynamical systems (CDSs). This implies developing and testing formal models of psychological dynamics, which can be implemented by computer simulation. The scope of the CDS paradigm and its convergences with other paradigms are discussed regarding the reproducibility issue. 
Ironically, the CDS approach could account for <i>both</i> reproducibility and nonreproducibility of the statistical effects usually sought in mainstream psychological science.</p>","PeriodicalId":19757,"journal":{"name":"Perspectives on Psychological Science","volume":" ","pages":"952-977"},"PeriodicalIF":10.5,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9993774","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-11-01 | Epub Date: 2022-11-07 | DOI: 10.1177/17456916221120027
Margaret A Webb, June P Tangney
Psychology is moving increasingly toward digital sources of data, with Amazon's Mechanical Turk (MTurk) at the forefront of that charge. In 2015, up to an estimated 45% of articles published in the top behavioral and social science journals included at least one study conducted on MTurk. Recently, I conducted an online study via MTurk, eager and excited to collect my own data for the first time as a doctoral student. What resulted has prompted me to write this warning: it is indeed too good to be true. This is a summary of how I determined that, at best, I had gathered valid data from 14 human beings, or 2.6% of my participant sample (N = 529). I share these results as a warning and a call for caution.
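The headline figure is simple arithmetic over the screened sample; a minimal sketch (the helper name is ours, and the screening protocol itself is not reproduced here):

```python
# Sketch of the arithmetic behind the abstract's headline figure: 14 valid
# respondents out of N = 529. (How respondents were judged valid is the
# article's subject; only the rate computation is shown.)

def validity_rate(n_valid, n_total):
    """Percentage of a sample judged valid, rounded to one decimal place."""
    return round(100 * n_valid / n_total, 1)

rate = validity_rate(14, 529)
print(f"{rate}% of the sample survived screening")  # prints "2.6% of the sample survived screening"
```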
{"title":"Too Good to Be True: Bots and Bad Data From Mechanical Turk.","authors":"Margaret A Webb, June P Tangney","doi":"10.1177/17456916221120027","DOIUrl":"10.1177/17456916221120027","url":null,"abstract":"<p><p>Psychology is moving increasingly toward digital sources of data, with Amazon's Mechanical Turk (MTurk) at the forefront of that charge. In 2015, up to an estimated 45% of articles published in the top behavioral and social science journals included at least one study conducted on MTurk. In this article, I summarize my own experience with MTurk and how I deduced that my sample was-at best-only 2.6% valid, by my estimate. I share these results as a warning and call for caution. Recently, I conducted an online study via Amazon's MTurk, eager and excited to collect my own data for the first time as a doctoral student. What resulted has prompted me to write this as a warning: it is indeed too good to be true. This is a summary of how I determined that, at best, I had gathered valid data from 14 human beings-2.6% of my participant sample (<i>N</i> = 529).</p>","PeriodicalId":19757,"journal":{"name":"Perspectives on Psychological Science","volume":" ","pages":"887-890"},"PeriodicalIF":10.5,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"40452649","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-09-01 | Epub Date: 2023-07-10 | DOI: 10.1177/17456916231180809
Stephan Lewandowsky, Ronald E Robertson, Renee DiResta
Most content consumed online is curated by proprietary algorithms deployed by social media platforms and search engines. In this article, we explore the interplay between these algorithms and human agency. Specifically, we consider the extent of entanglement or coupling between humans and algorithms along a continuum from implicit to explicit demand. We emphasize that the interactions people have with algorithms not only shape users' experiences in the moment but, because of the mutually shaping nature of such systems, can also have longer-term effects through modifications of the underlying social-network structure. Understanding these mutually shaping systems is challenging given that researchers presently lack access to relevant platform data. We argue that increased transparency, more data sharing, and greater protections for external researchers examining the algorithms are required to help researchers better understand the entanglement between humans and algorithms. This better understanding is essential to support the development of algorithms with greater benefits and fewer risks to the public.
{"title":"Challenges in Understanding Human-Algorithm Entanglement During Online Information Consumption.","authors":"Stephan Lewandowsky, Ronald E Robertson, Renee DiResta","doi":"10.1177/17456916231180809","DOIUrl":"10.1177/17456916231180809","url":null,"abstract":"<p><p>Most content consumed online is curated by proprietary algorithms deployed by social media platforms and search engines. In this article, we explore the interplay between these algorithms and human agency. Specifically, we consider the extent of entanglement or coupling between humans and algorithms along a continuum from implicit to explicit demand. We emphasize that the interactions people have with algorithms not only shape users' experiences in that moment but because of the mutually shaping nature of such systems can also have longer-term effects through modifications of the underlying social-network structure. Understanding these mutually shaping systems is challenging given that researchers presently lack access to relevant platform data. We argue that increased transparency, more data sharing, and greater protections for external researchers examining the algorithms are required to help researchers better understand the entanglement between humans and algorithms. 
This better understanding is essential to support the development of algorithms with greater benefits and fewer risks to the public.</p>","PeriodicalId":19757,"journal":{"name":"Perspectives on Psychological Science","volume":" ","pages":"758-766"},"PeriodicalIF":10.5,"publicationDate":"2024-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11373152/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9765071","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-09-01 | Epub Date: 2023-07-31 | DOI: 10.1177/17456916231180597
Gerd Gigerenzer
Psychological artificial intelligence (AI) applies insights from psychology to design computer algorithms. Its core domain is decision-making under uncertainty, that is, ill-defined situations that can change in unexpected ways rather than well-defined, stable problems, such as chess and Go. Psychological theories about heuristic processes under uncertainty can provide possible insights. I provide two illustrations. The first shows how recency (the human tendency to rely on the most recent information and ignore base rates) can be built into a simple algorithm that predicts the flu substantially better than did Google Flu Trends's big-data algorithms. The second uses a result from memory research (the paradoxical effect that making numbers less precise increases recall) in the design of algorithms that predict recidivism. These case studies provide an existence proof that psychological AI can help design efficient and transparent algorithms.
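As a hedged sketch of the recency heuristic described here (our toy construction with made-up numbers, not Gigerenzer's actual flu model):

```python
# Illustrative sketch of a recency heuristic for one-step-ahead forecasting:
# predict next week's flu-related doctor visits from the single most recent
# observation, ignoring the longer history a big-data model would fit.

def recency_forecast(series):
    """One-step-ahead forecasts: each prediction is the latest observed value."""
    return [series[t - 1] for t in range(1, len(series))]

# Illustrative weekly flu-visit counts (made-up numbers, not real flu data).
weekly_visits = [120, 130, 150, 170, 160, 140]
predictions = recency_forecast(weekly_visits)           # [120, 130, 150, 170, 160]
errors = [abs(p, ) if False else abs(p - a) for p, a in zip(predictions, weekly_visits[1:])]
mean_abs_error = sum(errors) / len(errors)              # 16.0 on this toy series
```

Despite ignoring everything but the latest observation, such parameter-free rules can beat complex models when the environment shifts unpredictably, which is the article's point about uncertainty.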
{"title":"Psychological AI: Designing Algorithms Informed by Human Psychology.","authors":"Gerd Gigerenzer","doi":"10.1177/17456916231180597","DOIUrl":"10.1177/17456916231180597","url":null,"abstract":"<p><p>Psychological artificial intelligence (AI) applies insights from psychology to design computer algorithms. Its core domain is decision-making under uncertainty, that is, ill-defined situations that can change in unexpected ways rather than well-defined, stable problems, such as chess and Go. Psychological theories about heuristic processes under uncertainty can provide possible insights. I provide two illustrations. The first shows how recency-the human tendency to rely on the most recent information and ignore base rates-can be built into a simple algorithm that predicts the flu substantially better than did Google Flu Trends's big-data algorithms. The second uses a result from memory research-the paradoxical effect that making numbers less precise increases recall-in the design of algorithms that predict recidivism. These case studies provide an existence proof that psychological AI can help design efficient and transparent algorithms.</p>","PeriodicalId":19757,"journal":{"name":"Perspectives on Psychological Science","volume":" ","pages":"839-848"},"PeriodicalIF":10.5,"publicationDate":"2024-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11373155/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10274200","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-09-01 | Epub Date: 2023-07-18 | DOI: 10.1177/17456916231180099
Merrick R Osborne, Ali Omrani, Morteza Dehghani
Technological innovations have become a key driver of societal advancements. Nowhere is this more evident than in the field of machine learning (ML), which has developed algorithmic models that shape our decisions, behaviors, and outcomes. These tools have widespread use, in part, because they can synthesize massive amounts of data to make seemingly objective recommendations. Yet, in the past few years, the ML community has been drawing attention to the need for caution when interpreting and using these models. This is because these models are created by humans, from data generated by humans, whose psychology allows for various biases that impact how the models are developed, trained, tested, and interpreted. As psychologists, we thus face a fork in the road: Down the first path, we can continue to use these models without examining and addressing these critical flaws and rely on computer scientists to try to mitigate them. Down the second path, we can turn our expertise in bias toward this growing field, collaborating with computer scientists to reduce the models' deleterious outcomes. This article serves to light the way down the second path by identifying how extant psychological research can help examine and curtail bias in ML models.
{"title":"The Sins of the Parents Are to Be Laid Upon the Children: Biased Humans, Biased Data, Biased Models.","authors":"Merrick R Osborne, Ali Omrani, Morteza Dehghani","doi":"10.1177/17456916231180099","DOIUrl":"10.1177/17456916231180099","url":null,"abstract":"<p><p>Technological innovations have become a key driver of societal advancements. Nowhere is this more evident than in the field of machine learning (ML), which has developed algorithmic models that shape our decisions, behaviors, and outcomes. These tools have widespread use, in part, because they can synthesize massive amounts of data to make seemingly objective recommendations. Yet, in the past few years, the ML community has been drawing attention to the need for caution when interpreting and using these models. This is because these models are created by humans, from data generated by humans, whose psychology allows for various biases that impact how the models are developed, trained, tested, and interpreted. As psychologists, we thus face a fork in the road: Down the first path, we can continue to use these models without examining and addressing these critical flaws and rely on computer scientists to try to mitigate them. Down the second path, we can turn our expertise in bias toward this growing field, collaborating with computer scientists to reduce the models' deleterious outcomes. 
This article serves to light the way down the second path by identifying how extant psychological research can help examine and curtail bias in ML models.</p>","PeriodicalId":19757,"journal":{"name":"Perspectives on Psychological Science","volume":" ","pages":"796-807"},"PeriodicalIF":10.5,"publicationDate":"2024-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10185871","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-09-01 | Epub Date: 2023-07-13 | DOI: 10.1177/17456916231181102
Mark Steyvers, Aakriti Kumar
Artificial intelligence (AI) has the potential to improve human decision-making by providing decision recommendations and problem-relevant information to assist human decision-makers. However, the full realization of the potential of human-AI collaboration continues to face several challenges. First, the conditions that support complementarity (i.e., situations in which the performance of a human with AI assistance exceeds the performance of an unassisted human or the AI in isolation) must be understood. This task requires humans to be able to recognize situations in which the AI should be leveraged and to develop new AI systems that can learn to complement the human decision-maker. Second, human mental models of the AI, which contain both expectations of the AI and reliance strategies, must be accurately assessed. Third, the effects of different design choices for human-AI interaction must be understood, including both the timing of AI assistance and the amount of model information that should be presented to the human decision-maker to avoid cognitive overload and ineffective reliance strategies. In response to each of these three challenges, we present an interdisciplinary perspective based on recent empirical and theoretical findings and discuss new research directions.
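The complementarity condition in the first challenge can be stated in a few lines. This sketch is ours, not the authors' formal definition, and the accuracy numbers are invented for illustration:

```python
# Illustrative statement of complementarity: a human-AI team is complementary
# when its performance exceeds both the unassisted human's and the AI's alone.

def is_complementary(acc_human: float, acc_ai: float, acc_team: float) -> bool:
    """True when the assisted team beats the best solo performer."""
    return acc_team > max(acc_human, acc_ai)

# Made-up accuracies: complementarity holds only when assistance adds value
# beyond simply deferring to whichever agent is stronger.
assert is_complementary(0.70, 0.75, 0.82)        # team beats both agents
assert not is_complementary(0.70, 0.75, 0.74)    # the AI alone was better
```

The hard empirical question the abstract raises is not checking this inequality but identifying in advance the situations where it will hold.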
{"title":"Three Challenges for AI-Assisted Decision-Making.","authors":"Mark Steyvers, Aakriti Kumar","doi":"10.1177/17456916231181102","DOIUrl":"10.1177/17456916231181102","url":null,"abstract":"<p><p>Artificial intelligence (AI) has the potential to improve human decision-making by providing decision recommendations and problem-relevant information to assist human decision-makers. However, the full realization of the potential of human-AI collaboration continues to face several challenges. First, the conditions that support complementarity (i.e., situations in which the performance of a human with AI assistance exceeds the performance of an unassisted human or the AI in isolation) must be understood. This task requires humans to be able to recognize situations in which the AI should be leveraged and to develop new AI systems that can learn to complement the human decision-maker. Second, human mental models of the AI, which contain both expectations of the AI and reliance strategies, must be accurately assessed. Third, the effects of different design choices for human-AI interaction must be understood, including both the timing of AI assistance and the amount of model information that should be presented to the human decision-maker to avoid cognitive overload and ineffective reliance strategies. 
In response to each of these three challenges, we present an interdisciplinary perspective based on recent empirical and theoretical findings and discuss new research directions.</p>","PeriodicalId":19757,"journal":{"name":"Perspectives on Psychological Science","volume":" ","pages":"722-734"},"PeriodicalIF":10.5,"publicationDate":"2024-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11373149/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9770751","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-09-01 | Epub Date: 2023-10-26 | DOI: 10.1177/17456916231201401
Eunice Yiu, Eliza Kosoy, Alison Gopnik
Much discussion about large language models and language-and-vision models has focused on whether these models are intelligent agents. We present an alternative perspective. First, we argue that these artificial intelligence (AI) models are cultural technologies that enhance cultural transmission and are efficient and powerful imitation engines. Second, we explore what AI models can tell us about imitation and innovation by testing whether they can be used to discover new tools and novel causal structures and contrasting their responses with those of human children. Our work serves as a first step in determining which particular representations and competences, as well as which kinds of knowledge or skill, can be derived from particular learning techniques and data. In particular, we explore which kinds of cognitive capacities can be enabled by statistical analysis of large-scale linguistic data. Critically, our findings suggest that machines may need more than large-scale language and image data to allow the kinds of innovation that a small child can produce.
{"title":"Transmission Versus Truth, Imitation Versus Innovation: What Children Can Do That Large Language and Language-and-Vision Models Cannot (Yet).","authors":"Eunice Yiu, Eliza Kosoy, Alison Gopnik","doi":"10.1177/17456916231201401","DOIUrl":"10.1177/17456916231201401","url":null,"abstract":"<p><p>Much discussion about large language models and language-and-vision models has focused on whether these models are intelligent agents. We present an alternative perspective. First, we argue that these artificial intelligence (AI) models are cultural technologies that enhance cultural transmission and are efficient and powerful imitation engines. Second, we explore what AI models can tell us about imitation and innovation by testing whether they can be used to discover new tools and novel causal structures and contrasting their responses with those of human children. Our work serves as a first step in determining which particular representations and competences, as well as which kinds of knowledge or skill, can be derived from particular learning techniques and data. In particular, we explore which kinds of cognitive capacities can be enabled by statistical analysis of large-scale linguistic data. 
Critically, our findings suggest that machines may need more than large-scale language and image data to allow the kinds of innovation that a small child can produce.</p>","PeriodicalId":19757,"journal":{"name":"Perspectives on Psychological Science","volume":" ","pages":"874-883"},"PeriodicalIF":10.5,"publicationDate":"2024-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11373165/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"54230419","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-09-01 | Epub Date: 2023-07-19 | DOI: 10.1177/17456916231185057
Hannah Metzler, David Garcia
On digital media, algorithms that process data and recommend content have become ubiquitous. Their fast and barely regulated adoption has raised concerns about their role in well-being at both the individual and collective levels. Algorithmic mechanisms on digital media are powered by social drivers, creating a feedback loop that complicates efforts to disentangle the role of algorithms from that of already existing social phenomena. Our brief overview of the current evidence on how algorithms affect well-being, misinformation, and polarization suggests that the role of algorithms in these phenomena is far from straightforward and that substantial further empirical research is needed. Existing evidence suggests that algorithms mostly reinforce existing social drivers, a finding that stresses the importance of reflecting on algorithms in the larger societal context that encompasses individualism, populist politics, and climate change. We present concrete ideas and research questions to improve algorithms on digital platforms and to investigate their role in current problems and potential solutions. Finally, we discuss how the current shift from social media to more algorithmically curated media brings both risks and opportunities if algorithms are designed for individual and societal flourishing rather than short-term profit.
{"title":"Social Drivers and Algorithmic Mechanisms on Digital Media.","authors":"Hannah Metzler, David Garcia","doi":"10.1177/17456916231185057","DOIUrl":"10.1177/17456916231185057","url":null,"abstract":"<p><p>On digital media, algorithms that process data and recommend content have become ubiquitous. Their fast and barely regulated adoption has raised concerns about their role in well-being both at the individual and collective levels. Algorithmic mechanisms on digital media are powered by social drivers, creating a feedback loop that complicates research to disentangle the role of algorithms and already existing social phenomena. Our brief overview of the current evidence on how algorithms affect well-being, misinformation, and polarization suggests that the role of algorithms in these phenomena is far from straightforward and that substantial further empirical research is needed. Existing evidence suggests that algorithms mostly reinforce existing social drivers, a finding that stresses the importance of reflecting on algorithms in the larger societal context that encompasses individualism, populist politics, and climate change. We present concrete ideas and research questions to improve algorithms on digital platforms and to investigate their role in current problems and potential solutions. 
Finally, we discuss how the current shift from social media to more algorithmically curated media brings both risks and opportunities if algorithms are designed for individual and societal flourishing rather than short-term profit.</p>","PeriodicalId":19757,"journal":{"name":"Perspectives on Psychological Science","volume":" ","pages":"735-748"},"PeriodicalIF":10.5,"publicationDate":"2024-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11373151/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9822531","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}