The Evolution of Developmental Theories Since Piaget: A Metaview
Pub Date: 2024-11-01 | Epub Date: 2023-08-10 | DOI: 10.1177/17456916231186611
Philippe Rochat
History counts and cannot be overlooked. As a case in point, the origins of major theoretical tensions in the field of developmental psychology are traced back to Piaget (1896-1980), who paved the way to major discoveries regarding the origins and development of cognition. His theory framed many of the new ideas on early cognitive development that emerged in the 1970s, in the footsteps of the 1960s' cognitive revolution. Here, I retrace major conceptual changes since Piaget and provide a metaview on empirical findings that may have triggered the call for such changes. Nine theoretical views and intuitions are identified, all in strong reaction to some or all of the four cornerstone assumptions of Piaget's developmental account (i.e., action realism, domain generality, stages, and late representation). As a result, new and more extreme stances are now taken in the nature-versus-nurture debate. These stances rest on profoundly different, often clashing theoretical intuitions that have continued to shape developmental research since Piaget.
{"title":"The Evolution of Developmental Theories Since Piaget: A Metaview.","authors":"Philippe Rochat","doi":"10.1177/17456916231186611","DOIUrl":"10.1177/17456916231186611","url":null,"abstract":"<p><p>History counts and cannot be overlooked. As a case in point, the origins of major theoretical tensions in the field of developmental psychology are traced back to Piaget (1896-1980), who paved the way to major discoveries regarding the origins and development of cognition. His theory framed much of the new ideas on early cognitive development that emerged in the 1970s, in the footsteps of the 1960s' cognitive revolution. Here, I retrace major conceptual changes since Piaget and provide a metaview on empirical findings that may have triggered the call for such changes. Nine theoretical views and intuitions are identified, all in strong reaction to some or all of the four cornerstone assumptions of Piaget's developmental account (i.e., action realism, domain generality, stages, and late representation). As a result, new and more extreme stances are now taken in the nature-versus-nurture debate. These stances rest on profoundly different, often clashing theoretical intuitions that keep shaping developmental research since Piaget.</p>","PeriodicalId":19757,"journal":{"name":"Perspectives on Psychological Science","volume":null,"pages":null},"PeriodicalIF":10.5,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10011226","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An Active-Inference Approach to Second-Person Neuroscience
Pub Date: 2024-11-01 | Epub Date: 2023-08-11 | DOI: 10.1177/17456916231188000
Konrad Lehmann, Dimitris Bolis, Karl J Friston, Leonhard Schilbach, Maxwell J D Ramstead, Philipp Kanske
Social neuroscience has often been criticized for approaching the investigation of the neural processes that enable social interaction and cognition from a passive, detached, third-person perspective, without involving any real-time social interaction. With the emergence of second-person neuroscience, investigators have uncovered the unique complexity of neural-activation patterns in actual, real-time interaction. Social cognition that occurs during social interaction is fundamentally different from that unfolding during social observation. However, it remains unclear how the neural correlates of social interaction are to be interpreted. Here, we leverage the active-inference framework to shed light on the mechanisms at play during social interaction in second-person neuroscience studies. Specifically, we show how counterfactually rich mutual predictions, real-time bodily adaptation, and policy selection explain activation in components of the default mode, salience, and frontoparietal networks of the brain, as well as in the basal ganglia. We further argue that these processes constitute the crucial neural processes that underwrite bona fide social interaction. By placing the experimental approach of second-person neuroscience on the theoretical foundation of the active-inference framework, we inform the field of social neuroscience about the mechanisms of real-life interactions. We thereby contribute to the theoretical foundations of empirical second-person neuroscience.
{"title":"An Active-Inference Approach to Second-Person Neuroscience.","authors":"Konrad Lehmann, Dimitris Bolis, Karl J Friston, Leonhard Schilbach, Maxwell J D Ramstead, Philipp Kanske","doi":"10.1177/17456916231188000","DOIUrl":"10.1177/17456916231188000","url":null,"abstract":"<p><p>Social neuroscience has often been criticized for approaching the investigation of the neural processes that enable social interaction and cognition from a passive, detached, third-person perspective, without involving any real-time social interaction. With the emergence of <i>second-person neuroscience</i>, investigators have uncovered the unique complexity of neural-activation patterns in actual, real-time interaction. Social cognition that occurs during social interaction is fundamentally different from that unfolding during social observation. However, it remains unclear how the neural correlates of social interaction are to be interpreted. Here, we leverage the active-inference framework to shed light on the mechanisms at play during social interaction in second-person neuroscience studies. Specifically, we show how counterfactually rich mutual predictions, real-time bodily adaptation, and policy selection explain activation in components of the default mode, salience, and frontoparietal networks of the brain, as well as in the basal ganglia. We further argue that these processes constitute the crucial neural processes that underwrite bona fide social interaction. By placing the experimental approach of second-person neuroscience on the theoretical foundation of the active-inference framework, we inform the field of social neuroscience about the mechanisms of real-life interactions. We thereby contribute to the theoretical foundations of empirical second-person neuroscience.</p>","PeriodicalId":19757,"journal":{"name":"Perspectives on Psychological Science","volume":null,"pages":null},"PeriodicalIF":10.5,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11539477/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10346343","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Too Anecdotal to Be True? Mechanical Turk Is Not All Bots and Bad Data: Response to Webb and Tangney (2022)
Pub Date: 2024-11-01 | Epub Date: 2024-03-07 | DOI: 10.1177/17456916241234328
Melissa G Keith, Alexander S McKay
In response to Webb and Tangney (2022), we call into question the conclusion that data collected on Amazon's Mechanical Turk (MTurk) were "at best-only 2.6% valid" (p. 1). We suggest that Webb and Tangney made certain choices during the study-design and data-collection process that adversely affected the quality of the data collected. As a result, the anecdotal experience of these authors provides weak evidence that MTurk provides low-quality data, as implied. In our commentary, we highlight best-practice recommendations and make suggestions for more effectively collecting and screening online panel data.
{"title":"Too Anecdotal to Be True? Mechanical Turk Is Not All Bots and Bad Data: Response to Webb and Tangney (2022).","authors":"Melissa G Keith, Alexander S McKay","doi":"10.1177/17456916241234328","DOIUrl":"10.1177/17456916241234328","url":null,"abstract":"<p><p>In response to Webb and Tangney (2022) we call into question the conclusion that data collected on Amazon's Mechanical Turk (MTurk) was \"at best-only 2.6% valid\" (p. 1). We suggest that Webb and Tangney made certain choices during the study-design and data-collection process that adversely affected the quality of the data collected. As a result, the anecdotal experience of these authors provides weak evidence that MTurk provides low-quality data as implied. In our commentary we highlight best practice recommendations and make suggestions for more effectively collecting and screening online panel data.</p>","PeriodicalId":19757,"journal":{"name":"Perspectives on Psychological Science","volume":null,"pages":null},"PeriodicalIF":10.5,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140050024","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Shifting the Level of Selection in Science
Pub Date: 2024-11-01 | Epub Date: 2023-08-01 | DOI: 10.1177/17456916231182568
Leo Tiokhin, Karthik Panchanathan, Paul E Smaldino, Daniël Lakens
Criteria for recognizing and rewarding scientists primarily focus on individual contributions. This creates a conflict between what is best for scientists' careers and what is best for science. In this article, we show how the theory of multilevel selection provides conceptual tools for modifying incentives to better align individual and collective interests. A core principle is the need to account for indirect effects by shifting the level at which selection operates from individuals to the groups in which individuals are embedded. This principle is used in several fields to improve collective outcomes, including animal husbandry, team sports, and professional organizations. Shifting the level of selection has the potential to ameliorate several problems in contemporary science, including accounting for scientists' diverse contributions to knowledge generation, reducing individual-level competition, and promoting specialization and team science. We discuss the difficulties associated with shifting the level of selection and outline directions for future development in this domain.
{"title":"Shifting the Level of Selection in Science.","authors":"Leo Tiokhin, Karthik Panchanathan, Paul E Smaldino, Daniël Lakens","doi":"10.1177/17456916231182568","DOIUrl":"10.1177/17456916231182568","url":null,"abstract":"<p><p>Criteria for recognizing and rewarding scientists primarily focus on individual contributions. This creates a conflict between what is best for scientists' careers and what is best for science. In this article, we show how the theory of multilevel selection provides conceptual tools for modifying incentives to better align individual and collective interests. A core principle is the need to account for indirect effects by shifting the level at which selection operates from individuals to the groups in which individuals are embedded. This principle is used in several fields to improve collective outcomes, including animal husbandry, team sports, and professional organizations. Shifting the level of selection has the potential to ameliorate several problems in contemporary science, including accounting for scientists' diverse contributions to knowledge generation, reducing individual-level competition, and promoting specialization and team science. We discuss the difficulties associated with shifting the level of selection and outline directions for future development in this domain.</p>","PeriodicalId":19757,"journal":{"name":"Perspectives on Psychological Science","volume":null,"pages":null},"PeriodicalIF":10.5,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11539478/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9911809","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Incomparability and Incommensurability in Choice: No Common Currency of Value?
Pub Date: 2024-11-01 | Epub Date: 2023-08-29 | DOI: 10.1177/17456916231192828
Lukasz Walasek, Gordon D A Brown
Models of decision-making typically assume the existence of some common currency of value, such as utility, happiness, or inclusive fitness. This common currency is taken to allow comparison of options and to underpin everyday choice. Here we suggest instead that there is no universal value scale, that incommensurable values pervade everyday choice, and hence that most existing models of decision-making in both economics and psychology are fundamentally limited. We propose that choice objects can be compared only with reference to specific but nonuniversal "covering values." These covering values may reflect decision-makers' goals, motivations, or current states. A complete model of choice must accommodate the range of possible covering values. We show that abandoning the common-currency assumption in models of judgment and decision-making necessitates rank-based and "simple heuristics" models that contrast radically with conventional utility-based approaches. We note that if there is no universal value scale, then Arrow's impossibility theorem places severe bounds on the rationality of individual decision-making and hence that there is a deep link between the incommensurability of value, inconsistencies in human decision-making, and rank-based coding of value. More generally, incommensurability raises the question of whether it will ever be possible to develop single-quantity-maximizing models of decision-making.
{"title":"Incomparability and Incommensurability in Choice: No Common Currency of Value?","authors":"Lukasz Walasek, Gordon D A Brown","doi":"10.1177/17456916231192828","DOIUrl":"10.1177/17456916231192828","url":null,"abstract":"<p><p>Models of decision-making typically assume the existence of some common currency of value, such as utility, happiness, or inclusive fitness. This common currency is taken to allow comparison of options and to underpin everyday choice. Here we suggest instead that there is no universal value scale, that incommensurable values pervade everyday choice, and hence that most existing models of decision-making in both economics and psychology are fundamentally limited. We propose that choice objects can be compared only with reference to specific but nonuniversal \"covering values.\" These covering values may reflect decision-makers' goals, motivations, or current states. A complete model of choice must accommodate the range of possible covering values. We show that abandoning the common-currency assumption in models of judgment and decision-making necessitates rank-based and \"simple heuristics\" models that contrast radically with conventional utility-based approaches. We note that if there is no universal value scale, then Arrow's impossibility theorem places severe bounds on the rationality of individual decision-making and hence that there is a deep link between the incommensurability of value, inconsistencies in human decision-making, and rank-based coding of value. More generally, incommensurability raises the question of whether it will ever be possible to develop single-quantity-maximizing models of decision-making.</p>","PeriodicalId":19757,"journal":{"name":"Perspectives on Psychological Science","volume":null,"pages":null},"PeriodicalIF":10.5,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11539466/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10112260","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Critical Perspective on Neural Mechanisms in Cognitive Neuroscience: Towards Unification
Pub Date: 2024-11-01 | Epub Date: 2023-08-29 | DOI: 10.1177/17456916231191744
Sander van Bree
A central pursuit of cognitive neuroscience is to find neural mechanisms of cognition, with research programs favoring different strategies to look for them. But what is a neural mechanism, and how do we know when we have captured one? Here I answer these questions through a framework that integrates Marr's levels with philosophical work on mechanism. From this, the following goal emerges: What needs to be explained are the computations of cognition, with explanation itself given by mechanism, composed of algorithms and the parts of the brain that realize them. This reveals a delineation within cognitive neuroscience research. In the premechanism stage, the computations of cognition are linked to phenomena in the brain, narrowing down where and when mechanisms are situated in space and time. In the mechanism stage, it is established how computation emerges from organized interactions between parts, filling the premechanistic mold. I explain why a shift toward mechanistic modeling helps us meet our aims while outlining a road map for doing so. Finally, I argue that the explanatory scope of neural mechanisms can be approximated by effect sizes collected across studies, not just conceptual analysis. Together, these points synthesize a mechanistic agenda that allows subfields to connect at the level of theory.
{"title":"A Critical Perspective on Neural Mechanisms in Cognitive Neuroscience: Towards Unification.","authors":"Sander van Bree","doi":"10.1177/17456916231191744","DOIUrl":"10.1177/17456916231191744","url":null,"abstract":"<p><p>A central pursuit of cognitive neuroscience is to find neural mechanisms of cognition, with research programs favoring different strategies to look for them. But what is a neural mechanism, and how do we know we have captured them? Here I answer these questions through a framework that integrates Marr's levels with philosophical work on mechanism. From this, the following goal emerges: What needs to be explained are the computations of cognition, with explanation itself given by mechanism-composed of algorithms and parts of the brain that realize them. This reveals a delineation within cognitive neuroscience research. In the <i>premechanism stage</i>, the computations of cognition are linked to phenomena in the brain, narrowing down where and when mechanisms are situated in space and time. In the <i>mechanism stage</i>, it is established how computation emerges from organized interactions between parts-filling the premechanistic mold. I explain why a shift toward mechanistic modeling helps us meet our aims while outlining a road map for doing so. Finally, I argue that the explanatory scope of neural mechanisms can be approximated by effect sizes collected across studies, not just conceptual analysis. Together, these points synthesize a mechanistic agenda that allows subfields to connect at the level of theory.</p>","PeriodicalId":19757,"journal":{"name":"Perspectives on Psychological Science","volume":null,"pages":null},"PeriodicalIF":10.5,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11539489/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10114217","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Talking About the Absent and the Abstract: Referential Communication in Language and Gesture
Pub Date: 2024-11-01 | Epub Date: 2023-08-21 | DOI: 10.1177/17456916231180589
Elena Luchkina, Sandra Waxman
Human language permits us to call to mind objects, events, and ideas that we cannot witness directly, either because they are absent or because they have no physical form (e.g., people we have not met, concepts like justice). What enables language to transmit such knowledge? We propose that a referential link between words, referents, and mental representations of those referents is key. This link enables us to form, access, and modify mental representations even when the referents themselves are absent ("absent reference"). In this review we consider the developmental and evolutionary origins of absent reference, integrating previously disparate literatures on absent reference in language and gesture in very young humans and gesture in nonhuman primates. We first evaluate when and how infants acquire absent reference during the process of language acquisition. With this as a foundation, we consider the evidence for absent reference in gesture in infants and in nonhuman primates. Finally, having woven these literatures together, we highlight new lines of research that promise to sharpen our understanding of the development of reference and its role in learning about the absent and the abstract.
{"title":"Talking About the Absent and the Abstract: Referential Communication in Language and Gesture.","authors":"Elena Luchkina, Sandra Waxman","doi":"10.1177/17456916231180589","DOIUrl":"10.1177/17456916231180589","url":null,"abstract":"<p><p>Human language permits us to call to mind objects, events, and ideas that we cannot witness directly, either because they are absent or because they have no physical form (e.g., people we have not met, concepts like justice). What enables language to transmit such knowledge? We propose that a referential link between words, referents, and mental representations of those referents is key. This link enables us to form, access, and modify mental representations even when the referents themselves are absent (\"absent reference\"). In this review we consider the developmental and evolutionary origins of absent reference, integrating previously disparate literatures on absent reference in language and gesture in very young humans and gesture in nonhuman primates. We first evaluate when and how infants acquire absent reference during the process of language acquisition. With this as a foundation, we consider the evidence for absent reference in gesture in infants and in nonhuman primates. Finally, having woven these literatures together, we highlight new lines of research that promise to sharpen our understanding of the development of reference and its role in learning about the absent and the abstract.</p>","PeriodicalId":19757,"journal":{"name":"Perspectives on Psychological Science","volume":null,"pages":null},"PeriodicalIF":10.5,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10879458/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10032511","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
How the Complexity of Psychological Processes Reframes the Issue of Reproducibility in Psychological Science
Pub Date: 2024-11-01 | Epub Date: 2023-08-14 | DOI: 10.1177/17456916231187324
Christophe Gernigon, Ruud J R Den Hartigh, Robin R Vallacher, Paul L C van Geert
In the past decade, various recommendations have been published to enhance the methodological rigor and publication standards in psychological science. However, adhering to these recommendations may have limited impact on the reproducibility of causal effects as long as psychological phenomena continue to be viewed as decomposable into separate and additive statistical structures of causal relationships. In this article, we show that (a) psychological phenomena are patterns emerging from nondecomposable and nonisolable complex processes that obey idiosyncratic nonlinear dynamics, (b) these processual features jeopardize the chances of standard reproducibility of statistical results, and (c) these features call on researchers to reconsider what can and should be reproduced, that is, the psychological processes per se, and the signatures of their complexity and dynamics. Accordingly, we argue for a greater consideration of process causality of psychological phenomena reflected by key properties of complex dynamical systems (CDSs). This implies developing and testing formal models of psychological dynamics, which can be implemented by computer simulation. The scope of the CDS paradigm and its convergences with other paradigms are discussed regarding the reproducibility issue. Ironically, the CDS approach could account for both reproducibility and nonreproducibility of the statistical effects usually sought in mainstream psychological science.
{"title":"How the Complexity of Psychological Processes Reframes the Issue of Reproducibility in Psychological Science.","authors":"Christophe Gernigon, Ruud J R Den Hartigh, Robin R Vallacher, Paul L C van Geert","doi":"10.1177/17456916231187324","DOIUrl":"10.1177/17456916231187324","url":null,"abstract":"<p><p>In the past decade, various recommendations have been published to enhance the methodological rigor and publication standards in psychological science. However, adhering to these recommendations may have limited impact on the reproducibility of causal effects as long as psychological phenomena continue to be viewed as decomposable into separate and additive statistical structures of causal relationships. In this article, we show that (a) psychological phenomena are patterns emerging from nondecomposable and nonisolable complex processes that obey idiosyncratic nonlinear dynamics, (b) these processual features jeopardize the chances of standard reproducibility of statistical results, and (c) these features call on researchers to reconsider what can and should be reproduced, that is, the psychological processes per se, and the signatures of their complexity and dynamics. Accordingly, we argue for a greater consideration of <i>process causality</i> of psychological phenomena reflected by key properties of complex dynamical systems (CDSs). This implies developing and testing formal models of psychological dynamics, which can be implemented by computer simulation. The scope of the CDS paradigm and its convergences with other paradigms are discussed regarding the reproducibility issue. Ironically, the CDS approach could account for <i>both</i> reproducibility and nonreproducibility of the statistical effects usually sought in mainstream psychological science.</p>","PeriodicalId":19757,"journal":{"name":"Perspectives on Psychological Science","volume":null,"pages":null},"PeriodicalIF":10.5,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9993774","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Personality Science in the Digital Age: The Promises and Challenges of Psychological Targeting for Personalized Behavior-Change Interventions at Scale
Pub Date: 2024-11-01 | Epub Date: 2023-08-29 | DOI: 10.1177/17456916231191774
Sandra C Matz, Emorie D Beck, Olivia E Atherton, Mike White, John F Rauthmann, Dan K Mroczek, Minhee Kim, Tim Bogg
With the rapidly growing availability of scalable psychological assessments, personality science holds great promise for the scientific study and applied use of customized behavior-change interventions. To facilitate this development, we propose a classification system that divides psychological targeting into two approaches that differ in the process by which interventions are designed: audience-to-content matching or content-to-audience matching. This system is both integrative and generative: It allows us to (a) integrate existing research on personalized interventions from different psychological subdisciplines (e.g., political, educational, organizational, consumer, and clinical and health psychology) and to (b) articulate open questions that generate promising new avenues for future research. Our objective is to infuse personality science into intervention research and encourage cross-disciplinary collaborations within and outside of psychology. To ensure the development of personality-customized interventions aligns with the broader interests of individuals (and society at large), we also address important ethical considerations for the use of psychological targeting (e.g., privacy, self-determination, and equity) and offer concrete guidelines for researchers and practitioners.
{"title":"Personality Science in the Digital Age: The Promises and Challenges of Psychological Targeting for Personalized Behavior-Change Interventions at Scale.","authors":"Sandra C Matz, Emorie D Beck, Olivia E Atherton, Mike White, John F Rauthmann, Dan K Mroczek, Minhee Kim, Tim Bogg","doi":"10.1177/17456916231191774","DOIUrl":"10.1177/17456916231191774","url":null,"abstract":"<p><p>With the rapidly growing availability of scalable psychological assessments, personality science holds great promise for the scientific study and applied use of customized behavior-change interventions. To facilitate this development, we propose a classification system that divides psychological targeting into two approaches that differ in the process by which interventions are designed: audience-to-content matching or content-to-audience matching. This system is both integrative and generative: It allows us to (a) integrate existing research on personalized interventions from different psychological subdisciplines (e.g., political, educational, organizational, consumer, and clinical and health psychology) and to (b) articulate open questions that generate promising new avenues for future research. Our objective is to infuse personality science into intervention research and encourage cross-disciplinary collaborations within and outside of psychology. To ensure the development of personality-customized interventions aligns with the broader interests of individuals (and society at large), we also address important ethical considerations for the use of psychological targeting (e.g., privacy, self-determination, and equity) and offer concrete guidelines for researchers and practitioners.</p>","PeriodicalId":19757,"journal":{"name":"Perspectives on Psychological Science","volume":null,"pages":null},"PeriodicalIF":10.5,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10114215","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Too Good to Be True: Bots and Bad Data From Mechanical Turk
Pub Date: 2024-11-01 | Epub Date: 2022-11-07 | DOI: 10.1177/17456916221120027
Margaret A Webb, June P Tangney
Psychology is moving increasingly toward digital sources of data, with Amazon's Mechanical Turk (MTurk) at the forefront of that charge. In 2015, up to an estimated 45% of articles published in the top behavioral and social science journals included at least one study conducted on MTurk. In this article, I summarize my own experience with MTurk and how I deduced that my sample was, by my estimate, at best only 2.6% valid. I share these results as a warning and a call for caution. Recently, I conducted an online study via MTurk, eager and excited to collect my own data for the first time as a doctoral student. What resulted has prompted me to write this warning: It is indeed too good to be true. This is a summary of how I determined that, at best, I had gathered valid data from 14 human beings, 2.6% of my participant sample (N = 529).
{"title":"Too Good to Be True: Bots and Bad Data From Mechanical Turk.","authors":"Margaret A Webb, June P Tangney","doi":"10.1177/17456916221120027","DOIUrl":"10.1177/17456916221120027","url":null,"abstract":"<p><p>Psychology is moving increasingly toward digital sources of data, with Amazon's Mechanical Turk (MTurk) at the forefront of that charge. In 2015, up to an estimated 45% of articles published in the top behavioral and social science journals included at least one study conducted on MTurk. In this article, I summarize my own experience with MTurk and how I deduced that my sample was-at best-only 2.6% valid, by my estimate. I share these results as a warning and call for caution. Recently, I conducted an online study via Amazon's MTurk, eager and excited to collect my own data for the first time as a doctoral student. What resulted has prompted me to write this as a warning: it is indeed too good to be true. This is a summary of how I determined that, at best, I had gathered valid data from 14 human beings-2.6% of my participant sample (<i>N</i> = 529).</p>","PeriodicalId":19757,"journal":{"name":"Perspectives on Psychological Science","volume":null,"pages":null},"PeriodicalIF":10.5,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"40452649","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}