How Do We Choose Our Giants? Perceptions of Replicability in Psychological Science
Pub Date: 2021-04-01 | DOI: 10.1177/25152459211018199
Manikya Alister, Raine Vickers-Jones, David K. Sewell, T. Ballard
Judgments regarding replicability are vital to scientific progress. The metaphor of “standing on the shoulders of giants” encapsulates the notion that progress is made when new discoveries build on previous findings. Yet attempts to build on findings that are not replicable could mean a great deal of time, effort, and money wasted. In light of the recent “crisis of confidence” in psychological science, the ability to accurately judge the replicability of findings may be more important than ever. In this Registered Report, we examine the factors that influence psychological scientists’ confidence in the replicability of findings. We recruited corresponding authors of articles published in psychology journals between 2014 and 2018 to complete a brief survey in which they were asked to consider 76 specific study attributes that might bear on the replicability of a finding (e.g., preregistration, sample size, statistical methods). Participants were asked to rate the extent to which information regarding each attribute increased or decreased their confidence in the finding being replicated. We examined the extent to which each research attribute influenced average confidence in replicability. We found evidence for six reasonably distinct underlying factors that influenced these judgments and individual differences in the degree to which people’s judgments were influenced by these factors. The conclusions reveal how certain research practices affect other researchers’ perceptions of robustness. We hope our findings will help encourage the use of practices that promote replicability and, by extension, the cumulative progress of psychological science.
{"title":"How Do We Choose Our Giants? Perceptions of Replicability in Psychological Science","authors":"Manikya Alister, Raine Vickers-Jones, David K. Sewell, T. Ballard","doi":"10.1177/25152459211018199","DOIUrl":"https://doi.org/10.1177/25152459211018199","url":null,"abstract":"Judgments regarding replicability are vital to scientific progress. The metaphor of “standing on the shoulders of giants” encapsulates the notion that progress is made when new discoveries build on previous findings. Yet attempts to build on findings that are not replicable could mean a great deal of time, effort, and money wasted. In light of the recent “crisis of confidence” in psychological science, the ability to accurately judge the replicability of findings may be more important than ever. In this Registered Report, we examine the factors that influence psychological scientists’ confidence in the replicability of findings. We recruited corresponding authors of articles published in psychology journals between 2014 and 2018 to complete a brief survey in which they were asked to consider 76 specific study attributes that might bear on the replicability of a finding (e.g., preregistration, sample size, statistical methods). Participants were asked to rate the extent to which information regarding each attribute increased or decreased their confidence in the finding being replicated. We examined the extent to which each research attribute influenced average confidence in replicability. We found evidence for six reasonably distinct underlying factors that influenced these judgments and individual differences in the degree to which people’s judgments were influenced by these factors. The conclusions reveal how certain research practices affect other researchers’ perceptions of robustness. We hope our findings will help encourage the use of practices that promote replicability and, by extension, the cumulative progress of psychological science.","PeriodicalId":55645,"journal":{"name":"Advances in Methods and Practices in Psychological Science","volume":" ","pages":""},"PeriodicalIF":13.6,"publicationDate":"2021-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1177/25152459211018199","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44542864","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Multilab Direct Replication of Flavell, Beach, and Chinsky (1966): Spontaneous Verbal Rehearsal in a Memory Task as a Function of Age
Pub Date: 2021-04-01 | DOI: 10.1177/25152459211018187
E. Elliott, C. Morey, Angela M. AuBuchon, N. Cowan, C. Jarrold, Eryn J. Adams, M. Attwood, Büşra Bayram, Stefen Beeler-Duden, Taran Y. Blakstvedt, G. Büttner, T. Castelain, Shari Cave, D. Crepaldi, E. Fredriksen, Bret A. Glass, Andrew J. Graves, D. Guitard, S. Hoehl, Alexis Hosch, Stephanie Jeanneret, Tanya N Joseph, Christopher Koch, J. Lelonkiewicz, G. Lupyan, A. McDonald, Grace Meissner, W. Mendenhall, D. Moreau, T. Ostermann, A. Özdoğru, Francesca Padovani, S. Poloczek, J. P. Röer, Christina C. Schonberg, C. K. Tamnes, M. Tomasik, B. Valentini, Evie Vergauwe, Haley A. Vlach, M. Voracek
Work by Flavell, Beach, and Chinsky indicated a change in the spontaneous production of overt verbalization behaviors when comparing young children (age 5) with older children (age 10). Despite the critical role that this evidence of a change in verbalization behaviors plays in modern theories of cognitive development and working memory, there has been only one other published near replication of this work. In this Registered Replication Report, we relied on researchers from 17 labs who contributed their results to a larger and more comprehensive sample of children. We assessed memory performance and the presence or absence of verbalization behaviors of young children at different ages and determined that the original pattern of findings was largely upheld: Older children were more likely to verbalize, and their memory spans improved. We confirmed that 5- and 6-year-old children who verbalized recalled more than children who did not verbalize. However, unlike Flavell et al., substantial proportions of our 5- and 6-year-old samples overtly verbalized at least sometimes during the picture memory task. In addition, continuous increase in overt verbalization from 7 to 10 years old was not consistently evident in our samples. These robust findings should be weighed when considering theories of cognitive development, particularly theories concerning when verbal rehearsal emerges and relations between speech and memory.
{"title":"Multilab Direct Replication of Flavell, Beach, and Chinsky (1966): Spontaneous Verbal Rehearsal in a Memory Task as a Function of Age","authors":"E. Elliott, C. Morey, Angela M. AuBuchon, N. Cowan, C. Jarrold, Eryn J. Adams, M. Attwood, Büşra Bayram, Stefen Beeler-Duden, Taran Y. Blakstvedt, G. Büttner, T. Castelain, Shari Cave, D. Crepaldi, E. Fredriksen, Bret A. Glass, Andrew J. Graves, D. Guitard, S. Hoehl, Alexis Hosch, Stephanie Jeanneret, Tanya N Joseph, Christopher Koch, J. Lelonkiewicz, G. Lupyan, A. McDonald, Grace Meissner, W. Mendenhall, D. Moreau, T. Ostermann, A. Özdoğru, Francesca Padovani, S. Poloczek, J. P. Röer, Christina C. Schonberg, C. K. Tamnes, M. Tomasik, B. Valentini, Evie Vergauwe, Haley A. Vlach, M. Voracek","doi":"10.1177/25152459211018187","DOIUrl":"https://doi.org/10.1177/25152459211018187","url":null,"abstract":"Work by Flavell, Beach, and Chinsky indicated a change in the spontaneous production of overt verbalization behaviors when comparing young children (age 5) with older children (age 10). Despite the critical role that this evidence of a change in verbalization behaviors plays in modern theories of cognitive development and working memory, there has been only one other published near replication of this work. In this Registered Replication Report, we relied on researchers from 17 labs who contributed their results to a larger and more comprehensive sample of children. We assessed memory performance and the presence or absence of verbalization behaviors of young children at different ages and determined that the original pattern of findings was largely upheld: Older children were more likely to verbalize, and their memory spans improved. We confirmed that 5- and 6-year-old children who verbalized recalled more than children who did not verbalize. However, unlike Flavell et al., substantial proportions of our 5- and 6-year-old samples overtly verbalized at least sometimes during the picture memory task. In addition, continuous increase in overt verbalization from 7 to 10 years old was not consistently evident in our samples. These robust findings should be weighed when considering theories of cognitive development, particularly theories concerning when verbal rehearsal emerges and relations between speech and memory.","PeriodicalId":55645,"journal":{"name":"Advances in Methods and Practices in Psychological Science","volume":" ","pages":""},"PeriodicalIF":13.6,"publicationDate":"2021-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1177/25152459211018187","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42721355","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Leveraging Containers for Reproducible Psychological Research
Pub Date: 2021-04-01 | DOI: 10.1177/25152459211017853
K. Wiebels, David Moreau
Containers have become increasingly popular in computing and software engineering and are gaining traction in scientific research. They allow packaging up all code and dependencies to ensure that analyses run reliably across a range of operating systems and software versions. Despite being a crucial component for reproducible science, containerization has yet to become mainstream in psychology. In this tutorial, we describe the logic behind containers, what they are, and the practical problems they can solve. We walk the reader through the implementation of containerization within a research workflow with examples using Docker and R. Specifically, we describe how to use existing containers, build personalized containers, and share containers alongside publications. We provide a worked example that includes all steps required to set up a container for a research project and can easily be adapted and extended. We conclude with a discussion of the possibilities afforded by the large-scale adoption of containerization, especially in the context of cumulative, open science, toward a more efficient and inclusive research ecosystem.
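The tutorial itself contains the full worked example; as a rough, standalone illustration of the idea (our own sketch, not the authors' code), the R snippet below writes a minimal Dockerfile and then builds and runs it through Docker's command-line interface. The image name my-analysis, the script analysis.R, and the pinned rocker/r-ver version are placeholders, and Docker must already be installed for the system2() calls to work.

```r
# Minimal sketch of a containerized R analysis (illustrative only, not the
# tutorial's worked example). Assumes Docker is installed and that an
# analysis.R script exists in the project directory; the image name and the
# pinned R version are arbitrary placeholders.
writeLines(c(
  "FROM rocker/r-ver:4.1.2",                 # pin the R version
  "RUN R -e \"install.packages('dplyr')\"",  # bake dependencies into the image
  "COPY analysis.R /home/analysis.R",        # copy the analysis script
  "CMD [\"Rscript\", \"/home/analysis.R\"]"  # run the script at container start
), "Dockerfile")

# Build the image, then run the analysis in an isolated, reproducible environment.
system2("docker", c("build", "-t", "my-analysis", "."))
system2("docker", c("run", "--rm", "my-analysis"))
```

Sharing the resulting Dockerfile (or the built image) alongside a publication lets others rerun the analysis with the same R version and package versions, which is the core reproducibility benefit the tutorial describes.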
{"title":"Leveraging Containers for Reproducible Psychological Research","authors":"K. Wiebels, David Moreau","doi":"10.1177/25152459211017853","DOIUrl":"https://doi.org/10.1177/25152459211017853","url":null,"abstract":"Containers have become increasingly popular in computing and software engineering and are gaining traction in scientific research. They allow packaging up all code and dependencies to ensure that analyses run reliably across a range of operating systems and software versions. Despite being a crucial component for reproducible science, containerization has yet to become mainstream in psychology. In this tutorial, we describe the logic behind containers, what they are, and the practical problems they can solve. We walk the reader through the implementation of containerization within a research workflow with examples using Docker and R. Specifically, we describe how to use existing containers, build personalized containers, and share containers alongside publications. We provide a worked example that includes all steps required to set up a container for a research project and can easily be adapted and extended. We conclude with a discussion of the possibilities afforded by the large-scale adoption of containerization, especially in the context of cumulative, open science, toward a more efficient and inclusive research ecosystem.","PeriodicalId":55645,"journal":{"name":"Advances in Methods and Practices in Psychological Science","volume":" ","pages":""},"PeriodicalIF":13.6,"publicationDate":"2021-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1177/25152459211017853","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47832005","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citation Patterns Following a Strongly Contradictory Replication Result: Four Case Studies From Psychology
Pub Date: 2021-02-09 | DOI: 10.1177/25152459211040837
T. Hardwicke, Dénes Szűcs, Robert T. Thibault, S. Crüwell, O. R. van den Akker, Michèle B. Nuijten, J. Ioannidis
Replication studies that contradict prior findings may facilitate scientific self-correction by triggering a reappraisal of the original studies; however, the research community’s response to replication results has not been studied systematically. One approach for gauging responses to replication results is to examine how they affect citations to original studies. In this study, we explored postreplication citation patterns in the context of four prominent multilaboratory replication attempts published in the field of psychology that strongly contradicted and outweighed prior findings. Generally, we observed a small postreplication decline in the number of favorable citations and a small increase in unfavorable citations. This indicates only modest corrective effects and implies considerable perpetuation of belief in the original findings. Replication results that strongly contradict an original finding do not necessarily nullify its credibility; however, one might at least expect the replication results to be acknowledged and explicitly debated in subsequent literature. By contrast, we found substantial citation bias: The majority of articles citing the original studies neglected to cite relevant replication results. Of those articles that did cite the replication but continued to cite the original study favorably, approximately half offered an explicit defense of the original study. Our findings suggest that even replication results that strongly contradict original findings do not necessarily prompt a corrective response from the research community.
{"title":"Citation Patterns Following a Strongly Contradictory Replication Result: Four Case Studies From Psychology","authors":"T. Hardwicke, Dénes Szűcs, Robert T. Thibault, S. Crüwell, O. R. van den Akker, Michèle B. Nuijten, J. Ioannidis","doi":"10.1177/25152459211040837","DOIUrl":"https://doi.org/10.1177/25152459211040837","url":null,"abstract":"Replication studies that contradict prior findings may facilitate scientific self-correction by triggering a reappraisal of the original studies; however, the research community’s response to replication results has not been studied systematically. One approach for gauging responses to replication results is to examine how they affect citations to original studies. In this study, we explored postreplication citation patterns in the context of four prominent multilaboratory replication attempts published in the field of psychology that strongly contradicted and outweighed prior findings. Generally, we observed a small postreplication decline in the number of favorable citations and a small increase in unfavorable citations. This indicates only modest corrective effects and implies considerable perpetuation of belief in the original findings. Replication results that strongly contradict an original finding do not necessarily nullify its credibility; however, one might at least expect the replication results to be acknowledged and explicitly debated in subsequent literature. By contrast, we found substantial citation bias: The majority of articles citing the original studies neglected to cite relevant replication results. Of those articles that did cite the replication but continued to cite the original study favorably, approximately half offered an explicit defense of the original study. Our findings suggest that even replication results that strongly contradict original findings do not necessarily prompt a corrective response from the research community.","PeriodicalId":55645,"journal":{"name":"Advances in Methods and Practices in Psychological Science","volume":" ","pages":""},"PeriodicalIF":13.6,"publicationDate":"2021-02-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48150510","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
That’s a Lot to Process! Pitfalls of Popular Path Models
Pub Date: 2021-02-01 | DOI: 10.1177/25152459221095827
J. Rohrer, Paul Hünermund, Ruben C. Arslan, M. Elson
Path models to test claims about mediation and moderation are a staple of psychology. But applied researchers may sometimes not understand the underlying causal inference problems and thus endorse conclusions that rest on unrealistic assumptions. In this article, we aim to provide a clear explanation for the limited conditions under which standard procedures for mediation and moderation analysis can succeed. We discuss why reversing arrows or comparing model fit indices cannot tell us which model is the right one and how tests of conditional independence can at least tell us where our model goes wrong. Causal modeling practices in psychology are far from optimal but may be kept alive by domain norms that demand every article makes some novel claim about processes and boundary conditions. We end with a vision for a different research culture in which causal inference is pursued in a much slower, more deliberate, and collaborative manner.
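To make the conditional-independence point concrete, here is a sketch of our own (not code from the article) using the dagitty R package: it lists the independencies a hypothesized full-mediation model implies and tests them against data. The variables X, M, and Y and the simulated data are placeholders for illustration.

```r
# Sketch: checking a causal model's testable implications rather than relying
# on fit indices (illustrative; variable names and data are simulated).
library(dagitty)

# Full-mediation model: X affects Y only through M.
g <- dagitty("dag {
  X -> M
  M -> Y
}")

# The model implies X is independent of Y given M. The arrow-reversed model
# (Y -> M -> X) implies exactly the same independence, which is why reversing
# arrows or comparing fit cannot identify the right model.
impliedConditionalIndependencies(g)

# Simulated data roughly consistent with the model.
set.seed(1)
d <- data.frame(X = rnorm(500))
d$M <- 0.5 * d$X + rnorm(500)
d$Y <- 0.7 * d$M + rnorm(500)

# Test the implied independence; a clear violation flags where the model
# goes wrong, even though it cannot confirm the model is right.
localTests(g, data = d, type = "cis")
```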
{"title":"That’s a Lot to Process! Pitfalls of Popular Path Models","authors":"J. Rohrer, Paul Hünermund, Ruben C. Arslan, M. Elson","doi":"10.1177/25152459221095827","DOIUrl":"https://doi.org/10.1177/25152459221095827","url":null,"abstract":"Path models to test claims about mediation and moderation are a staple of psychology. But applied researchers may sometimes not understand the underlying causal inference problems and thus endorse conclusions that rest on unrealistic assumptions. In this article, we aim to provide a clear explanation for the limited conditions under which standard procedures for mediation and moderation analysis can succeed. We discuss why reversing arrows or comparing model fit indices cannot tell us which model is the right one and how tests of conditional independence can at least tell us where our model goes wrong. Causal modeling practices in psychology are far from optimal but may be kept alive by domain norms that demand every article makes some novel claim about processes and boundary conditions. We end with a vision for a different research culture in which causal inference is pursued in a much slower, more deliberate, and collaborative manner.","PeriodicalId":55645,"journal":{"name":"Advances in Methods and Practices in Psychological Science","volume":" ","pages":""},"PeriodicalIF":13.6,"publicationDate":"2021-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44116149","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Traveler’s Guide to the Multiverse: Promises, Pitfalls, and a Framework for the Evaluation of Analytic Decisions
Pub Date: 2021-01-01 | DOI: 10.1177/2515245920954925
M. Del Giudice, S. Gangestad
Decisions made by researchers while analyzing data (e.g., how to measure variables, how to handle outliers) are sometimes arbitrary, without an objective justification for choosing one alternative over another. Multiverse-style methods (e.g., specification curve, vibration of effects) estimate an effect across an entire set of possible specifications to expose the impact of hidden degrees of freedom and/or obtain robust, less biased estimates of the effect of interest. However, if specifications are not truly arbitrary, multiverse-style analyses can produce misleading results, potentially hiding meaningful effects within a mass of poorly justified alternatives. So far, a key question has received scant attention: How does one decide whether alternatives are arbitrary? We offer a framework and conceptual tools for doing so. We discuss three kinds of a priori nonequivalence among alternatives—measurement nonequivalence, effect nonequivalence, and power/precision nonequivalence. The criteria we review lead to three decision scenarios: Type E decisions (principled equivalence), Type N decisions (principled nonequivalence), and Type U decisions (uncertainty). In uncertain scenarios, multiverse-style analysis should be conducted in a deliberately exploratory fashion. The framework is discussed with reference to published examples and illustrated with the help of a simulated data set. Our framework will help researchers reap the benefits of multiverse-style methods while avoiding their pitfalls.
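As a toy illustration of a multiverse-style analysis (our own construction, not an example from the article), the sketch below estimates the same regression slope under alternative outlier-handling rules; whether those rules are arbitrary or principled is exactly the kind of judgment the framework is meant to guide. Data and cutoffs are invented.

```r
# Toy multiverse sketch (illustrative; data and cutoffs are arbitrary):
# estimate the same effect under alternative outlier-handling specifications.
set.seed(42)
d <- data.frame(x = rnorm(300))
d$y <- 0.3 * d$x + rnorm(300)

# Alternative specifications: keep all observations, or drop cases with
# standardized y beyond 2.5 or 2.0.
z <- (d$y - mean(d$y)) / sd(d$y)
specs <- list(
  keep_all = d,
  trim_2.5 = d[abs(z) < 2.5, ],
  trim_2.0 = d[abs(z) < 2.0, ]
)

# The x -> y slope across the set of specifications; the spread of these
# estimates is the "multiverse" of results.
sapply(specs, function(dat) coef(lm(y ~ x, data = dat))["x"])
```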
{"title":"A Traveler’s Guide to the Multiverse: Promises, Pitfalls, and a Framework for the Evaluation of Analytic Decisions","authors":"M. Del Giudice, S. Gangestad","doi":"10.1177/2515245920954925","DOIUrl":"https://doi.org/10.1177/2515245920954925","url":null,"abstract":"Decisions made by researchers while analyzing data (e.g., how to measure variables, how to handle outliers) are sometimes arbitrary, without an objective justification for choosing one alternative over another. Multiverse-style methods (e.g., specification curve, vibration of effects) estimate an effect across an entire set of possible specifications to expose the impact of hidden degrees of freedom and/or obtain robust, less biased estimates of the effect of interest. However, if specifications are not truly arbitrary, multiverse-style analyses can produce misleading results, potentially hiding meaningful effects within a mass of poorly justified alternatives. So far, a key question has received scant attention: How does one decide whether alternatives are arbitrary? We offer a framework and conceptual tools for doing so. We discuss three kinds of a priori nonequivalence among alternatives—measurement nonequivalence, effect nonequivalence, and power/precision nonequivalence. The criteria we review lead to three decision scenarios: Type E decisions (principled equivalence), Type N decisions (principled nonequivalence), and Type U decisions (uncertainty). In uncertain scenarios, multiverse-style analysis should be conducted in a deliberately exploratory fashion. The framework is discussed with reference to published examples and illustrated with the help of a simulated data set. Our framework will help researchers reap the benefits of multiverse-style methods while avoiding their pitfalls.","PeriodicalId":55645,"journal":{"name":"Advances in Methods and Practices in Psychological Science","volume":" ","pages":""},"PeriodicalIF":13.6,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1177/2515245920954925","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46424382","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A multi-lab study of bilingual infants: Exploring the preference for infant-directed speech.
Pub Date: 2021-01-01 | Epub Date: 2021-03-12 | DOI: 10.1177/2515245920974622
Krista Byers-Heinlein, Angeline Sin Mei Tsui, Christina Bergmann, Alexis K Black, Anna Brown, Maria Julia Carbajal, Samantha Durrant, Christopher T Fennell, Anne-Caroline Fiévet, Michael C Frank, Anja Gampe, Judit Gervain, Nayeli Gonzalez-Gomez, J Kiley Hamlin, Naomi Havron, Mikołaj Hernik, Shila Kerr, Hilary Killam, Kelsey Klassen, Jessica E Kosie, Ágnes Melinda Kovács, Casey Lew-Williams, Liquan Liu, Nivedita Mani, Caterina Marino, Meghan Mastroberardino, Victoria Mateu, Claire Noble, Adriel John Orena, Linda Polka, Christine E Potter, Melanie Schreiner, Leher Singh, Melanie Soderstrom, Megha Sundara, Connor Waddell, Janet F Werker, Stephanie Wermelinger
From the earliest months of life, infants prefer listening to and learn better from infant-directed speech (IDS) than adult-directed speech (ADS). Yet, IDS differs within communities, across languages, and across cultures, both in form and in prevalence. This large-scale, multi-site study used the diversity of bilingual infant experiences to explore the impact of different types of linguistic experience on infants' IDS preference. As part of the multi-lab ManyBabies 1 project, we compared lab-matched samples of 333 bilingual and 385 monolingual infants' preference for North-American English IDS (cf. ManyBabies Consortium, 2020: ManyBabies 1), tested in 17 labs in 7 countries. Those infants were tested in two age groups: 6-9 months (the younger sample) and 12-15 months (the older sample). We found that bilingual and monolingual infants both preferred IDS to ADS, and did not differ in terms of the overall magnitude of this preference. However, amongst bilingual infants who were acquiring North-American English (NAE) as a native language, greater exposure to NAE was associated with a stronger IDS preference, extending the previous finding from ManyBabies 1 that monolinguals learning NAE as a native language showed a stronger preference than infants unexposed to NAE. Together, our findings indicate that IDS preference likely makes a similar contribution to monolingual and bilingual development, and that infants are exquisitely sensitive to the nature and frequency of different types of language input in their early environments.
{"title":"A multi-lab study of bilingual infants: Exploring the preference for infant-directed speech.","authors":"Krista Byers-Heinlein, Angeline Sin Mei Tsui, Christina Bergmann, Alexis K Black, Anna Brown, Maria Julia Carbajal, Samantha Durrant, Christopher T Fennell, Anne-Caroline Fiévet, Michael C Frank, Anja Gampe, Judit Gervain, Nayeli Gonzalez-Gomez, J Kiley Hamlin, Naomi Havron, Mikołaj Hernik, Shila Kerr, Hilary Killam, Kelsey Klassen, Jessica E Kosie, Ágnes Melinda Kovács, Casey Lew-Williams, Liquan Liu, Nivedita Mani, Caterina Marino, Meghan Mastroberardino, Victoria Mateu, Claire Noble, Adriel John Orena, Linda Polka, Christine E Potter, Melanie Schreiner, Leher Singh, Melanie Soderstrom, Megha Sundara, Connor Waddell, Janet F Werker, Stephanie Wermelinger","doi":"10.1177/2515245920974622","DOIUrl":"10.1177/2515245920974622","url":null,"abstract":"<p><p>From the earliest months of life, infants prefer listening to and learn better from infant-directed speech (IDS) than adult-directed speech (ADS). Yet, IDS differs within communities, across languages, and across cultures, both in form and in prevalence. This large-scale, multi-site study used the diversity of bilingual infant experiences to explore the impact of different types of linguistic experience on infants' IDS preference. As part of the multi-lab ManyBabies 1 project, we compared lab-matched samples of 333 bilingual and 385 monolingual infants' preference for North-American English IDS (cf. ManyBabies Consortium, 2020: ManyBabies 1), tested in 17 labs in 7 countries. Those infants were tested in two age groups: 6-9 months (the younger sample) and 12-15 months (the older sample). We found that bilingual and monolingual infants both preferred IDS to ADS, and did not differ in terms of the overall magnitude of this preference. However, amongst bilingual infants who were acquiring North-American English (NAE) as a native language, greater exposure to NAE was associated with a stronger IDS preference, extending the previous finding from ManyBabies 1 that monolinguals learning NAE as a native language showed a stronger preference than infants unexposed to NAE. Together, our findings indicate that IDS preference likely makes a similar contribution to monolingual and bilingual development, and that infants are exquisitely sensitive to the nature and frequency of different types of language input in their early environments.</p>","PeriodicalId":55645,"journal":{"name":"Advances in Methods and Practices in Psychological Science","volume":"4 1","pages":""},"PeriodicalIF":15.6,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9273003/pdf/nihms-1769134.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"40497206","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Acknowledgment
Pub Date: 2021-01-01 | DOI: 10.1177/2515245921993161
{"title":"Acknowledgment","authors":"","doi":"10.1177/2515245921993161","DOIUrl":"https://doi.org/10.1177/2515245921993161","url":null,"abstract":"","PeriodicalId":55645,"journal":{"name":"Advances in Methods and Practices in Psychological Science","volume":" ","pages":""},"PeriodicalIF":13.6,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1177/2515245921993161","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43287024","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Analyzing Individual Differences in Intervention-Related Changes
Pub Date: 2021-01-01 | DOI: 10.1177/2515245920979172
Tanja Könen, J. Karbach
Intervention studies can be expensive and time-consuming, which is why it is important to extract as much knowledge as possible. We discuss benefits and limitations of analyzing individual differences in intervention studies in addition to traditional analyses of average group effects. First, we present a short introduction to latent change modeling and measurement invariance in the context of intervention studies. Then, we give an overview on options for analyzing individual differences in intervention-related changes with a focus on how substantive information can be distinguished from methodological artifacts (e.g., regression to the mean). The main topics are benefits and limitations of predicting changes with baseline data and of analyzing correlated change. Both approaches can offer descriptive correlational information about individuals in interventions, which can inform future variations of experimental conditions. Applications increasingly emerge in the literature—from clinical, developmental, and educational psychology to occupational psychology—and demonstrate their potential across all of psychology.
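A bare-bones latent change score model of the kind the article builds on can be written in lavaan. The sketch below is our own illustration, not the authors' code: the variable names pre and post, the simulated data, and the particular lavaan specification are assumptions made for the example. It estimates mean change, individual differences in change, and the baseline-change relation.

```r
# Minimal latent change score sketch in lavaan (illustrative; 'pre' and 'post'
# are assumed pretest/posttest scores, not variables from the article).
library(lavaan)

model <- '
  change =~ 1*post        # latent change factor, loading fixed to 1
  post ~ 1*pre            # autoregression fixed to 1
  post ~~ 0*post          # no residual variance left in post
  post ~ 0*1              # no separate intercept for post
  change ~ 1              # mean intervention-related change
  change ~~ change        # variance of change = individual differences
  change ~ pre            # predicting change from baseline
'

# Simulated pre/post scores purely for demonstration.
set.seed(7)
d <- data.frame(pre = rnorm(200, 50, 10))
d$post <- d$pre + 5 + rnorm(200, 0, 5)

fit <- sem(model, data = d)
summary(fit)
```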
{"title":"Analyzing Individual Differences in Intervention-Related Changes","authors":"Tanja Könen, J. Karbach","doi":"10.1177/2515245920979172","DOIUrl":"https://doi.org/10.1177/2515245920979172","url":null,"abstract":"Intervention studies can be expensive and time-consuming, which is why it is important to extract as much knowledge as possible. We discuss benefits and limitations of analyzing individual differences in intervention studies in addition to traditional analyses of average group effects. First, we present a short introduction to latent change modeling and measurement invariance in the context of intervention studies. Then, we give an overview on options for analyzing individual differences in intervention-related changes with a focus on how substantive information can be distinguished from methodological artifacts (e.g., regression to the mean). The main topics are benefits and limitations of predicting changes with baseline data and of analyzing correlated change. Both approaches can offer descriptive correlational information about individuals in interventions, which can inform future variations of experimental conditions. Applications increasingly emerge in the literature—from clinical, developmental, and educational psychology to occupational psychology—and demonstrate their potential across all of psychology.","PeriodicalId":55645,"journal":{"name":"Advances in Methods and Practices in Psychological Science","volume":" ","pages":""},"PeriodicalIF":13.6,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1177/2515245920979172","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46600686","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An Introduction to Linear Mixed-Effects Modeling in R
Pub Date: 2021-01-01 | DOI: 10.1177/2515245920960351
V. Brown
This Tutorial serves as both an approachable theoretical introduction to mixed-effects modeling and a practical introduction to how to implement mixed-effects models in R. The intended audience is researchers who have some basic statistical knowledge, but little or no experience implementing mixed-effects models in R using their own data. In an attempt to increase the accessibility of this Tutorial, I deliberately avoid using mathematical terminology beyond what a student would learn in a standard graduate-level statistics course, but I reference articles and textbooks that provide more detail for interested readers. This Tutorial includes snippets of R code throughout; the data and R script used to build the models described in the text are available via OSF at https://osf.io/v6qag/, so readers can follow along if they wish. The goal of this practical introduction is to provide researchers with the tools they need to begin implementing mixed-effects models in their own research.
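The Tutorial's own data and script live at the OSF link above; as a quick, self-contained taste of the syntax (not the Tutorial's example), the sketch below fits a mixed-effects model to lme4's built-in sleepstudy data.

```r
# Quick sketch of lme4 syntax (not the Tutorial's own example, which uses the
# data and script at https://osf.io/v6qag/). Uses lme4's built-in sleepstudy data.
library(lme4)

# Fixed effect of Days, plus a random intercept and random slope for Days
# that vary by Subject.
m <- lmer(Reaction ~ Days + (Days | Subject), data = sleepstudy)

summary(m)   # fixed effects, random-effect variances, residual variance
ranef(m)     # subject-specific deviations from the average intercept and slope
```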
{"title":"An Introduction to Linear Mixed-Effects Modeling in R","authors":"V. Brown","doi":"10.1177/2515245920960351","DOIUrl":"https://doi.org/10.1177/2515245920960351","url":null,"abstract":"This Tutorial serves as both an approachable theoretical introduction to mixed-effects modeling and a practical introduction to how to implement mixed-effects models in R. The intended audience is researchers who have some basic statistical knowledge, but little or no experience implementing mixed-effects models in R using their own data. In an attempt to increase the accessibility of this Tutorial, I deliberately avoid using mathematical terminology beyond what a student would learn in a standard graduate-level statistics course, but I reference articles and textbooks that provide more detail for interested readers. This Tutorial includes snippets of R code throughout; the data and R script used to build the models described in the text are available via OSF at https://osf.io/v6qag/, so readers can follow along if they wish. The goal of this practical introduction is to provide researchers with the tools they need to begin implementing mixed-effects models in their own research.","PeriodicalId":55645,"journal":{"name":"Advances in Methods and Practices in Psychological Science","volume":" ","pages":""},"PeriodicalIF":13.6,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1177/2515245920960351","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45587619","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}