Spatial analysis for psychologists: How to use individual-level data for research at the geographically aggregated level.
Pub Date: 2023-10-01. Epub Date: 2022-06-02. DOI: 10.1037/met0000493. Psychological Methods, 28(5), 1100-1121.
Tobias Ebert, Friedrich M Götz, Lars Mewes, P Jason Rentfrow
Psychologists have become increasingly interested in the geographical organization of psychological phenomena. Such studies typically seek to identify geographical variation in psychological characteristics and examine the causes and consequences of that variation. Geo-psychological research offers unique advantages, such as a wide variety of easily obtainable behavioral outcomes. However, studies at the geographically aggregated level also come with unique challenges that require psychologists to work with unfamiliar data formats, sources, measures, and statistical problems. The present article aims to provide psychologists with a methodological roadmap that equips them with basic analytical techniques for geographical analysis. Across five sections, we provide a step-by-step tutorial and walk readers through a full geo-psychological research project. We provide guidance for (a) choosing an appropriate geographical level and aggregating individual data, (b) spatializing data and mapping geographical distributions, (c) creating and managing spatial weights matrices, (d) assessing geographical clustering and identifying distributional patterns, and (e) regressing spatial data using spatial regression models. Throughout the tutorial, we alternate between explanatory sections that feature in-depth background information and hands-on sections that use real data to demonstrate the practical implementation of each step in R. The full R code and all data used in this demonstration are available from the OSF project page accompanying this article. (PsycInfo Database Record (c) 2023 APA, all rights reserved).
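Steps (b) through (e) of this workflow can be sketched with widely used R spatial packages. The following is a minimal, hedged example assuming polygon data in a hypothetical file regions.shp with made-up variable names; the article's own OSF code is the authoritative version.

library(sf)          # read and handle spatial geometries
library(spdep)       # neighbor lists, spatial weights, spatial autocorrelation
library(spatialreg)  # spatial regression models

regions <- st_read("regions.shp")      # hypothetical polygons with aggregated scores
nb <- poly2nb(regions)                 # contiguity-based neighbor list
lw <- nb2listw(nb, style = "W")        # row-standardized spatial weights matrix
moran.test(regions$extraversion, lw)   # global test of geographical clustering
fit <- lagsarlm(well_being ~ extraversion + income, data = regions, listw = lw)
summary(fit)                           # spatial lag regression estimates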
{"title":"Spatial analysis for psychologists: How to use individual-level data for research at the geographically aggregated level.","authors":"Tobias Ebert, Friedrich M Götz, Lars Mewes, P Jason Rentfrow","doi":"10.1037/met0000493","DOIUrl":"10.1037/met0000493","url":null,"abstract":"<p><p>Psychologists have become increasingly interested in the geographical organization of psychological phenomena. Such studies typically seek to identify geographical variation in psychological characteristics and examine the causes and consequences of that variation. Geo-psychological research offers unique advantages, such as a wide variety of easily obtainable behavioral outcomes. However, studies at the geographically aggregate level also come with unique challenges that require psychologists to work with unfamiliar data formats, sources, measures, and statistical problems. The present article aims to present psychologists with a methodological roadmap that equips them with basic analytical techniques for geographical analysis. Across five sections, we provide a step-by-step tutorial and walk readers through a full geo-psychological research project. We provide guidance for (a) choosing an appropriate geographical level and aggregating individual data, (b) spatializing data and mapping geographical distributions, (c) creating and managing spatial weights matrices, (d) assessing geographical clustering and identifying distributional patterns, and (e) regressing spatial data using spatial regression models. Throughout the tutorial, we alternate between explanatory sections that feature in-depth background information and hands-on sections that use real data to demonstrate the practical implementation of each step in R. The full R code and all data used in this demonstration are available from the OSF project page accompanying this article. (PsycInfo Database Record (c) 2023 APA, all rights reserved).</p>","PeriodicalId":20782,"journal":{"name":"Psychological methods","volume":"28 5","pages":"1100-1121"},"PeriodicalIF":7.0,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41210717","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Improved confidence intervals for differences between standardized effect sizes.
Pub Date: 2023-10-01. Epub Date: 2022-04-11. DOI: 10.1037/met0000494. Psychological Methods, 28(5), 1142-1153.
Kevin D Bird
An evaluation of a difference between effect sizes from two dependent variables in a single study is likely to be based on differences between standard scores if raw scores on those variables are not scaled in comparable units of measurement. The standardization used for this purpose is usually sample-based rather than population-based, but the consequences of this distinction for the construction of confidence intervals on differential effects have not been systematically examined. In this article I show that differential effect confidence intervals (CIs) constructed from differences between the standard scores produced by sample-based standardization can be too narrow when those effects are large and dependent variables are highly correlated, particularly in within-subjects designs. I propose a new approach to the construction of differential effect CIs based on differences between adjusted sample-based standard scores that allow conventional CI procedures to produce Bonett-type CIs (Bonett, 2008) on individual effects. Computer simulations show that differential effect CIs constructed from adjusted standard scores can provide much better coverage probabilities than CIs constructed from unadjusted standard scores. (PsycInfo Database Record (c) 2023 APA, all rights reserved).
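For readers who want a feel for the quantity in question, here is a minimal generic sketch in R: a percentile bootstrap CI for the difference between two sample-standardized effects in a within-subjects design. This is not Bonett's (2008) adjusted procedure or the article's proposal; the data are simulated placeholders.

set.seed(1)
n  <- 50
x1 <- rnorm(n, mean = 0.8)        # paired measure 1, true standardized effect ~ .8
x2 <- rnorm(n, mean = 0.3)        # paired measure 2, true standardized effect ~ .3
boot_diff <- replicate(2000, {
  i <- sample(n, replace = TRUE)  # resample participants, keeping pairs intact
  mean(x1[i]) / sd(x1[i]) - mean(x2[i]) / sd(x2[i])  # d1 - d2, sample-based standardization
})
quantile(boot_diff, c(.025, .975))  # 95% percentile bootstrap CI for the difference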
{"title":"Improved confidence intervals for differences between standardized effect sizes.","authors":"Kevin D Bird","doi":"10.1037/met0000494","DOIUrl":"https://doi.org/10.1037/met0000494","url":null,"abstract":"<p><p>An evaluation of a difference between effect sizes from two dependent variables in a single study is likely to be based on differences between standard scores if raw scores on those variables are not scaled in comparable units of measurement. The standardization used for this purpose is usually sample-based rather than population-based, but the consequences of this distinction for the construction of confidence intervals on differential effects have not been systematically examined. In this article I show that differential effect confidence intervals (CIs) constructed from differences between the standard scores produced by sample-based standardization can be too narrow when those effects are large and dependent variables are highly correlated, particularly in within-subjects designs. I propose a new approach to the construction of differential effect CIs based on differences between adjusted sample-based standard scores that allow conventional CI procedures to produce Bonett-type CIs (Bonett, 2008) on individual effects. Computer simulations show that differential effect CIs constructed from adjusted standard scores can provide much better coverage probabilities than CIs constructed from unadjusted standard scores. (PsycInfo Database Record (c) 2023 APA, all rights reserved).</p>","PeriodicalId":20782,"journal":{"name":"Psychological methods","volume":"28 5","pages":"1142-1153"},"PeriodicalIF":7.0,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41210716","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Supervised latent Dirichlet allocation with covariates: A Bayesian structural and measurement model of text and covariates.
Pub Date: 2023-10-01. Epub Date: 2023-01-05. DOI: 10.1037/met0000541. Psychological Methods, pp. 1178-1206.
Kenneth Tyler Wilcox, Ross Jacobucci, Zhiyong Zhang, Brooke A Ammerman
Text is a burgeoning data source for psychological researchers, but little methodological research has focused on adapting popular modeling approaches for text to the context of psychological research. One popular measurement model for text, topic modeling, uses a latent mixture model to represent topics underlying a body of documents. Recently, psychologists have studied relationships between these topics and other psychological measures by using estimates of the topics as regression predictors along with other manifest variables. While similar two-stage approaches involving estimated latent variables are known to yield biased estimates and incorrect standard errors, two-stage topic modeling approaches have received limited statistical study and, as we show, are subject to the same problems. To address these problems, we proposed a novel statistical model, supervised latent Dirichlet allocation with covariates (SLDAX), that jointly incorporates a latent variable measurement model of text and a structural regression model to allow the latent topics and other manifest variables to serve as predictors of an outcome. Using a simulation study with data characteristics consistent with psychological text data, we found that SLDAX estimates were generally more accurate and more efficient than those from the two-stage approach. To illustrate both methods, we provide an empirical clinical application comparing the two-stage and SLDAX approaches. Finally, we implemented the SLDAX model in an open-source R package to facilitate its use and further study. (PsycInfo Database Record (c) 2023 APA, all rights reserved).
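The two-stage approach the article critiques is easy to reproduce with standard tools. A hedged sketch using the topicmodels package follows; the document-term matrix, outcome, and covariate are simulated placeholders, and the article's own SLDAX package should be consulted for the joint model itself.

library(topicmodels)
set.seed(1)
dtm <- matrix(rpois(100 * 25, lambda = 1), nrow = 100)  # 100 documents x 25 terms (simulated counts)
lda_fit <- LDA(dtm, k = 3, control = list(seed = 1))    # stage 1: estimate the topic model
theta <- posterior(lda_fit)$topics                      # estimated document-topic proportions
y         <- rnorm(100)                                 # hypothetical psychological outcome
covariate <- rnorm(100)                                 # hypothetical manifest covariate
summary(lm(y ~ theta[, -1] + covariate))  # stage 2: topics as predictors; one topic dropped since rows of theta sum to 1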
{"title":"Supervised latent Dirichlet allocation with covariates: A Bayesian structural and measurement model of text and covariates.","authors":"Kenneth Tyler Wilcox, Ross Jacobucci, Zhiyong Zhang, Brooke A Ammerman","doi":"10.1037/met0000541","DOIUrl":"10.1037/met0000541","url":null,"abstract":"<p><p>Text is a burgeoning data source for psychological researchers, but little methodological research has focused on adapting popular modeling approaches for text to the context of psychological research. One popular measurement model for text, topic modeling, uses a latent mixture model to represent topics underlying a body of documents. Recently, psychologists have studied relationships between these topics and other psychological measures by using estimates of the topics as regression predictors along with other manifest variables. While similar two-stage approaches involving estimated latent variables are known to yield biased estimates and incorrect standard errors, two-stage topic modeling approaches have received limited statistical study and, as we show, are subject to the same problems. To address these problems, we proposed a novel statistical model-supervised latent Dirichlet allocation with covariates (SLDAX)-that jointly incorporates a latent variable measurement model of text and a structural regression model to allow the latent topics and other manifest variables to serve as predictors of an outcome. Using a simulation study with data characteristics consistent with psychological text data, we found that SLDAX estimates were generally more accurate and more efficient. To illustrate the application of SLDAX and a two-stage approach, we provide an empirical clinical application to compare the application of both the two-stage and SLDAX approaches. Finally, we implemented the SLDAX model in an open-source R package to facilitate its use and further study. (PsycInfo Database Record (c) 2023 APA, all rights reserved).</p>","PeriodicalId":20782,"journal":{"name":"Psychological methods","volume":" ","pages":"1178-1206"},"PeriodicalIF":7.0,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10481498","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pooling methods for likelihood ratio tests in multiply imputed data sets.
Pub Date: 2023-10-01. Epub Date: 2023-04-27. DOI: 10.1037/met0000556. Psychological Methods, pp. 1207-1221.
Simon Grund, Oliver Lüdtke, Alexander Robitzsch
Likelihood ratio tests (LRTs) are a popular tool for comparing statistical models. Missing data, however, are common in empirical research, and multiple imputation (MI) is often used to deal with them. In multiply imputed data, there are multiple options for conducting LRTs, and new methods are still being proposed. In this article, we compare all available methods in multiple simulations covering applications in linear regression, generalized linear models, and structural equation modeling. In addition, we implemented these methods in an R package, and we illustrate its application in an example analysis concerned with the investigation of measurement invariance. (PsycInfo Database Record (c) 2023 APA, all rights reserved).
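As a hedged illustration of one such pooling method (not necessarily the article's own implementation), the mice package provides a likelihood-ratio pooling statistic, D3 (Meng & Rubin, 1992), for models fit to multiply imputed data; nhanes ships with mice.

library(mice)
imp  <- mice(nhanes, m = 20, seed = 1, print = FALSE)  # generate 20 imputations
fit1 <- with(imp, lm(bmi ~ age + hyp))                 # full model in each imputed data set
fit0 <- with(imp, lm(bmi ~ age))                       # restricted model
D3(fit1, fit0)  # pooled likelihood ratio test of hyp (likelihood-based pooling)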
{"title":"Pooling methods for likelihood ratio tests in multiply imputed data sets.","authors":"Simon Grund, Oliver Lüdtke, Alexander Robitzsch","doi":"10.1037/met0000556","DOIUrl":"10.1037/met0000556","url":null,"abstract":"<p><p>Likelihood ratio tests (LRTs) are a popular tool for comparing statistical models. However, missing data are also common in empirical research, and multiple imputation (MI) is often used to deal with them. In multiply imputed data, there are multiple options for conducting LRTs, and new methods are still being proposed. In this article, we compare all available methods in multiple simulations covering applications in linear regression, generalized linear models, and structural equation modeling. In addition, we implemented these methods in an R package, and we illustrate its application in an example analysis concerned with the investigation of measurement invariance. (PsycInfo Database Record (c) 2023 APA, all rights reserved).</p>","PeriodicalId":20782,"journal":{"name":"Psychological methods","volume":" ","pages":"1207-1221"},"PeriodicalIF":7.0,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9356534","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
True and error analysis instead of test of correlated proportions: Can we save lexicographic semiorder models with error theory?
Pub Date: 2023-10-01. Epub Date: 2023-03-23. DOI: 10.1037/met0000557. Psychological Methods, pp. 1087-1099.
Michael H Birnbaum
This article criticizes conclusions drawn from the standard test of correlated proportions when the dependent measure contains error. It presents a tutorial on a new method of analysis based on true and error (TE) theory. This method allows the investigator to separate the measurement of error from substantive conclusions about the effects of the independent variable, but it requires replicated measures of the dependent variable. The method is illustrated with hypothetical examples and with empirical data from tests of lexicographic semiorder (LS) models proposed as descriptive theories of risky decision making. LS models imply a property known as interactive independence. Data from two previous studies are reanalyzed to test interactive independence. The new analyses yielded clear answers: interactive independence can be rejected; therefore, LSs can be rejected as descriptive theories, even when the most flexible error model is allowed. The new method of analysis can be used wherever the test of correlated proportions would be applied, provided that repeated measures can be obtained. (PsycInfo Database Record (c) 2023 APA, all rights reserved).
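The standard test the article criticizes, the test of correlated proportions, is McNemar's test. A minimal sketch with hypothetical choice frequencies follows; a TE analysis would additionally require replicated responses from each person.

# rows: choice in condition 1; columns: choice in condition 2 (hypothetical counts)
choices <- matrix(c(30, 12,
                     4, 54),
                  nrow = 2, byrow = TRUE,
                  dimnames = list(cond1 = c("A", "B"), cond2 = c("A", "B")))
mcnemar.test(choices)  # standard test that choice proportions differ between conditions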
{"title":"True and error analysis instead of test of correlated proportions: Can we save lexicographic semiorder models with error theory?","authors":"Michael H Birnbaum","doi":"10.1037/met0000557","DOIUrl":"10.1037/met0000557","url":null,"abstract":"<p><p>This article criticizes conclusions drawn from the standard test of correlated proportions when the dependent measure contains error. It presents a tutorial on a new method of analysis based on the true and error (TE) theory. This method allows the investigator to separate measurement of error from substantive conclusions about the effects of the independent variable, but it requires replicated measures of the dependent variable. The method is illustrated with hypothetical examples and with empirical data from tests of lexicographic semiorder (LS) models proposed as descriptive theories of risky decision making. LS models imply a property known as interactive independence. Data from two previous studies are reanalyzed to test interactive independence. The new analyses yielded clear answers: interactive independence can be rejected; therefore, LSs can be rejected as descriptive, even when the most flexible error model is allowed. The new methods of analysis can be applied to situations in which the test of correlated proportions would be applied, where it is possible to obtain repeated measures. (PsycInfo Database Record (c) 2023 APA, all rights reserved).</p>","PeriodicalId":20782,"journal":{"name":"Psychological methods","volume":" ","pages":"1087-1099"},"PeriodicalIF":7.0,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9215619","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Bayesian evidence synthesis for informative hypotheses: An introduction.
Pub Date: 2023-09-07. DOI: 10.1037/met0000602
Irene Klugkist, Thom Benjamin Volker
To establish a theory, one needs cleverly designed and well-executed studies with appropriate and correctly interpreted statistical analyses. Equally important, one also needs replications of such studies and a way to combine the results of several replications into an accumulated state of knowledge. An approach that provides an appropriate and powerful analysis for studies targeting prespecified theories is the use of Bayesian informative hypothesis testing. An additional advantage of the use of this Bayesian approach is that combining the results from multiple studies is straightforward. In this article, we discuss the behavior of Bayes factors in the context of evaluating informative hypotheses with multiple studies. By using simple models and (partly) analytical solutions, we introduce and evaluate Bayesian evidence synthesis (BES) and compare its results to Bayesian sequential updating. By doing so, we clarify how different replications or updating questions can be evaluated. In addition, we illustrate BES with two simulations, in which multiple studies are generated to resemble conceptual replications. The studies in these simulations are too heterogeneous to be aggregated with conventional research synthesis methods. (PsycInfo Database Record (c) 2023 APA, all rights reserved).
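A hedged sketch of the core BES idea: compute a Bayes factor for the same informative hypothesis in each study and multiply them. The example below uses the bain package with simulated two-group studies; the column name BF.u for the Bayes factor against the unconstrained model is an assumption about bain's output format.

library(bain)
set.seed(1)
bf_per_study <- sapply(1:3, function(s) {  # three conceptual replications
  d <- data.frame(y = rnorm(60), g = gl(2, 30, labels = c("a", "b")))
  d$y <- d$y + 0.4 * (d$g == "b")          # group b tends to score higher
  fit <- lm(y ~ g - 1, data = d)           # cell-means coding: coefficients ga, gb
  res <- bain(fit, "gb > ga")              # informative hypothesis H1: mu_b > mu_a
  res$fit$BF.u[1]                          # BF of H1 vs. unconstrained (assumed column name)
})
prod(bf_per_study)                         # BES: multiply per-study Bayes factors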
{"title":"Bayesian evidence synthesis for informative hypotheses: An introduction.","authors":"Irene Klugkist, Thom Benjamin Volker","doi":"10.1037/met0000602","DOIUrl":"https://doi.org/10.1037/met0000602","url":null,"abstract":"<p><p>To establish a theory one needs cleverly designed and well-executed studies with appropriate and correctly interpreted statistical analyses. Equally important, one also needs replications of such studies and a way to combine the results of several replications into an accumulated state of knowledge. An approach that provides an appropriate and powerful analysis for studies targeting prespecified theories is the use of Bayesian informative hypothesis testing. An additional advantage of the use of this Bayesian approach is that combining the results from multiple studies is straightforward. In this article, we discuss the behavior of Bayes factors in the context of evaluating informative hypotheses with multiple studies. By using simple models and (partly) analytical solutions, we introduce and evaluate Bayesian evidence synthesis (BES) and compare its results to Bayesian sequential updating. By doing so, we clarify how different replications or updating questions can be evaluated. In addition, we illustrate BES with two simulations, in which multiple studies are generated to resemble conceptual replications. The studies in these simulations are too heterogeneous to be aggregated with conventional research synthesis methods. (PsycInfo Database Record (c) 2023 APA, all rights reserved).</p>","PeriodicalId":20782,"journal":{"name":"Psychological methods","volume":" ","pages":""},"PeriodicalIF":7.0,"publicationDate":"2023-09-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10173540","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Modeling categorical time-to-event data: The example of social interaction dynamics captured with event-contingent experience sampling methods.
Pub Date: 2023-09-07. DOI: 10.1037/met0000598
Timon Elmer, Marijtje A J van Duijn, Nilam Ram, Laura F Bringmann
The depth of information collected in participants' daily lives with active (e.g., experience sampling surveys) and passive (e.g., smartphone sensors) ambulatory measurement methods is immense. When measuring participants' behaviors in daily life, the timing of particular events-such as social interactions-is often recorded. These data facilitate the investigation of new types of research questions about the timing of those events, including whether individuals' affective state is associated with the rate of social interactions (binary event occurrence) and what types of social interactions are likely to occur (multicategory event occurrences, e.g., interactions with friends or family). Although survival analysis methods have been used to analyze time-to-event data in longitudinal settings for several decades, these methods have not yet been incorporated into ambulatory assessment research. This article illustrates how multilevel and multistate survival analysis methods can be used to model the social interaction dynamics captured in intensive longitudinal data, specifically when individuals exhibit particular categories of behavior. We provide an introduction to these models and a tutorial on how the timing and type of social interactions can be modeled using the R statistical programming language. Using event-contingent reports (N = 150, Nevents = 64,112) obtained in an ambulatory study of interpersonal interactions, we further exemplify an empirical application case. In sum, this article demonstrates how survival models can advance the understanding of (social interaction) dynamics that unfold in daily life. (PsycInfo Database Record (c) 2023 APA, all rights reserved).
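As a hedged sketch of the binary-event case (not the article's exact specification), a multilevel Cox model for gap times between interactions can be fit with the survival package's frailty term; the data layout and variable names below are hypothetical.

library(survival)
set.seed(1)
d <- data.frame(
  id     = rep(1:50, each = 10),   # 50 persons, 10 interaction events each
  affect = rnorm(500),             # momentary affect before each waiting period
  time   = rexp(500, rate = 0.5),  # gap time until the next interaction (hours)
  status = 1                       # every waiting period ends in an observed interaction
)
fit <- coxph(Surv(time, status) ~ affect + frailty(id), data = d)
summary(fit)  # does affect relate to the hazard (rate) of social interaction?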
{"title":"Modeling categorical time-to-event data: The example of social interaction dynamics captured with event-contingent experience sampling methods.","authors":"Timon Elmer, Marijtje A J van Duijn, Nilam Ram, Laura F Bringmann","doi":"10.1037/met0000598","DOIUrl":"https://doi.org/10.1037/met0000598","url":null,"abstract":"<p><p>The depth of information collected in participants' daily lives with active (e.g., experience sampling surveys) and passive (e.g., smartphone sensors) ambulatory measurement methods is immense. When measuring participants' behaviors in daily life, the timing of particular events-such as social interactions-is often recorded. These data facilitate the investigation of new types of research questions about the timing of those events, including whether individuals' affective state is associated with the rate of social interactions (binary event occurrence) and what types of social interactions are likely to occur (multicategory event occurrences, e.g., interactions with friends or family). Although survival analysis methods have been used to analyze time-to-event data in longitudinal settings for several decades, these methods have not yet been incorporated into ambulatory assessment research. This article illustrates how multilevel and multistate survival analysis methods can be used to model the social interaction dynamics captured in intensive longitudinal data, specifically <i>when individuals exhibit particular categories of behavior</i>. We provide an introduction to these models and a tutorial on how the timing and type of social interactions can be modeled using the R statistical programming language. Using event-contingent reports (<i>N</i> = 150, <i>N</i><sub>events</sub> = 64,112) obtained in an ambulatory study of interpersonal interactions, we further exemplify an empirical application case. In sum, this article demonstrates how survival models can advance the understanding of (social interaction) dynamics that unfold in daily life. (PsycInfo Database Record (c) 2023 APA, all rights reserved).</p>","PeriodicalId":20782,"journal":{"name":"Psychological methods","volume":" ","pages":""},"PeriodicalIF":7.0,"publicationDate":"2023-09-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10227502","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Applying multivariate generalizability theory to psychological assessments.
Pub Date: 2023-09-07. DOI: 10.1037/met0000606
Walter P Vispoel, Hyeryung Lee, Hyeri Hong, Tingting Chen
Multivariate generalizability theory (GT) represents a comprehensive framework for quantifying score consistency, separating multiple sources contributing to measurement error, correcting correlation coefficients for such error, assessing subscale viability, and determining the best ways to change measurement procedures at different levels of score aggregation. Despite such desirable attributes, multivariate GT has rarely been applied when measuring psychological constructs and far less often than univariate techniques that are subsumed within that framework. Our purpose in this tutorial is to describe multivariate GT in a simple way and illustrate how it expands and complements univariate procedures. We begin with a review of univariate GT designs and illustrate how such designs serve as subcomponents of corresponding multivariate designs. Our empirical examples focus primarily on subscale and composite scores for objectively scored measures, but guidelines are provided for applying the same techniques to subjectively scored performance and clinical assessments. We also compare multivariate GT indices of score consistency and measurement error to those obtained using alternative GT-based procedures and across different software packages for analyzing multivariate GT designs. Our online supplemental materials include instruction, code, and output for common multivariate GT designs analyzed using mGENOVA and the gtheory, glmmTMB, lavaan, and related packages in R. (PsycInfo Database Record (c) 2024 APA, all rights reserved).
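As a hedged orientation to the univariate building block the tutorial starts from (the article itself uses mGENOVA, gtheory, glmmTMB, and lavaan), a simple person x item G-study can be estimated as crossed variance components with lme4; the data are simulated placeholders.

library(lme4)
set.seed(1)
long_data <- expand.grid(person = factor(1:100), item = factor(1:8))
long_data$score <- rnorm(800, sd = 0.7) +     # residual (person x item confounded with error)
  rep(rnorm(100, sd = 1.0), times = 8) +      # person (universe score) effects
  rep(rnorm(8, sd = 0.5), each = 100)         # item effects
fit <- lmer(score ~ 1 + (1 | person) + (1 | item), data = long_data)
vc <- as.data.frame(VarCorr(fit))             # estimated variance components
sigma2_p <- vc$vcov[vc$grp == "person"]
sigma2_e <- vc$vcov[vc$grp == "Residual"]
sigma2_p / (sigma2_p + sigma2_e / 8)  # generalizability coefficient, 8-item relative decision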
{"title":"Applying multivariate generalizability theory to psychological assessments.","authors":"Walter P Vispoel, Hyeryung Lee, Hyeri Hong, Tingting Chen","doi":"10.1037/met0000606","DOIUrl":"10.1037/met0000606","url":null,"abstract":"<p><p>Multivariate generalizability theory (GT) represents a comprehensive framework for quantifying score consistency, separating multiple sources contributing to measurement error, correcting correlation coefficients for such error, assessing subscale viability, and determining the best ways to change measurement procedures at different levels of score aggregation. Despite such desirable attributes, multivariate GT has rarely been applied when measuring psychological constructs and far less often than univariate techniques that are subsumed within that framework. Our purpose in this tutorial is to describe multivariate GT in a simple way and illustrate how it expands and complements univariate procedures. We begin with a review of univariate GT designs and illustrate how such designs serve as subcomponents of corresponding multivariate designs. Our empirical examples focus primarily on subscale and composite scores for objectively scored measures, but guidelines are provided for applying the same techniques to subjectively scored performance and clinical assessments. We also compare multivariate GT indices of score consistency and measurement error to those obtained using alternative GT-based procedures and across different software packages for analyzing multivariate GT designs. Our online supplemental materials include instruction, code, and output for common multivariate GT designs analyzed using <i>mGENOVA</i> and the <i>gtheory</i>, <i>glmmTMB</i>, lavaan, and related packages in R. (PsycInfo Database Record (c) 2024 APA, all rights reserved).</p>","PeriodicalId":20782,"journal":{"name":"Psychological methods","volume":" ","pages":""},"PeriodicalIF":7.6,"publicationDate":"2023-09-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10173543","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Supplemental Material for Modeling Categorical Time-to-Event Data: The Example of Social Interaction Dynamics Captured With Event-Contingent Experience Sampling Methods","authors":"","doi":"10.1037/met0000598.supp","DOIUrl":"https://doi.org/10.1037/met0000598.supp","url":null,"abstract":"","PeriodicalId":20782,"journal":{"name":"Psychological methods","volume":" ","pages":""},"PeriodicalIF":7.0,"publicationDate":"2023-08-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44810122","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Multilevel modeling in single-case studies with count and proportion data: A demonstration and evaluation.
Pub Date: 2023-08-21. DOI: 10.1037/met0000607
Haoran Li, Wen Luo, Eunkyeng Baek, Christopher G Thompson, Kwok Hap Lam
The outcomes in single-case experimental designs (SCEDs) are often counts or proportions. In our study, we provided a colloquial illustration for a new class of generalized linear mixed models (GLMMs) to fit count and proportion data from SCEDs. We also addressed important aspects in the GLMM framework including overdispersion, estimation methods, statistical inferences, model selection methods by detecting overdispersion, and interpretations of regression coefficients. We then demonstrated the GLMMs with two empirical examples with count and proportion outcomes in SCEDs. In addition, we conducted simulation studies to examine the performance of GLMMs in terms of biases and coverage rates for the immediate treatment effect and treatment effect on the trend. We also examined the empirical Type I error rates of statistical tests. Finally, we provided recommendations about how to make sound statistical decisions to use GLMMs based on the findings from simulation studies. Our hope is that this article will provide SCED researchers with the basic information necessary to conduct appropriate statistical analysis of count and proportion data in their own research and outline the future agenda for methodologists to explore the full potential of GLMMs to analyze or meta-analyze SCED data. (PsycInfo Database Record (c) 2023 APA, all rights reserved).
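One member of the model class the article demonstrates can be sketched as follows: a negative binomial GLMM for session-by-session counts across four hypothetical cases with an AB phase structure, fit with glmmTMB. Variable names and data are placeholders, not the article's examples.

library(glmmTMB)
set.seed(1)
d <- data.frame(
  case    = factor(rep(1:4, each = 30)),             # four cases, 30 sessions each
  session = rep(1:30, times = 4),
  phase   = rep(rep(c(0, 1), each = 15), times = 4)  # 0 = baseline, 1 = treatment
)
d$count <- rnbinom(120, mu = exp(1.5 - 0.8 * d$phase), size = 2)  # overdispersed counts
fit <- glmmTMB(count ~ phase + session + (1 | case), family = nbinom2, data = d)
summary(fit)  # exp(phase coefficient) = immediate treatment rate ratio; nbinom2 absorbs overdispersion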
{"title":"Multilevel modeling in single-case studies with count and proportion data: A demonstration and evaluation.","authors":"Haoran Li, Wen Luo, Eunkyeng Baek, Christopher G Thompson, Kwok Hap Lam","doi":"10.1037/met0000607","DOIUrl":"https://doi.org/10.1037/met0000607","url":null,"abstract":"<p><p>The outcomes in single-case experimental designs (SCEDs) are often counts or proportions. In our study, we provided a colloquial illustration for a new class of generalized linear mixed models (GLMMs) to fit count and proportion data from SCEDs. We also addressed important aspects in the GLMM framework including overdispersion, estimation methods, statistical inferences, model selection methods by detecting overdispersion, and interpretations of regression coefficients. We then demonstrated the GLMMs with two empirical examples with count and proportion outcomes in SCEDs. In addition, we conducted simulation studies to examine the performance of GLMMs in terms of biases and coverage rates for the immediate treatment effect and treatment effect on the trend. We also examined the empirical Type I error rates of statistical tests. Finally, we provided recommendations about how to make sound statistical decisions to use GLMMs based on the findings from simulation studies. Our hope is that this article will provide SCED researchers with the basic information necessary to conduct appropriate statistical analysis of count and proportion data in their own research and outline the future agenda for methodologists to explore the full potential of GLMMs to analyze or meta-analyze SCED data. (PsycInfo Database Record (c) 2023 APA, all rights reserved).</p>","PeriodicalId":20782,"journal":{"name":"Psychological methods","volume":" ","pages":""},"PeriodicalIF":7.0,"publicationDate":"2023-08-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10029565","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}