We introduce a general method for sample size computations in the context of cross-sectional network models. The method takes the form of an automated Monte Carlo algorithm, designed to find an optimal sample size while iteratively concentrating the computations on the sample sizes that seem most relevant. The method requires three inputs: (1) a hypothesized network structure or desired characteristics of that structure, (2) an estimation performance measure and its corresponding target value (e.g., a sensitivity of 0.6), and (3) a statistic and its corresponding target value that determine how the target value for the performance measure is to be reached (e.g., reaching a sensitivity of 0.6 with a probability of 0.8). The method consists of a Monte Carlo simulation step for computing the performance measure and the statistic for several sample sizes selected from an initial candidate sample size range, a curve-fitting step for interpolating the statistic across the entire candidate range, and a stratified bootstrapping step to quantify the uncertainty around the recommendation provided. We evaluated the performance of the method for the Gaussian Graphical Model, but it can easily be extended to other models. The method displayed good performance, providing sample size recommendations that were, on average, within three observations of a benchmark sample size, with a maximum standard deviation of 25.87 observations. The method discussed is implemented in the form of an R package called powerly, available on GitHub and CRAN. (PsycInfo Database Record (c) 2023 APA, all rights reserved).
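For orientation, the sketch below shows how the three inputs might map onto powerly's interface. The argument names and example values follow our reading of the package documentation and are illustrative; check ?powerly before relying on them.

```r
# Minimal powerly sketch; argument names and values are illustrative.
# install.packages("powerly")
library(powerly)

results <- powerly(
  range_lower = 300,      # initial candidate sample size range
  range_upper = 1000,
  samples = 30,           # sample sizes selected from the range per iteration
  replications = 30,      # Monte Carlo replications per sample size
  model = "ggm",          # input 1: hypothesized Gaussian Graphical Model...
  nodes = 10,             # ...or desired characteristics of its structure
  density = .4,
  measure = "sen",        # input 2: performance measure (sensitivity)...
  measure_value = .6,     # ...and its target value
  statistic = "power",    # input 3: statistic and its target value, i.e.,
  statistic_value = .8    # reach sensitivity .6 with probability .8
)

summary(results)          # recommended sample size with bootstrapped uncertainty
```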
Continuous-time (CT) models are a flexible approach for modeling longitudinal data of psychological constructs. When using CT models, a researcher can assume one underlying continuous function for the phenomenon of interest. In principle, these models overcome some limitations of discrete-time (DT) models and allow researchers to compare findings across measures collected using different time intervals, such as daily, weekly, or monthly intervals. Theoretically, the parameters for equivalent models can be rescaled into a common time interval that allows for comparisons across individuals and studies, irrespective of the time interval used for sampling. In this study, we carry out a Monte Carlo simulation to examine the capability of CT autoregressive (CT-AR) models to recover the true dynamics of a process when the sampling interval differs from the time scale of the true generating process. We use two generating time intervals (daily or weekly) with varying strengths of the AR parameter and assess its recovery when sampled at different intervals (daily, weekly, or monthly). Our findings indicate that sampling at a faster time interval than the generating dynamics can mostly recover the generating AR effects. Sampling at a slower time interval requires stronger generating AR effects for satisfactory recovery; otherwise, the estimation results show high bias and poor coverage. Based on our findings, we recommend that researchers choose sampling intervals guided by theory about the variable under study and, whenever feasible, sample as frequently as possible. (PsycInfo Database Record (c) 2023 APA, all rights reserved).
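The rescaling logic at the heart of CT models can be made concrete. For a univariate CT-AR (Ornstein-Uhlenbeck) process with negative drift, the implied discrete-time AR(1) coefficient at interval Δt is exp(drift × Δt). A short R sketch, where the drift value is illustrative rather than one of the simulation conditions:

```r
# Implied discrete-time AR(1) coefficients from a continuous-time drift.
# For a univariate CT-AR process, phi(dt) = exp(drift * dt), with drift < 0.
# The drift value below is illustrative, not a condition from the study.
drift <- log(0.7)                     # chosen so phi = .70 at a 1-day interval

intervals <- c(daily = 1, weekly = 7, monthly = 30)
phi <- exp(drift * intervals)
round(phi, 3)
#>   daily  weekly monthly
#>   0.700   0.082   0.000   (approximately)
```

A daily AR effect of .70 thus implies a weekly effect of about .08 and an essentially zero monthly effect, which is why slow sampling of weak dynamics leaves little autoregressive signal to recover.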
The accuracy of factor retention methods for structures with one or more general factors, such as those typically encountered in fields like intelligence, personality, and psychopathology, has often been overlooked in dimensionality research. To address this issue, we compared the performance of several factor retention methods in this context, including a network psychometrics approach developed in this study. For estimating the number of group factors, these methods were the Kaiser criterion, empirical Kaiser criterion, parallel analysis with principal components (PAPCA) or principal axis, and exploratory graph analysis with Louvain clustering (EGALV). We then estimated the number of general factors using the factor scores of the first-order solution suggested by the best two methods, yielding a "second-order" version of PAPCA (PAPCA-FS) and EGALV (EGALV-FS). Additionally, we examined the direct multilevel solution provided by EGALV. All the methods were evaluated in an extensive simulation manipulating nine variables of interest, including population error. The results indicated that EGALV and PAPCA displayed the best overall performance in retrieving the true number of group factors, the former being more sensitive to high cross-loadings, and the latter to weak group factors and small samples. Regarding the estimation of the number of general factors, both PAPCA-FS and EGALV-FS showed close-to-perfect accuracy across all the conditions, while EGALV was inaccurate. The methods based on EGA were robust to the conditions most likely to be encountered in practice. Therefore, we highlight the particular usefulness of EGALV (group factors) and EGALV-FS (general factors) for assessing bifactor structures with multiple general factors. (PsycInfo Database Record (c) 2023 APA, all rights reserved).
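The two-step logic behind EGALV-FS can be sketched in a few lines with the EGAnet and psych packages. This mirrors the general idea (first-order EGA with Louvain, then EGA on the factor scores), not the authors' exact implementation, and uses the psych package's bfi data purely for illustration.

```r
# Rough sketch of the EGALV / EGALV-FS two-step logic; illustrative only.
library(EGAnet)   # exploratory graph analysis
library(psych)    # factor scores, example data

items <- na.omit(bfi[, 1:25])

# Step 1 (EGALV): estimate the number of group factors via EGA with
# Louvain community detection.
first_order <- EGA(items, algorithm = "louvain", plot.EGA = FALSE)
k <- first_order$n.dim

# Step 2 (EGALV-FS): compute factor scores for the first-order solution,
# then run EGA on the scores to estimate the number of general factors.
scores <- fa(items, nfactors = k, scores = "regression")$scores
second_order <- EGA(scores, algorithm = "louvain", plot.EGA = FALSE)
second_order$n.dim
```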
Moderation analysis is used to study under what conditions or for which subgroups of individuals a treatment effect is stronger or weaker. When a moderator variable is categorical, such as assigned sex, treatment effects can be estimated for each group, resulting in a treatment effect for males and a treatment effect for females. If a moderator variable is continuous, a strategy for investigating moderated treatment effects is to estimate conditional effects (i.e., simple slopes) via the pick-a-point approach. When conditional effects are estimated using the pick-a-point approach, they are often given the interpretation of "the treatment effect for the subgroup of individuals…." However, interpreting these conditional effects as subgroup effects is potentially misleading, because conditional effects are evaluated at a single specific value of the moderator variable (e.g., +1 SD above the mean). We describe a simple solution that resolves this problem using a simulation-based approach, in which subgroup effects are estimated by defining subgroups through a range of scores on the continuous moderator variable. We apply this method to three empirical examples to demonstrate how to estimate subgroup effects for moderated treatment and moderated mediated effects when the moderator variable is continuous. Finally, we provide researchers with both SAS and R code to implement this method in situations similar to those described in this paper. (PsycInfo Database Record (c) 2023 APA, all rights reserved).
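One way to implement the simulation-based idea is sketched below with simulated data and a hypothetical subgroup definition (moderator scores above the median). This is our illustration of the general logic, drawing parameters from the model-implied sampling distribution, not the authors' SAS/R code.

```r
# Simulation-based subgroup effect for a continuous moderator; the data and
# the subgroup definition are hypothetical.
library(MASS)  # mvrnorm

set.seed(1)
n <- 500
d <- data.frame(m = rnorm(n), tx = rbinom(n, 1, .5))
d$y <- .3 * d$tx + .2 * d$m + .25 * d$tx * d$m + rnorm(n)

fit <- lm(y ~ tx * m, data = d)

# Draw parameter vectors from the estimated sampling distribution.
draws <- mvrnorm(10000, coef(fit), vcov(fit))

# Subgroup defined by a RANGE of moderator scores, not a single point.
m_sub <- d$m[d$m > median(d$m)]

# In each draw, average the conditional effect b_tx + b_int * m over the
# subgroup's observed scores; for a linear interaction this equals the
# effect evaluated at the subgroup's mean moderator score.
effect <- draws[, "tx"] + draws[, "tx:m"] * mean(m_sub)
c(estimate = mean(effect), quantile(effect, c(.025, .975)))
```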
Measurement invariance (MI) is one of the main psychometric requirements for analyses that focus on potentially heterogeneous populations. MI allows researchers to compare latent factor scores across persons from different subgroups, whereas if a measure is not invariant across all items and persons, such comparisons may be misleading. If full MI does not hold, further testing may identify problematic items showing differential item functioning (DIF). Most methods developed to test DIF have focused on simple scenarios, often with comparisons across two groups. In practical applications, this is an oversimplification if many grouping variables (e.g., gender, race) or continuous covariates (e.g., age) exist that might influence the measurement properties of items; these variables are often correlated, making traditional tests that consider each variable separately less useful. Here, we propose the application of Bayesian Moderated Nonlinear Factor Analysis to overcome limitations of traditional approaches to detect DIF. We investigate how modern Bayesian shrinkage priors can be used to identify DIF items in situations with many groups and continuous covariates. We compare the performance of lasso-type, spike-and-slab, and global-local shrinkage priors (e.g., horseshoe) to standard normal and small variance priors. Results indicate that spike-and-slab and lasso priors outperform the other priors. Horseshoe priors provide slightly lower power compared to lasso and spike-and-slab priors. Small variance priors result in very low power to detect DIF with sample sizes below 800, and normal priors may produce severely inflated Type I error rates. We illustrate the approach with data from the PISA 2018 study. (PsycInfo Database Record (c) 2023 APA, all rights reserved).
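To give a feel for the priors being compared, the sketch below draws from each as a prior on a single DIF parameter. The scale choices are illustrative, and the full moderated nonlinear factor model is not shown.

```r
# Draws from the shrinkage priors compared above, placed on one DIF parameter.
# Scale choices are illustrative; the full MNLFA model is not shown.
set.seed(1)
n <- 1e5

normal     <- rnorm(n, 0, 1)                          # standard normal
small_var  <- rnorm(n, 0, sqrt(.01))                  # small-variance normal
lasso      <- sample(c(-1, 1), n, TRUE) * rexp(n)     # double-exponential (lasso)
spike_slab <- ifelse(rbinom(n, 1, .5) == 1,           # mixture of a near-zero
                     rnorm(n, 0, .01), rnorm(n, 0, 1))  # spike and a diffuse slab
horseshoe  <- rnorm(n, 0, abs(rcauchy(n)))            # half-Cauchy local scales

# Heavy tails plus mass near zero let lasso/spike-and-slab/horseshoe shrink
# null DIF effects hard while leaving large DIF effects nearly untouched.
sapply(list(normal = normal, small_var = small_var, lasso = lasso,
            spike_slab = spike_slab, horseshoe = horseshoe),
       function(x) quantile(abs(x), c(.5, .9, .99)))
```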
The last 25 years have shown a steady increase in attention to the Bayes factor as a tool for hypothesis evaluation and model selection. The present review highlights the potential of the Bayes factor in psychological research. We discuss six types of applications: Bayesian evaluation of point null, interval, and informative hypotheses, Bayesian evidence synthesis, Bayesian variable selection and model averaging, and Bayesian evaluation of cognitive models. We elaborate on what each application entails, give illustrative examples, and provide an overview of key references and software with links to other applications. The article concludes with a discussion of the opportunities and pitfalls of Bayes factor applications and a sketch of corresponding future research lines. (PsycInfo Database Record (c) 2023 APA, all rights reserved).
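As a flavor of the first application type, a point-null Bayes factor can be computed in a few lines with the BayesFactor package, one widely used option among the software families such reviews cover; the data here are simulated.

```r
# Point-null Bayes factor for a two-group mean difference; data simulated.
library(BayesFactor)

set.seed(1)
x <- rnorm(50, 0.4)   # group 1
y <- rnorm(50, 0.0)   # group 2

# BF10: evidence for a group difference against the point null (delta = 0),
# using the package's default prior on the effect size.
ttestBF(x = x, y = y)
```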
When randomized controlled trials are not available, regression discontinuity (RD) designs are a viable quasi-experimental method shown to be capable of producing causal estimates of how a program or intervention affects an outcome. While the RD design and many related methodological innovations came from the field of psychology, RDs are underutilized among psychologists, even though many interventions are assigned on the basis of scores from common psychological measures, a situation tailor-made for RDs. In this tutorial, we present a straightforward way to implement an RD model as a structural equation model (SEM). By using SEM, we both situate RDs within a method commonly used in psychology and show how RDs can be implemented in a way that allows one to account for measurement error and avoid measurement model misspecification, both of which often affect psychological measures. We begin with brief Monte Carlo simulation studies to examine the potential benefits of using a latent variable RD model, then transition to an applied example, replete with code and results. The aim of this study is to introduce RD to a broader audience in psychology, as well as to show researchers already familiar with RD how employing an SEM framework can be beneficial. (PsycInfo Database Record (c) 2023 APA, all rights reserved).
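A hedged sketch of what a latent-variable RD model can look like in lavaan follows. The variable names, the single-cutoff linear specification, and the simulated data are illustrative, not the article's exact model.

```r
# Illustrative latent-variable RD model in lavaan; specification hypothetical.
library(lavaan)

set.seed(1)
n <- 1000
score <- rnorm(n)                        # assignment construct, cutoff at 0
a1 <- score + rnorm(n, 0, .5)            # error-prone indicators of the
a2 <- score + rnorm(n, 0, .5)            # running variable
a3 <- score + rnorm(n, 0, .5)
tx <- as.numeric((a1 + a2 + a3) / 3 < 0) # treatment assigned by observed composite
y  <- .5 * tx + .8 * score + rnorm(n)
d  <- data.frame(a1, a2, a3, tx, y)

model <- '
  # Latent assignment (running) variable, measured with error
  assign =~ a1 + a2 + a3

  # RD outcome equation: treatment effect at the cutoff (b1),
  # controlling for the latent running variable
  y ~ b1 * tx + assign
'

fit <- sem(model, data = d)
summary(fit)
```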
Small sample structural equation modeling (SEM) may exhibit serious estimation problems, such as failure to converge, inadmissible solutions, and unstable parameter estimates. A vast literature has compared the performance of different solutions for small sample SEM against unconstrained maximum likelihood (ML) estimation. Less is known, however, about the gains and pitfalls of these solutions relative to each other. We bridge this gap by focusing on three current solutions: constrained ML, Bayesian methods using Markov chain Monte Carlo techniques, and fixed reliability single indicator (SI) approaches. In doing so, we evaluate the potential and boundaries of different parameterizations, constraints, and weakly informative prior distributions for improving the quality of the estimation procedure and stabilizing parameter estimates. The performance of all approaches is compared in a simulation study. Under conditions with low reliabilities, Bayesian methods without additional prior information far outperform both constrained ML and the worst-performing fixed reliability SI approach in terms of accuracy of parameter estimates, and they perform no worse than the best-performing fixed reliability SI approach. Under conditions with high reliabilities, constrained ML shows good performance. Both constrained ML and Bayesian methods exhibit conservative to acceptable Type I error rates. Fixed reliability SI approaches are prone to undercoverage and severe inflation of Type I error rates. Stabilizing effects on Bayesian parameter estimates can be achieved even with mildly incorrect prior information. In an empirical example, we illustrate the practical importance of carefully choosing the method of analysis for small sample SEM. (PsycInfo Database Record (c) 2023 APA, all rights reserved).
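To make the third solution type concrete, here is a minimal sketch of a fixed reliability SI approach in lavaan, assuming the composite's reliability is known. The reliability value (.70) and the simulated data are illustrative.

```r
# Fixed reliability single-indicator sketch: the composite's residual
# variance is fixed to (1 - reliability) * var(composite), not estimated.
library(lavaan)

set.seed(1)
n   <- 60                                  # small sample setting
eta <- rnorm(n)
d   <- data.frame(
  comp_x = eta + rnorm(n, 0, sqrt(3/7)),   # composite with reliability ~ .70
  y      = .4 * eta + rnorm(n)
)

rel   <- .70                               # assumed (known) reliability
err_x <- (1 - rel) * var(d$comp_x)         # fixed measurement error variance

model <- sprintf('
  fx =~ 1 * comp_x
  comp_x ~~ %.4f * comp_x   # residual variance fixed at (1 - rel) * var
  y ~ fx
', err_x)

fit <- sem(model, data = d)
summary(fit)
```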