A posterior expected value approach to decision-making in the multiphase optimization strategy for intervention science.
Pub Date: 2024-08-01 | Epub Date: 2023-04-13 | DOI: 10.1037/met0000569 | Psychological Methods, 656-678
Jillian C Strayhorn, Linda M Collins, David J Vanness
In current practice, intervention scientists applying the multiphase optimization strategy (MOST) with a 2^k factorial optimization trial use a component screening approach (CSA) to select intervention components for inclusion in an optimized intervention. In this approach, scientists review all estimated main effects and interactions to identify the important ones based on a fixed threshold, and then base decisions about component selection on these important effects. We propose an alternative posterior expected value approach based on Bayesian decision theory. This new approach aims to be easier to apply and more readily extensible to a variety of intervention optimization problems. We used Monte Carlo simulation to evaluate the performance of a posterior expected value approach and CSA (automated for simulation purposes) relative to two benchmarks: random component selection, and the classical treatment package approach. We found that both the posterior expected value approach and CSA yielded substantial performance gains relative to the benchmarks. We also found that the posterior expected value approach outperformed CSA modestly but consistently in terms of overall accuracy, sensitivity, and specificity, across a wide range of realistic variations in simulated factorial optimization trials. We discuss implications for intervention optimization and promising future directions in the use of posterior expected value to make decisions in MOST. (PsycInfo Database Record (c) 2024 APA, all rights reserved).
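A loose illustration of the two decision rules contrasted above, as a Python sketch: component selection from posterior draws versus a fixed-threshold screening rule. Everything here (effect sizes, thresholds, the main-effects-only setup) is my own assumption, not the authors' actual procedure, which also handles interactions and decision-theoretic weighting.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical posterior draws for main effects in a 2^k factorial trial
# (k = 3 components; interactions omitted for brevity). In practice these
# would come from a fitted Bayesian model of the trial outcome.
k, n_draws = 3, 4000
posterior_main = rng.normal(loc=[0.4, 0.05, -0.1], scale=0.15,
                            size=(n_draws, k))

# Posterior expected value rule (sketch): include a component if the
# expected outcome is higher with it switched on than off, i.e., if its
# posterior mean effect is positive.
include = posterior_main.mean(axis=0) > 0
print("expected-value selection:", include)

# A CSA-flavored rule for contrast: treat an effect as "important" only if
# it clears a fixed evidential threshold (here, posterior probability of a
# positive effect above .975), then select on the important effects.
csa_important = (posterior_main > 0).mean(axis=0) > 0.975
print("screening-style selection:", csa_important)
```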
Bayesian regularization in multiple-indicators multiple-causes models.
Pub Date: 2024-08-01 | Epub Date: 2023-07-27 | DOI: 10.1037/met0000594 | Psychological Methods, 679-703
Lijin Zhang, Xinya Liang
Integrating regularization methods into structural equation modeling is gaining increasing popularity. The purpose of regularization is to improve variable selection, model estimation, and prediction accuracy. In this study, we aim to: (a) compare Bayesian regularization methods for exploring covariate effects in multiple-indicators multiple-causes models, (b) examine the sensitivity of results to hyperparameter settings of penalty priors, and (c) investigate prediction accuracy through cross-validation. The Bayesian regularization methods examined included: ridge, lasso, adaptive lasso, spike-and-slab prior (SSP) and its variants, and horseshoe and its variants. Sparse solutions were developed for the structural coefficient matrix that contained only a small portion of nonzero path coefficients characterizing the effects of selected covariates on the latent variable. Results from the simulation study showed that compared to diffuse priors, penalty priors were advantageous in handling small sample sizes and collinearity among covariates. Priors with only the global penalty (ridge and lasso) yielded higher model convergence rates and power, whereas priors with both the global and local penalties (horseshoe and SSP) provided more accurate parameter estimates for medium and large covariate effects. The horseshoe and SSP improved accuracy in predicting factor scores, while achieving more parsimonious models. (PsycInfo Database Record (c) 2024 APA, all rights reserved).
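A small sketch of the qualitative difference between a global-only penalty and a global-plus-local penalty, sampling from a ridge (normal) prior versus a horseshoe prior. The scales are arbitrary illustrations, not the hyperparameter settings studied in the article.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000

# Global penalty only: ridge corresponds to a normal prior on each path.
ridge = rng.normal(0.0, 1.0, n)

# Global and local penalties: horseshoe draws beta | lambda, tau ~
# N(0, (lambda * tau)^2) with local scales lambda ~ half-Cauchy(0, 1)
# and a global scale tau (fixed here for simplicity).
tau = 0.1
lam = np.abs(rng.standard_cauchy(n))
horseshoe = rng.normal(0.0, 1.0, n) * lam * tau

# The horseshoe puts far more mass near zero (strong shrinkage of null
# paths) and in the tails (weak shrinkage of large paths) than ridge.
for name, draws in [("ridge", ridge), ("horseshoe", horseshoe)]:
    print(f"{name:9s} P(|b| < 0.01) = {np.mean(np.abs(draws) < 0.01):.3f}"
          f"  P(|b| > 3) = {np.mean(np.abs(draws) > 3.0):.4f}")
```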
{"title":"Bayesian regularization in multiple-indicators multiple-causes models.","authors":"Lijin Zhang, Xinya Liang","doi":"10.1037/met0000594","DOIUrl":"10.1037/met0000594","url":null,"abstract":"<p><p>Integrating regularization methods into structural equation modeling is gaining increasing popularity. The purpose of regularization is to improve variable selection, model estimation, and prediction accuracy. In this study, we aim to: (a) compare Bayesian regularization methods for exploring covariate effects in multiple-indicators multiple-causes models, (b) examine the sensitivity of results to hyperparameter settings of penalty priors, and (c) investigate prediction accuracy through cross-validation. The Bayesian regularization methods examined included: ridge, lasso, adaptive lasso, spike-and-slab prior (SSP) and its variants, and horseshoe and its variants. Sparse solutions were developed for the structural coefficient matrix that contained only a small portion of nonzero path coefficients characterizing the effects of selected covariates on the latent variable. Results from the simulation study showed that compared to diffuse priors, penalty priors were advantageous in handling small sample sizes and collinearity among covariates. Priors with only the global penalty (ridge and lasso) yielded higher model convergence rates and power, whereas priors with both the global and local penalties (horseshoe and SSP) provided more accurate parameter estimates for medium and large covariate effects. The horseshoe and SSP improved accuracy in predicting factor scores, while achieving more parsimonious models. (PsycInfo Database Record (c) 2024 APA, all rights reserved).</p>","PeriodicalId":20782,"journal":{"name":"Psychological methods","volume":" ","pages":"679-703"},"PeriodicalIF":7.6,"publicationDate":"2024-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10241486","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A framework for studying environmental statistics in developmental science.
Pub Date: 2024-07-18 | DOI: 10.1037/met0000651
Nicole Walasek, Ethan S Young, Willem E Frankenhuis
Psychologists tend to rely on verbal descriptions of the environment over time, using terms like "unpredictable," "variable," and "unstable." These terms are often open to different interpretations. This ambiguity blurs the match between constructs and measures, which creates confusion and inconsistency across studies. To better characterize the environment, the field needs a shared framework that organizes descriptions of the environment over time in clear terms: as statistical definitions. Here, we first present such a framework, drawing on theory developed in other disciplines, such as biology, anthropology, ecology, and economics. Then we apply our framework by quantifying "unpredictability" in a publicly available, longitudinal data set of crime rates in New York City (NYC) across 15 years. This case study shows that the correlations between different "unpredictability statistics" across regions are only moderate. This means that regions within NYC rank differently on unpredictability depending on which definition is used and at which spatial scale the statistics are computed. Additionally, we explore associations between unpredictability statistics and measures of unemployment, poverty, and educational attainment derived from publicly available NYC survey data. In our case study, these measures are associated with mean levels in crime rates but hardly with unpredictability in crime rates. Our case study illustrates the merits of using a formal framework for disentangling different properties of the environment. To facilitate the use of our framework, we provide a friendly, step-by-step guide for identifying the structure of the environment in repeated measures data sets. (PsycInfo Database Record (c) 2024 APA, all rights reserved).
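To make the point concrete, this toy sketch computes three candidate "unpredictability" statistics on simulated regional time series and correlates them; the statistics and simulation design are illustrative assumptions, not the paper's exact operationalizations.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated monthly series for many regions (the article uses real NYC
# crime data); each region mixes trend, seasonality, and noise differently.
n_regions, n_months = 50, 180
t = np.arange(n_months)
series = (rng.normal(0, 1, (n_regions, 1)) * 0.01 * t
          + rng.normal(0, 1, (n_regions, 1)) * np.sin(2 * np.pi * t / 12)
          + rng.normal(0, 1, (n_regions, n_months)))

# Three candidate "unpredictability" statistics (illustrative choices):
sd = series.std(axis=1)                                   # raw variability

coef = np.polyfit(t, series.T, 1)                         # per-region linear trend
resid_sd = (series.T - (coef[0] * t[:, None] + coef[1])).std(axis=0)  # detrended SD

ac1 = np.array([np.corrcoef(s[:-1], s[1:])[0, 1] for s in series])
noise = 1 - np.abs(ac1)                                   # low autocorrelation = noisy

# If these statistics correlate only moderately, regions can rank
# differently depending on which definition of unpredictability is used.
print(np.round(np.corrcoef(np.vstack([sd, resid_sd, noise])), 2))
```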
{"title":"A framework for studying environmental statistics in developmental science.","authors":"Nicole Walasek, Ethan S Young, Willem E Frankenhuis","doi":"10.1037/met0000651","DOIUrl":"https://doi.org/10.1037/met0000651","url":null,"abstract":"<p><p>Psychologists tend to rely on verbal descriptions of the environment over time, using terms like \"unpredictable,\" \"variable,\" and \"unstable.\" These terms are often open to different interpretations. This ambiguity blurs the match between constructs and measures, which creates confusion and inconsistency across studies. To better characterize the environment, the field needs a shared framework that organizes descriptions of the environment over time in clear terms: as statistical definitions. Here, we first present such a framework, drawing on theory developed in other disciplines, such as biology, anthropology, ecology, and economics. Then we apply our framework by quantifying \"unpredictability\" in a publicly available, longitudinal data set of crime rates in New York City (NYC) across 15 years. This case study shows that the correlations between different \"unpredictability statistics\" across regions are only moderate. This means that regions within NYC rank differently on unpredictability depending on which definition is used and at which spatial scale the statistics are computed. Additionally, we explore associations between unpredictability statistics and measures of unemployment, poverty, and educational attainment derived from publicly available NYC survey data. In our case study, these measures are associated with mean levels in crime rates but hardly with unpredictability in crime rates. Our case study illustrates the merits of using a formal framework for disentangling different properties of the environment. To facilitate the use of our framework, we provide a friendly, step-by-step guide for identifying the structure of the environment in repeated measures data sets. (PsycInfo Database Record (c) 2024 APA, all rights reserved).</p>","PeriodicalId":20782,"journal":{"name":"Psychological methods","volume":" ","pages":""},"PeriodicalIF":7.6,"publicationDate":"2024-07-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141634306","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Coefficients of determination measured on the same scale as the outcome: Alternatives to R² that use standard deviations instead of explained variance.
Pub Date: 2024-07-18 | DOI: 10.1037/met0000681
Mathias Berggren
The coefficient of determination, R², also called the explained variance, is often taken as a proportional measure of the relative determination of model on outcome. However, while R² has some attractive statistical properties, its reliance on squared variations (variances) may limit its use as an easily interpretable descriptive statistic of that determination. Here, the properties of this coefficient on the squared scale are discussed and generalized to three relative measures on the original scale. These generalizations can all be expressed as transformations of R², and alternatives can therefore also be calculated by plugging in related estimates, such as the adjusted R². The third coefficient, new for this article, and here termed the CoDSD (the coefficient of determination in terms of standard deviations), or Rπ (R-pi), equals √R²/(√R² + √(1 − R²)). It is argued that this coefficient most usefully captures the relative determination of the model. When the contribution of the error is c times that of the model, the CoDSD equals 1/(1 + c), while R² equals 1/(1 + c²). (PsycInfo Database Record (c) 2024 APA, all rights reserved).
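A quick numeric check of the stated relationship between R², the CoDSD, and the error-to-model ratio c (the CoDSD-from-R² expression used below is my reading of the transformation):

```python
import numpy as np

# If the error's standard deviation is c times the model's, the abstract
# states R^2 = 1/(1 + c^2) and CoDSD = 1/(1 + c). Computing the CoDSD from
# R^2 as sqrt(R^2) / (sqrt(R^2) + sqrt(1 - R^2)) reproduces 1/(1 + c).
for c in [0.5, 1.0, 2.0]:
    r2 = 1.0 / (1.0 + c**2)
    cod_sd = np.sqrt(r2) / (np.sqrt(r2) + np.sqrt(1.0 - r2))
    print(f"c = {c}:  R^2 = {r2:.4f}  CoDSD = {cod_sd:.4f}"
          f"  1/(1+c) = {1 / (1 + c):.4f}")
```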
{"title":"Coefficients of determination measured on the same scale as the outcome: Alternatives to R² that use standard deviations instead of explained variance.","authors":"Mathias Berggren","doi":"10.1037/met0000681","DOIUrl":"https://doi.org/10.1037/met0000681","url":null,"abstract":"<p><p>The coefficient of determination, <i>R</i>², also called the explained variance, is often taken as a proportional measure of the relative determination of model on outcome. However, while <i>R</i>² has some attractive statistical properties, its reliance on squared variations (variances) may limit its use as an easily interpretable descriptive statistic of that determination. Here, the properties of this coefficient on the squared scale are discussed and generalized to three relative measures on the original scale. These generalizations can all be expressed as transformations of <i>R</i>², and alternatives can therefore also be calculated by plugging in related estimates, such as the adjusted <i>R</i>². The third coefficient, new for this article, and here termed the CoD<sub>SD</sub> (the coefficient of determination in terms of standard deviations), or <i>R</i><sub>π</sub> (<i>R</i>-pi), equals <i>R</i>²/(<i>R</i>²+1-<i>R</i>²). It is argued that this coefficient most usefully captures the relative determination of the model. When the contribution of the error is <i>c</i> times that of the model, the CoD<sub>SD</sub> equals 1/(1 + <i>c</i>), while <i>R</i>² equals 1/(1 + <i>c</i>²). (PsycInfo Database Record (c) 2024 APA, all rights reserved).</p>","PeriodicalId":20782,"journal":{"name":"Psychological methods","volume":" ","pages":""},"PeriodicalIF":7.6,"publicationDate":"2024-07-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141634307","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Using group level factor models to resolve high dimensionality in model-based sampling.
Pub Date: 2024-06-24 | DOI: 10.1037/met0000618
Niek Stevenson, Reilly J Innes, Quentin F Gronau, Steven Miletić, Andrew Heathcote, Birte U Forstmann, Scott D Brown
Joint modeling of decisions and neural activation has the potential to provide significant advances in linking brain and behavior. However, methods of joint modeling have been limited by difficulties in estimation, often due to high dimensionality and simultaneous estimation challenges. In the current article, we propose a method of model estimation that draws on state-of-the-art Bayesian hierarchical modeling techniques and uses factor analysis as a means of dimensionality reduction and inference at the group level. This hierarchical factor approach can adopt any model for the individual and distill the relationships of its parameters across individuals through a factor structure. We demonstrate the significant dimensionality reduction gained by factor analysis and good parameter recovery, and illustrate a variety of factor loading constraints that can be used for different purposes and research questions, as well as three applications of the method to previously analyzed data. We conclude that this method provides a flexible and usable approach with interpretable outcomes that are primarily data-driven, in contrast to the largely hypothesis-driven methods often used in joint modeling. Although we focus on joint modeling methods, this model-based estimation approach could be used for any high-dimensional modeling problem. We provide open-source code and accompanying tutorial documentation to make the method accessible to all researchers. (PsycInfo Database Record (c) 2024 APA, all rights reserved).
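A minimal sketch of the group-level factor idea, with dimensions and distributions chosen for illustration rather than taken from the article:

```python
import numpy as np

rng = np.random.default_rng(4)

# Each subject i has a p-dimensional parameter vector alpha_i; the group
# level imposes a low-rank structure alpha_i = mu + Lambda @ eta_i + eps_i
# with q << p factors (names and sizes are hypothetical).
n_subjects, p, q = 400, 12, 2
mu = rng.normal(0, 1, p)                     # group-level means
Lambda = rng.normal(0, 0.5, (p, q))          # loadings (constrainable)
eta = rng.normal(0, 1, (n_subjects, q))      # subject factor scores
eps = rng.normal(0, 0.1, (n_subjects, p))    # parameter-specific residuals
alpha = mu + eta @ Lambda.T + eps            # subject parameter vectors

# Payoff: a free p x p covariance matrix has p(p+1)/2 = 78 parameters here;
# the factor structure needs only p*q loadings + p residual variances = 36.
implied_cov = Lambda @ Lambda.T + np.diag(np.full(p, 0.1**2))
print("empirical vs implied variance of parameter 1:",
      alpha[:, 0].var().round(3), implied_cov[0, 0].round(3))
```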
A comparison of random forest-based missing imputation methods for covariates in propensity score analysis.
Pub Date: 2024-06-13 | DOI: 10.1037/met0000676
Yongseok Lee, Walter L Leite
Propensity score analysis (PSA) is a prominent method to alleviate selection bias in observational studies, but missing data in covariates are prevalent and must be dealt with during propensity score estimation. Through Monte Carlo simulations, this study evaluates imputation methods based on random forest algorithms for handling missing data in covariates: multivariate imputation by chained equations-random forest (Caliber), proximity imputation (PI), and missForest. The results indicated that PI and missForest outperformed the other methods with respect to bias of the average treatment effect, regardless of sample size and missing-data mechanism. A demonstration of these methods with PSA to evaluate the effect of participation in center-based care on children's reading ability is provided using data from the Early Childhood Longitudinal Study, Kindergarten Class of 2010-2011. (PsycInfo Database Record (c) 2024 APA, all rights reserved).
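In the same spirit, though not the exact Caliber, PI, or missForest implementations, scikit-learn's IterativeImputer can chain random forest regressions to impute covariates before propensity score estimation:

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(5)

# Toy covariate matrix with 20% of values missing at random (simulated;
# the study uses ECLS-K:2011 data).
n, p = 500, 4
X = rng.multivariate_normal(np.zeros(p), 0.5 + 0.5 * np.eye(p), size=n)
mask = rng.random(X.shape) < 0.2
X_missing = X.copy()
X_missing[mask] = np.nan

# Chained-equations imputation with a random forest as the per-variable
# regressor, mirroring the MICE-with-random-forests idea.
imputer = IterativeImputer(
    estimator=RandomForestRegressor(n_estimators=50, random_state=0),
    max_iter=5, random_state=0)
X_imputed = imputer.fit_transform(X_missing)

# The imputed covariates would then feed into propensity score estimation,
# e.g., a logistic regression of treatment on X_imputed.
rmse = np.sqrt(np.mean((X_imputed[mask] - X[mask]) ** 2))
print(f"imputation RMSE on masked entries: {rmse:.3f}")
```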
{"title":"A comparison of random forest-based missing imputation methods for covariates in propensity score analysis.","authors":"Yongseok Lee, Walter L Leite","doi":"10.1037/met0000676","DOIUrl":"https://doi.org/10.1037/met0000676","url":null,"abstract":"<p><p>Propensity score analysis (PSA) is a prominent method to alleviate selection bias in observational studies, but missing data in covariates is prevalent and must be dealt with during propensity score estimation. Through Monte Carlo simulations, this study evaluates the use of imputation methods based on multiple random forests algorithms to handle missing data in covariates: multivariate imputation by chained equations-random forest (Caliber), proximity imputation (PI), and missForest. The results indicated that PI and missForest outperformed other methods with respect to bias of average treatment effect regardless of sample size and missing mechanisms. A demonstration of these five methods with PSA to evaluate the effect of participation in center-based care on children's reading ability is provided using data from the Early Childhood Longitudinal Study, Kindergarten Class of 2010-2011. (PsycInfo Database Record (c) 2024 APA, all rights reserved).</p>","PeriodicalId":20782,"journal":{"name":"Psychological methods","volume":" ","pages":""},"PeriodicalIF":7.0,"publicationDate":"2024-06-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141311532","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Correcting bias in the meta-analysis of correlations.
Pub Date: 2024-06-03 | DOI: 10.1037/met0000662
T D Stanley, Hristos Doucouliagos, Maximilian Maier, František Bartoš
We demonstrate that all conventional meta-analyses of correlation coefficients are biased, explain why, and offer solutions. Because the standard errors of the correlation coefficients depend on the size of the coefficient, inverse-variance weighted averages will be biased even under ideal meta-analytical conditions (i.e., absence of publication bias, p-hacking, or other biases). Transformation to Fisher's z often greatly reduces these biases but still does not mitigate them entirely. Although all are small-sample biases (n < 200), they will often have practical consequences in psychology, where the typical sample size of correlational studies is 86. We offer two solutions: the well-known Fisher's z-transformation and a new small-sample adjustment of Fisher's z that renders any remaining bias scientifically trivial. (PsycInfo Database Record (c) 2024 APA, all rights reserved).
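A Monte Carlo sketch of the bias mechanism described above; the design (K = 20 studies, each with n = 86, the typical sample size cited) is my own illustration, not the authors' simulation:

```python
import numpy as np

rng = np.random.default_rng(6)

# K sample correlations per meta-analysis, each from n bivariate-normal
# observations; meta-analyzed with inverse-variance weights on the r scale
# versus averaged after Fisher's z-transformation z = arctanh(r).
rho, n, K, reps = 0.3, 86, 20, 2000
x = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=(reps, K, n))
xc = x - x.mean(axis=2, keepdims=True)
r = ((xc[..., 0] * xc[..., 1]).sum(axis=2)
     / np.sqrt((xc[..., 0] ** 2).sum(axis=2) * (xc[..., 1] ** 2).sum(axis=2)))

# r scale: weights 1/Var(r) with Var(r) ~ (1 - r^2)^2 / (n - 1). Because
# the weight depends on the estimate itself, larger |r| gets more weight,
# which biases the weighted average.
w = (n - 1) / (1 - r**2) ** 2
est_r = (w * r).sum(axis=1) / w.sum(axis=1)

# z scale: Var(z) ~ 1/(n - 3) does not depend on rho, so with equal n the
# weights are equal; a small bias in z itself remains.
est_z = np.tanh(np.arctanh(r).mean(axis=1))

print(f"bias on r scale:     {est_r.mean() - rho:+.4f}")
print(f"bias via Fisher's z: {est_z.mean() - rho:+.4f}")
```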
Latent growth factors as predictors of distal outcomes.
Pub Date: 2024-06-03 | DOI: 10.1037/met0000642
Ethan M McCormick, Patrick J Curran, Gregory R Hancock
A currently overlooked application of the latent curve model (LCM) is its use in assessing the consequences of developmental patterns of change, that is, as a predictor of distal outcomes. However, there are additional complications for appropriately specifying and interpreting the distal outcome LCM. Here, we develop a general framework for understanding the sensitivity of the distal outcome LCM to the choice of time coding, focusing on the regressions of the distal outcome on the latent growth factors. Using artificial and real-data examples, we highlight the unexpected changes in the regression of the slope factor, which stand in contrast to prior work on time coding effects, and develop a framework for estimating the distal outcome LCM at a point in the trajectory (known as the aperture) that maximizes the interpretability of the effects. We also outline a prioritization approach developed for assessing incremental validity to obtain consistently interpretable estimates of the effect of the slope. Throughout, we emphasize practical steps for understanding these changing predictive effects, including graphical approaches for assessing regions of significance similar to those used to probe interaction effects. We conclude by providing recommendations for applied research using these models and outline an agenda for future work in this area. (PsycInfo Database Record (c) 2024 APA, all rights reserved).
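The sensitivity to time coding can be seen in a toy regression. The setup below illustrates the underlying algebra only; it is not the authors' LCM machinery:

```python
import numpy as np

rng = np.random.default_rng(7)

# Let the distal outcome be y = b_I * I + b_S * S + e, where I and S are
# intercept and slope growth factors (values here are hypothetical).
n, b_I, b_S = 10_000, 0.5, 0.2
I = rng.normal(0, 1, n)
S = 0.3 * I + rng.normal(0, 1, n)            # correlated growth factors
y = b_I * I + b_S * S + rng.normal(0, 1, n)

# Recoding time so the intercept is defined at t = a gives I_a = I + a * S.
# Substituting I = I_a - a * S yields y = b_I * I_a + (b_S - a * b_I) * S,
# so the slope's coefficient depends on the time coding a.
for a in [0.0, 1.0, 2.0]:
    X = np.column_stack([np.ones(n), I + a * S, S])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    print(f"aperture a = {a}: b_I = {coef[1]:+.3f}, b_S = {coef[2]:+.3f}"
          f" (algebra predicts {b_S - a * b_I:+.3f})")
```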
{"title":"Latent growth factors as predictors of distal outcomes.","authors":"Ethan M McCormick, Patrick J Curran, Gregory R Hancock","doi":"10.1037/met0000642","DOIUrl":"https://doi.org/10.1037/met0000642","url":null,"abstract":"<p><p>A currently overlooked application of the latent curve model (LCM) is its use in assessing the consequences of development patterns of change-that is as a predictor of distal outcomes. However, there are additional complications for appropriately specifying and interpreting the distal outcome LCM. Here, we develop a general framework for understanding the sensitivity of the distal outcome LCM to the choice of time coding, focusing on the regressions of the distal outcome on the latent growth factors. Using artificial and real-data examples, we highlight the unexpected changes in the regression of the slope factor which stand in contrast to prior work on time coding effects, and develop a framework for estimating the distal outcome LCM at a point in the trajectory-known as the aperture-which maximizes the interpretability of the effects. We also outline a prioritization approach developed for assessing incremental validity to obtain consistently interpretable estimates of the effect of the slope. Throughout, we emphasize practical steps for understanding these changing predictive effects, including graphical approaches for assessing regions of significance similar to those used to probe interaction effects. We conclude by providing recommendations for applied research using these models and outline an agenda for future work in this area. (PsycInfo Database Record (c) 2024 APA, all rights reserved).</p>","PeriodicalId":20782,"journal":{"name":"Psychological methods","volume":" ","pages":""},"PeriodicalIF":7.0,"publicationDate":"2024-06-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141200563","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Bayes factor, HDI-ROPE, and frequentist equivalence tests can all be reverse engineered (almost exactly) from one another: Reply to Linde et al. (2021).
Pub Date: 2024-06-01 | Epub Date: 2024-03-21 | DOI: 10.1037/met0000507 | Psychological Methods, 613-623
Harlan Campbell, Paul Gustafson
Following an extensive simulation study comparing the operating characteristics of three different procedures used for establishing equivalence (the frequentist "TOST," the Bayesian "HDI-ROPE," and the Bayes factor interval null procedure), Linde et al. (2021) conclude with the recommendation that "researchers rely more on the Bayes factor interval null approach for quantifying evidence for equivalence" (p. 1). We redo the simulation study of Linde et al. (2021) in its entirety, but with the different procedures calibrated to have the same predetermined maximum Type I error rate. Our results suggest that, when calibrated in this way, the Bayes factor, HDI-ROPE, and frequentist equivalence tests all have similar (almost exactly) Type II error rates. In general, any advocacy of frequentist testing as better or worse than Bayesian testing on the basis of empirical findings seems dubious at best. If one decides which underlying principle to subscribe to in tackling a given problem, then the method follows naturally. Bearing in mind that each procedure can be reverse-engineered from the others (at least approximately), trying to use empirical performance to argue for one approach over another seems like tilting at windmills. (PsycInfo Database Record (c) 2024 APA, all rights reserved).
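The calibration logic can be sketched in a simplified one-sample setting (my construction, with a posterior ROPE probability standing in for the Bayes factor procedure): tune each rule's threshold to a common Type I error rate at the equivalence margin, then compare power.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)

# One-sample equivalence to 0 with margin delta and known sigma. Each
# procedure's threshold is tuned so that, when the true effect sits exactly
# at the margin, equivalence is declared with probability alpha.
n, sigma, delta, alpha, reps = 50, 1.0, 0.3, 0.05, 100_000
se = sigma / np.sqrt(n)

xbar_boundary = rng.normal(delta, se, reps)   # worst-case (boundary) null
xbar_equiv = rng.normal(0.0, se, reps)        # genuinely equivalent effect

def tost_stat(xbar):
    # TOST declares equivalence when both one-sided statistics are large.
    return np.minimum((xbar + delta) / se, (delta - xbar) / se)

def rope_prob(xbar):
    # Posterior probability of the ROPE (-delta, delta) under a flat prior.
    return (stats.norm.cdf((delta - xbar) / se)
            - stats.norm.cdf((-delta - xbar) / se))

# Calibrate both thresholds at the boundary; once calibrated, the two rules
# pick out (almost exactly) the same acceptance region, so power matches.
crit = np.quantile(tost_stat(xbar_boundary), 1 - alpha)
p_thresh = np.quantile(rope_prob(xbar_boundary), 1 - alpha)

print("Type I error:", (tost_stat(xbar_boundary) > crit).mean().round(3),
      (rope_prob(xbar_boundary) > p_thresh).mean().round(3))
print("Power at 0:  ", (tost_stat(xbar_equiv) > crit).mean().round(3),
      (rope_prob(xbar_equiv) > p_thresh).mean().round(3))
```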